Openstack RDO && KVM Hypervisor

How would changing "enable_isolated_metadata" from false to true && running `openstack-service restart neutron` work on the fly on RDO Liberty ?

This post addresses a question asked at ask.openstack.org ([1])
Question :-
  
Can metadata co-exist in the qrouter and qdhcp namespaces at the same time,
so that LANs without routers involved can access metadata ?

Answer is as follows :-

All private networks (having a neutron router) created before or after this change will continue to provide metadata to their VMs via neutron-ns-metadata-proxy running in the corresponding qrouter-namespace.

Any isolated tenant network created after the update will provide metadata to its VMs via neutron-ns-metadata-proxy running in the corresponding qdhcp-namespace. See http://techbackground.blogspot.com/2013/06/metadata-via-dhcp-namespace.html
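For reference, the change itself is a one-line edit on the node running neutron-dhcp-agent (a sketch using crudini, assumed installed; editing /etc/neutron/dhcp_agent.ini by hand works equally well):

# crudini --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
# openstack-service restart neutron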

******************************************************************
For a routable qdhcp-namespace created before the dhcp_agent.ini update
******************************************************************
[root@vfedora22wks ~(keystone_admin)]# ip netns exec \
qdhcp-e86eebdb-71bd-4929-937c-2ab57db30e18 netstat -4 -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address        Foreign Address   State        PID/Program name
tcp   0      0      50.0.0.10:53         0.0.0.0:*         LISTEN       6773/dnsmasq
tcp   0      0      169.254.169.254:53   0.0.0.0:*         LISTEN       6773/dnsmasq
tcp   0      0      50.0.0.10:42011      50.0.0.15:22      ESTABLISHED  2784/ssh

So it still gets access to metadata via the qrouter's neutron-ns-metadata-proxy.
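From inside a VM on such a network, metadata reachability can be verified directly (a minimal check; curl is assumed present in the guest, busybox wget works equally well):

$ curl http://169.254.169.254/latest/meta-data/instance-id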

******************************************************************************
For isolated qdhcp-namespaces /bin/neutron-ns-metadata-proxy
gets started in the corresponding qdhcp-namespace
******************************************************************************
[root@vfedora22wks ~(keystone_admin)]# ip netns exec \
qdhcp-e0f08063-2002-4cc9-b7b1-611925ad01e5 netstat -4 -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address        Foreign Address   State    PID/Program name
tcp   0      0      0.0.0.0:80           0.0.0.0:*         LISTEN   6333/python2
tcp   0      0      30.0.0.10:53         0.0.0.0:*         LISTEN   6771/dnsmasq
tcp   0      0      169.254.169.254:53   0.0.0.0:*         LISTEN   6771/dnsmasq


[root@vfedora22wks ~(keystone_admin)]# ps -f --pid 6333 | fold -s -w 82
UID PID PPID C STIME TTY TIME CMD
neutron 6333 1 0 20:38 ? 00:00:00 /usr/bin/python2
/bin/neutron-ns-metadata-proxy
--pid_file=/var/lib/neutron/external/pids/e0f08063-2002-4cc9-b7b1-611925ad01e5.pid
--metadata_proxy_socket=/var/lib/neutron/metadata_proxy <====
--network_id=e0f08063-2002-4cc9-b7b1-611925ad01e5 --state_path=/var/lib/neutron
--metadata_port=80 --metadata_proxy_user=983 --metadata_proxy_group=977 --verbose
--log-file=neutron-ns-metadata-proxy-e0f08063-2002-4cc9-b7b1-611925ad01e5.log
--log-dir=/var/log/neutron


For a private network having a neutron router and created immediately after the update
"enable_isolated_metadata=True" and service restart:
 
[root@vfedora22wks ~(keystone_admin)]# ip netns exec \
qdhcp-6e4646d8-2c5f-4adc-a4dc-51884f090d09 netstat -4 -anpt

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address        Foreign Address   State    PID/Program name
tcp   0      0      0.0.0.0:80           0.0.0.0:*         LISTEN   8654/python2
tcp   0      0      60.0.0.10:53         0.0.0.0:*         LISTEN   8626/dnsmasq
tcp   0      0      169.254.169.254:53   0.0.0.0:*         LISTEN   8626/dnsmasq

[root@vfedora22wks ~(keystone_admin)]# ps -f --pid 8654 | fold -s -w 82
UID PID PPID C STIME TTY TIME CMD
neutron 8654 1 0 20:43 ? 00:00:00 /usr/bin/python2
/bin/neutron-ns-metadata-proxy
--pid_file=/var/lib/neutron/external/pids/6e4646d8-2c5f-4adc-a4dc-51884f090d09.pid
--metadata_proxy_socket=/var/lib/neutron/metadata_proxy <=====
--network_id=6e4646d8-2c5f-4adc-a4dc-51884f090d09 --state_path=/var/lib/neutron
--metadata_port=80 --metadata_proxy_user=983 --metadata_proxy_group=977 --verbose
--log-file=neutron-ns-metadata-proxy-6e4646d8-2c5f-4adc-a4dc-51884f090d09.log
--log-dir=/var/log/neutron

However, I have noticed that a node restart disables neutron-ns-metadata-proxy for routable tenant networks, i.e. networks where a neutron router port exists. The VM's metadata request is then routed via the qdhcp-namespace to the qrouter-namespace,
and VMs get metadata from neutron-ns-metadata-proxy running in the qrouter-namespace.
For isolated qdhcp-namespaces a node reboot still keeps neutron-ns-metadata-proxy in the corresponding qdhcp-namespace.
*******************************************************************************************************
After all nodes are rebooted, neutron-ns-metadata-proxy is no longer kept in routable qdhcp-namespaces; VMs are served by neutron-ns-metadata-proxy running in the qrouter-namespace.
*******************************************************************************************************
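To check which namespace is serving metadata on a given network after a node reboot, a quick probe (substitute the real network id from `neutron net-list`):

# ip netns exec qdhcp-<netid> netstat -4 -anpt | grep ':80 '

A neutron-ns-metadata-proxy listening on port 80 inside the qdhcp-namespace means isolated-style metadata is active there.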


Launching a CirrOS instance via an isolated network

Verification of neutron-ns-metadata-proxy status
 



    
Launching VF22Devs07 VM via routable tenant network demo_network
Verification of neutron-ns-metadata-proxy status

Testing "Multiple external networks with a single L3 agent" on RDO Liberty per Lars Kellogg-Stedman

Following below is a test, in a multi-node environment, of
"Multiple external networks with a single L3 agent" by Lars Kellogg-Stedman

However, the current post also contains an attempt to analyze and understand how traffic to/from the external network flows through br-int when provider external networks are involved.

I was also hit by the bug "neutron-openvswitch-agent is crashing with "invalid literal for int() with base 10" error",
and the patch https://review.openstack.org/#/c/225001/ was applied as well.

The basic 3-VM node setup was done per https://www.linux.com/community/blogs/133-general-linux/854587-rdo-liberty-beta-set-up-for-three-vm-nodes-controllernetworkcompute-ml2aovsavxlan-on-centos71/

Nested KVM was enabled for all VMs hosting RDO Liberty nodes.

Create two libvirt subnets, external3 and external4, on the KVM virtualization host (F22):

[root@fedora22wksr ~]# cat external3.xml
<network>
   <name>external3</name>
   <uuid>d0e9964b-f95d-40c2-b749-b609aed52cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr6' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.3.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.3.0.1' end='10.3.0.254' />
     </dhcp>
   </ip>
</network>
[root@fedora22wksr ~]# cat external4.xml
<network>
   <name>external4</name>
   <uuid>d0e9964b-f97d-40c2-b749-b609aed52cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr7' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.4.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.4.0.1' end='10.4.0.254' />
     </dhcp>
   </ip>
</network>

Shut down the VM hosting the Network Node and add two VNICs: eth3 belonging to
external3, eth4 belonging to external4.
Start the VM up and create the corresponding files ifcfg-eth3 and ifcfg-eth4 with static
IP addresses, for instance as sketched below.
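A minimal sketch of ifcfg-eth3 (the 10.3.0.10 address is an assumption, any free address on the external3 subnet will do; ifcfg-eth4 is analogous with a 10.4.0.x address):

# cat /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE="eth3"
ONBOOT="yes"
BOOTPROTO="static"
IPADDR="10.3.0.10"
NETMASK="255.255.255.0"
NM_CONTROLLED="no"
IPV6INIT=no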

# service network restart

or reboot the Network Node.

*************************
On Network Node
*************************
# ovs-vsctl add-br br-eth3
# ovs-vsctl add-port br-eth3 eth3
# ovs-vsctl add-br br-eth4
# ovs-vsctl add-port br-eth4 eth4

******************************
Update l3_agent.ini file
******************************
external_network_bridge =
external_network_id =

***********************************************************************
Update /etc/neutron/plugins/ml2/openvswitch_agent.ini
***********************************************************************
[ovs]
network_vlan_ranges = physnet3,physnet4
bridge_mappings = physnet3:br-eth3,physnet4:br-eth4

Then copy  /etc/neutron/plugins/ml2/openvswitch_agent.ini
to /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

************************************************************************
SSH to Controller 192.169.142.127 and update ml2_conf.ini
************************************************************************
[ml2]
type_drivers = local,flat,gre,vxlan

[ml2_type_flat]
flat_networks = *

# openstack-service restart on Controller

**********************************************************
Get back to VM hosting Network Node
**********************************************************
# openstack-service restart neutron
# systemctl | grep neutron

[root@ip-192-169-142-147 ~]# systemctl| grep neutron
neutron-dhcp-agent.service                                                          loaded active running   OpenStack Neutron DHCP Agent
neutron-l3-agent.service                                                            loaded active running   OpenStack Neutron Layer 3 Agent
neutron-metadata-agent.service                                                      loaded active running   OpenStack Neutron Metadata Agent
neutron-openvswitch-agent.service                                                   loaded active running   OpenStack Neutron Open vSwitch Agent
neutron-ovs-cleanup.service                                                         loaded active exited    OpenStack Neutron Open vSwitch Cleanup Utility

****************************************
External networks creation
****************************************
# source keystonerc_admin
# neutron net-create external3 -- --router:external  \
  --provider:network_type=flat \
  --provider:physical_network=physnet3

# neutron net-create external4 -- --router:external  \
  --provider:network_type=flat \
  --provider:physical_network=physnet4

# neutron subnet-create --disable-dhcp external3 10.3.0.0/24
# neutron subnet-create --disable-dhcp external4 10.4.0.0/24
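Before switching to the demo tenant, it is worth verifying that both external networks and their subnets were created:

# neutron net-list
# neutron subnet-list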
*************************************************
Then log in as demo and create
*************************************************
RouterExt3 with gateway to external3
RouterExt4 with gateway to external4

Then create the private networks demo-network4 and demo_network5;
attach the first to RouterExt4 and the second to RouterExt3, as sketched below.
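A hedged CLI equivalent of those dashboard steps (router and network names as above; the demo_subnet* names and the 150.0.0.0/24 and 160.0.0.0/24 CIDRs are assumptions matching the qr- port addresses shown below):

# source keystonerc_demo
# neutron router-create RouterExt3
# neutron router-gateway-set RouterExt3 external3
# neutron router-create RouterExt4
# neutron router-gateway-set RouterExt4 external4
# neutron net-create demo-network4
# neutron subnet-create --name demo_subnet4 demo-network4 150.0.0.0/24
# neutron router-interface-add RouterExt4 demo_subnet4
# neutron net-create demo_network5
# neutron subnet-create --name demo_subnet5 demo_network5 160.0.0.0/24
# neutron router-interface-add RouterExt3 demo_subnet5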


 
    
 
 On Network Node
 
[root@ip-192-169-142-147 ~(keystone_admin)]# neutron router-list| grep Ext
| 1e9bad93-2d5d-43fc-aed0-fc3745fe4d10 | RouterExt3 | {"network_id": "fffafde8-c6eb-4b20-b26d-63944300a6bf", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "a038ae4d-9ea7-466e-bf4e-fab65981151c", "ip_address": "10.3.0.2"}]} | False | False |
| f47a87d9-c789-47a8-bdb1-8117990c49be | RouterExt4 | {"network_id": "2130df5b-5483-4cb8-a6b6-2a32eb7d882a", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "eea125fc-6236-4570-9d3e-f4489671d2bb", "ip_address": "10.4.0.2"}]} | False | False |
[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-1e9bad93-2d5d-43fc-aed0-fc3745fe4d10 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

qg-615baaa8-a6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.3.0.2  netmask 255.255.255.0  broadcast 10.3.0.255
        inet6 fe80::f816:3eff:fea7:98be  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:a7:98:be  txqueuelen 0  (Ethernet)
        RX packets 810478  bytes 1101227298 (1.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 449476  bytes 34585959 (32.9 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

qr-45110e77-5b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 160.0.0.1  netmask 255.255.255.0  broadcast 160.0.0.255
        inet6 fe80::f816:3eff:fe55:5c68  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:55:5c:68  txqueuelen 0  (Ethernet)
        RX packets 449433  bytes 34589519 (32.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 810390  bytes 1101224102 (1.0 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-f47a87d9-c789-47a8-bdb1-8117990c49be ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

qg-54aa0373-dd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.4.0.2  netmask 255.255.255.0  broadcast 10.4.0.255
        inet6 fe80::f816:3eff:fe2e:35ee  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:2e:35:ee  txqueuelen 0  (Ethernet)
        RX packets 802750  bytes 1088213425 (1.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 447723  bytes 34699912 (33.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

qr-a99aa111-1d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 150.0.0.1  netmask 255.255.255.0  broadcast 150.0.0.255
        inet6 fe80::f816:3eff:fe9b:3a9a  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:9b:3a:9a  txqueuelen 0  (Ethernet)
        RX packets 448277  bytes 34759884 (33.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 802834  bytes 1088249558 (1.0 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
 
***************************************************************************************************
Notice that the qg-xxxxxxx interfaces from both qrouter-namespaces are attached to br-int.
When provider external networks are used, traffic to/from the external network flows through br-int.
br-int and br-eth3 are connected by the patch port pair int-br-eth3 and phy-br-eth3;
br-int and br-eth4 are connected by the patch port pair int-br-eth4 and phy-br-eth4
(shown as type: patch in the output below). These are created automatically by
neutron-openvswitch-agent based on the bridge_mappings configured earlier.

***************************************************************************************************
[root@ip-192-169-142-147 ~(keystone_admin)]# ovs-vsctl show
38e920e3-da61-4a1b-876a-052a49d777a2
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge "br-eth4"
        Port "br-eth4"
            Interface "br-eth4"
                type: internal
        Port "phy-br-eth4"
            Interface "phy-br-eth4"
                type: patch
                options: {peer="int-br-eth4"}
        Port "eth4"
            Interface "eth4"
    Bridge br-int
        fail_mode: secure
        Port "tap709fbf6f-ab"
            tag: 13
            Interface "tap709fbf6f-ab"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "qr-a99aa111-1d"
            tag: 13
            Interface "qr-a99aa111-1d"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qg-54aa0373-dd"
            tag: 14
            Interface "qg-54aa0373-dd"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "int-br-eth3"
            Interface "int-br-eth3"
                type: patch
                options: {peer="phy-br-eth3"}
        Port "qg-615baaa8-a6"
            tag: 15
            Interface "qg-615baaa8-a6"
                type: internal
        Port "tap06adaf37-d4"
            tag: 17
            Interface "tap06adaf37-d4"
                type: internal
        Port "qr-45110e77-5b"
            tag: 17
            Interface "qr-45110e77-5b"
                type: internal
        Port "int-br-eth4"
            Interface "int-br-eth4"
                type: patch
                options: {peer="phy-br-eth4"}
    Bridge br-ex
        Port "eth2"
            Interface "eth2"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    Bridge "br-eth3"
        Port "eth3"
            Interface "eth3"
        Port "phy-br-eth3"
            Interface "phy-br-eth3"
                type: patch
                options: {peer="int-br-eth3"}
        Port "br-eth3"
            Interface "br-eth3"
                type: internal
    ovs_version: "2.3.1"

RDO Liberty (RC2) DVR Neutron workflow on CentOS 7.1

Per http://specs.openstack.org/openstack/neutron-specs/specs/juno/neutron-ovs-dvr.html
DVR is supposed to address the following problems of the traditional 3-node
deployment schema:-

Problem 1: VM-to-VM traffic flows through the Network Node.
In this case even traffic between VMs that belong to the same tenant
but sit on different subnets has to hit the Network Node to get routed
between the subnets. This affects performance.

Problem 2: VMs with a FloatingIP also receive and send packets
through the Network Node routers.
FloatingIP (DNAT) translation is done at the Network Node, and the
external network gateway port is available only at the Network Node,
so any traffic intended for the external network from a VM
has to go through the Network Node.

In this case the Network Node becomes a single point of failure,
and the traffic load on it will be heavy.
This affects performance and scalability.


Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance, 

   Neutron (using Open vSwitch plugin && VXLAN )

- (2x) Compute node: Nova (nova-compute),
         Neutron (openvswitch-agent,l3-agent,metadata-agent )


Three CentOS 7.1 VMs (4 GB RAM, 4 VCPU, 2 VNICs) have been built for testing on a Fedora 22 KVM hypervisor. Two libvirt subnets were used: the first, "openstackvms", emulating the External && Mgmt networks 192.169.142.0/24 with gateway virbr1 (192.169.142.1), and the second, "vteps", 10.0.0.0/24, to support two VXLAN tunnels between the Controller and Compute Nodes.

# cat openstackvms.xml

<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>

# cat vteps.xml
<network>
   <name>vteps</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.0.0.1' end='10.0.0.254' />
     </dhcp>
   </ip>
 </network>

# virsh net-define openstackvms.xml
# virsh net-start  openstackvms
# virsh net-autostart  openstackvms

The second libvirt subnet may be defined and started the same way:
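For the "vteps" subnet defined above:

# virsh net-define vteps.xml
# virsh net-start vteps
# virsh net-autostart vteps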


ip-192-169-142-127.ip.secureserver.net - Controller/Network Node
ip-192-169-142-137.ip.secureserver.net - Compute Node
ip-192-169-142-147.ip.secureserver.net - Compute Node

********************************
On each deployment node
********************************
Per http://beta.rdoproject.org/testday/rdo-test-day-liberty-02/
yum -y install yum-plugin-priorities
cd /etc/yum.repos.d/
wget http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
wget http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo 
 
*********************
Answer File :-
*********************
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137,192.169.142.147
CONFIG_NETWORK_HOSTS=192.169.142.127
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=5G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4

********************************************************
On the Controller (X=2) and the Computes (X=3,4) update :-
********************************************************
# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.1(X)7"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex

DEVICETYPE="ovs"

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no
***********
Then
***********
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

Reboot

**********************************
General information   ( [3] )
**********************************
Enabling l2pop :-

On the Neutron API node, in the conf file you pass
to the Neutron service (plugin.ini/ml2_conf.ini):
[ml2]
mechanism_drivers = openvswitch,l2population

On each compute node, in the conf file you pass
to the OVS agent (plugin.ini/ml2_conf.ini):
[agent]
l2_population = True

Enable the ARP responder:
On each compute node, in the conf file
you pass to the OVS agent (plugin.ini/ml2_conf.ini):
[agent]
arp_responder = True

*****************************************
On Controller update neutron.conf
*****************************************
router_distributed = True
dvr_base_mac = fa:16:3f:00:00:00
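If you prefer scripting the edit, a sketch using crudini (assumed installed; both keys live in the [DEFAULT] section of /etc/neutron/neutron.conf):

# crudini --set /etc/neutron/neutron.conf DEFAULT router_distributed True
# crudini --set /etc/neutron/neutron.conf DEFAULT dvr_base_mac fa:16:3f:00:00:00
# openstack-service restart neutron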

 [root@ip-192-169-142-127 neutron(keystone_admin)]# cat l3_agent.ini | grep -v ^#| grep -v ^$
[DEFAULT]
debug = False
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = True
external_network_bridge = br-ex
metadata_port = 9697
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5
enable_metadata_proxy = True
router_delete_namespaces = False
agent_mode = dvr_snat
[AGENT]

*********************************
On each Compute Node
*********************************

[root@ip-192-169-142-147 neutron]# cat l3_agent.ini | grep -v ^#| grep -v ^$
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
agent_mode = dvr
[AGENT]


 [root@ip-192-169-142-147 neutron]# cat metadata_agent.ini | grep -v ^#| grep -v ^$
[DEFAULT]
debug = False
auth_url = http://192.169.142.127:5000/v2.0
auth_region = RegionOne
auth_insecure = False
admin_tenant_name = services
admin_user = neutron
admin_password = 808e36e154bd4cee
nova_metadata_ip = 192.169.142.127
nova_metadata_port = 8775
nova_metadata_protocol = http
metadata_proxy_shared_secret =a965cd23ed2f4502
metadata_workers =4
metadata_backlog = 4096
cache_url = memory://?default_ttl=5
[AGENT]

[root@ip-192-169-142-147 ml2]# pwd
/etc/neutron/plugins/ml2

[root@ip-192-169-142-147 ml2]# cat ml2_conf.ini | grep -v ^$ | grep -v ^#
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch,l2population
path_mtu = 0
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[ml2_type_geneve]
[securitygroup]
enable_security_group = True
# On Compute nodes
[agent]
l2_population=True

********************************************************************************
Please be advised that a command like ( [ 2 ] ) :-
# rsync -av root@192.169.142.127:/etc/neutron/plugins/ml2 /etc/neutron/plugins
when run on the Liberty Compute Node 192.169.142.147 will overwrite the file
/etc/neutron/plugins/ml2/openvswitch_agent.ini.
So local_ip should be turned back to its initial value after this command.
********************************************************************************
 [root@ip-192-169-142-147 ml2]# cat openvswitch_agent.ini | grep -v ^#|grep -v ^$
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.147
bridge_mappings =physnet1:br-ex
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = True
arp_responder = True
enable_distributed_routing = True
drop_flows_on_start=False
[securitygroup]
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

*********************************************************************************
Create the directory "openvswitch" under the plugins directory and copy
/etc/neutron/plugins/ml2/openvswitch_agent.ini to
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini, then run
# chgrp neutron  /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
otherwise neutron-ovs-cleanup.service won't start on the Compute node.
The equivalent commands are shown after this note.
*********************************************************************************
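Spelled out as commands on the Compute node:

# mkdir -p /etc/neutron/plugins/openvswitch
# cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
# chgrp neutron /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini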

On each Compute node neutron-l3-agent and neutron-metadata-agent are
supposed to be started.
# yum install  openstack-neutron-ml2  
# systemctl start neutron-l3-agent
# systemctl start neutron-metadata-agent
# systemctl enable neutron-l3-agent
# systemctl enable neutron-metadata-agent


[root@ip-192-169-142-147 ~]# systemctl | grep openstack
openstack-ceilometer-compute.service                                                loaded active running   OpenStack ceilometer compute agent
openstack-nova-compute.service                                                      loaded active running   OpenStack Nova Compute Server

[root@ip-192-169-142-147 ~]# systemctl | grep neutron
neutron-l3-agent.service                                                            loaded active running   OpenStack Neutron Layer 3 Agent
neutron-metadata-agent.service                                                      loaded active running   OpenStack Neutron Metadata Agent
neutron-openvswitch-agent.service                                                   loaded active running   OpenStack Neutron Open vSwitch Agent
neutron-ovs-cleanup.service                                                         loaded active exited    OpenStack Neutron Open vSwitch Cleanup Utility

******************************************************************************************************** 
When a floating IP gets assigned to a VM, what actually happens is the following ( [1] ):-
The same explanation may be found in ([4]); only the style there is not step-by-step, and in particular it contains a detailed description of the reverse network flow and of the ARP proxy functionality.
********************************************************************************************************

1. The fip-<netid> namespace is created on the local compute node (if it does not already exist)
2. A new port rfp-<portid> gets created on the qrouter-<routerid> namespace (if it does not already exist)
3. The rfp port on the qrouter namespace is assigned the associated floating IP address
4. The fpr port on the fip namespace gets created and linked via a point-to-point network to the rfp port of the qrouter namespace
5. The fip namespace gateway port fg-<portid> is assigned an additional address
   from the public network range to set up the ARP proxy point
6. The fg-<portid> is configured as a proxy ARP

***************************************
Network flow itself  ( [1] ):
***************************************

1. The VM, initiating transmission, sends a packet via the default gateway,
   and br-int forwards the traffic to the local DVR gateway port (qr-<portid>).
2. DVR routes the packet using the routing table to the rfp-<portid> port.
3. A NAT rule is applied to the packet, replacing the VM's source IP with
   the assigned floating IP; it is then sent through the rfp-<portid> port,
   which connects to the fip namespace via the point-to-point network
   169.254.31.28/31.
4. The packet is received on the fpr-<portid> port in the fip namespace
   and then routed outside through the fg-<portid> port.
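This flow can be observed live by running tcpdump in both namespaces while a VM with a floating IP pings an outside host (a sketch; substitute the real ids reported by `ip netns`):

# ip netns exec qrouter-<routerid> tcpdump -n -i rfp-<portid>
# ip netns exec fip-<netid> tcpdump -n -i fg-<portid>

The same ICMP packets should show up on rfp- with the source already translated to the floating IP, and on fg- on their way to the external network.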


 *********************************************************
In the case of this particular deployment :-
*********************************************************

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron net-list
+--------------------------------------+--------------+-------------------------------------------------------+
| id                                   | name         | subnets                                               |
+--------------------------------------+--------------+-------------------------------------------------------+
| 1b202547-e1de-4c35-86a9-3119d6844f88 | public       | e6473e85-5a4c-4eea-a42b-3a63def678c5 192.169.142.0/24 |
| 267c9192-29e2-41e2-8db4-826a6155dec9 | demo_network | 89704ab3-5535-4c87-800e-39255a0a11d9 50.0.0.0/24      |
+--------------------------------------+--------------+-------------------------------------------------------+



[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns
fip-1b202547-e1de-4c35-86a9-3119d6844f88
qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf

[root@ip-192-169-142-147 ~]# ip netns exec qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf ip rule
0:    from all lookup local
32766:    from all lookup main
32767:    from all lookup default
57480:    from 50.0.0.15 lookup 16
57481:    from 50.0.0.13 lookup 16

838860801:    from 50.0.0.1/24 lookup 838860801


[root@ip-192-169-142-147 ~]# ip netns exec qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf ip route show table 16
default via 169.254.31.29 dev rfp-51ed47a7-3
 


[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf ip route
50.0.0.0/24 dev qr-b0a8a232-ab  proto kernel  scope link  src 50.0.0.1
169.254.31.28/31 dev rfp-51ed47a7-3  proto kernel  scope link  src 169.254.31.28 


[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf iptables-save -t nat | grep "^-A"|grep l3-agent

-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A neutron-l3-agent-OUTPUT -d 192.169.142.153/32 -j DNAT --to-destination 50.0.0.13
-A neutron-l3-agent-OUTPUT -d 192.169.142.156/32 -j DNAT --to-destination 50.0.0.15

-A neutron-l3-agent-POSTROUTING ! -i rfp-51ed47a7-3 ! -o rfp-51ed47a7-3 -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d 192.169.142.153/32 -j DNAT --to-destination 50.0.0.13
-A neutron-l3-agent-PREROUTING -d 192.169.142.156/32 -j DNAT --to-destination 50.0.0.15

-A neutron-l3-agent-float-snat -s 50.0.0.13/32 -j SNAT --to-source 192.169.142.153
-A neutron-l3-agent-float-snat -s 50.0.0.15/32 -j SNAT --to-source 192.169.142.156
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat
 


[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec fip-1b202547-e1de-4c35-86a9-3119d6844f88 ip route

default via 192.169.142.1 dev fg-58e0cabf-07
169.254.31.28/31 dev fpr-51ed47a7-3  proto kernel  scope link  src 169.254.31.29
192.169.142.0/24 dev fg-58e0cabf-07  proto kernel  scope link  src 192.169.142.154
192.169.142.153 via 169.254.31.28 dev fpr-51ed47a7-3
192.169.142.156 via 169.254.31.28 dev fpr-51ed47a7-3
 

   [root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-b0a8a232-ab: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 50.0.0.1  netmask 255.255.255.0  broadcast 50.0.0.255
        inet6 fe80::f816:3eff:fe23:586c  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:23:58:6c  txqueuelen 0  (Ethernet)
        RX packets 88594  bytes 6742614 (6.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 173961  bytes 234594118 (223.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

rfp-51ed47a7-3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 169.254.31.28  netmask 255.255.255.254  broadcast 0.0.0.0
        inet6 fe80::282e:4bff:fe52:3bca  prefixlen 64  scopeid 0x20<link>
        ether 2a:2e:4b:52:3b:ca  txqueuelen 1000  (Ethernet)
        RX packets 173514  bytes 234542852 (223.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 87837  bytes 6670792 (6.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-147 ~(keystone_admin)]# ovs-vsctl show
fe2f4449-82fc-45e9-8827-6c6d9c8cc92d
    Bridge br-int
        fail_mode: secure
        Port "qr-b0a8a232-ab"
            tag: 1
            Interface "qr-b0a8a232-ab"

                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo19855b4d-3b"
            tag: 1
            Interface "qvo19855b4d-3b"
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
        Port "qvobd487c99-41"
            tag: 1
            Interface "qvobd487c99-41"
    Bridge br-ex
        Port "fg-58e0cabf-07"
            Interface "fg-58e0cabf-07"

                type: internal
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a00007f"
            Interface "vxlan-0a00007f"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.127"}
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.4.0"

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec fip-1b202547-e1de-4c35-86a9-3119d6844f88 ifconfig
fg-58e0cabf-07: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.169.142.154  netmask 255.255.255.0  broadcast 192.169.142.255
        inet6 fe80::f816:3eff:fe15:efff  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:15:ef:ff  txqueuelen 0  (Ethernet)
        RX packets 173587  bytes 234547834 (223.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 87751  bytes 6665500 (6.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

fpr-51ed47a7-3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 169.254.31.29  netmask 255.255.255.254  broadcast 0.0.0.0
        inet6 fe80::a805:e5ff:fe38:3bb1  prefixlen 64  scopeid 0x20<link>
        ether aa:05:e5:38:3b:b1  txqueuelen 1000  (Ethernet)
        RX packets 87841  bytes 6671008 (6.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 173518  bytes 234543068 (223.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

**********************************************************************************
I've described the North-South network traffic in detail since it is my major concern.
Regarding East-West traffic via distributed routers, see ( [4] ) and ( [1] ).
**********************************************************************************
**************
On Controller
**************

Switching to Dashboard Spice Console on RDO Liberty (RC3) AIO installation on CentOS 7.1

The current post briefly describes conversion to the dashboard Spice console along with
enabling Spice console features such as sound and cut&&paste via slightly updated
patches by Y.Kawada (converted from pdf to raw format). Getting these features to work with any spice-gtk tool ( spicy, virt-manager ) requires ports 5900,...,590(X) to be opened via the ipv4 iptables firewall on the node running openstack-nova-compute.

***********************************
Set up delorean repos
***********************************
yum -y install yum-plugin-priorities
cd /etc/yum.repos.d/
wget http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
wget http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo
yum -y install openstack-packstack
*******************************************************************************
Run packstack AIO install and set up "br-ex" for external network as required
*******************************************************************************
[root@CentOS71503Server ~(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.112"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no


[root@CentOS71503Server ~(keystone_admin)]# cat ifcfg-enp3s0
DEVICE="enp3s0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no
**************
Next:-
**************
[root@CentOS71503Server ~(keystone_admin)]# chkconfig network on
[root@CentOS71503Server ~(keystone_admin)]# systemctl stop NetworkManager
[root@CentOS71503Server ~(keystone_admin)]# systemctl disable NetworkManager
[root@CentOS71503Server ~(keystone_admin)]# service network restart
( or reboot )

***********************************************************
Install nova-spicehtml5proxy package
***********************************************************

[root@CentOS71503Server ~(keystone_admin)]# yum -y install epel-release
[root@CentOS71503Server ~(keystone_admin)]# yum  -y install spice-html5
[root@CentOS71503Server ~(keystone_admin)]# yum -y install  openstack-nova-spicehtml5proxy

************************************
Update /etc/nova/nova.conf :-
************************************

[DEFAULT]

. . . . .
web=/usr/share/spice-html5
. . . . . .
spicehtml5proxy_host=0.0.0.0 
spicehtml5proxy_port=6082    
. . . . . . .
# Disable VNC
vnc_enabled=false
. . . . . . .
[spice]


html5proxy_base_url=http://192.168.1.112:6082/spice_auto.html
server_proxyclient_address=127.0.0.1
server_listen=0.0.0.0
enabled=true
agent_enabled=true
keymap=en-us

:wq


# service httpd restart
# service openstack-nova-compute restart
# service openstack-nova-spicehtml5proxy start
# systemctl enable openstack-nova-spicehtml5proxy
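A quick check that the proxy is listening on the configured port (netstat from net-tools is assumed installed):

# netstat -lntp | grep 6082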

*********************************************************************
Update /etc/sysconfig/iptables on Compute :-
*********************************************************************

-A INPUT -p tcp -m state --state NEW -m tcp --dport 6082 -j ACCEPT

[root@ip-192-169-142-137 sysconfig]# service iptables restart

I've got sound working on a CentOS 7.1 cloud VM with "GNOME Desktop" installed, and on an F22 cloud VM with "MATE Desktop" && "workstation-product-environment"
installed, followed by `systemctl set-default graphical.target`,
with a slightly updated patch by Y.Kawada: self.type set to "ich6" for Linux guests.
The raw text of Y.Kawada's patches (written for the IceHouse OpenStack release) may be obtained here.
Patch virt/libvirt/config.py and virt/libvirt/driver.py and restart nova-compute:

# python -m py_compile config.py
# python -m py_compile driver.py
# systemctl restart openstack-nova-compute

Ports 5900,..,5910 should be open on the RDO Liberty AIO host via the ipv4 iptables
firewall for Virt-Manager connections, for example as below.
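A hedged example of the matching rule in /etc/sysconfig/iptables and of a client-side connection via spicy (the host IP is the one set in ifcfg-br-ex above; port 5901 is taken from the `virsh dumpxml` output below):

-A INPUT -p tcp -m state --state NEW -m tcp --dport 5900:5910 -j ACCEPT

# service iptables restart
$ spicy -h 192.168.1.112 -p 5901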

   [root@CentOS71503Server ~(keystone_admin)]# virsh  list
 Id    Name                           State
----------------------------------------------------
 4     instance-00000003              running
 6     instance-00000002              running
 7     instance-00000004              running

[root@CentOS71503Server ~(keystone_admin)]# virsh  dumpxml instance-00000003
<domain type='kvm' id='4'>
  <name>instance-00000003</name>
  <uuid>fcbef681-bae7-4503-b56a-24f2d67d92f0</uuid>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
     <nova:package version="12.0.0-rc3.dev1.el7.centos"/>
      <nova:name>CentOS71Devs</nova:name>
      <nova:creationTime>2015-10-17 15:32:51</nova:creationTime>
      <nova:flavor name="m1.medium">
        <nova:memory>4096</nova:memory>
        <nova:disk>40</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>2</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="8ba153fc5fe947e499edf0a802395f5e">demo</nova:user>
        <nova:project uuid="248d19ccd0b94cf9b8fcc25f4f083c08">demo</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="65f257cb-0899-4bf3-a888-505d0bd4cc13"/>
    </nova:instance>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <shares>2048</shares>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
    <sysinfo type='smbios'>
      <system>
        <entry name='manufacturer'>Fedora Project</entry>
        <entry name='product'>OpenStack Nova</entry>
        <entry name='version'>12.0.0-rc3.dev1.el7.centos</entry>
        <entry name='serial'>935292d3-1d87-4641-9bd7-ba80176e0135</entry>
        <entry name='uuid'>fcbef681-bae7-4503-b56a-24f2d67d92f0</entry>
        <entry name='family'>Virtual Machine</entry>
      </system>
    </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-model'>
    <model fallback='allow'/>
    <topology sockets='2' cores='1' threads='1'/>
  </cpu>
  <clock offset='utc'>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/nova/instances/fcbef681-bae7-4503-b56a-24f2d67d92f0/disk'/>
      <backingStore type='file' index='1'>
        <format type='raw'/>
        <source file='/var/lib/nova/instances/_base/703e32fcfd0130d2798525edf8aec7c4cf282c8c'/>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='fa:16:3e:03:f9:a1'/>
      <source bridge='qbr4556687f-f8'/>
      <target dev='tap4556687f-f8'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='file'>
      <source path='/var/lib/nova/instances/fcbef681-bae7-4503-b56a-24f2d67d92f0/console.log'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target port='1'/>
      <alias name='serial1'/>
    </serial>
    <console type='file'>
      <source path='/var/lib/nova/instances/fcbef681-bae7-4503-b56a-24f2d67d92f0/console.log'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
   <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>

    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' port='5901' autoport='yes' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <sound model='ich6'>
      <alias name='sound0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>

    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
      <stats period='10'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c320,c750</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c320,c750</imagelabel>
  </seclabel>
</domain>
 
******************************************************************************
Testing the Windows 2012 evaluation server was hit by the bug
"glance image-list fails with 'Expected endpoint'"
******************************************************************************

The patch https://review.openstack.org/#/c/232462/ was applied, followed by `openstack-service restart glance`, allowing the following command to complete:-

# . keystonerc_admin
# gunzip -cd windows_server_2012_r2_standard_eval_kvm_20140607.qcow2.gz | \
glance image-create --property hypervisor_type=kvm  \
--name "Windows Server 2012 R2 Std Eval" \
--container-format bare --disk-format qcow2
The same self.type="ich6" works for the Windows 2012 evaluation server as well.

 

RDO Liberty Set up for three Nodes (Controller+Network+Compute) ML2&OVS&VXLAN on CentOS 7.1

As advertised officially:
 In addition to the comprehensive OpenStack services, libraries and clients, this release also provides Packstack, a simple installer for proof-of-concept installations, as small as a single all-in-one box and RDO Manager an OpenStack deployment and management tool for production environments based on the OpenStack TripleO project


In the post below I intend to demonstrate that packstack on Liberty is not as limited as stated above. It still handles multi-node deployments, which might require some post-installation actions to be performed (such as VRRP or DVR post-configuration). The real issue for packstack is an HA Controller setup. Here RDO Manager is supposed to gain a significant advantage, replacing a lot of manual configuration with a comprehensive CLI.




Following below is a brief instruction for a three-node deployment test (Controller && Network && Compute) of RDO Liberty, performed on a Fedora 22 host with KVM/Libvirt hypervisor (16 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P). Three VMs (4 GB RAM, 4 VCPUS) have been set up: the Controller VM with one VNIC (management subnet), the Network Node VM with three VNICs (management, vteps and external subnets), and the Compute Node VM with two VNICs (management and vteps subnets).

SELINUX stays in enforcing mode.

I avoid using the default libvirt subnet 192.168.122.0/24 for any purpose related
to the VMs serving as RDO Liberty nodes; for some reason it causes network congestion when forwarding packets to the Internet and vice versa.

Three libvirt networks were created:

# cat openstackvms.xml
<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>


[root@vfedora22wks ~]# cat public.xml
<network>
   <name>public</name>
   <uuid>d1e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='172.24.4.225' netmask='255.255.255.240'>
     <dhcp>
       <range start='172.24.4.226' end='172.24.4.238' />
     </dhcp>
  </ip>
 </network>


[root@vfedora22wks ~]# cat vteps.xml
<network>
   <name>vteps</name>
   <uuid>d2e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr3' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.0.0.1' end='10.0.0.254' />
     </dhcp>
   </ip>
 </network>
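Define and start all three networks (same procedure as shown in the earlier posts):

# virsh net-define openstackvms.xml && virsh net-start openstackvms && virsh net-autostart openstackvms
# virsh net-define public.xml && virsh net-start public && virsh net-autostart public
# virsh net-define vteps.xml && virsh net-start vteps && virsh net-autostart vteps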

# virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 openstackvms         active     yes           yes
 public               active     yes           yes
 vteps                active     yes           yes
*********************************************************************************
1. The first libvirt subnet "openstackvms" serves as the management network.
All 3 VMs are attached to this subnet.
**********************************************************************************
2. The second libvirt subnet "public" serves to simulate the external network. The Network Node is attached to "public"; later on, its "eth2" interface (belonging to "public") is supposed to be converted into an OVS port of br-ex on the Network Node. This libvirt subnet, via bridge virbr2 (172.24.4.225), provides the VMs running on the Compute Node with access to the Internet, since it matches the external network 172.24.4.224/28 created by the packstack installation.

***********************************************************************************
3. The third libvirt subnet "vteps" serves for VTEP endpoint simulation. The Network and Compute Node VMs are attached to this subnet.
***********************************************************************************

*********************
Answer-file :-
*********************
[root@ip-192-169-142-127 ~(keystone_admin)]# cat answer-fileRHTest.txt
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
# In case of two Compute nodes
# CONFIG_COMPUTE_HOSTS=192.169.142.137,192.169.142.157
CONFIG_NETWORK_HOSTS=192.169.142.147
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
# This is VXLAN tunnel endpoint interface
# It should be assigned IP from vteps network
# before running packstack
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
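Note: CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1 above assumes eth1 on the Network and Compute Nodes already carries an address from the "vteps" subnet before packstack runs. A minimal ifcfg-eth1 sketch (10.0.0.147 is the Network Node VTEP address seen later in the ovs-vsctl output; the Compute Node would use 10.0.0.137):

[root@ip-192-169-142-147 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.0.0.147
NETMASK=255.255.255.0
ONBOOT=yes
NM_CONTROLLED=no
IPV6INIT=no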
**************************************
At this point run on Controller:-
**************************************
Keep SELINUX=enforcing (RDO Liberty is supposed to handle this)
# yum install centos-release-openstack-liberty
# yum install openstack-packstack
# packstack --answer-file=./answer-fileRHTest.txt
**********************************************************************************
Upon packstack completion, create the following files on the Network Node,
designed to match the external network created by the installer
**********************************************************************************
[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.232"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*************************************************
Next steps to be performed on the Network Node :-
*************************************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

  The OVS port should be eth2 (the third Ethernet interface on the Network Node).
  In a real deployment the Libvirt bridge virbr2 would be your router to the external
  network. The OVS bridge br-ex should have an IP belonging to the external network.

*******************
On Controller :-
*******************


  


[root@ip-192-169-142-127 ~(keystone_admin)]# netstat -lntp |  grep 35357
tcp6       0      0 :::35357                :::*                    LISTEN      7047/httpd
       
[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 7047
root      7047     1  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
keystone  7089  7047  0 11:22 ?        00:00:07 keystone-admin  -DFOREGROUND
keystone  7090  7047  0 11:22 ?        00:00:02 keystone-main   -DFOREGROUND
apache    7092  7047  0 11:22 ?        00:00:04 /usr/sbin/httpd -DFOREGROUND
apache    7093  7047  0 11:22 ?        00:00:04 /usr/sbin/httpd -DFOREGROUND
apache    7094  7047  0 11:22 ?        00:00:03 /usr/sbin/httpd -DFOREGROUND
apache    7095  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7096  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7097  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7098  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7099  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7100  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7101  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7102  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
root     28963 17739  0 12:51 pts/1    00:00:00 grep --color=auto 7047
********************
On Network Node
********************

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                                   | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| 217fb0f5-8dd1-4361-aae7-cc9a7d18d6e4 | Open vSwitch agent | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| 5dabfc17-db64-470c-9f01-8d2297d155f3 | L3 agent           | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| 5e3c6e2f-3f6d-4ede-b058-bc1b317d4ee1 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| f0f02931-e7e6-4b01-8b87-46224cb71e6d | DHCP agent         | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-dhcp-agent        |
| f16a5d9d-55e6-47c3-b509-ca445d05d34d | Metadata agent     | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+

[root@ip-192-169-142-147 ~(keystone_admin)]# ovs-vsctl show
9221d1c1-008a-464a-ac26-1e0340407714
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth2"
            Interface "eth2"
        Port "qg-1deeaf96-e8"
            Interface "qg-1deeaf96-e8"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        fail_mode: secure
        Port "qr-1909e3bb-fd"
            tag: 2
            Interface "qr-1909e3bb-fd"
                type: internal
        Port "tapfdf24cad-f8"
            tag: 2
            Interface "tapfdf24cad-f8"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    ovs_version: "2.4.0"

[root@ip-192-169-142-147 ~(keystone_admin)]# dmesg | grep promisc
[    2.233302] device ovs-system entered promiscuous mode
[    2.273206] device br-int entered promiscuous mode
[    2.274981] device qr-838ad1f3-7d entered promiscuous mode
[    2.276333] device tap0f21eab4-db entered promiscuous mode
[    2.312740] device br-tun entered promiscuous mode
[    2.314509] device qg-2b712b60-d0 entered promiscuous mode
[    2.315921] device br-ex entered promiscuous mode
[    2.316022] device eth2 entered promiscuous mode
[   10.704329] device qr-838ad1f3-7d left promiscuous mode
[   10.729045] device tap0f21eab4-db left promiscuous mode
[   10.761844] device qg-2b712b60-d0 left promiscuous mode
[  224.746399] device eth2 left promiscuous mode
[  232.173791] device eth2 entered promiscuous mode
[  232.978909] device tap0f21eab4-db entered promiscuous mode
[  233.690854] device qr-838ad1f3-7d entered promiscuous mode
[  233.895213] device qg-2b712b60-d0 entered promiscuous mode
[ 1253.611501] device qr-838ad1f3-7d left promiscuous mode
[ 1254.017129] device qg-2b712b60-d0 left promiscuous mode
[ 1404.697825] device tapfdf24cad-f8 entered promiscuous mode
[ 1421.812107] device qr-1909e3bb-fd entered promiscuous mode
[ 1422.045593] device qg-1deeaf96-e8 entered promiscuous mode
[ 6111.042488] device tap0f21eab4-db left promiscuous mode



[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-dd26c4ed-f757-416d-a772-64b503ffc497 ip route
default via 172.24.4.225 dev qg-1deeaf96-e8
50.0.0.0/24 dev qr-1909e3bb-fd  proto kernel  scope link  src 50.0.0.1
172.24.4.224/28 dev qg-1deeaf96-e8  proto kernel  scope link  src 172.24.4.227 


[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-dd26c4ed-f757-416d-a772-64b503ffc497 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-1deeaf96-e8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.24.4.227  netmask 255.255.255.240  broadcast 172.24.4.239
        inet6 fe80::f816:3eff:fe93:12de  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:93:12:de  txqueuelen 0  (Ethernet)
        RX packets 864432  bytes 1185656986 (1.1 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 382639  bytes 29347929 (27.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-1909e3bb-fd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 50.0.0.1  netmask 255.255.255.0  broadcast 50.0.0.255
        inet6 fe80::f816:3eff:feae:d1e0  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:ae:d1:e0  txqueuelen 0  (Ethernet)
        RX packets 382969  bytes 29386380 (28.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 864601  bytes 1185686714 (1.1 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qdhcp-153edd99-9152-49ad-a445-7280aa9df187 ip route
default via 50.0.0.1 dev tapfdf24cad-f8
50.0.0.0/24 dev tapfdf24cad-f8  proto kernel  scope link  src 50.0.0.10 


[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qdhcp-153edd99-9152-49ad-a445-7280aa9df187 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tapfdf24cad-f8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 50.0.0.10  netmask 255.255.255.0  broadcast 50.0.0.255
        inet6 fe80::f816:3eff:fe98:c66  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:98:0c:66  txqueuelen 0  (Ethernet)
        RX packets 63  bytes 6445 (6.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14  bytes 2508 (2.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-dd26c4ed-f757-416d-a772-64b503ffc497 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
16: qr-1909e3bb-fd: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:ae:d1:e0 brd ff:ff:ff:ff:ff:ff
    inet 50.0.0.1/24 brd 50.0.0.255 scope global qr-1909e3bb-fd
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:feae:d1e0/64 scope link
       valid_lft forever preferred_lft forever
17: qg-1deeaf96-e8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:93:12:de brd ff:ff:ff:ff:ff:ff
    inet 172.24.4.227/28 brd 172.24.4.239 scope global qg-1deeaf96-e8
       valid_lft forever preferred_lft forever
    inet 172.24.4.229/32 brd 172.24.4.229 scope global qg-1deeaf96-e8
       valid_lft forever preferred_lft forever
    inet 172.24.4.230/32 brd 172.24.4.230 scope global qg-1deeaf96-e8
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe93:12de/64 scope link
       valid_lft forever preferred_lft forever

RDO Liberty VRRP (quick test) on CentOS 7.1

 The sample below demonstrates uninterrupted access, provided via an HA Neutron router, to a cloud VM running on the second Compute node while the Controller and the L3-router-enabled first Compute node swap the MASTER and BACKUP roles (as members of a keepalived pair).
Convert the DVR configuration built in "RDO Liberty DVR Neutron workflow on CentOS 7.1" in the same way as it was done in ([1]).

Setup configuration
- Controller node: Nova, Keystone, Cinder, Glance, 

   Neutron (using Open vSwitch plugin && VXLAN )
- (2x) Compute node: Nova (nova-compute),
         Neutron (openvswitch-agent,l3-agent,metadata-agent )




*****************************************************
On Controller and first Compute Node
*****************************************************
# yum install keepalived
*************************************************************************
Stop and disable neutron-l3-agent on Second Compute Node
Update /etc/neutron/neutron.conf as follows
**************************************************************************
[DEFAULT]
router_distributed = False
l3_ha = True
max_l3_agents_per_router = 2

*****************************************************************
Switch agent_mode to legacy on all nodes
Update /etc/neutron/plugins/ml2/openvswitch_agent.ini
*****************************************************************
[agent]
enable_distributed_routing = False

Restart all nodes.
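A full reboot is the simplest way to apply both changes; a lighter alternative (assuming the openstack-utils package is installed) is to restart just the Neutron services on every node:

# openstack-service restart neutron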

************************************************************
Create HA router belonging to tenant demo
*************************************************************
[root@ip-192-169-142-127 ~(keystone_admin)]# python
Python 2.7.5 (default, Jun 24 2015, 00:41:19)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from keystoneclient.v2_0 import client
>>> token = '3ad2de159f9649afb0c342ba57e637d9'
>>> endpoint = 'http://192.169.142.127:35357/v2.0'
>>> keystone = client.Client(token=token, endpoint=endpoint)
>>> keystone.tenants.list()
[<Tenant {u'enabled': True, u'description': u'default tenant', u'name': u'demo', u'id': u'0d166b0ff5fb40a2bf6453e81b27962e'}>, <Tenant {u'enabled': True, u'description': u'admin tenant', u'name': u'admin', u'id': u'21e6a247384f4208a70983d852562cc7'}>, <Tenant {u'enabled': True, u'description': u'Tenant for the openstack services', u'name': u'services', u'id': u'ea97cf808f664f7f8d8810ab164de9ec'}>]
>>>

# neutron router-create --ha True --tenant_id 0d166b0ff5fb40a2bf6453e81b27962e RouterHA

Attach the public && private networks to RouterHA (see the attach sketch after the net-list output below)


# neutron net-list

+--------------------------------------+----------------------------------------------------+-------------------------------------------------------+
| id                                   | name                                               | subnets                                               |
+--------------------------------------+----------------------------------------------------+-------------------------------------------------------+
| 1b202547-e1de-4c35-86a9-3119d6844f88 | public                                             | e6473e85-5a4c-4eea-a42b-3a63def678c5 192.169.142.0/24 |
| 596eb520-da47-41a7-bfc1-8ace58d7ee98 | HA network tenant 0d166b0ff5fb40a2bf6453e81b27962e | c7d12fde-47f4-4744-bc88-78a4a7e91755 169.254.192.0/18 |
| 267c9192-29e2-41e2-8db4-826a6155dec9 | demo_network                                       | 89704ab3-5535-4c87-800e-39255a0a11d9 50.0.0.0/24      |
+--------------------------------------+----------------------------------------------------+-------------------------------------------------------+
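With the IDs above, attaching both networks comes down to the standard neutron CLI (a sketch; the last argument of router-interface-add is the demo_network subnet ID from the table):

# neutron router-gateway-set RouterHA public
# neutron router-interface-add RouterHA 89704ab3-5535-4c87-800e-39255a0a11d9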

# neutron router-port-list RouterHA

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron router-port-list RouterHA
+--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+
| id                                   | name                                            | mac_address       | fixed_ips                                                                              |
+--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+
| 0a823561-8ce6-4c7d-8943-525e74f61210 |                                                 | fa:16:3e:14:ca:12 | {"subnet_id": "e6473e85-5a4c-4eea-a42b-3a63def678c5", "ip_address": "192.169.142.159"} |
| 1981fd35-3025-45ff-a6e5-ab5bc7d8af3e | HA port tenant 0d166b0ff5fb40a2bf6453e81b27962e | fa:16:3e:b8:d6:14 | {"subnet_id": "c7d12fde-47f4-4744-bc88-78a4a7e91755", "ip_address": "169.254.192.2"}   |
| 4b4ac14c-a3a9-4fc0-9c3a-36d0ae1f4b11 | HA port tenant 0d166b0ff5fb40a2bf6453e81b27962e | fa:16:3e:c5:b2:4b | {"subnet_id": "c7d12fde-47f4-4744-bc88-78a4a7e91755", "ip_address": "169.254.192.1"}   |
| 6d989cb9-dfc8-4e08-8629-3c1186268511 |                                                 | fa:16:3e:cf:e2:a0 | {"subnet_id": "89704ab3-5535-4c87-800e-39255a0a11d9", "ip_address": "50.0.0.1"}        |
+--------------------------------------+-------------------------------------------------+-------------------+----------------------------------------------------------------------------------------+

***************************************************************************
Start up configuration. Compute Node 1 is in MASTER STATE
***************************************************************************
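Which member of the keepalived pair currently holds the MASTER role can be checked from the Controller at any time (see the ha_state column):

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron l3-agent-list-hosting-router RouterHA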

[boris@fedora21wks01 Downloads]$ ping 192.169.142.153
PING 192.169.142.153 (192.169.142.153) 56(84) bytes of data.
64 bytes from 192.169.142.153: icmp_seq=2 ttl=63 time=0.608 ms
64 bytes from 192.169.142.153: icmp_seq=3 ttl=63 time=0.402 ms
64 bytes from 192.169.142.153: icmp_seq=4 ttl=63 time=0.452 ms



  *************************************************************************
  Compute Node 1 shut down. Controller went to MASTER STATE
  *************************************************************************
  Pinging running VM (FIP is 192.169.142.153 ) 

  [boris@fedora21wks01 Downloads]$ ping 192.169.142.153
  PING 192.169.142.153 (192.169.142.153) 56(84) bytes of data.
  64 bytes from 192.169.142.153: icmp_seq=10 ttl=63 time=0.568 ms
  64 bytes from 192.169.142.153: icmp_seq=12 ttl=63 time=0.724 ms
  64 bytes from 192.169.142.153: icmp_seq=13 ttl=63 time=0.448 ms


   [root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-3d4a0d41-5838-49bd-b691-ecc9946d6e19 ip a | grep "inet "
    inet 127.0.0.1/8 scope host lo
    inet 169.254.192.2/18 brd 169.254.255.255 scope global ha-1981fd35-30
    inet 169.254.0.1/24 scope global ha-1981fd35-30
    inet 50.0.0.1/24 scope global qr-6d989cb9-df
    inet 192.169.142.153/32 scope global qg-0a823561-8c
    inet 192.169.142.159/24 scope global qg-0a823561-8c

   *******************************************
   Compute Node 1 brought up again
   *******************************************
  
   ********************************************
   Controller has been rebooted
   ********************************************


    *******************************************************************
    Now Compute Node 1 becomes MASTER again
    *******************************************************************
   [root@ip-192-169-142-147 ~]# systemctl restart  neutron-l3-agent

  [boris@fedora21wks01 Downloads]$ ping 192.169.142.153
  PING 192.169.142.153 (192.169.142.153) 56(84) bytes of data.
  64 bytes from 192.169.142.153: icmp_seq=22 ttl=63 time=0.640 ms
  64 bytes from 192.169.142.153: icmp_seq=23 ttl=63 time=0.553 ms
  64 bytes from 192.169.142.153: icmp_seq=24 ttl=63 time=0.516 ms
 

On Controller :-

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-3d4a0d41-5838-49bd-b691-ecc9946d6e19 ip a |grep "inet "
    inet 127.0.0.1/8 scope host lo
    inet 169.254.192.2/18 brd 169.254.255.255 scope global ha-1981fd35-30
[root@ip-192-169-142-127 ~(keystone_admin)]# ssh 192.169.142.147
Last login: Sun Oct 25 12:50:43 2015

On Compute :-

[root@ip-192-169-142-147 ~]# ip netns exec qrouter-3d4a0d41-5838-49bd-b691-ecc9946d6e19 ip a |grep "inet "
    inet 127.0.0.1/8 scope host lo
    inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-4b4ac14c-a3
    inet 169.254.0.1/24 scope global ha-4b4ac14c-a3
    inet 50.0.0.1/24 scope global qr-6d989cb9-df
    inet 192.169.142.153/32 scope global qg-0a823561-8c
    inet 192.169.142.159/24 scope global qg-0a823561-8c

*********************************
Keepalived status :-
*********************************



VRRP four nodes setup on RDO Liberty (CentOS 7.1)

UPDATE 10/28/2015

    I clearly understand that, following such advanced RH technology as RDO Manager, I am supposed to set up 1 VM for the undercloud, 3 VMs for the HA overcloud Controller, and 2 VMs for overcloud Compute Nodes; so 6 VMs should be installed on a desktop box. One person wrote (on the RDO mailing list) that the only problem is 32 GB of RAM and that he is ready to go with an i7 4770 Haswell CPU. Doing the sample below (just 4 VMs) on a 32 GB desktop with an i7 4790, I was experiencing performance issues not due to memory swap, but due to the 4-core limitation of any i7 Haswell CPU. I am forced to run packstack for POC tasks due to the insufficient power of desktop Haswell CPUs. Actually, a CPU like the Intel® Xeon® Processor E5-2690 (8 cores, 16 threads) would allow testing virtual configs of RDO Manager.

END UPDATE

   The sample below demonstrates uninterrupted access, provided via an HA Neutron router, to cloud VMs running on the Compute node while the two installed Network Nodes swap the MASTER and BACKUP roles (as members of a keepalived pair).

Since LXer's moderators commented on the earlier "uninterrupted access" claim, I have to note that some downtime obviously always occurs (up to several minutes) while the BACKUP box comes to the MASTER state.

    Following below is a brief instruction for a 4-node deployment test (Controller & 2xNetwork & Compute) on RDO Liberty (CentOS 7.1), which was performed on a Fedora 21 host with KVM/Libvirt Hypervisor (32 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P). Four VMs (4 GB RAM, 4 VCPUS) have been set up:
  Controller VM - one VNIC (management subnet); 2xNetwork Node VMs - three VNICs each (management, vteps, external subnets); Compute Node VM - two VNICs (management, vteps subnets).

Setup :-

192.169.142.127 - Controller Node
192.169.142.147,192.169.142.157 - Network Nodes
192.169.142.137 - Compute Node


*******************************************
Three Libvirt networks created
*******************************************

# cat openstackvms.xml

<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>


# cat external.xml

<network>
   <name>external</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='172.24.4.225' netmask='255.255.255.240'>
     <dhcp>
       <range start='172.24.4.226' end='172.24.4.238' />
     </dhcp>
   </ip>
 </network>

# cat vteps.xml

<network>
   <name>vteps</name>
   <uuid>d2e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr3' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.0.0.1' end='10.0.0.254' />
     </dhcp>
   </ip>
 </network>


# virsh net-list

 Name                 State      Autostart     Persistent
--------------------------------------------------------------------------
 default              active     yes           yes
 openstackvms         active     yes           yes
 external             active     yes           yes
 vteps                active     yes           yes


*********************************************************************************************************
1. First Libvirt subnet "openstackvms" serves as the management network.
All 4 VMs are attached to this subnet.
*********************************************************************************************************
2. Second Libvirt subnet "external" simulates the external network.
The Network Nodes are attached to "external"; later on, the "eth2" interfaces (which belong to "external") are converted into OVS ports of br-ex on the Network Nodes. Via bridge virbr2 (172.24.4.225) this Libvirt subnet provides the VMs running on the Compute Node with access to the Internet, because it matches the external network 172.24.4.224/28 created by the packstack installation.
*********************************************************************************************************
3. Third Libvirt subnet "vteps" serves for VTEP endpoint simulation. The Network and Compute Nodes are attached to this subnet.
*********************************************************************************************************

***************************************
Answer file (answer4Node.txt)
***************************************
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_MANILA_INSTALL=n
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_SAHARA_INSTALL=n
CONFIG_TROVE_INSTALL=n
CONFIG_IRONIC_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.147,192.169.142.157

CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_USE_SUBNETS=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_SAHARA_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_ENABLE_RDO_TESTING=n
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_SSL_CACERT_FILE=/etc/pki/tls/certs/selfcert.crt
CONFIG_SSL_CACERT_KEY_FILE=/etc/pki/tls/private/selfkey.key
CONFIG_SSL_CERT_DIR=~/packstackca/
CONFIG_SSL_CACERT_SELFSIGN=y
CONFIG_SELFSIGN_CACERT_SUBJECT_C=--
CONFIG_SELFSIGN_CACERT_SUBJECT_ST=State
CONFIG_SELFSIGN_CACERT_SUBJECT_L=City
CONFIG_SELFSIGN_CACERT_SUBJECT_O=openstack
CONFIG_SELFSIGN_CACERT_SUBJECT_OU=packstack
CONFIG_SELFSIGN_CACERT_SUBJECT_CN=ip-192-169-142-127.ip.secureserver.net
CONFIG_SELFSIGN_CACERT_SUBJECT_MAIL=admin@ip-192-169-142-127.ip.secureserver.net
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_DB_PURGE_ENABLE=True
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_EMAIL=root@localhost
CONFIG_KEYSTONE_ADMIN_USERNAME=admin
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_API_VERSION=v2.0
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_KEYSTONE_IDENTITY_BACKEND=sql
CONFIG_KEYSTONE_LDAP_URL=ldap://192.169.142.127
CONFIG_KEYSTONE_LDAP_USER_DN=
CONFIG_KEYSTONE_LDAP_USER_PASSWORD=
CONFIG_KEYSTONE_LDAP_SUFFIX=
CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one
CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1
CONFIG_KEYSTONE_LDAP_USER_SUBTREE=
CONFIG_KEYSTONE_LDAP_USER_FILTER=
CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1
CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE
CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n
CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN=
CONFIG_KEYSTONE_LDAP_USER_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE=
CONFIG_KEYSTONE_LDAP_GROUP_FILTER=
CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_GROUP_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_USE_TLS=n
CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR=
CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE=
CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_DB_PURGE_ENABLE=True
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES=
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_MANILA_DB_PW=PW_PLACEHOLDER
CONFIG_MANILA_KS_PW=PW_PLACEHOLDER
CONFIG_MANILA_BACKEND=generic
CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false
CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https
CONFIG_MANILA_NETAPP_LOGIN=admin
CONFIG_MANILA_NETAPP_PASSWORD=
CONFIG_MANILA_NETAPP_SERVER_HOSTNAME=
CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_MANILA_NETAPP_SERVER_PORT=443
CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*)
CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE=
CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root
CONFIG_MANILA_NETAPP_VSERVER=
CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true
CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s
CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares
CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2
CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu
CONFIG_MANILA_SERVICE_INSTANCE_PASSWORD=ubuntu
CONFIG_MANILA_NETWORK_TYPE=neutron
CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY=
CONFIG_MANILA_NETWORK_STANDALONE_NETMASK=
CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID=
CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE=
CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4
CONFIG_MANILA_GLUSTERFS_SERVERS=
CONFIG_MANILA_GLUSTERFS_NATIVE_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_VOLUME_PATTERN=
CONFIG_MANILA_GLUSTERFS_TARGET=
CONFIG_MANILA_GLUSTERFS_MOUNT_POINT_BASE=
CONFIG_MANILA_GLUSTERFS_NFS_SERVER_TYPE=gluster
CONFIG_MANILA_GLUSTERFS_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_GANESHA_SERVER_IP=
CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER
CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER
CONFIG_NOVA_DB_PURGE_ENABLE=True
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_MANAGER=nova.compute.manager.ComputeManager
CONFIG_VNC_SSL_CERT=
CONFIG_VNC_SSL_KEY=
CONFIG_NOVA_COMPUTE_PRIVIF=
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_VPNAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_HORIZON_SECRET_KEY=33cade531a764c858e4e6c22488f379f
CONFIG_HORIZON_SSL_CERT=
CONFIG_HORIZON_SSL_KEY=
CONFIG_HORIZON_SSL_CACERT=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=09e304c52d714220
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_IMAGE_NAME=cirros
CONFIG_PROVISION_IMAGE_URL=http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
CONFIG_PROVISION_IMAGE_FORMAT=qcow2
CONFIG_PROVISION_IMAGE_SSH_USER=cirros
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=PW_PLACEHOLDER
CONFIG_PROVISION_TEMPEST_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_OVS_BRIDGE=n
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_CEILOMETER_COORDINATION_BACKEND=redis
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_REDIS_MASTER_HOST=192.169.142.127
CONFIG_REDIS_PORT=6379
CONFIG_REDIS_HA=n
CONFIG_REDIS_SLAVE_HOSTS=
CONFIG_REDIS_SENTINEL_HOSTS=
CONFIG_REDIS_SENTINEL_CONTACT_HOST=
CONFIG_REDIS_SENTINEL_PORT=26379
CONFIG_REDIS_SENTINEL_QUORUM=2
CONFIG_REDIS_MASTER_NAME=mymaster
CONFIG_SAHARA_DB_PW=PW_PLACEHOLDER
CONFIG_SAHARA_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_DB_PW=PW_PLACEHOLDER
CONFIG_TROVE_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_NOVA_USER=trove
CONFIG_TROVE_NOVA_TENANT=services
CONFIG_TROVE_NOVA_PW=PW_PLACEHOLDER
CONFIG_NAGIOS_PW=02f168ee8edd44e4
**************************************
At this point run on Controller:-
**************************************
 # yum -y  install centos-release-openstack-liberty
 # yum -y  install openstack-packstack
 # packstack --answer-file=./answer4Node.txt

***********************************************************
Upon completion on Network node 192.169.142.147
***********************************************************
[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.229"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

***********************************************************
Upon completion on Network node 192.169.142.157
***********************************************************
[root@ip-192-169-142-157 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.230"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-157 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*************************************************
Next steps to be performed on both Network Nodes :-
*************************************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

*****************************************************
On each Network Node
*****************************************************
# systemctl start keepalived
# systemctl enable keepalived
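Whether a given Network Node is currently master or backup for the router can also be read from the state file maintained by the keepalived instance Neutron spawns (the path is an assumption based on the default state_path):

# cat /var/lib/neutron/ha_confs/<router-id>/state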
****************************************************************************
On Controller and both Network Nodes
Update /etc/neutron/neutron.conf as follows
****************************************************************************
[DEFAULT]
router_distributed = False
l3_ha = True
max_l3_agents_per_router = 2
dhcp_agents_per_network = 2
*****************************************************************************

Restart all nodes.


******************************************************************
Creating HA Neutron Router belonging to tenant demo
******************************************************************
[root@ip-192-169-142-127 ~(keystone_admin)]# python
Python 2.7.5 (default, Jun 24 2015, 00:41:19)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from keystoneclient.v2_0 import client
>>> token = '3ad2de159f9649afb0c342ba57e637d9'
>>> endpoint = 'http://192.169.142.127:35357/v2.0'
>>> keystone = client.Client(token=token, endpoint=endpoint)
>>> keystone.tenants.list()
[<Tenant {u'enabled': True, u'description': u'Tenant for the openstack services', u'name': u'services', u'id': u'20d1f633cb384e07b9019cb01ee9f02c'}>, <Tenant {u'enabled': True, u'description': u'admin tenant', u'name': u'admin', u'id': u'cce9a541723a4c26b70b746bab051f6c'}>, <Tenant {u'enabled': True, u'description': u'default tenant', u'name': u'demo', u'id': u'd9d06a467fb54b6e9612cbb1a245c370'}>]
>>>

# neutron router-create --ha True --tenant_id  d9d06a467fb54b6e9612cbb1a245c370 RouterHA

Attach demo_network and external network to RouterHA


[root@ip-192-169-142-127 ~(keystone_admin)]# neutron l3-agent-list-hosting-router RouterHA
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| id                                   | host                                   | admin_state_up | alive | ha_state |
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| 1e8aec09-e4a4-473a-91c7-9771e0499b1c | ip-192-169-142-157.ip.secureserver.net | True           | :-)   | active   |
| 33b5ec51-33b6-49ee-b5bf-1c66c283b818 | ip-192-169-142-147.ip.secureserver.net | True           | :-)   | standby  |
+--------------------------------------+----------------------------------------+----------------+-------+----------+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list | grep "147"
| 30c38f80-4dee-4144-a2aa-a088629f33fb | Metadata agent     | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |
| 33b5ec51-33b6-49ee-b5bf-1c66c283b818 | L3 agent           | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| 8390e450-c5ff-4697-aff3-7cfd66873055 | Open vSwitch agent | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| d01a0e08-31ab-41d9-bf4b-11888d82bc41 | DHCP agent         | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-dhcp-agent        |

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list | grep "157"
| 1e8aec09-e4a4-473a-91c7-9771e0499b1c | L3 agent           | ip-192-169-142-157.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| 84ce6181-1eaa-445b-8f14-e865c3658bad | DHCP agent         | ip-192-169-142-157.ip.secureserver.net | :-)   | True           | neutron-dhcp-agent        |
| bf54ed7a-e478-4e0f-b38a-612cc89af26c | Open vSwitch agent | ip-192-169-142-157.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| f1a1d7fc-6cc2-44c0-9254-367d9dcbb74c | Metadata agent     | ip-192-169-142-157.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |
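With dhcp_agents_per_network = 2 each tenant network should be served by the DHCP agents on both Network Nodes, which can be verified with:

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron dhcp-agent-list-hosting-net demo_network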

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron router-show RouterHA
+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                                                                                                              |
+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                                                                                                               |
| distributed           | False                                                                                                                                                                              |
| external_gateway_info | {"network_id": "b87a1cdf-8635-424b-b986-347aa1b2d4a7", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "65472e1a-f6ff-4549-b7e8-ab2010b88c69", "ip_address": "172.24.4.227"}]} |
| ha                    | True                                                                                                                                                                               |
| id                    | a4bdf550-76a5-4069-9d03-075b8668f3c5                                                                                                                                               |
| name                  | RouterHA                                                                                                                                                                           |
| routes                |                                                                                                                                                                                    |
| status                | ACTIVE                                                                                                                                                                             |
| tenant_id             | d9d06a467fb54b6e9612cbb1a245c370                                                                                                                                                   |
+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Verify VRRP advertisements from the master node's HA interface IP address on the corresponding network interface:
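For example, on the current MASTER (192.169.142.157) the advertisements are visible inside the router namespace; the HA interface name is taken from the ovs-vsctl output below, and VRRP is IP protocol 112:

[root@ip-192-169-142-157 ~]# ip netns exec qrouter-a4bdf550-76a5-4069-9d03-075b8668f3c5 \
tcpdump -ln -i ha-083e9c72-69 ip proto 112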


   The screenshots in the original post illustrate the failover sequence:

   Verification of the status of neutron services on each one of the Network Nodes

   Running VMs

   Connectivity verification

   Current MASTER is 192.169.142.157

   MASTER 192.169.142.157 stopped; 192.169.142.147 changing state from BACKUP to MASTER

   Connectivity verification to 172.24.4.231

   192.169.142.157 brought up again

   192.169.142.157 goes to MASTER state again due to 192.169.142.147 shutdown

   **************************************
   Network node 192.169.142.147
   **************************************
   [root@ip-192-169-142-147 ~]# ovs-vsctl show
5b798479-567a-4d14-bbb7-d014e001307c
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a00009d"
            Interface "vxlan-0a00009d"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.157"}

        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}

        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        fail_mode: secure
        Port "tap9b85b5b7-4c"
            tag: 2
            Interface "tap9b85b5b7-4c"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qr-299a4e77-af"
            tag: 2
            Interface "qr-299a4e77-af"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "ha-3c63186b-f7"
            tag: 1
            Interface "ha-3c63186b-f7"
                type: internal
    Bridge br-ex
        Port "qg-c88a6f64-88"
            Interface "qg-c88a6f64-88"
                type: internal
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    ovs_version: "2.4.0"
**************************************
Network node 192.169.142.157
**************************************
[root@ip-192-169-142-157 ~]# ovs-vsctl show
15fa30fd-6900-4de7-ac1b-69760ccdfa4f
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth2"
            Interface "eth2"
        Port "qg-c88a6f64-88"
            Interface "qg-c88a6f64-88"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.157", out_key=flow, remote_ip="10.0.0.137"}

        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a000093"
            Interface "vxlan-0a000093"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.157", out_key=flow, remote_ip="10.0.0.147"}

    Bridge br-int
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "ha-083e9c72-69"
            tag: 2
            Interface "ha-083e9c72-69"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qr-299a4e77-af"
            tag: 1
            Interface "qr-299a4e77-af"
                type: internal
    ovs_version: "2.4.0"


Storage Node (LVMiSCSI) deployment for RDO Liberty on CentOS 7.1

  The posting below, via a straightforward RDO Liberty deployment, demonstrates
that a Storage Node might work as a traditional iSCSI Target Server while each
Compute Node is actually an iSCSI initiator client. This functionality is provided
by tuning the Cinder && Glance services running on the Storage Node. It's also
important to understand that the Storage Node should be brought up after the Controller,
so that its services can connect to the AMQP broker (rabbitmq-server) running on the Controller.
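Once a Cinder volume is attached to a running instance, this target/initiator relationship is easy to confirm (a sketch; iscsiadm is provided by iscsi-initiator-utils, and cinder-volumes is the VG packstack creates):

[root@ip-192-169-142-137 ~]# iscsiadm -m session     # Compute Node acts as iSCSI initiator
[root@ip-192-169-142-157 ~]# lvs cinder-volumes      # Storage Node exports LVM-backed volumes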

  Following below is a brief instruction for a 4-node deployment test (Controller & Network & Compute & Storage) on RDO Liberty (CentOS 7.1), which was performed on a Fedora 21 host with KVM/Libvirt Hypervisor (32 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P). Four VMs (4 GB RAM, 4 VCPUS) have been set up: Controller VM - one VNIC (management subnet); Network Node VM - three VNICs (management, vteps, external subnets); Compute Node VM - two VNICs (management, vteps subnets); Storage Node VM - one VNIC (management).
Setup :-

192.169.142.127 - Controller Node
192.169.142.147 - Network Node
192.169.142.137 - Compute Node
192.169.142.157 - Storage Node (LVMiSCSI)

*******************************************
Three Libvirt networks created
*******************************************

# cat openstackvms.xml

<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>


# cat external.xml

<network>
   <name>external</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='172.24.4.225' netmask='255.255.255.240'>
     <dhcp>
       <range start='172.24.4.226' end='172.24.4.238' />
     </dhcp>
   </ip>
 </network>

# cat vteps.xml

<network>
   <name>vteps</name>
   <uuid>d2e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr3' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.0.0.1' end='10.0.0.254' />
     </dhcp>
   </ip>
 </network>


# virsh net-list

 Name                 State      Autostart     Persistent
--------------------------------------------------------------------------
 default              active     yes           yes
 openstackvms         active     yes           yes
 external             active     yes           yes
 vteps                active     yes           yes


*********************************************************************************************************
1. First Libvirt subnet "openstackvms" serves as the management network.
All 4 VMs are attached to this subnet.
*********************************************************************************************************
2. Second Libvirt subnet "external" simulates the external network.
The Network Node is attached to "external"; later on, the "eth2" interface (which belongs to "external") is converted into an OVS port of br-ex on the Network Node. Via bridge virbr2 (172.24.4.225) this Libvirt subnet provides the VMs running on the Compute Node with access to the Internet, because it matches the external network 172.24.4.224/28 created by the packstack installation.
*********************************************************************************************************
3. Third Libvirt subnet "vteps" serves for VTEP endpoint simulation. The Network and Compute Nodes are attached to this subnet.
*********************************************************************************************************


***************************************
Answer file (answer4Node.txt)
***************************************
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_MANILA_INSTALL=n
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_SAHARA_INSTALL=n
CONFIG_TROVE_INSTALL=n
CONFIG_IRONIC_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.147

CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=y
CONFIG_USE_SUBNETS=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.157
CONFIG_SAHARA_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_ENABLE_RDO_TESTING=n
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_SSL_CACERT_FILE=/etc/pki/tls/certs/selfcert.crt
CONFIG_SSL_CACERT_KEY_FILE=/etc/pki/tls/private/selfkey.key
CONFIG_SSL_CERT_DIR=~/packstackca/
CONFIG_SSL_CACERT_SELFSIGN=y
CONFIG_SELFSIGN_CACERT_SUBJECT_C=--
CONFIG_SELFSIGN_CACERT_SUBJECT_ST=State
CONFIG_SELFSIGN_CACERT_SUBJECT_L=City
CONFIG_SELFSIGN_CACERT_SUBJECT_O=openstack
CONFIG_SELFSIGN_CACERT_SUBJECT_OU=packstack
CONFIG_SELFSIGN_CACERT_SUBJECT_CN=ip-192-169-142-127.ip.secureserver.net
CONFIG_SELFSIGN_CACERT_SUBJECT_MAIL=admin@ip-192-169-142-127.ip.secureserver.net
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_DB_PURGE_ENABLE=True
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_EMAIL=root@localhost
CONFIG_KEYSTONE_ADMIN_USERNAME=admin
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_API_VERSION=v2.0
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_KEYSTONE_IDENTITY_BACKEND=sql
CONFIG_KEYSTONE_LDAP_URL=ldap://192.169.142.127
CONFIG_KEYSTONE_LDAP_USER_DN=
CONFIG_KEYSTONE_LDAP_USER_PASSWORD=
CONFIG_KEYSTONE_LDAP_SUFFIX=
CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one
CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1
CONFIG_KEYSTONE_LDAP_USER_SUBTREE=
CONFIG_KEYSTONE_LDAP_USER_FILTER=
CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1
CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE
CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n
CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN=
CONFIG_KEYSTONE_LDAP_USER_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE=
CONFIG_KEYSTONE_LDAP_GROUP_FILTER=
CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_GROUP_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_USE_TLS=n
CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR=
CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE=
CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_DB_PURGE_ENABLE=True
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES=
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_MANILA_DB_PW=PW_PLACEHOLDER
CONFIG_MANILA_KS_PW=PW_PLACEHOLDER
CONFIG_MANILA_BACKEND=generic
CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false
CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https
CONFIG_MANILA_NETAPP_LOGIN=admin
CONFIG_MANILA_NETAPP_PASSWORD=
CONFIG_MANILA_NETAPP_SERVER_HOSTNAME=
CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_MANILA_NETAPP_SERVER_PORT=443
CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*)
CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE=
CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root
CONFIG_MANILA_NETAPP_VSERVER=
CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true
CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s
CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares
CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2
CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu
CONFIG_MANILA_SERVICE_INSTANCE_PASSWORD=ubuntu
CONFIG_MANILA_NETWORK_TYPE=neutron
CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY=
CONFIG_MANILA_NETWORK_STANDALONE_NETMASK=
CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID=
CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE=
CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4
CONFIG_MANILA_GLUSTERFS_SERVERS=
CONFIG_MANILA_GLUSTERFS_NATIVE_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_VOLUME_PATTERN=
CONFIG_MANILA_GLUSTERFS_TARGET=
CONFIG_MANILA_GLUSTERFS_MOUNT_POINT_BASE=
CONFIG_MANILA_GLUSTERFS_NFS_SERVER_TYPE=gluster
CONFIG_MANILA_GLUSTERFS_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_GANESHA_SERVER_IP=
CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER
CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER
CONFIG_NOVA_DB_PURGE_ENABLE=True
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_MANAGER=nova.compute.manager.ComputeManager
CONFIG_VNC_SSL_CERT=
CONFIG_VNC_SSL_KEY=
CONFIG_NOVA_COMPUTE_PRIVIF=
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_VPNAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_HORIZON_SECRET_KEY=33cade531a764c858e4e6c22488f379f
CONFIG_HORIZON_SSL_CERT=
CONFIG_HORIZON_SSL_KEY=
CONFIG_HORIZON_SSL_CACERT=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=09e304c52d714220
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_IMAGE_NAME=cirros
CONFIG_PROVISION_IMAGE_URL=http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
CONFIG_PROVISION_IMAGE_FORMAT=qcow2
CONFIG_PROVISION_IMAGE_SSH_USER=cirros
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=PW_PLACEHOLDER
CONFIG_PROVISION_TEMPEST_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_OVS_BRIDGE=n
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_CEILOMETER_COORDINATION_BACKEND=redis
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_REDIS_MASTER_HOST=192.169.142.127
CONFIG_REDIS_PORT=6379
CONFIG_REDIS_HA=n
CONFIG_REDIS_SLAVE_HOSTS=
CONFIG_REDIS_SENTINEL_HOSTS=
CONFIG_REDIS_SENTINEL_CONTACT_HOST=
CONFIG_REDIS_SENTINEL_PORT=26379
CONFIG_REDIS_SENTINEL_QUORUM=2
CONFIG_REDIS_MASTER_NAME=mymaster
CONFIG_SAHARA_DB_PW=PW_PLACEHOLDER
CONFIG_SAHARA_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_DB_PW=PW_PLACEHOLDER
CONFIG_TROVE_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_NOVA_USER=trove
CONFIG_TROVE_NOVA_TENANT=services
CONFIG_TROVE_NOVA_PW=PW_PLACEHOLDER
CONFIG_NAGIOS_PW=02f168ee8edd44e4
**************************************
At this point run on Controller:-
**************************************
 # yum -y  install centos-release-openstack-liberty
 # yum -y  install openstack-packstack
 # packstack --answer-file=./answer4Node.txt

During the packstack run I was hit by a failure :-
192.169.142.157_glance.pp:                        [ ERROR ]           
Applying Puppet manifests                         [ ERROR ]

ERROR : Error appeared during Puppet run: 192.169.142.157_glance.pp
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-glance' returned 1: No Presto metadata available for centos-openstack-liberty

You will find full trace in log /var/tmp/packstack/20151102-123133-fcqwvl/manifests/192.169.142.157_glance.pp.log

Since openstack-glance was already installed on 192.169.142.157,
I just restarted packstack, which then completed with no problems.

***********************************************************
Upon completion on Network node 192.169.142.147
***********************************************************
[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.230"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

***************************************************************
When OVS br-ex configuration is done, proceed as follows
***************************************************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# reboot
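
After reboot it is worth checking that eth2 actually became an OVS port
of br-ex and that the IP sits on br-ex rather than on eth2 (a sample
check, output will differ per node) :-

# ovs-vsctl list-ports br-ex        # eth2 is expected among the ports
# ip addr show br-ex | grep "inet " # expect 172.24.4.230/28 here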

********************************************
Status of Storage Node upon completion
********************************************

[root@ip-192-169-142-157 ~(keystone_admin)]# openstack-status
== Glance services ==
openstack-glance-api:         active
openstack-glance-registry:    active
== Swift services ==
openstack-swift-proxy:        active
openstack-swift-account:      active
openstack-swift-container:    active
openstack-swift-object:       active
== Cinder services ==
openstack-cinder-api:         active
openstack-cinder-scheduler:   active
openstack-cinder-volume:      active
openstack-cinder-backup:      active
== Support services ==
dbus:                         active
target:                       active
memcached:                    active

*****************************************************************************
Next step (device /dev/vdb, 25 GB, was pre-created during VM setup):
launch gparted (via EPEL 7) and create an unformatted partition /dev/vdb1 (25 GB)
*****************************************************************************
# pvcreate /dev/vdb1
# vgcreate cinder-volumes51  /dev/vdb1
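
A quick sanity check of the freshly created physical volume and volume
group (sketch) :-

# pvs /dev/vdb1
# vgs cinder-volumes51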

***********************************************************
Update /etc/cinder/cinder.conf
***********************************************************
First entry in [DEFAULT] section

################
enabled_backends=lvm51
################

At the bottom of file add

[lvm51]
iscsi_helper=lioadm
volume_group=cinder-volumes51
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
iscsi_ip_address=192.169.142.157
volume_backend_name=LVM_iSCSI51

**********
Then
**********

[root@ip-192-169-142-157 ~(keystone_admin)]# cinder type-create lvms
[root@ip-192-169-142-157 ~(keystone_admin)]# cinder type-key lvms set volume_backend_name=LVM_iSCSI51


Reboot the VM and check /var/log/cinder/volume.log for entries like :-


2015-11-02 13:39:09.131 5087 INFO cinder.volume.manager [req-f6ce2cdc-faad-4cf8-9999-6b8c7876bd34 - - - - -] Starting volume driver LVMISCSIDriver (3.0.0)
2015-11-02 13:39:09.136 1058 WARNING oslo_config.cfg [req-bb5acb39-d58d-4999-aea6-96a8fa414c78 - - - - -] Option "amqp_durable_queues" from group "DEFAULT" is deprecated. Use option "amqp_durable_queues" from group "oslo_messaging_rabbit".
2015-11-02 13:39:09.151 1058 WARNING oslo_config.cfg [req-bb5acb39-d58d-4999-aea6-96a8fa414c78 - - - - -] Option "lock_path" from group "DEFAULT" is deprecated. Use option "lock_path" from group "oslo_concurrency".
2015-11-02 13:39:10.032 5087 WARNING oslo_config.cfg [req-f6ce2cdc-faad-4cf8-9999-6b8c7876bd34 - - - - -] Option "amqp_durable_queues" from group "DEFAULT" is deprecated. Use option "amqp_durable_queues" from group "oslo_messaging_rabbit".
2015-11-02 13:39:10.207 5087 INFO oslo.messaging._drivers.impl_rabbit [req-f6ce2cdc-faad-4cf8-9999-6b8c7876bd34 - - - - -] Connecting to AMQP server on 192.169.142.127:5672
2015-11-02 13:39:10.217 5087 INFO oslo.messaging._drivers.impl_rabbit [req-f6ce2cdc-faad-4cf8-9999-6b8c7876bd34 - - - - -] Connected to AMQP server on 192.169.142.127:5672
2015-11-02 13:39:10.297 5087 INFO cinder.volume.manager [req-f6ce2cdc-faad-4cf8-9999-6b8c7876bd34 - - - - -] Driver initialization completed successfully.
2015-11-02 13:39:10.385 5087 INFO oslo.messaging._drivers.impl_rabbit [req-b4c3e107-d745-4d8b-9362-0b490792d699 - - - - -] Connecting to AMQP server on 192.169.142.127:5672
2015-11-02 13:39:10.394 5087 INFO oslo.messaging._drivers.impl_rabbit [req-b4c3e107-d745-4d8b-9362-0b490792d699 - - - - -] Connected to AMQP server on 192.169.142.127:5672

***********************
and make sure that the target service is up and running :-
***********************

[root@ip-192-169-142-157 ~(keystone_admin)]# systemctl status target -l
target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled)
   Active: active (exited) since Mon 2015-11-02 15:07:13 MSK; 1h 55min ago
  Process: 1006 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
 Main PID: 1006 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/target.service

Nov 02 15:07:13 ip-192-169-142-157.ip.secureserver.net systemd[1]: Started Restore LIO kernel target configuration.
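
At this point the new lvm51 backend may be smoke-tested by creating a
small volume of the just defined type (a sketch; the volume name
test-lvms51 is arbitrary) :-

[root@ip-192-169-142-157 ~(keystone_admin)]# cinder create --volume-type lvms --name test-lvms51 1
[root@ip-192-169-142-157 ~(keystone_admin)]# cinder list | grep test-lvms51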


*******************************************
Status of services on Storage Node
*******************************************


  192.169.142.157 works as the LVMiSCSI target server. The iSCSI target is created automatically by cinder when a glance image is downloaded to the mentioned volume group and a cloud instance is launched attached to this volume.


   Compute Node (iSCSI initiator host)
  

*************************************************
Verification of status of running Cloud VM (L2)
*************************************************
  
 [root@ip-192-169-142-147 ~(keystone_demo)]# nova list
+--------------------------------------+------------+--------+------------+-------------+--------------------------------------+
| ID                                   | Name       | Status | Task State | Power State | Networks                             |
+--------------------------------------+------------+--------+------------+-------------+--------------------------------------+
| fc4ca22d-50e5-41c7-bd97-1b1c1bd1e056 | VF22Devs01 | ACTIVE | -          | Running     | demo_network=50.0.0.11, 172.24.4.228 |
+--------------------------------------+------------+--------+------------+-------------+--------------------------------------+
[root@ip-192-169-142-147 ~(keystone_demo)]# nova show fc4ca22d-50e5-41c7-bd97-1b1c1bd1e056
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                     |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2015-11-02T11:11:39.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2015-11-02T11:11:30Z                                     |
| demo_network network                 | 50.0.0.11, 172.24.4.228                                  |
| flavor                               | m1.small (2)                                             |
| hostId                               | 920c0f6fd9218b6718daf508a4c1be97e2f632d8db16ea8acb008318 |
| id                                   | fc4ca22d-50e5-41c7-bd97-1b1c1bd1e056                     |
| image                                | Attempt to boot from volume - no image supplied          |
| key_name                             | oskeystor                                                |
| metadata                             | {}                                                       |
| name                                 | VF22Devs01                                               |
| os-extended-volumes:volumes_attached | [{"id": "d419e056-262e-46c9-bd04-daa28a7ac04e"}]         |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | 121591a2b3f24be3b462a90041e8bc40                         |
| updated                              | 2015-11-02T12:16:07Z                                     |
| user_id                              | f70cf283ded44d8a80e23df7cf4eb008                         |
+--------------------------------------+----------------------------------------------------------+

[root@ip-192-169-142-157 ~(keystone_admin)]# targetcli
targetcli shell version 2.1.fb37
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> ls
o- / ......................................................................................... [...]
  o- backstores .............................................................................. [...]
  | o- block .................................................................. [Storage Objects: 1]
  | | o- iqn.2010-10.org.openstack:volume-d419e056-262e-46c9-bd04-daa28a7ac04e  [/dev/cinder-volumes51/volume-d419e056-262e-46c9-bd04-daa28a7ac04e (7.0GiB) write-thru activated]
  | o- fileio ................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................ [Targets: 1]
  | o- iqn.2010-10.org.openstack:volume-d419e056-262e-46c9-bd04-daa28a7ac04e ............. [TPGs: 1]
  |   o- tpg1 .......................................................... [no-gen-acls, auth per-acl]
  |     o- acls .......................................................................... [ACLs: 1]
  |     | o- iqn.1994-05.com.redhat:d989f7dce882 ...................... [1-way auth, Mapped LUNs: 1]
  |     |   o- mapped_lun0  [lun0 block/iqn.2010-10.org.openstack:volume-d419e056-262e-46c9-bd04-daa28a7ac04e (rw)]
  |     o- luns .......................................................................... [LUNs: 1]
  |     | o- lun0  [block/iqn.2010-10.org.openstack:volume-d419e056-262e-46c9-bd04-daa28a7ac04e (/dev/cinder-volumes51/volume-d419e056-262e-46c9-bd04-daa28a7ac04e)]
  |     o- portals .................................................................... [Portals: 1]
  |       o- 192.169.142.157:3260 ............................................................. [OK]
  o- loopback ......................................................................... [Targets: 0]
/>
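
On the Compute Node (iSCSI initiator host) the session to the LVMiSCSI
target above may be cross-checked as follows (sketch) :-

# iscsiadm -m session -P 1   # expect the IQN above and portal 192.169.142.157:3260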

 
   Dashboard report for Storage node services status
  


DVR set up on RDO Liberty with separated Controller && Network Nodes

  Actually, the setup below was carefully tested against Mitaka Milestone 1, which will hopefully allow verifying the solution provided for Bug #1365473 "Unable to create a router that's both HA and distributed".
Delorean repos are now supposed to be rebuilt and ready for testing via RDO deployment within a week after each Mitaka Milestone [ 1 ].

  DVR provides direct (and reverse) access to the external network on Compute nodes. For instances with a floating IP address, routing between project and external networks is performed on the compute nodes. Thus DVR eliminates a single point of failure and network congestion on the Network Node; agent_mode is set to "dvr" in l3_agent.ini on Compute Nodes. Instances with (only) a fixed IP address still rely on the single Network Node for outbound connectivity via SNAT; agent_mode is set to "dvr_snat" in l3_agent.ini on the Network Node. To support DVR, each compute node runs neutron-l3-agent, neutron-metadata-agent and neutron-openvswitch-agent. DVR also requires L2population to be activated and ARP proxies running on the Neutron L2 layer.
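
Since router_distributed = True will be set in neutron.conf on the Controller and Network Node (shown further below), new tenant routers become distributed by default. DVR may also be requested explicitly at router creation time (a sketch; the router name is arbitrary) :-

# neutron router-create RouterDVR --distributed True
# neutron router-show RouterDVR | grep distributed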

Setup

192.169.142.127 - Controller
192.169.142.147 - Network Node
192.169.142.137 - Compute Node
192.169.142.157 - Compute Node

*********************************************************************************
1. The first Libvirt subnet "openstackvms" serves as the management network.
All 4 VMs (all nodes) are attached to this subnet.
**********************************************************************************
2. The second Libvirt subnet "public" simulates the external network. Network Node && Compute nodes are attached to "public"; later on their "eth2" interfaces (belonging to "public") get converted into OVS ports of the br-ex OVS bridges on the Network Node and Compute nodes.
***********************************************************************************
3. The third Libvirt subnet "vteps" serves for VTEP endpoint simulation. Network and Compute Node VMs are attached to this subnet.
***********************************************************************************

# cat openstackvms.xml

<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>

# cat public.xml
<network>
   <name>public</name>
   <uuid>d1e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='172.24.4.225' netmask='255.255.255.240'>
     <dhcp>
       <range start='172.24.4.226' end='172.24.4.238' />
     </dhcp>
  </ip>
 </network>


# cat vteps.xml
<network>
   <name>vteps</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr3' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.0.0.2' end='10.0.0.254' />
     </dhcp>
   </ip>
 </network>

Four CentOS 7.1 VMs (4 GB RAM, 4 VCPU) have been built for testing
on a Fedora 23 KVM Hypervisor.

Controller node  - one VNIC (eth0 for mgmt network )
Network node    - three VNICs ( eth0 mgmt, eth1 vteps, eth2 public )
2xCompute node    - three VNICs ( eth0 mgmt, eth1 vteps, eth2 public )

*************************************************
Installation answer-file : answer4Node.txt
*************************************************
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137,192.169.142.157
CONFIG_NETWORK_HOSTS=192.169.142.147
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=5G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1

CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4

**************************************
At this point run on Controller:-
**************************************

# yum -y  install centos-release-openstack-liberty
# yum -y  install openstack-packstack
# packstack --answer-file=./answer4Node.txt


***************************************************************************
After packstack install perform on  Network && Compute Nodes
***************************************************************************
[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.230"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*********************************
Switch to network service
*********************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# reboot

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.229"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*********************************
Switch to network service
*********************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# reboot

[root@ip-192-169-142-157 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.238"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-157 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*********************************
Switch to network service
*********************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# reboot


******************
Network Node
******************
[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns
snat-00223343-b771-4b7a-bbc1-10c5fe924a12
qrouter-00223343-b771-4b7a-bbc1-10c5fe924a12
qdhcp-3371ea3f-35f5-418c-8d07-82a2a54b5c1d

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec snat-00223343-b771-4b7a-bbc1-10c5fe924a12 ip a |grep "inet "
    inet 127.0.0.1/8 scope host lo
    inet 70.0.0.13/24 brd 70.0.0.255 scope global sg-67571326-46
    inet 172.24.4.236/28 brd 172.24.4.239 scope global qg-57d45794-46

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec snat-00223343-b771-4b7a-bbc1-10c5fe924a12 iptables-save | grep SNAT
-A neutron-l3-agent-snat -o qg-57d45794-46 -j SNAT --to-source 172.24.4.236
-A neutron-l3-agent-snat -m mark ! --mark 0x2/0xffff -m conntrack --ctstate DNAT -j SNAT --to-source 172.24.4.236

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-00223343-b771-4b7a-bbc1-10c5fe924a12 ip a |grep "inet "
    inet 127.0.0.1/8 scope host lo
    inet 70.0.0.1/24 brd 70.0.0.255 scope global qr-bdd297b1-05

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-00223343-b771-4b7a-bbc1-10c5fe924a12 ip rule ls
0:    from all lookup local
32766:    from all lookup main
32767:    from all lookup default
1174405121:    from 70.0.0.1/24 lookup 1174405121

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-00223343-b771-4b7a-bbc1-10c5fe924a12 ip route show table all
default via 70.0.0.13 dev qr-bdd297b1-05  table 1174405121
70.0.0.0/24 dev qr-bdd297b1-05  proto kernel  scope link  src 70.0.0.1
broadcast 70.0.0.0 dev qr-bdd297b1-05  table local  proto kernel  scope link  src 70.0.0.1
local 70.0.0.1 dev qr-bdd297b1-05  table local  proto kernel  scope host  src 70.0.0.1
broadcast 70.0.0.255 dev qr-bdd297b1-05  table local  proto kernel  scope link  src 70.0.0.1

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-00223343-b771-4b7a-bbc1-10c5fe924a12 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-bdd297b1-05: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 70.0.0.1  netmask 255.255.255.0  broadcast 70.0.0.255
        inet6 fe80::f816:3eff:fedf:c80b  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:df:c8:0b  txqueuelen 0  (Ethernet)
        RX packets 19  bytes 1530 (1.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 864 (864.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec snat-00223343-b771-4b7a-bbc1-10c5fe924a12 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-57d45794-46: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.24.4.236  netmask 255.255.255.240  broadcast 172.24.4.239
        inet6 fe80::f816:3eff:fec7:1583  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:c7:15:83  txqueuelen 0  (Ethernet)
        RX packets 25  bytes 1698 (1.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13  bytes 1074 (1.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

sg-67571326-46: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 70.0.0.13  netmask 255.255.255.0  broadcast 70.0.0.255
        inet6 fe80::f816:3eff:fed1:69b4  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:d1:69:b4  txqueuelen 0  (Ethernet)
        RX packets 11  bytes 914 (914.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14  bytes 1140 (1.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0





   Neutron agents running on Network Node 



******************************************************************************
Neutron.conf should be the same on Controller and Network nodes
******************************************************************************

[root@ip-192-169-142-147 neutron(keystone_admin)]# cat neutron.conf | grep -v ^#|grep -v ^$
[DEFAULT]
verbose = True
router_distributed = True
debug = False
state_path = /var/lib/neutron
use_syslog = False
use_stderr = True
log_dir =/var/log/neutron
bind_host = 0.0.0.0
bind_port = 9696
core_plugin =neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins =router
auth_strategy = keystone
base_mac = fa:16:3e:00:00:00
dvr_base_mac = fa:16:3f:00:00:00
mac_generation_retries = 16
dhcp_lease_duration = 86400
dhcp_agent_notification = True
allow_bulk = True
allow_pagination = False
allow_sorting = False
allow_overlapping_ips = True
advertise_mtu = False
dhcp_agents_per_network = 1
use_ssl = False
rpc_response_timeout=60
rpc_backend=rabbit
control_exchange=neutron
lock_path=/var/lib/neutron/lock
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
report_interval = 30
[keystone_authtoken]
auth_uri = http://192.169.142.127:5000/v2.0
identity_uri = http://192.169.142.127:35357
admin_tenant_name = services
admin_user = neutron
admin_password = 808e36e154bd4cee
[database]
[nova]
[oslo_concurrency]
[oslo_policy]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
kombu_reconnect_delay = 1.0
rabbit_host = 192.169.142.127
rabbit_port = 5672
rabbit_hosts = 192.169.142.127:5672
rabbit_use_ssl = False
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
rabbit_ha_queues = False
heartbeat_rate=2
heartbeat_timeout_threshold=0
[qos]

[root@ip-192-169-142-147 neutron(keystone_admin)]# cat l3_agent.ini | grep -v ^#|grep -v ^$
[DEFAULT]
debug = False
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = True
external_network_bridge = br-ex
metadata_port = 9697
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5
enable_metadata_proxy = True
router_delete_namespaces = False
# Set for Network Node
agent_mode = dvr_snat
[AGENT]
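
After editing l3_agent.ini restart the L3 agent on the node and cross-check from the Controller that all agents report alive (sketch) :-

# systemctl restart neutron-l3-agent
# neutron agent-list | grep -E "L3 agent|Metadata agent|Open vSwitch agent"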

***********************************************************************
The next files are supposed to be replicated to all compute nodes
***********************************************************************

[root@ip-192-169-142-147 neutron(keystone_admin)]# cat metadata_agent.ini | grep -v ^#|grep -v ^$

[DEFAULT]
debug = False
auth_url = http://192.169.142.127:5000/v2.0
auth_region = RegionOne
auth_insecure = False
admin_tenant_name = services
admin_user = neutron
admin_password = 808e36e154bd4cee
nova_metadata_ip = 192.169.142.127
nova_metadata_port = 8775
nova_metadata_protocol = http
metadata_proxy_shared_secret =a965cd23ed2f4502
metadata_workers =4
metadata_backlog = 4096
cache_url = memory://?default_ttl=5
[AGENT]

[root@ip-192-169-142-147 ml2(keystone_admin)]# cat ml2_conf.ini | grep -v ^#|grep -v ^$
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch,l2population
path_mtu = 0
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[ml2_type_geneve]
[securitygroup]
enable_security_group = True
[agent]
l2_population=True 


[root@ip-192-169-142-147 ml2(keystone_admin)]# cat openvswitch_agent.ini | grep -v ^#|grep -v ^$
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.147  <== updated correspondingly per node
bridge_mappings =physnet1:br-ex
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = True
arp_responder = True

prevent_arp_spoofing = True
enable_distributed_routing = True
drop_flows_on_start=False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
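
For the flags above (l2_population, arp_responder, enable_distributed_routing) to take effect, the OVS agent has to be restarted on the node (sketch) :-

# systemctl restart neutron-openvswitch-agent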

******************************
On Compute Node
******************************

[root@ip-192-169-142-137 neutron]# cat neutron.conf | grep -v ^#|grep -v ^$
[DEFAULT]
verbose = True
debug = False
state_path = /var/lib/neutron
use_syslog = False
use_stderr = True
log_dir =/var/log/neutron
bind_host = 0.0.0.0
bind_port = 9696
core_plugin =neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins =router
auth_strategy = keystone
base_mac = fa:16:3e:00:00:00
mac_generation_retries = 16
dhcp_lease_duration = 86400
dhcp_agent_notification = True
allow_bulk = True
allow_pagination = False
allow_sorting = False
allow_overlapping_ips = True
advertise_mtu = False
dhcp_agents_per_network = 1
use_ssl = False
rpc_response_timeout=60
rpc_backend=rabbit
control_exchange=neutron
lock_path=/var/lib/neutron/lock
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
report_interval = 30
[keystone_authtoken]
auth_uri = http://127.0.0.1:35357/v2.0/
identity_uri = http://127.0.0.1:5000
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
[database]
[nova]
[oslo_concurrency]
[oslo_policy]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
kombu_reconnect_delay = 1.0
rabbit_host = 192.169.142.127
rabbit_port = 5672
rabbit_hosts = 192.169.142.127:5672
rabbit_use_ssl = False
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
rabbit_ha_queues = False
heartbeat_rate=2
heartbeat_timeout_threshold=0
[qos]

[root@ip-192-169-142-137 neutron]# cat l3_agent.ini | grep -v ^#|grep -v ^$
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# Set for Compute Node
agent_mode = dvr
[AGENT]

**********************************************************************************
On each Compute node neutron-l3-agent and neutron-metadata-agent are
supposed to be started.
**********************************************************************************
# yum install  openstack-neutron-ml2  
# systemctl start neutron-l3-agent
# systemctl start neutron-metadata-agent
# systemctl enable neutron-l3-agent
# systemctl enable neutron-metadata-agent


[root@ip-192-169-142-137 ml2]# cat ml2_conf.ini | grep -v ^#|grep -v ^$
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch,l2population
path_mtu = 0
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[ml2_type_geneve]
[securitygroup]
enable_security_group = True
[agent]
l2_population=True 


[root@ip-192-169-142-137 ml2]# cat openvswitch_agent.ini | grep -v ^#|grep -v ^$
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.137
bridge_mappings =physnet1:br-ex
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = True
arp_responder = True

prevent_arp_spoofing = True
enable_distributed_routing = True
drop_flows_on_start=False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

***********************
Compute Node
***********************

[root@ip-192-169-142-157 ~]# ip netns
fip-115edb73-ebe2-4e48-811f-4823fc19d9b6
qrouter-00223343-b771-4b7a-bbc1-10c5fe924a12

[root@ip-192-169-142-157 ~]# ip netns exec  qrouter-00223343-b771-4b7a-bbc1-10c5fe924a12 ip a | grep "inet "
    inet 127.0.0.1/8 scope host lo
    inet 169.254.31.28/31 scope global rfp-00223343-b
    inet 172.24.4.231/32 brd 172.24.4.231 scope global rfp-00223343-b
    inet 172.24.4.233/32 brd 172.24.4.233 scope global rfp-00223343-b
    inet 70.0.0.1/24 brd 70.0.0.255 scope global qr-bdd297b1-05

[root@ip-192-169-142-157 ~]# ip netns exec  qrouter-00223343-b771-4b7a-bbc1-10c5fe924a12 iptables-save -t nat | grep "^-A"|grep l3-agent
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A neutron-l3-agent-OUTPUT -d 172.24.4.231/32 -j DNAT --to-destination 70.0.0.15
-A neutron-l3-agent-OUTPUT -d 172.24.4.233/32 -j DNAT --to-destination 70.0.0.17
-A neutron-l3-agent-POSTROUTING ! -i rfp-00223343-b ! -o rfp-00223343-b -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d 172.24.4.231/32 -j DNAT --to-destination 70.0.0.15
-A neutron-l3-agent-PREROUTING -d 172.24.4.233/32 -j DNAT --to-destination 70.0.0.17
-A neutron-l3-agent-float-snat -s 70.0.0.15/32 -j SNAT --to-source 172.24.4.231
-A neutron-l3-agent-float-snat -s 70.0.0.17/32 -j SNAT --to-source 172.24.4.233
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat

[root@ip-192-169-142-157 ~]# ip netns exec  fip-115edb73-ebe2-4e48-811f-4823fc19d9b6  ip a | grep "inet "
    inet 127.0.0.1/8 scope host lo
    inet 169.254.31.29/31 scope global fpr-00223343-b
    inet 172.24.4.237/28 brd 172.24.4.239 scope global fg-d00d8427-25

[root@ip-192-169-142-157 ~]# ip netns exec  qrouter-00223343-b771-4b7a-bbc1-10c5fe924a12 ip rule ls
0:    from all lookup local
32766:    from all lookup main
32767:    from all lookup default
57480:    from 70.0.0.17 lookup 16
57481:    from 70.0.0.15 lookup 16
1174405121:    from 70.0.0.1/24 lookup 1174405121

[root@ip-192-169-142-157 ~]# ip netns exec  qrouter-00223343-b771-4b7a-bbc1-10c5fe924a12 ip route show table 16
default via 169.254.31.29 dev rfp-00223343-b

[root@ip-192-169-142-157 ~]# ip netns exec  qrouter-00223343-b771-4b7a-bbc1-10c5fe924a12 ip route
70.0.0.0/24 dev qr-bdd297b1-05  proto kernel  scope link  src 70.0.0.1
169.254.31.28/31 dev rfp-00223343-b  proto kernel  scope link  src 169.254.31.28

[root@ip-192-169-142-157 ~]# ip netns exec  fip-115edb73-ebe2-4e48-811f-4823fc19d9b6 ip route
default via 172.24.4.225 dev fg-d00d8427-25
169.254.31.28/31 dev fpr-00223343-b  proto kernel  scope link  src 169.254.31.29
172.24.4.224/28 dev fg-d00d8427-25  proto kernel  scope link  src 172.24.4.237
172.24.4.231 via 169.254.31.28 dev fpr-00223343-b
172.24.4.233 via 169.254.31.28 dev fpr-00223343-b
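
The fip namespace routes above may be verified directly, e.g. by pinging one of the floating IPs through that namespace (assuming the instance is up and its security group allows ICMP) :-

# ip netns exec fip-115edb73-ebe2-4e48-811f-4823fc19d9b6 ping -c 2 172.24.4.231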

[root@ip-192-169-142-157 ~]# ip netns exec  fip-115edb73-ebe2-4e48-811f-4823fc19d9b6 ifconfig
fg-d00d8427-25: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.24.4.237  netmask 255.255.255.240  broadcast 172.24.4.239
        inet6 fe80::f816:3eff:fe10:3928  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:10:39:28  txqueuelen 0  (Ethernet)
        RX packets 46  bytes 4382 (4.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16  bytes 1116 (1.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

fpr-00223343-b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 169.254.31.29  netmask 255.255.255.254  broadcast 0.0.0.0
        inet6 fe80::d88d:7ff:fe1c:23a5  prefixlen 64  scopeid 0x20<link>
        ether da:8d:07:1c:23:a5  txqueuelen 1000  (Ethernet)
        RX packets 7  bytes 738 (738.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7  bytes 738 (738.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-157 ~]# ip netns exec  qrouter-00223343-b771-4b7a-bbc1-10c5fe924a12 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-bdd297b1-05: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 70.0.0.1  netmask 255.255.255.0  broadcast 70.0.0.255
        inet6 fe80::f816:3eff:fedf:c80b  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:df:c8:0b  txqueuelen 0  (Ethernet)
        RX packets 9  bytes 746 (746.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 864 (864.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

rfp-00223343-b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 169.254.31.28  netmask 255.255.255.254  broadcast 0.0.0.0
        inet6 fe80::5c77:1eff:fe6b:5a21  prefixlen 64  scopeid 0x20<link>
        ether 5e:77:1e:6b:5a:21  txqueuelen 1000  (Ethernet)
        RX packets 7  bytes 738 (738.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7  bytes 738 (738.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

***********************
Network Node
***********************

[root@ip-192-169-142-147 ~(keystone_admin)]# ovs-vsctl show
738cdbf4-4dde-4887-a95e-cc994702138e
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth2"
            Interface "eth2"
        Port "qg-57d45794-46"
            Interface "qg-57d45794-46"
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a00009d"
            Interface "vxlan-0a00009d"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.157"}
    Bridge br-int
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qr-bdd297b1-05"           <=========
            tag: 1
            Interface "qr-bdd297b1-05"
                type: internal
        Port "sg-67571326-46"           <=========
            tag: 1
            Interface "sg-67571326-46"
                type: internal

        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
        Port "tap06dd3fa7-c0"
            tag: 1
            Interface "tap06dd3fa7-c0"
                type: internal
    ovs_version: "2.4.0"



***********************
SNAT forwarding
***********************

==== Compute Node ====

[root@ip-192-169-142-157 ~]# ip netns
fip-115edb73-ebe2-4e48-811f-4823fc19d9b6
qrouter-00223343-b771-4b7a-bbc1-10c5fe924a12

[root@ip-192-169-142-157 ~]# ip netns exec qrouter-00223343-b771-4b7a-bbc1-10c5fe924a12  ip rule ls
0:    from all lookup local
32766:    from all lookup main
32767:    from all lookup default
57480:    from 70.0.0.17 lookup 16
57481:    from 70.0.0.15 lookup 16
1174405121:    from 70.0.0.1/24 lookup 1174405121

[root@ip-192-169-142-157 ~]# ip netns exec qrouter-00223343-b771-4b7a-bbc1-10c5fe924a12 ip route show table all

default via 70.0.0.13 dev qr-bdd297b1-05  table 1174405121 <====
default via 169.254.31.29 dev rfp-00223343-b  table 16
70.0.0.0/24 dev qr-bdd297b1-05  proto kernel  scope link  src 70.0.0.1
169.254.31.28/31 dev rfp-00223343-b  proto kernel  scope link  src 169.254.31.28 


====Network Node  ====

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns
snat-00223343-b771-4b7a-bbc1-10c5fe924a12
qrouter-00223343-b771-4b7a-bbc1-10c5fe924a12
qdhcp-3371ea3f-35f5-418c-8d07-82a2a54b5c1d

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec snat-00223343-b771-4b7a-bbc1-10c5fe924a12 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-57d45794-46: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.24.4.236  netmask 255.255.255.240  broadcast 172.24.4.239
        inet6 fe80::f816:3eff:fec7:1583  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:c7:15:83  txqueuelen 0  (Ethernet)
        RX packets 49  bytes 4463 (4.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13  bytes 1074 (1.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

sg-67571326-46: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 70.0.0.13  netmask 255.255.255.0  broadcast 70.0.0.255

        inet6 fe80::f816:3eff:fed1:69b4  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:d1:69:b4  txqueuelen 0  (Ethernet)
        RX packets 11  bytes 914 (914.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14  bytes 1140 (1.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

   *********************************************************************
   SNAT: sample VM with no FIP downloading data from the Internet;
   `iftop -i eth2` snapshot on the Network Node.
   *********************************************************************

   Download running on VM with FIP on 192.169.142.157

   Download running on VM with FIP on 192.169.142.137

   System information

Attempt to set up HAProxy/Keepalived 3 Node Controller on RDO Liberty per Javier Pena

URGENT UPDATE 11/18/2015
 Please, view https://github.com/beekhof/osp-ha-deploy/commit/b2e01e86ca93cfad9ad01d533b386b4c9607c60d
 It looks like a work in progress.
 See also https://www.redhat.com/archives/rdo-list/2015-November/msg00168.html
END UPDATE

  Actually, the setup below closely follows https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md
To the best of my knowledge it implements Cisco's schema :-
Keepalived, HAProxy, Galera for MySQL (manual install), and at least 3 controller nodes. I just highlighted several steps which, I believe, allowed me to bring this work to success. Javier uses a flat external network provider for the Controller cluster, disabling NetworkManager and enabling the network service from the very start; still, there is one step I was unable to skip. It is disabling the IPs of the eth0 interfaces and restarting the network service right before running `ovs-vsctl add-port br-eth0 eth0`, per the Neutron building instructions of the mentioned "Howto", which seems to be one of the best I have ever seen.
  I guess that, due to this sequence of steps, the external network remains pingable even on an already built and apparently healthy three node Controller cluster :-

 
  However, had I disabled eth0's IPs from the start, I would have lost connectivity right away when switching from NetworkManager to the network service. In general, the external network is supposed to be pingable from the qrouter namespace, due to the Neutron router's DNAT/SNAT iptables forwarding, but not from the Controller. I am also aware that when an Ethernet interface becomes an OVS port of an OVS bridge, its IP is supposed to be suppressed. When an external network provider is not used, br-ex gets any available IP on the external network. Using an external network provider changes the situation. Details may be seen here :-
https://www.linux.com/community/blogs/133-general-linux/858156-multiple-external-networks-with-a-single-l3-agent-testing-on-rdo-liberty-per-lars-kellogg-stedman
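
A quick way to see the difference is to compare a ping issued from the Controller itself with one issued from inside a qrouter namespace. A rough sketch (the router UUID is a placeholder, substitute a real one from `ip netns`):

# From the Controller: with the external network provider this still succeeds,
# which is the unexpected part discussed above
ping -c 2 10.10.10.1
# From the qrouter namespace: this is where external reachability is expected
ip netns exec qrouter-<router-uuid> ping -c 2 10.10.10.1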

[root@hacontroller1 ~(keystone_admin)]# systemctl status NetworkManager
NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled)
   Active: inactive (dead)

[root@hacontroller1 ~(keystone_admin)]# systemctl status network
network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network)
   Active: active (exited) since Wed 2015-11-18 08:36:53 MSK; 2h 10min ago
  Process: 708 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=0/SUCCESS)


Nov 18 08:36:47 hacontroller1.example.com network[708]: Bringing up loopback interface:  [  OK  ]
Nov 18 08:36:51 hacontroller1.example.com network[708]: Bringing up interface eth0:  [  OK  ]
Nov 18 08:36:53 hacontroller1.example.com network[708]: Bringing up interface eth1:  [  OK  ]
Nov 18 08:36:53 hacontroller1.example.com systemd[1]: Started LSB: Bring up/down networking.

[root@hacontroller1 ~(keystone_admin)]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::5054:ff:fe6d:926a  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:6d:92:6a  txqueuelen 1000  (Ethernet)
        RX packets 5036  bytes 730778 (713.6 KiB)
        RX errors 0  dropped 12  overruns 0  frame 0
        TX packets 15715  bytes 930045 (908.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.169.142.221  netmask 255.255.255.0  broadcast 192.169.142.255
        inet6 fe80::5054:ff:fe5e:9644  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:5e:96:44  txqueuelen 1000  (Ethernet)
        RX packets 1828396  bytes 283908183 (270.7 MiB)
        RX errors 0  dropped 13  overruns 0  frame 0
        TX packets 1839312  bytes 282429736 (269.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 869067  bytes 69567890 (66.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 869067  bytes 69567890 (66.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@hacontroller1 ~(keystone_admin)]# ping -c 3  10.10.10.1
PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data.
64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=2.04 ms
64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.103 ms
64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.118 ms

--- 10.10.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.103/0.754/2.043/0.911 ms



 
  Both the management and external networks are emulated by corresponding Libvirt networks
on the F23 Virtualization Server. A total of four VMs have been set up: 3 of them for Controller
nodes and one for Compute (4 VCPUS, 4 GB RAM)

[root@fedora23wks ~]# cat openstackvms.xml  (for the eth1 interfaces)
<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>
[root@fedora23wks ~]# cat public.xml  (for the external network provider)
<network>
   <name>public</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.10.10.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.10.10.2' end='10.10.10.254' />
     </dhcp>
   </ip>
 </network>
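
A minimal sketch of defining and starting these Libvirt networks on the virtualization host (standard virsh commands; file names as above):

virsh net-define openstackvms.xml
virsh net-define public.xml
virsh net-start openstackvms ; virsh net-autostart openstackvms
virsh net-start public ; virsh net-autostart public
# virbr1 and virbr2 should now be listed as active
virsh net-list --all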

Only one file is a bit different on the Controller nodes: l3_agent.ini

[root@hacontroller1 neutron(keystone_demo)]# cat l3_agent.ini | grep -v ^# | grep -v ^$
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = True
send_arp_for_ha = 3
metadata_ip = controller-vip.example.com
external_network_bridge =
gateway_external_network_id =
[AGENT]

*************************************************************************************
Due to posted "UPDATE" on the top of  the blog entry in meantime
perfect solution is provided by https://github.com/beekhof/osp-ha-deploy/commit/b2e01e86ca93cfad9ad01d533b386b4c9607c60d
Per mentioned patch, assuming eth0 is your interface attached to the external network, create two files in /etc/sysconfig/network-scripts/ as follows (change MTU if you need):

    cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    ONBOOT=yes
    DEVICETYPE=ovs
    TYPE=OVSPort
    OVS_BRIDGE=br-eth0
    BOOTPROTO=none
    VLAN=yes
    MTU="9000"
    NM_CONTROLLED=no
    EOF

    cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-br-eth0
    DEVICE=br-eth0
    DEVICETYPE=ovs
    OVSBOOTPROTO=none
    TYPE=OVSBridge
    ONBOOT=yes
    BOOTPROTO=static
    MTU="9000"
    NM_CONTROLLED=no
    EOF

Restart the network for the changes to take effect.

systemctl restart network

The commit was made on 11/14/2015, right after the discussion on the RDO mailing list.
*************************************************************************************

One more step which I did (not sure it really has
to be done at this point in time): the IPs on the eth0 interfaces were disabled just before
running `ovs-vsctl add-port br-eth0 eth0`, as sketched right after the list below :-

1. Updated ifcfg-eth0 files on all Controllers
2. `service network restart` on all Controllers
3.  `ovs-vsctl add-port br-eth0 eth0`on all Controllers
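
A rough sketch of those three steps on one Controller (the ifcfg files are assumed to be already edited as shown above; repeat on every Controller):

# 1. ifcfg-eth0 updated per the patch above (no IP, OVS port of br-eth0)
# 2. Bring networking up without an IP on eth0
service network restart
# 3. Plug eth0 into the external OVS bridge
ovs-vsctl add-port br-eth0 eth0
ovs-vsctl show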

*****************************************************************************************
Targeting just a POC (to get floating IPs accessible from the Fedora 23 Virtualization
host) resulted in the following Controller cluster setup :-
*****************************************************************************************
I installed only :-

Keystone
Glance
Neutron
Nova
Horizon

**************************
UPDATE to official docs
**************************
[root@hacontroller1 ~(keystone_admin)]# cat   keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PROJECT_NAME=admin
export OS_REGION_NAME=regionOne
export OS_PASSWORD=keystonetest
export OS_AUTH_URL=http://controller-vip.example.com:35357/v2.0/
export OS_SERVICE_ENDPOINT=http://controller-vip.example.com:35357/v2.0
export OS_SERVICE_TOKEN=$(cat /root/keystone_service_token)

export PS1='[\u@\h \W(keystone_admin)]\$ '


Because Galera synchronous multi-master replication runs between the Controllers, commands like :-

# su keystone -s /bin/sh -c "keystone-manage db_sync"
# su glance -s /bin/sh -c "glance-manage db_sync"
# neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head
# su nova -s /bin/sh -c "nova-manage db sync"

are supposed to be run just once, from Controller node 1 for instance (see the sketch below)
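
A sketch of the intended pattern (run on Controller node 1 only; mysql credentials may be required in your setup):

# on hacontroller1 only -- Galera replicates the resulting schemas to the peers
su keystone -s /bin/sh -c "keystone-manage db_sync"
# afterwards, verify from any other Controller that the schemas have arrived
mysql -u root -e "SHOW DATABASES;" | egrep 'keystone|glance|neutron|nova'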


************************
Compute Node setup:-
*************************

Compute setup

**********************
On all nodes
**********************
[root@hacontroller1 neutron(keystone_demo)]# cat /etc/hosts
192.169.142.220 controller-vip.example.com controller-vip
192.169.142.221 hacontroller1.example.com hacontroller1
192.169.142.222 hacontroller2.example.com hacontroller2
192.169.142.223 hacontroller3.example.com hacontroller3
192.169.142.224 compute.example.com compute
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

[root@hacontroller1 ~(keystone_admin)]# cat /etc/neutron/neutron.conf | grep -v ^$| grep -v ^#
[DEFAULT]
bind_host = 192.169.142.22(X)
auth_strategy = keystone
notification_driver = neutron.openstack.common.notifier.rpc_notifier
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = router,lbaas
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
dhcp_agents_per_network = 2
api_workers = 2
rpc_workers = 2
l3_ha = True
min_l3_agents_per_router = 2
max_l3_agents_per_router = 2

[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
[keystone_authtoken]
auth_uri = http://controller-vip.example.com:5000/
identity_uri = http://127.0.0.1:5000
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
auth_plugin = password
auth_url = http://controller-vip.example.com:35357/
username = neutron
password = neutrontest
project_name = services
[database]
connection = mysql://neutron:neutrontest@controller-vip.example.com:3306/neutron
max_retries = -1
[nova]
nova_region_name = regionOne
project_domain_id = default
project_name = services
user_domain_id = default
password = novatest
username = compute
auth_url = http://controller-vip.example.com:35357/
auth_plugin = password
[oslo_concurrency]
[oslo_policy]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_hosts = hacontroller1,hacontroller2,hacontroller3
rabbit_ha_queues = true
[qos]


[root@hacontroller1 haproxy(keystone_demo)]# cat haproxy.cfg
global
    daemon
    stats socket /var/lib/haproxy/stats
defaults
    mode tcp
    maxconn 10000
    timeout connect 5s
    timeout client 30s
    timeout server 30s

listen monitor
    bind 192.169.142.220:9300
    mode http
    monitor-uri /status
    stats enable
    stats uri /admin
    stats realm Haproxy\ Statistics
    stats auth root:redhat
    stats refresh 5s

frontend vip-db
    bind 192.169.142.220:3306
    timeout client 90m
    default_backend db-vms-galera
backend db-vms-galera
    option httpchk
    stick-table type ip size 1000
    stick on dst
    timeout server 90m
    server rhos8-node1 192.169.142.221:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
    server rhos8-node2 192.169.142.222:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
    server rhos8-node3 192.169.142.223:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions

# Note the RabbitMQ entry is only needed for CloudForms compatibility
# and should be removed in the future
frontend vip-rabbitmq
    option clitcpka
    bind 192.169.142.220:5672
    timeout client 900m
    default_backend rabbitmq-vms
backend rabbitmq-vms
    option srvtcpka
    balance roundrobin
    timeout server 900m
    server rhos8-node1 192.169.142.221:5672 check inter 1s
    server rhos8-node2 192.169.142.222:5672 check inter 1s
    server rhos8-node3 192.169.142.223:5672 check inter 1s

frontend vip-keystone-admin
    bind 192.169.142.220:35357
    default_backend keystone-admin-vms
    timeout client 600s
backend keystone-admin-vms
    balance roundrobin
    timeout server 600s
    server rhos8-node1 192.169.142.221:35357 check inter 1s on-marked-down shutdown-sessions
    server rhos8-node2 192.169.142.222:35357 check inter 1s on-marked-down shutdown-sessions
    server rhos8-node3 192.169.142.223:35357 check inter 1s on-marked-down shutdown-sessions

frontend vip-keystone-public
    bind 192.169.142.220:5000
    default_backend keystone-public-vms
    timeout client 600s
backend keystone-public-vms
    balance roundrobin
    timeout server 600s
    server rhos8-node1 192.169.142.221:5000 check inter 1s on-marked-down shutdown-sessions
    server rhos8-node2 192.169.142.222:5000 check inter 1s on-marked-down shutdown-sessions
    server rhos8-node3 192.169.142.223:5000 check inter 1s on-marked-down shutdown-sessions

frontend vip-glance-api
    bind 192.169.142.220:9191
    default_backend glance-api-vms
backend glance-api-vms
    balance roundrobin
    server rhos8-node1 192.169.142.221:9191 check inter 1s
    server rhos8-node2 192.169.142.222:9191 check inter 1s
    server rhos8-node3 192.169.142.223:9191 check inter 1s

frontend vip-glance-registry
    bind 192.169.142.220:9292
    default_backend glance-registry-vms
backend glance-registry-vms
    balance roundrobin
    server rhos8-node1 192.169.142.221:9292 check inter 1s
    server rhos8-node2 192.169.142.222:9292 check inter 1s
    server rhos8-node3 192.169.142.223:9292 check inter 1s

frontend vip-cinder
    bind 192.169.142.220:8776
    default_backend cinder-vms
backend cinder-vms
    balance roundrobin
    server rhos8-node1 192.169.142.221:8776 check inter 1s
    server rhos8-node2 192.169.142.222:8776 check inter 1s
    server rhos8-node3 192.169.142.223:8776 check inter 1s

frontend vip-swift
    bind 192.169.142.220:8080
    default_backend swift-vms
backend swift-vms
    balance roundrobin
    server rhos8-node1 192.169.142.221:8080 check inter 1s
    server rhos8-node2 192.169.142.222:8080 check inter 1s
    server rhos8-node3 192.169.142.223:8080 check inter 1s

frontend vip-neutron
    bind 192.169.142.220:9696
    default_backend neutron-vms
backend neutron-vms
    balance roundrobin
    server rhos8-node1 192.169.142.221:9696 check inter 1s
    server rhos8-node2 192.169.142.222:9696 check inter 1s
    server rhos8-node3 192.169.142.223:9696 check inter 1s

frontend vip-nova-vnc-novncproxy
    bind 192.169.142.220:6080
    default_backend nova-vnc-novncproxy-vms
backend nova-vnc-novncproxy-vms
    balance roundrobin
    timeout tunnel 1h
    server rhos8-node1 192.169.142.221:6080 check inter 1s
    server rhos8-node2 192.169.142.222:6080 check inter 1s
    server rhos8-node3 192.169.142.223:6080 check inter 1s

frontend nova-metadata-vms
    bind 192.169.142.220:8775
    default_backend nova-metadata-vms
backend nova-metadata-vms
    balance roundrobin
    server rhos8-node1 192.169.142.221:8775 check inter 1s
    server rhos8-node2 192.169.142.222:8775 check inter 1s
    server rhos8-node3 192.169.142.223:8775 check inter 1s

frontend vip-nova-api
    bind 192.169.142.220:8774
    default_backend nova-api-vms
backend nova-api-vms
    balance roundrobin
    server rhos8-node1 192.169.142.221:8774 check inter 1s
    server rhos8-node2 192.169.142.222:8774 check inter 1s
    server rhos8-node3 192.169.142.223:8774 check inter 1s

frontend vip-horizon
    bind 192.169.142.220:80
    timeout client 180s
    default_backend horizon-vms
backend horizon-vms
    balance roundrobin
    timeout server 180s
    mode http
    cookie SERVERID insert indirect nocache
    server rhos8-node1 192.169.142.221:80 check inter 1s cookie rhos8-horizon1 on-marked-down shutdown-sessions
    server rhos8-node2 192.169.142.222:80 check inter 1s cookie rhos8-horizon2 on-marked-down shutdown-sessions
    server rhos8-node3 192.169.142.223:80 check inter 1s cookie rhos8-horizon3 on-marked-down shutdown-sessions

frontend vip-heat-cfn
    bind 192.169.142.220:8000
    default_backend heat-cfn-vms
backend heat-cfn-vms
    balance roundrobin
    server rhos8-node1 192.169.142.221:8000 check inter 1s
    server rhos8-node2 192.169.142.222:8000 check inter 1s
    server rhos8-node3 192.169.142.223:8000 check inter 1s

frontend vip-heat-cloudw
    bind 192.169.142.220:8003
    default_backend heat-cloudw-vms
backend heat-cloudw-vms
    balance roundrobin
    server rhos8-node1 192.169.142.221:8003 check inter 1s
    server rhos8-node2 192.169.142.222:8003 check inter 1s
    server rhos8-node3 192.169.142.223:8003 check inter 1s

frontend vip-heat-srv
    bind 192.169.142.220:8004
    default_backend heat-srv-vms
backend heat-srv-vms
    balance roundrobin
    server rhos8-node1 192.169.142.221:8004 check inter 1s
    server rhos8-node2 192.169.142.222:8004 check inter 1s
    server rhos8-node3 192.169.142.223:8004 check inter 1s

frontend vip-ceilometer
    bind 192.169.142.220:8777
    timeout client 90s
    default_backend ceilometer-vms
backend ceilometer-vms
    balance roundrobin
    timeout server 90s
    server rhos8-node1 192.169.142.221:8777 check inter 1s
    server rhos8-node2 192.169.142.222:8777 check inter 1s
    server rhos8-node3 192.169.142.223:8777 check inter 1s

frontend vip-sahara
    bind 192.169.142.220:8386
    default_backend sahara-vms
backend sahara-vms
    balance roundrobin
    server rhos8-node1 192.169.142.221:8386 check inter 1s
    server rhos8-node2 192.169.142.222:8386 check inter 1s
    server rhos8-node3 192.169.142.223:8386 check inter 1s

frontend vip-trove
    bind 192.169.142.220:8779
    default_backend trove-vms
backend trove-vms
    balance roundrobin
    server rhos8-node1 192.169.142.221:8779 check inter 1s
    server rhos8-node2 192.169.142.222:8779 check inter 1s
    server rhos8-node3 192.169.142.223:8779 check inter 1s

[root@hacontroller1 ~(keystone_demo)]# cat /etc/my.cnf.d/galera.cnf
[mysqld]
skip-name-resolve=1
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
max_connections=8192
query_cache_size=0
query_cache_type=0
bind_address=192.169.142.22(X)
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://192.169.142.221,192.169.142.222,192.169.142.223"
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_causal_reads=0
wsrep_notify_cmd=
wsrep_sst_method=rsync

[root@hacontroller1 ~(keystone_demo)]# cat /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
    script "/usr/bin/killall -0 haproxy"
    interval 2
}

vrrp_instance VI_PUBLIC {
    interface eth1
    state BACKUP
    virtual_router_id 52
    priority 101
    virtual_ipaddress {
        192.169.142.220 dev eth1
    }
    track_script {
        chk_haproxy
    }
    # Avoid failback
    nopreempt
}

vrrp_sync_group VG1
    group {
        VI_PUBLIC
    }
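
With `nopreempt` set, the VIP stays on its current holder until that node fails. A quick sketch to find which Controller currently owns it:

# the node holding the VIP shows 192.169.142.220 as a secondary address on eth1
ip addr show eth1 | grep 192.169.142.220
# the keepalived health check: exits 0 while haproxy is running
killall -0 haproxy && echo 'haproxy alive'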

*************************************************************************
The most difficult procedure is re-syncing the Galera MariaDB cluster
*************************************************************************
https://github.com/beekhof/osp-ha-deploy/blob/master/keepalived/galera-bootstrap.md

Because the nova services start without waiting for the Galera databases to get in sync, once the sync is done a database reconnect via `openstack-service restart nova` is required on every Controller, regardless of systemctl reporting that the services are up and running. Also, the most likely reason for VMs failing to reach the Nova metadata server at boot is a failure of the neutron-l3-agent service on a Controller, due to the classical design: VMs access metadata via neutron-ns-metadata-proxy running in the qrouter namespace. The neutron-l3-agents usually start with no problems; sometimes they just need to be restarted (a quick check is sketched below).
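
A minimal check along those lines (standard Galera wsrep status variables; mysql credentials may be required):

# confirm the node has rejoined the 3-node cluster and is synced
mysql -u root -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
mysql -u root -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"   # expect: Synced
# then force nova to reconnect to the database, on every Controller
openstack-service restart nova
# and make sure the l3 agent is really up (restart when needed)
systemctl is-active neutron-l3-agent || systemctl restart neutron-l3-agent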

*****************************************
Creating Neutron Router via CLI.
*****************************************

[root@hacontroller1 ~(keystone_admin)]# cat  keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PROJECT_NAME=admin
export OS_REGION_NAME=regionOne
export OS_PASSWORD=keystonetest
export OS_AUTH_URL=http://controller-vip.example.com:35357/v2.0/
export OS_SERVICE_ENDPOINT=http://controller-vip.example.com:35357/v2.0
export OS_SERVICE_TOKEN=$(cat /root/keystone_service_token)

export PS1='[\u@\h \W(keystone_admin)]\$ '


[root@hacontroller1 ~(keystone_admin)]# keystone tenant-list

+----------------------------------+----------+---------+
|                id                |   name   | enabled |
+----------------------------------+----------+---------+
| acdc927b53bd43ae9a7ed657d1309884 |  admin   |   True  |
| 7db0aa013d60434996585c4ee359f512 |   demo   |   True  |
| 9d8bf126d54e4d11a109bd009f54a87f | services |   True  |
+----------------------------------+----------+---------+

[root@hacontroller1 ~(keystone_admin)]# neutron router-create --ha True --tenant-id 7db0aa013d60434996585c4ee359f512  RouterDS
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| distributed           | False                                |
| external_gateway_info |                                      |
| ha                    | True                                 |
| id                    | fdf540d2-c128-4677-b403-d71c796d7e18 |
| name                  | RouterDS                             |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 7db0aa013d60434996585c4ee359f512     |
+-----------------------+--------------------------------------+


    

    RUN time snapshots. Keepalived status on the Controller nodes

    HA Neutron router belonging to tenant demo, created via the Neutron CLI

***********************************************************************
 At this point hacontroller1 goes down. On hacontroller2 run :-
***********************************************************************
[root@hacontroller2 ~(keystone_admin)]# neutron l3-agent-list-hosting-router RouterHA
+--------------------------------------+---------------------------+----------------+-------+----------+
| id                                   | host                      | admin_state_up | alive | ha_state |
+--------------------------------------+---------------------------+----------------+-------+----------+
| a03409d2-fbe9-492c-a954-e1bdf7627491 | hacontroller2.example.com | True           | :-)   | active   |
| 0d6e658a-e796-4cff-962f-06e455fce02f | hacontroller1.example.com | True           | xxx   | active   |
+--------------------------------------+---------------------------+----------------+-------+----------+
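
The ha_state transition during such a failover can be watched live; a simple sketch (keystonerc_admin sourced):

watch -n 5 'neutron l3-agent-list-hosting-router RouterHA'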

  
***********************************************************************
 At this point hacontroller2 goes down. hacontroller1 goes up :-
***********************************************************************


          Nova Services status on all Controllers

          Neutron Services status on all Controllers

          Compute Node status

 ******************************************************************************
 Cloud VM (L3) at runtime. Accessibility from the F23 Virtualization Host,
 which runs the HA 3-node Controller and the Compute Node VMs (L2)
 ******************************************************************************
[root@fedora23wks ~]# ping  10.10.10.103
PING 10.10.10.103 (10.10.10.103) 56(84) bytes of data.
64 bytes from 10.10.10.103: icmp_seq=1 ttl=63 time=1.14 ms
64 bytes from 10.10.10.103: icmp_seq=2 ttl=63 time=0.813 ms
64 bytes from 10.10.10.103: icmp_seq=3 ttl=63 time=0.636 ms
64 bytes from 10.10.10.103: icmp_seq=4 ttl=63 time=0.778 ms
64 bytes from 10.10.10.103: icmp_seq=5 ttl=63 time=0.493 ms
^C
--- 10.10.10.103 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4001ms
rtt min/avg/max/mdev = 0.493/0.773/1.146/0.218 ms

[root@fedora23wks ~]# ssh -i oskey1.priv fedora@10.10.10.103
Last login: Tue Nov 17 09:02:30 2015
[fedora@vf23dev ~]$ uname -a
Linux vf23dev.novalocal 4.2.5-300.fc23.x86_64 #1 SMP Tue Oct 27 04:29:56 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

  
   
  

 ********************************************************************************
 Verifying the Neutron workflow on the 3-node controller built via the patch :-
 ********************************************************************************
[root@hacontroller1 ~(keystone_admin)]# ovs-ofctl show br-eth0
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000baf0db1a854f
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 1(eth0): addr:52:54:00:aa:0e:fc
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(phy-br-eth0): addr:46:c0:e0:30:72:92
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-eth0): addr:ba:f0:db:1a:85:4f
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
[root@hacontroller1 ~(keystone_admin)]# ovs-ofctl dump-flows  br-eth0
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=15577.057s, table=0, n_packets=50441, n_bytes=3262529, idle_age=2, priority=4,in_port=2,dl_vlan=3 actions=strip_vlan,NORMAL
 cookie=0x0, duration=15765.938s, table=0, n_packets=31225, n_bytes=1751795, idle_age=0, priority=2,in_port=2 actions=drop
 cookie=0x0, duration=15765.974s, table=0, n_packets=39982, n_bytes=42838752, idle_age=1, priority=0 actions=NORMAL

Check `ovs-vsctl show`

 Bridge br-int
        fail_mode: secure
        Port "tapc8488877-45"
            tag: 4
            Interface "tapc8488877-45"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap14aa6eeb-70"
            tag: 2
            Interface "tap14aa6eeb-70"
                type: internal
        Port "qr-8f5b3f4a-45"
            tag: 2
            Interface "qr-8f5b3f4a-45"
                type: internal
        Port "int-br-eth0"
            Interface "int-br-eth0"
                type: patch
                options: {peer="phy-br-eth0"}
        Port "qg-34893aa0-17"
            tag: 3



[root@hacontroller2 ~(keystone_demo)]# ovs-ofctl show  br-eth0
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000b6bfa2bafd45
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 1(eth0): addr:52:54:00:73:df:29
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(phy-br-eth0): addr:be:89:61:87:56:20
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-eth0): addr:b6:bf:a2:ba:fd:45
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

[root@hacontroller2 ~(keystone_demo)]# ovs-ofctl dump-flows  br-eth0
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=15810.746s, table=0, n_packets=0, n_bytes=0, idle_age=15810, priority=4,in_port=2,dl_vlan=2 actions=strip_vlan,NORMAL
 cookie=0x0, duration=16105.662s, table=0, n_packets=31849, n_bytes=1786827, idle_age=0, priority=2,in_port=2 actions=drop
 cookie=0x0, duration=16105.696s, table=0, n_packets=39762, n_bytes=2100763, idle_age=0, priority=0 actions=NORMAL

Check `ovs-vsctl show`

   Bridge br-int
        fail_mode: secure
        Port "qg-34893aa0-17"
            tag: 2
            Interface "qg-34893aa0-17"
                type: internal

It looks like the qrouter namespace's outgoing interface qg-xxxxxx sends VLAN-tagged packets to eth0 (which has VLAN=yes, see the link below), while the OVS bridge br-eth0, which is not aware of the VLAN tagging, strips the tags before sending the packets out into the external flat network. In the case of external network providers the qg-xxxxxx interfaces are on br-int, and that is normal. I believe this is the core reason why the patch https://github.com/beekhof/osp-ha-deploy/commit/b2e01e86ca93cfad9ad01d533b386b4c9607c60d
works pretty stably. This issue doesn't show up on a single controller and appears
to be critical for a HAProxy/Keepalived 3-node controller cluster, at least in my
experience.

Nova and Neutron work-flow && CLI for HAProxy/Keepalived 3 Node Controller RDO Liberty

The correct name of this post would be "Nova and Neutron workflow && CLI for HAProxy/Keepalived 3 Node Controller RDO Liberty in an appropriate amount of detail". It follows up http://lxer.com/module/newswire/view/222164/index.html . The whole environment has been built via the Nova and Neutron CLI (no Horizon involvement).
The Neutron workflow on the Controller is described, including the OVS flow rules on the external bridge created by the flat external network provider, with the external interface eth0 as a VLAN OVS port of bridge br-eth0.

First, create keystonerc_admin to give the admin the ability to manage via the CLI

[root@hacontroller1 ~(keystone_admin)]# cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PROJECT_NAME=admin
export OS_REGION_NAME=regionOne
export OS_PASSWORD=keystonetest
export OS_AUTH_URL=http://controller-vip.example.com:35357/v2.0/
export OS_SERVICE_ENDPOINT=http://controller-vip.example.com:35357/v2.0
export OS_SERVICE_TOKEN=$(cat /root/keystone_service_token)
export PS1='[\u@\h \W(keystone_admin)]\$ '
[root@hacontroller1 ~(keystone_admin)]# cat keystonerc_demo
export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PROJECT_NAME=demo
export OS_REGION_NAME=regionOne
export OS_PASSWORD=redhat
export OS_AUTH_URL=http://controller-vip.example.com:5000/v2.0/
export PS1='[\u@\h \W(keystone_demo)]\$ '

 
[root@hacontroller1 ~(keystone_admin)]#  keystone tenant-list
+----------------------------------+----------+---------+
|                id                |   name   | enabled |
+----------------------------------+----------+---------+
| b2be742697534c3188bdc5ec56038853 |  admin   |   True  |
| efe017b919c1487bab8c58281fcaceeb |   demo   |   True  |
| 4cd322b30ca947eeb86c0a883e549a27 | services |   True  |
+----------------------------------+----------+---------+

****************************************************
Creating HA Neutron router belonging to tenant demo
****************************************************

[root@hacontroller1 ~(keystone_admin)]# neutron router-create --ha True \
--tenant-id efe017b919c1487bab8c58281fcaceeb RouterDMS

[root@hacontroller1 ~(keystone_admin)]# neutron l3-agent-list-hosting-router RouterDMS
+--------------------------------------+---------------------------+----------------+-------+----------+
| id                                   | host                      | admin_state_up | alive | ha_state |
+--------------------------------------+---------------------------+----------------+-------+----------+
| 9c83e688-e7b4-4101-97df-844510d0ee52 | hacontroller1.example.com | True           | :-)   | active   |
| a7bdf03e-4550-4f1b-ae6f-25744894086d | hacontroller2.example.com | True           | :-)   | standby  |
+--------------------------------------+---------------------------+----------------+-------+----------+

**************************************
Creating private network as demo
**************************************

[root@hacontroller2 ~(keystone_demo)]# neutron net-create private
[root@hacontroller2 ~(keystone_demo)]# neutron subnet-create private \
30.0.0.0/24 --dns_nameservers list=true 83.221.202.254

**************************************
Creating public network as admin
**************************************

[root@hacontroller1 ~(keystone_admin)]# neutron net-create public --shared \
--provider:network_type flat --provider:physical_network physnet1 --router:external

[root@hacontroller1 ~(keystone_admin)]# neutron subnet-create --gateway 10.10.10.1 \
--allocation-pool start=10.10.10.100,end=10.10.10.150 --disable-dhcp \
--name public_subnet public 10.10.10.0/24
 
[root@hacontroller1 neutron(keystone_demo)]# cat l3_agent.ini | grep -v ^# | grep -v ^$
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = True
send_arp_for_ha = 3
metadata_ip = controller-vip.example.com
external_network_bridge =
[AGENT]
 
[root@hacontroller1 ml2(keystone_admin)]# cat openvswitch_agent.ini | grep -v ^#|grep -v ^$
[ovs]
local_ip = 192.169.142.221
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
bridge_mappings = physnet1:br-eth0
network_vlan_ranges = physnet1
[agent]
tunnel_types = vxlan
vxlan_udp_port = 4789
l2_population = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

When "external_network_bridge = " , Neutron places the external
interface of the router into the OVS bridge specified by the
"provider_network" provider attribute in the Neutron network. Traffic is
processed by Open vSwitch flow rules. In this configuration it is
possible to utilize flat and VLAN provider networks.

[root@hacontroller1 ~(keystone_admin)]# ovs-ofctl show br-eth0
OFPT_FEATURES_REPLY (xid=0x2): dpid:00003e31a75b624a
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 1(eth0): addr:52:54:00:41:74:39
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(phy-br-eth0): addr:de:0e:37:e4:28:49
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-eth0): addr:3e:31:a7:5b:62:4a
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
[root@hacontroller1 ~(keystone_admin)]# ovs-ofctl dump-flows br-eth0
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=6785.707s, table=0, n_packets=18476, n_bytes=1202867, idle_age=3, priority=4,in_port=2,dl_vlan=3 actions=strip_vlan,NORMAL   <==== VLAN tag is stripped
 cookie=0x0, duration=6977.001s, table=0, n_packets=13639, n_bytes=766402, idle_age=1, priority=2,in_port=2 actions=drop
 cookie=0x0, duration=6977.041s, table=0, n_packets=11557, n_bytes=10607506, idle_age=1, priority=0 actions=NORMAL
 
[root@hacontroller1 ~(keystone_admin)]# ovs-vsctl show
eae701a9-447e-4b75-98b5-4f7ce026ddbb
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-c0a98ee0"
            Interface "vxlan-c0a98ee0"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.169.142.221", out_key=flow, remote_ip="192.169.142.224"}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-c0a98ede"
            Interface "vxlan-c0a98ede"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.169.142.221", out_key=flow, remote_ip="192.169.142.222"}
        Port "vxlan-c0a98edf"
            Interface "vxlan-c0a98edf"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.169.142.221", out_key=flow, remote_ip="192.169.142.223"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge "br-eth0"
        Port "br-eth0"
            Interface "br-eth0"
                type: internal
        Port "eth0"
            Interface "eth0"            <=============
        Port "phy-br-eth0"
            Interface "phy-br-eth0"
                type: patch
                options: {peer="int-br-eth0"}
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "ha-013404f6-0b"
            tag: 2
            Interface "ha-013404f6-0b"
                type: internal
        Port "int-br-eth0"
            Interface "int-br-eth0"
                type: patch
                options: {peer="phy-br-eth0"}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qr-dd6507fd-73"
            tag: 1
            Interface "qr-dd6507fd-73"
                type: internal
        Port "qg-a37e106e-70"           ===============>
            tag: 3
            Interface "qg-a37e106e-70"
                type: internal
        Port "tap7e8e240c-aa"
            tag: 1
            Interface "tap7e8e240c-aa"
                type: internal
    ovs_version: "2.4.0"
 
The packet exits via the qg-a37e106e-70 interface (the outgoing interface of the
corresponding qrouter namespace, attached to br-int due to the external network provider involvement),
where it is assigned the VLAN tag associated with the external network (3).
The packet is delivered to the external bridge, where a flow rule strips the VLAN tag 3.
The packet is then sent out the physical interface associated with the bridge.
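
To observe this path live, one can watch the flow counters and the untagged traffic on the physical side. A sketch (bridge, interface and VLAN tag values as in the outputs above):

# the n_packets counter of the strip_vlan rule grows while a FIP is pinged
ovs-ofctl dump-flows br-eth0 | grep dl_vlan=3
# on the physical side the packets already leave untagged
tcpdump -i eth0 -nn -c 10 icmp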
Per https://github.com/beekhof/osp-ha-deploy/commit/b2e01e86ca93cfad9ad01d533b386b4c9607c60d#diff-ee239d1187adb09f970dc4ddcf0df1c2
 
Assuming eth0 is your interface attached to the external network, create two files in /etc/sysconfig/network-scripts/ as follows (change MTU if needed):

cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE=eth0 
ONBOOT=yes 
DEVICETYPE=ovs 
TYPE=OVSPort 
OVS_BRIDGE=br-eth0 
BOOTPROTO=none 
VLAN=yes 
MTU="9000" 
NM_CONTROLLED=no 
EOF 

cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-br-eth0 
DEVICE=br-eth0 
DEVICETYPE=ovs 
OVSBOOTPROTO=none 
TYPE=OVSBridge 
ONBOOT=yes
BOOTPROTO=static
MTU="9000" 
NM_CONTROLLED=no 
EOF

Restart the network for the changes to take effect.

[root@hacontroller1 ~(keystone_admin)]# neutron net-list
+--------------------------------------+----------------------------------------------------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+----------------------------------------------------+-------------------------------------------------------+
| b4580386-bc02-4aa7-8792-ea4c40c41573 | public | a2c617b1-17cc-4768-b213-9f0795d07b40 10.10.10.0/24 |
| ab421dc7-27fa-4984-ae21-ba9518887293 | HA network tenant efe017b919c1487bab8c58281fcaceeb | 6886d46c-4947-455d-8656-ff0f2a649632 169.254.192.0/18 |
| 847e5c9c-ce9f-4b2c-86fb-d7597017e8e3 | private | 1c47d964-d7ec-4a72-a5a7-bc390c96359d 30.0.0.0/24 |
+--------------------------------------+----------------------------------------------------+-------------------------------------------------------+ 
 
[root@hacontroller1 ~(keystone_admin)]# neutron subnet-list
+--------------------------------------+---------------------------------------------------+------------------+------------------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+---------------------------------------------------+------------------+------------------------------------------------------+
| a2c617b1-17cc-4768-b213-9f0795d07b40 | public_subnet | 10.10.10.0/24 | {"start": "10.10.10.100", "end": "10.10.10.150"} |
| 6886d46c-4947-455d-8656-ff0f2a649632 | HA subnet tenant efe017b919c1487bab8c58281fcaceeb | 169.254.192.0/18 | {"start": "169.254.192.1", "end": "169.254.255.254"} |
| 1c47d964-d7ec-4a72-a5a7-bc390c96359d | | 30.0.0.0/24 | {"start": "30.0.0.2", "end": "30.0.0.254"} |
+--------------------------------------+---------------------------------------------------+------------------+------------------------------------------------------+
 
[root@hacontroller2 ~(keystone_demo)]#  neutron router-gateway-set RouterDMS public
[root@hacontroller2 ~(keystone_demo)]#  neutron router-interface-add RouterDMS \
1c47d964-d7ec-4a72-a5a7-bc390c96359d

[root@hacontroller2 ~(keystone_demo)]# neutron router-port-list RouterDMS
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| dd6507fd-73e3-45f6-a935-8bbf29dacbb9 | | fa:16:3e:26:55:06 | {"subnet_id": "1c47d964-d7ec-4a72-a5a7-bc390c96359d", "ip_address": "30.0.0.1"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
[root@hacontroller2 ~(keystone_demo)]# neutron port-show dd6507fd-73e3-45f6-a935-8bbf29dacbb9
+-----------------------+--------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------------------------------------------------------------------+
| admin_state_up | True |
| allowed_address_pairs | |
| binding:vnic_type | normal |
| device_id | afe13460-e106-4a0a-abf5-a618f97de6b9 |
| device_owner | network:router_interface |
| dns_assignment | {"hostname": "host-30-0-0-1", "ip_address": "30.0.0.1", "fqdn": "host-30-0-0-1.openstacklocal."} |
| dns_name | |
| extra_dhcp_opts | |
| fixed_ips | {"subnet_id": "1c47d964-d7ec-4a72-a5a7-bc390c96359d", "ip_address": "30.0.0.1"} |
| id | dd6507fd-73e3-45f6-a935-8bbf29dacbb9 |
| mac_address | fa:16:3e:26:55:06 |
| name | |
| network_id | 847e5c9c-ce9f-4b2c-86fb-d7597017e8e3 |
| security_groups | |
| status | ACTIVE |
| tenant_id | efe017b919c1487bab8c58281fcaceeb |
+-----------------------+--------------------------------------------------------------------------------------------------+

********************************************
Creating security rules for tenant demo
********************************************

[root@hacontroller2 ~(keystone_demo)]# neutron security-group-rule-create --protocol icmp \
--direction ingress --remote-ip-prefix 0.0.0.0/0 default

[root@hacontroller2 ~(keystone_demo)]# neutron security-group-rule-create --protocol tcp \
--port-range-min 22 --port-range-max 22 --direction ingress --remote-ip-prefix 0.0.0.0/0 default 
 
********************************************
Creating ssh keypair for tenant demo
********************************************
[root@hacontroller2 ~(keystone_demo)]#  nova keypair-add oskey1 > oskey1.priv
[root@hacontroller2 ~(keystone_demo)]# chmod 600 oskey1.priv

[root@hacontroller2 ~(keystone_demo)]# neutron net-list
+--------------------------------------+---------+----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+---------+----------------------------------------------------+
| b4580386-bc02-4aa7-8792-ea4c40c41573 | public | a2c617b1-17cc-4768-b213-9f0795d07b40 10.10.10.0/24 |
| 847e5c9c-ce9f-4b2c-86fb-d7597017e8e3 | private | 1c47d964-d7ec-4a72-a5a7-bc390c96359d 30.0.0.0/24 |
+--------------------------------------+---------+--------------------------------------------------
[root@hacontroller2 ~(keystone_demo)]# glance image-list
+--------------------------------------+-----------+
| ID                                   | Name      |
+--------------------------------------+-----------+
| 6b4ee270-41ca-4a14-b584-d21f6ff5d6be | cirros    |
| e6945bf1-0a0d-4e99-a1fc-64ca45479095 | VF23Cloud |
+--------------------------------------+-----------+

[root@hacontroller2 ~(keystone_demo)]# nova boot --flavor 2 --key_name oskey1 --image \
e6945bf1-0a0d-4e99-a1fc-64ca45479095 --nic net-id=847e5c9c-ce9f-4b2c-86fb-d7597017e8e3 VF23Devs05
 +--------------------------------------+--------------------------------------------------+
| Property | Value |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | 8c3HZUTS3jZ3 |
| config_drive | |
| created | 2015-11-28T17:44:02Z |
| flavor | m1.small (2) |
| hostId | |
| id | 68db2410-5d7d-42ca-82ab-6000123ab8d2 |
| image | VF23Cloud (e6945bf1-0a0d-4e99-a1fc-64ca45479095) |
| key_name | oskey1 |
| metadata | {} |
| name | VF23Devs05 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | efe017b919c1487bab8c58281fcaceeb |
| updated | 2015-11-28T17:44:03Z |
| user_id | 426a9a98019f4055a2edb3d145355646 |
+--------------------------------------+--------------------------------------------------+
[root@hacontroller2 ~(keystone_demo)]# nova list

+--------------------------------------+------------+---------+------------+-------------+--------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------+---------+------------+-------------+--------------------------------+
| 2b0f822f-be17-43c1-b127-f626d5a62823 | CirrOSDevs | SHUTOFF | - | Shutdown | private=30.0.0.4, 10.10.10.101 |
| 68db2410-5d7d-42ca-82ab-6000123ab8d2 | VF23Devs05 | BUILD | spawning | NOSTATE | |
+--------------------------------------+------------+---------+------------+-------------+--------------------------------+
[root@hacontroller2 ~(keystone_demo)]# nova list
+--------------------------------------+------------+---------+------------+-------------+--------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------+---------+------------+-------------+--------------------------------+
| 2b0f822f-be17-43c1-b127-f626d5a62823 | CirrOSDevs | SHUTOFF | - | Shutdown | private=30.0.0.4, 10.10.10.101 |
| 68db2410-5d7d-42ca-82ab-6000123ab8d2 | VF23Devs05 | ACTIVE | - | Running | private=30.0.0.10 |
+--------------------------------------+------------+---------+------------+-------------+--------------------------------+

[root@hacontroller2 ~(keystone_demo)]# neutron port-list --device-id \
68db2410-5d7d-42ca-82ab-6000123ab8d2
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| 879c8ca8-fe8e-42d7-8b6b-34be981d03d0 | | fa:16:3e:32:47:49 | {"subnet_id": "1c47d964-d7ec-4a72-a5a7-bc390c96359d", "ip_address": "30.0.0.10"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+

[root@hacontroller2 ~(keystone_demo)]# neutron floatingip-create public
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 10.10.10.102                         |
| floating_network_id | b4580386-bc02-4aa7-8792-ea4c40c41573 |
| id                  | aa48fd10-bb25-46ae-8f76-eb90e343b3f1 |
| port_id             |                                      |
| router_id           |                                      |
| status              | DOWN                                 |
| tenant_id           | efe017b919c1487bab8c58281fcaceeb     |
+---------------------+--------------------------------------+

[root@hacontroller2 ~(keystone_demo)]# neutron floatingip-associate \
aa48fd10-bb25-46ae-8f76-eb90e343b3f1 879c8ca8-fe8e-42d7-8b6b-34be981d03d0
Associated floating IP aa48fd10-bb25-46ae-8f76-eb90e343b3f1
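
Under the hood, the association materializes as DNAT/SNAT rules inside the qrouter namespace. A verification sketch (router UUID taken from the `ip a` output below):

ip netns exec qrouter-afe13460-e106-4a0a-abf5-a618f97de6b9 iptables -t nat -S | grep 10.10.10.102
# expected: DNAT 10.10.10.102 -> 30.0.0.10 and SNAT 30.0.0.10 -> 10.10.10.102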

[root@hacontroller2 ~(keystone_demo)]# nova list
+--------------------------------------+------------+---------+------------+-------------+---------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------+---------+------------+-------------+---------------------------------+
| 2b0f822f-be17-43c1-b127-f626d5a62823 | CirrOSDevs | SHUTOFF | - | Shutdown | private=30.0.0.4, 10.10.10.101 |
| 68db2410-5d7d-42ca-82ab-6000123ab8d2 | VF23Devs05 | ACTIVE | - | Running | private=30.0.0.10, 10.10.10.102 |
+--------------------------------------+------------+---------+------------+-------------+---------------------------------+
 
[root@hacontroller1 ~(keystone_admin)]# ip netns exec qrouter-afe13460-e106-4a0a-abf5-a618f97de6b9   ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
8: ha-013404f6-0b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:d5:7e:6f brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.2/18 brd 169.254.255.255 scope global ha-013404f6-0b
       valid_lft forever preferred_lft forever
    inet 169.254.0.1/24 scope global ha-013404f6-0b
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fed5:7e6f/64 scope link
       valid_lft forever preferred_lft forever
9: qr-dd6507fd-73: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:26:55:06 brd ff:ff:ff:ff:ff:ff
    inet 30.0.0.1/24 scope global qr-dd6507fd-73
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe26:5506/64 scope link nodad
       valid_lft forever preferred_lft forever
10: qg-a37e106e-70: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:0e:4b:eb brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.100/24 scope global qg-a37e106e-70
       valid_lft forever preferred_lft forever
    inet 10.10.10.101/32 scope global qg-a37e106e-70
       valid_lft forever preferred_lft forever
    inet 10.10.10.102/32 scope global qg-a37e106e-70
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe0e:4beb/64 scope link nodad
       valid_lft forever preferred_lft forever
 
 
Instance started

[root@hacontroller2 ~(keystone_demo)]# nova console-log VF23Devs05
(console log output captured as a screenshot in the original post)


  References
  1.  http://blog.oddbit.com/2015/08/13/provider-external-networks-details/ 
  2.  https://github.com/beekhof/osp-ha-deploy/blob/master/keepalived/neutron-config.md

DVR_SNAT && DVR on RDO Mitaka M1 (CentOS 7.1)

********************************************************
Setup Delorean Repos for Mitaka M1 on CentOS 7.1
********************************************************
yum -y install yum-plugin-priorities
cd /etc/yum.repos.d/
# for Centos 7 and RHEL 7
wget http://trunk.rdoproject.org/centos7/delorean-deps.repo
wget http://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo
yum -y install openstack-packstack 
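
Before proceeding, it is worth a quick sanity check that the Delorean repos actually won the priority game (a sketch):

# both delorean repos should be listed as enabled
yum repolist enabled | grep -i delorean
# and packstack should have come from the Mitaka trunk repo
yum info openstack-packstack | egrep '^(Version|From repo)'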

********************************************************************
Before running packstack on RDO Mitaka M1 be aware of
********************************************************************

1.  https://bugzilla.redhat.com/show_bug.cgi?id=1288179
2.  https://bugzilla.redhat.com/show_bug.cgi?id=1285314

[root@ip-192-169-142-127 ~(keystone_admin)]# cat answer3Node.txt
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
# For now, just to avoid headaches
CONFIG_CEILOMETER_INSTALL=n
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137,192.169.142.147
CONFIG_NETWORK_HOSTS=192.169.142.127
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=5G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
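
**************************************************************
Run packstack against the answer file (sketch)
**************************************************************
A minimal sketch; the file name answer-file-aio.txt is an assumption,
substitute whatever name the answer file above was saved under:

# yum -y install openstack-packstack
# packstack --answer-file=./answer-file-aio.txt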

********************
Then follow :-
********************
RDO Liberty DVR Neutron workflow on CentOS 7.1
http://dbaxps.blogspot.ru/2015/10/rdo-liberty-rc-dvr-deployment.html

[root@ip-192-169-142-127 ~(keystone_admin)]# nova-manage --version
No handlers could be found for logger "oslo_config.cfg"
13.0.0
[root@ip-192-169-142-127 ~(keystone_admin)]#  neutron l3-agent-list-hosting-router RouterDMS
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| id                                   | host                                   | admin_state_up | alive | ha_state |
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| 0e5f8de4-bae4-4b92-872c-b4a692ffca2b | ip-192-169-142-147.ip.secureserver.net | True           | :-)   |          |
| a08b319b-ce27-4b7c-8f72-e530c148ab70 | ip-192-169-142-127.ip.secureserver.net | True           | :-)   |          |
| ebab6768-cd2f-4d09-a1ba-a6e3aa6b4751 | ip-192-169-142-137.ip.secureserver.net | True           | :-)   |          |
+--------------------------------------+----------------------------------------+----------------+-------+----------+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-show 0e5f8de4-bae4-4b92-872c-b4a692ffca2b
+---------------------+-------------------------------------------------------------------------------+
| Field               | Value                                                                         |
+---------------------+-------------------------------------------------------------------------------+
| admin_state_up      | True                                                                          |
| agent_type          | L3 agent                                                                      |
| alive               | True                                                                          |
| binary              | neutron-l3-agent                                                              |
| configurations      | {                                                                             |
|                     |      "router_id": "",                                                         |
|                     |      "agent_mode": "dvr",                                                     |
|                     |      "gateway_external_network_id": "",                                       |
|                     |      "handle_internal_only_routers": true,                                    |
|                     |      "use_namespaces": true,                                                  |
|                     |      "routers": 1,                                                            |
|                     |      "interfaces": 1,                                                         |
|                     |      "floating_ips": 1,                                                       |
|                     |      "interface_driver": "neutron.agent.linux.interface.OVSInterfaceDriver",  |
|                     |      "log_agent_heartbeats": false,                                           |
|                     |      "external_network_bridge": "br-ex",                                      |
|                     |      "ex_gw_ports": 1                                                         |
|                     | }                                                                             |
| created_at          | 2015-12-08 09:44:37                                                           |
| description         |                                                                               |
| heartbeat_timestamp | 2015-12-08 20:07:16                                                           |
| host                | ip-192-169-142-147.ip.secureserver.net                                        |
| id                  | 0e5f8de4-bae4-4b92-872c-b4a692ffca2b                                          |
| started_at          | 2015-12-08 12:11:16                                                           |
| topic               | l3_agent                                                                      |
+---------------------+-------------------------------------------------------------------------------+
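
The "agent_mode": "dvr" value above comes from each node's L3 agent
configuration; a quick per-host check (a sketch, assuming the standard
RDO file layout):

# grep ^agent_mode /etc/neutron/l3_agent.ini
agent_mode = dvr_snat   <==== expected on controller (hosts centralized SNAT)

# grep ^agent_mode /etc/neutron/l3_agent.ini
agent_mode = dvr        <==== expected on each compute node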

[root@ip-192-169-142-127 ~(keystone_admin)]# nova service-list
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-12-08T18:44:41.000000 | -               |
| 2  | nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-12-08T18:44:41.000000 | -               |
| 3  | nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-12-08T18:44:41.000000 | -               |
| 4  | nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-12-08T18:44:41.000000 | -               |
| 5  | nova-compute     | ip-192-169-142-147.ip.secureserver.net | nova     | enabled | up    | 2015-12-08T18:44:36.000000 | -               |
| 6  | nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2015-12-08T18:44:42.000000 | -               |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+


[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                                   | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| 0e5f8de4-bae4-4b92-872c-b4a692ffca2b | L3 agent           | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| 18f3207c-9cde-441c-bfda-26e35b393b5b | Open vSwitch agent | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| 1d22e3ae-9f27-4cf2-aff7-9febb6745a28 | Open vSwitch agent | ip-192-169-142-127.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| 39dd0e7b-9359-4ea8-890d-bfe2ebda95b8 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| 3fd7c6d3-f2bd-4612-9f88-4e7c01ceb90d | DHCP agent         | ip-192-169-142-127.ip.secureserver.net | :-)   | True           | neutron-dhcp-agent        |
| 55f597f5-0b7a-414a-aae2-1098b79fdec4 | Metadata agent     | ip-192-169-142-137.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |
| a08b319b-ce27-4b7c-8f72-e530c148ab70 | L3 agent           | ip-192-169-142-127.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| d8bed085-b87c-4d7d-be8d-503b9ce60eda | Metadata agent     | ip-192-169-142-127.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |
| ddf1906c-09bd-4415-ab62-4a8d290f7fe9 | Metadata agent     | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |
| ebab6768-cd2f-4d09-a1ba-a6e3aa6b4751 | L3 agent           | ip-192-169-142-137.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron router-port-list RouterDMS
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                              |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| 48ec616f-1c9c-4654-a5bc-0626765801bf |      | fa:16:3e:4b:05:09 | {"subnet_id": "8d8210c4-9900-4245-a57f-cdcaa2e8f916", "ip_address": "50.0.0.11"}       |
| 7d7f60e7-f385-4eb2-ada9-d44e85a05200 |      | fa:16:3e:28:0d:19 | {"subnet_id": "8d8210c4-9900-4245-a57f-cdcaa2e8f916", "ip_address": "50.0.0.1"}        |
| 835367f9-8a98-4a44-9a51-0f07ad2b7e82 |      | fa:16:3e:58:a4:83 | {"subnet_id": "c79f4a19-ff8e-40e3-8868-dd6b23723ea4", "ip_address": "192.169.142.150"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
[root@ip-192-169-142-127 ~(keystone_admin)]#  neutron router-show RouterDMS
+-----------------------+--------------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                            |
+-----------------------+--------------------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                             |
| distributed           | True                                                                                             |
| external_gateway_info | {"network_id": "252498c4-c7d8-4748-9583-587a52eb8b94", "enable_snat": true,                      |
|                       | "external_fixed_ips": [{"subnet_id": "c79f4a19-ff8e-40e3-8868-dd6b23723ea4",                     |
|                       | "ip_address": "192.169.142.150"}]}                                                               |
| ha                    | False                                                                                            |
| id                    | ad7c9612-408e-4dcb-ac16-32ed916f28b3                                                             |
| name                  | RouterDMS                                                                                        |
| routes                |                                                                                                  |
| status                | ACTIVE                                                                                           |
| tenant_id             | fd5942f812284d0c99ec25485cc3b297                                                                 |
+-----------------------+--------------------------------------------------------------------------------------------------+

*******************************
DVR_SNAT Section 
*******************************
Controller :-
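
With DVR the centralized SNAT runs on the controller in a dedicated
snat-<router-id> namespace alongside the qrouter one; a quick check,
using the RouterDMS id from above (a sketch, output as expected):

# ip netns | grep ad7c9612-408e-4dcb-ac16-32ed916f28b3
snat-ad7c9612-408e-4dcb-ac16-32ed916f28b3
qrouter-ad7c9612-408e-4dcb-ac16-32ed916f28b3
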
OVS Flows

[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-ofctl show br-int | grep "sg-"
 8(sg-48ec616f-1c): addr:00:00:00:00:00:00

[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-ofctl dump-flows  br-int| grep "output:8"
 cookie=0x976c449145b02ae6, duration=4239.096s, table=1, n_packets=1427417, n_bytes=94772642, idle_age=0, priority=4,dl_vlan=1,dl_dst=fa:16:3e:4b:05:09 actions=strip_vlan,mod_dl_src:fa:16:3e:28:0d:19,output:8
[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-ofctl dump-flows  br-int| grep "output:8"
 cookie=0x976c449145b02ae6, duration=4241.129s, table=1, n_packets=1429159, n_bytes=94888334, idle_age=0, priority=4,dl_vlan=1,dl_dst=fa:16:3e:4b:05:09 actions=strip_vlan,mod_dl_src:fa:16:3e:28:0d:19,output:8
[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-ofctl dump-flows  br-int| grep "output:8"
 cookie=0x976c449145b02ae6, duration=4245.441s, table=1, n_packets=1432026, n_bytes=95078792, idle_age=0, priority=4,dl_vlan=1,dl_dst=fa:16:3e:4b:05:09 actions=strip_vlan,mod_dl_src:fa:16:3e:28:0d:19,output:8
[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-ofctl dump-flows  br-int| grep "output:8"
 cookie=0x976c449145b02ae6, duration=4249.608s, table=1, n_packets=1434642, n_bytes=95252072, idle_age=0, priority=4,dl_vlan=1,dl_dst=fa:16:3e:4b:05:09 actions=strip_vlan,mod_dl_src:fa:16:3e:28:0d:19,output:8
[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-ofctl dump-flows  br-int| grep "output:8"
 cookie=0x976c449145b02ae6, duration=4256.953s, table=1, n_packets=1439185, n_bytes=95553634, idle_age=0, priority=4,dl_vlan=1,dl_dst=fa:16:3e:4b:05:09 actions=strip_vlan,mod_dl_src:fa:16:3e:28:0d:19,output:8
[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-ofctl dump-flows  br-int| grep "output:8"
 cookie=0x976c449145b02ae6, duration=4262.048s, table=1, n_packets=1442924, n_bytes=95800408, idle_age=0, priority=4,dl_vlan=1,dl_dst=fa:16:3e:4b:05:09 actions=strip_vlan,mod_dl_src:fa:16:3e:28:0d:19,output:8
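
Rather than re-running dump-flows by hand, the growth of n_packets on the
same flow can be followed with a watch loop (a sketch):

# watch -d -n 2 'ovs-ofctl dump-flows br-int | grep "output:8"'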
    

High Availability cloud VMs (Neutron && VRRP) on RDO Liberty

 This post updates for Neutron on RDO Liberty the original blog entry
 http://blog.aaronorosen.com/implementing-high-availability-instances-with-neutron-using-vrrp/
 The intent is to make the workflow understandable for readers who have not yet
accessed cloud VMs that have only private IPs: how do commands like
`ip netns` and `ip netns exec qdhcp-namespace ssh -i oskeyvip.pem ubuntu@private-ip` work?
 It also highlights Neutron commands, not commonly known, which allow a floating IP to work as a VIP, providing a highly available pair of Ubuntu 14.04 cloud instances.
   The core idea belongs to Aaron Rosen, published in his post for the OpenStack Havana release, and full credit goes to him. I just believe that Neutron's power deserves a bit more attention from people still doing legacy (e.g. nova-network) networking.

Create private network to launch a couple of Ubuntu Trusty VMs

[root@ip-192-169-142-54 ~(keystone_demo)]# neutron net-create vrrp-net
Created a new network:
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| admin_state_up  | True                                 |
| id              | b526aca8-e8b0-4d39-a7d4-4d4e0ebfe5ed |
| mtu             | 0                                    |
| name            | vrrp-net                             |
| router:external | False                                |
| shared          | False                                |
| status          | ACTIVE                               |
| subnets         |                                      |
| tenant_id       | df2302c143a84de9b6849ef75cc4368c     |
+-----------------+--------------------------------------+

[root@ip-192-169-142-54 ~(keystone_demo)]# neutron subnet-create  --name vrrp-subnet --allocation-pool start=10.0.0.2,end=10.0.0.200 vrrp-net 10.0.0.0/24 --dns_nameservers list=true 83.221.202.254
Created a new subnet:
+-------------------+--------------------------------------------+
| Field             | Value                                      |
+-------------------+--------------------------------------------+
| allocation_pools  | {"start": "10.0.0.2", "end": "10.0.0.200"} |
| cidr              | 10.0.0.0/24                                |
| dns_nameservers   | 83.221.202.254                             |
| enable_dhcp       | True                                       |
| gateway_ip        | 10.0.0.1                                   |
| host_routes       |                                            |
| id                | 8742e4d1-849e-4f83-8357-0996b93d7ec8       |
| ip_version        | 4                                          |
| ipv6_address_mode |                                            |
| ipv6_ra_mode      |                                            |
| name              | vrrp-subnet                                |
| network_id        | b526aca8-e8b0-4d39-a7d4-4d4e0ebfe5ed       |
| subnetpool_id     |                                            |
| tenant_id         | df2302c143a84de9b6849ef75cc4368c           |
+-------------------+--------------------------------------------+

************************************************************************
Create a port on vrrp-net with an IP outside the allocation pool
************************************************************************

[root@ip-192-169-142-54 ~(keystone_demo)]# neutron port-create --fixed-ip ip_address=10.0.0.201 --security-group default vrrp-net
Created a new port:

+-----------------------+--------------------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                                  |
+-----------------------+--------------------------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                                   |
| allowed_address_pairs |                                                                                                        |
| binding:vnic_type     | normal                                                                                                 |
| device_id             |                                                                                                        |
| device_owner          |                                                                                                        |
| dns_assignment        | {"hostname": "host-10-0-0-201", "ip_address": "10.0.0.201", "fqdn": "host-10-0-0-201.openstacklocal."} |
| dns_name              |                                                                                                        |
| fixed_ips             | {"subnet_id": "8742e4d1-849e-4f83-8357-0996b93d7ec8", "ip_address": "10.0.0.201"}                      |
| id                    | 678f042b-dc2f-4426-b1b0-0d941ab21d5b                                                                   |
| mac_address           | fa:16:3e:73:ad:ef                                                                                      |
| name                  |                                                                                                        |
| network_id            | b526aca8-e8b0-4d39-a7d4-4d4e0ebfe5ed                                                                   |
| security_groups       | 39bc2297-5fc4-426a-b266-43e3a86a03f9                                                                   |
| status                | DOWN                                                                                                   |
| tenant_id             | df2302c143a84de9b6849ef75cc4368c                                                                       |
+-----------------------+--------------------------------------------------------------------------------------------------------+

**********************************************
Associate a floating IP with the port just created
**********************************************

[root@ip-192-169-142-54 ~(keystone_demo)]# neutron floatingip-create --port-id=678f042b-dc2f-4426-b1b0-0d941ab21d5b public
Created a new floatingip:

+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    | 10.0.0.201                           |
| floating_ip_address | 192.169.142.151                      |
| floating_network_id | e5f7d2f3-f924-4158-a111-9dfa2f116e34 |
| id                  | 81e7cd2c-f073-4805-ae4c-06d54db8e52d |
| port_id             | 678f042b-dc2f-4426-b1b0-0d941ab21d5b |
| router_id           | 15aaee00-223f-4bf9-b7e0-a1ff4f97c20e |
| status              | DOWN                                 |
| tenant_id           | df2302c143a84de9b6849ef75cc4368c     |
+---------------------+--------------------------------------+

[root@ip-192-169-142-54 ~(keystone_demo)]# neutron net-list
+--------------------------------------+----------+-------------------------------------------------------+
| id                                   | name     | subnets                                               |
+--------------------------------------+----------+-------------------------------------------------------+
| e5f7d2f3-f924-4158-a111-9dfa2f116e34 | public   | afb7d629-1685-4c3b-a4e3-1bcebeef2844 192.169.142.0/24 |
| b526aca8-e8b0-4d39-a7d4-4d4e0ebfe5ed | vrrp-net | 8742e4d1-849e-4f83-8357-0996b93d7ec8 10.0.0.0/24      |
+--------------------------------------+----------+-------------------------------------------------------+

****************************************************
Detect the ports corresponding to the launched VMs
****************************************************

[root@ip-192-169-142-54 ~(keystone_demo)]# neutron port-list -- --network_id=b526aca8-e8b0-4d39-a7d4-4d4e0ebfe5ed
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                         |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
| 08f28bb2-8abe-4ea0-bd5b-566ef9881bf3 |      | fa:16:3e:38:01:1b | {"subnet_id": "8742e4d1-849e-4f83-8357-0996b93d7ec8", "ip_address": "10.0.0.5"}   |
| 09d3dee4-2ee6-4d2a-b7c9-034f31991606 |      | fa:16:3e:8e:93:e9 | {"subnet_id": "8742e4d1-849e-4f83-8357-0996b93d7ec8", "ip_address": "10.0.0.2"}   |
| 5d3a69c8-2e88-481c-a3aa-df923db6d624 |      | fa:16:3e:13:a4:c5 | {"subnet_id": "8742e4d1-849e-4f83-8357-0996b93d7ec8", "ip_address": "10.0.0.4"}   |
| 678f042b-dc2f-4426-b1b0-0d941ab21d5b |      | fa:16:3e:73:ad:ef | {"subnet_id": "8742e4d1-849e-4f83-8357-0996b93d7ec8", "ip_address": "10.0.0.201"} |
| dce31b48-620c-4265-ab2a-13017f6ed97c |      | fa:16:3e:6a:52:38 | {"subnet_id": "8742e4d1-849e-4f83-8357-0996b93d7ec8", "ip_address": "10.0.0.1"}   |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+

[root@ip-192-169-142-54 ~(keystone_demo)]# nova list
+--------------------------------------+-------------+--------+------------+-------------+-------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks          |
+--------------------------------------+-------------+--------+------------+-------------+-------------------+
| 88efc361-0b0e-487a-8634-07b9782af9bd | UbuntuSRV01 | ACTIVE | -          | Running     | vrrp-net=10.0.0.4 |
| 8f4e4c2e-6049-4451-94c2-1990ee4072ea | UbuntuSRV02 | ACTIVE | -          | Running     | vrrp-net=10.0.0.5 |
+--------------------------------------+-------------+--------+------------+-------------+-------------------+

******************************************************************************
Update both VM ports using the "allowed_address_pairs" feature
******************************************************************************
Without this step Neutron's anti-spoofing rules would drop traffic sent from
the VIP 10.0.0.201, because that address does not match the ports' fixed IPs.
Adding the VIP as an allowed address pair lets either VM legitimately claim it.

[root@ip-192-169-142-54 ~(keystone_demo)]# neutron port-update 08f28bb2-8abe-4ea0-bd5b-566ef9881bf3  --allowed_address_pairs list=true type=dict ip_address=10.0.0.201
Updated port: 08f28bb2-8abe-4ea0-bd5b-566ef9881bf3


[root@ip-192-169-142-54 ~(keystone_demo)]# neutron port-update 5d3a69c8-2e88-481c-a3aa-df923db6d624  --allowed_address_pairs list=true type=dict ip_address=10.0.0.201
Updated port: 5d3a69c8-2e88-481c-a3aa-df923db6d624


***************************************************************
Now verify that the commands above succeeded
***************************************************************

[root@ip-192-169-142-54 ~(keystone_demo)]# neutron port-show 08f28bb2-8abe-4ea0-bd5b-566ef9881bf3
+-----------------------+--------------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                            |
+-----------------------+--------------------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                             |
| allowed_address_pairs | {"ip_address": "10.0.0.201", "mac_address": "fa:16:3e:38:01:1b"}                                 |
| binding:vnic_type     | normal                                                                                           |
| device_id             | 8f4e4c2e-6049-4451-94c2-1990ee4072ea                                                             |
| device_owner          | compute:nova                                                                                     |
| dns_assignment        | {"hostname": "host-10-0-0-5", "ip_address": "10.0.0.5", "fqdn": "host-10-0-0-5.openstacklocal."} |
| dns_name              |                                                                                                  |
| extra_dhcp_opts       |                                                                                                  |
| fixed_ips             | {"subnet_id": "8742e4d1-849e-4f83-8357-0996b93d7ec8", "ip_address": "10.0.0.5"}                  |
| id                    | 08f28bb2-8abe-4ea0-bd5b-566ef9881bf3                                                             |
| mac_address           | fa:16:3e:38:01:1b                                                                                |
| name                  |                                                                                                  |
| network_id            | b526aca8-e8b0-4d39-a7d4-4d4e0ebfe5ed                                                             |
| security_groups       | 39bc2297-5fc4-426a-b266-43e3a86a03f9                                                             |
| status                | ACTIVE                                                                                           |
| tenant_id             | df2302c143a84de9b6849ef75cc4368c                                                                 |
+-----------------------+--------------------------------------------------------------------------------------------------+

[root@ip-192-169-142-54 ~(keystone_demo)]# neutron port-show 5d3a69c8-2e88-481c-a3aa-df923db6d624
+-----------------------+--------------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                            |
+-----------------------+--------------------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                             |
| allowed_address_pairs | {"ip_address": "10.0.0.201", "mac_address": "fa:16:3e:13:a4:c5"}                                 |
| binding:vnic_type     | normal                                                                                           |
| device_id             | 88efc361-0b0e-487a-8634-07b9782af9bd                                                             |
| device_owner          | compute:nova                                                                                     |
| dns_assignment        | {"hostname": "host-10-0-0-4", "ip_address": "10.0.0.4", "fqdn": "host-10-0-0-4.openstacklocal."} |
| dns_name              |                                                                                                  |
| extra_dhcp_opts       |                                                                                                  |
| fixed_ips             | {"subnet_id": "8742e4d1-849e-4f83-8357-0996b93d7ec8", "ip_address": "10.0.0.4"}                  |
| id                    | 5d3a69c8-2e88-481c-a3aa-df923db6d624                                                             |
| mac_address           | fa:16:3e:13:a4:c5                                                                                |
| name                  |                                                                                                  |
| network_id            | b526aca8-e8b0-4d39-a7d4-4d4e0ebfe5ed                                                             |
| security_groups       | 39bc2297-5fc4-426a-b266-43e3a86a03f9                                                             |
| status                | ACTIVE                                                                                           |
| tenant_id             | df2302c143a84de9b6849ef75cc4368c                                                                 |
+-----------------------+--------------------------------------------------------------------------------------------------+



****************************************************************************
At this point we need to log into the Ubuntu VMs, which have no floating
IPs. Proceed as follows: first detect the names of the qrouter and qdhcp
namespaces that have been created on the system
*****************************************************************************
[root@ip-192-169-142-54 ~(keystone_demo)]# ip netns
qrouter-15aaee00-223f-4bf9-b7e0-a1ff4f97c20e
qdhcp-b526aca8-e8b0-4d39-a7d4-4d4e0ebfe5ed

*****************************************************************************
Now, using the command below and the ssh keypair created along with the
VMs, log into each instance to configure the keepalived and apache services
******************************************************************************

[root@ip-192-169-142-54 ~(keystone_demo)]# ip netns exec qdhcp-b526aca8-e8b0-4d39-a7d4-4d4e0ebfe5ed ssh -i oskeyvip.pem ubuntu@10.0.0.4
The authenticity of host '10.0.0.4 (10.0.0.4)' can't be established.
ECDSA key fingerprint is b2:03:72:69:9e:d2:0b:2c:7c:43:47:90:21:42:af:b6.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.4' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-68-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Fri Dec 11 09:38:05 UTC 2015

  System load: 0.6               Memory usage: 2%   Processes:       53
  Usage of /:  57.1% of 1.32GB   Swap usage:   0%   Users logged in: 0

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.



The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

ubuntu@ubuntusrv01:~$ sudo su -
sudo: unable to resolve host ubuntusrv01
root@ubuntusrv01:~# apt-get install keepalived

$ vi  /etc/keepalived/keepalived.conf
vrrp_instance vrrp_group_1 {
 state MASTER
 interface eth0
 virtual_router_id 1
 priority 100
 authentication {
  auth_type PASS
  auth_pass password
 }
 virtual_ipaddress {
  10.0.0.201/24 brd 10.0.0.255 dev eth0
 }
}
:wq

# service keepalived start
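
Before moving on, verify that the MASTER actually holds the VIP (a sketch,
run inside UbuntuSRV01; the output shown is what would be expected):

root@ubuntusrv01:~# ip addr show eth0 | grep 10.0.0.201
    inet 10.0.0.201/24 brd 10.0.0.255 scope global secondary eth0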

root@ubuntusrv01:~# apt-get install apache2 -y
root@ubuntusrv01:~# echo "UbuntuSRV01 is up"> /var/www/html/index.html
root@ubuntusrv01:~# service apache2 restart
 * Restarting web server apache2                                                                 AH00557: apache2: apr_sockaddr_info_get() failed for ubuntusrv01
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
                                                                                          [ OK ]

[root@ip-192-169-142-54 ~(keystone_demo)]# ip netns exec qdhcp-b526aca8-e8b0-4d39-a7d4-4d4e0ebfe5ed ssh -i oskeyvip.pem ubuntu@10.0.0.5

Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-68-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Fri Dec 11 09:38:05 UTC 2015

  System load: 0.6               Memory usage: 2%   Processes:       53
  Usage of /:  57.1% of 1.32GB   Swap usage:   0%   Users logged in: 0

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.



The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

ubuntu@ubuntusrv02:~$ sudo su -
sudo: unable to resolve host ubuntusrv02
root@ubuntusrv02:~# apt-get install keepalived

$ vi  /etc/keepalived/keepalived.conf
vrrp_instance vrrp_group_1 {
 state BACKUP
 interface eth0
 virtual_router_id 1
 priority 50
 authentication {
  auth_type PASS
  auth_pass password
 }
 virtual_ipaddress {
  10.0.0.201/24 brd 10.0.0.255 dev eth0
 }
}
:wq

root@ubuntusrv02:~# apt-get install apache2 -y
root@ubuntusrv02:~# echo "UbuntuSRV02 is up"> /var/www/html/index.html
root@ubuntusrv02:~# service apache2 restart
 * Restarting web server apache2                                                                 AH00557: apache2: apr_sockaddr_info_get() failed for ubuntusrv02
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message

*******************************************************************************
Restart keepalived to load the configuration change on both nodes:
*******************************************************************************

# service keepalived restart

******************************************************************************
The snapshots below demonstrate how FIP 192.169.142.151 works as a VIP,
providing HA for the VMs 10.0.0.4 and 10.0.0.5 (private IPs)
*******************************************************************************
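
The same behaviour can also be checked from the command line: poll the FIP
while keepalived is stopped on the MASTER (a sketch; assumes the FIP is
reachable from the polling host):

$ while true ; do curl -s http://192.169.142.151 ; sleep 1 ; done
UbuntuSRV01 is up
UbuntuSRV01 is up
UbuntuSRV02 is up    <==== after `service keepalived stop` on UbuntuSRV01
UbuntuSRV02 is up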

  

AIO RDO Liberty && several external networks VLAN provider setup

The post below addresses the case when an AIO RDO Liberty node has to have external networks of VLAN type with predefined VLAN tags. A straightforward `packstack --allinone` install doesn't allow this network configuration to be achieved; an external network provider of VLAN type appears to be required. In this particular case the office networks 10.10.10.0/24 (VLAN tag 157), 10.10.57.0/24 (VLAN tag 172) and 10.10.32.0/24 (VLAN tag 200) already exist when the RDO install is running. If demo provisioning was answered "y", then delete router1 and the external network of VXLAN type that was created.

First
***********************************************************
Update /etc/neutron/plugins/ml2/ml2_conf.ini
***********************************************************
[root@ip-192-169-142-52 ml2(keystone_demo)]# cat ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vlan,vxlan
mechanism_drivers =openvswitch
path_mtu = 0
[ml2_type_flat]
[ml2_type_vlan]
network_vlan_ranges = vlan157:157:157,vlan172:172:172,vlan200:200:200
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =10:100
vxlan_group =224.0.0.1
[ml2_type_geneve]
[securitygroup]
enable_security_group = True

**************
Then
**************

# openstack-service restart neutron


***************************************************
Invoke external network provider
***************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-create vlan157 --shared --provider:network_type vlan --provider:segmentation_id 157 --provider:physical_network vlan157 --router:external

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-create --name sub-vlan157 --gateway 10.10.10.1  --allocation-pool start=10.10.10.100,end=10.10.10.200 vlan157 10.10.10.0/24


***********************************************
Create second external network
***********************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-create vlan172 --shared --provider:network_type vlan --provider:segmentation_id 172 --provider:physical_network vlan172  --router:external

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-create --name sub-vlan172 --gateway 10.10.57.1 --allocation-pool start=10.10.57.100,end=10.10.57.200 vlan172 10.10.57.0/24


***********************************************
Create third external network
***********************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-create vlan200 --shared --provider:network_type vlan --provider:segmentation_id 200 --provider:physical_network vlan200  --router:external
[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-create --name sub-vlan200 --gateway 10.10.32.1 --allocation-pool start=10.10.32.100,end=10.10.32.200 vlan200 10.10.32.0/24

***********************************************************************
No need to update the subnets (vs [ 1 ]) and no switch to
"enable_isolated_metadata=True": the Neutron L3 agent configuration
results in qg-<port-id> interfaces being attached to br-int
***********************************************************************


[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-show vlan157
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | b41e4d36-9a63-4631-abb0-6436f2f50e2e |
| mtu                       | 0                                    |
| name                      | vlan157                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan157                              |
| provider:segmentation_id  | 157                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | bb753fc3-f257-4ce5-aa7c-56648648056b |
| tenant_id                 | b18d25d66bbc48b1ad4b855a9c14da70     |
+---------------------------+--------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-show sub-vlan157
+-------------------+------------------------------------------------------------------+
| Field             | Value                                                            |
+-------------------+------------------------------------------------------------------+
| allocation_pools  | {"start": "10.10.10.100", "end": "10.10.10.200"}                 |
| cidr              | 10.10.10.0/24                                                    |
| dns_nameservers   |                                                                  |
| enable_dhcp       | True                                                             |
| gateway_ip        | 10.10.10.1                                                       |
| host_routes       | {"destination": "169.254.169.254/32", "nexthop": "10.10.10.151"} |
| id                | bb753fc3-f257-4ce5-aa7c-56648648056b                             |
| ip_version        | 4                                                                |
| ipv6_address_mode |                                                                  |
| ipv6_ra_mode      |                                                                  |
| name              | sub-vlan157                                                      |
| network_id        | b41e4d36-9a63-4631-abb0-6436f2f50e2e                             |
| subnetpool_id     |                                                                  |
| tenant_id         | b18d25d66bbc48b1ad4b855a9c14da70                                 |
+-------------------+------------------------------------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-show vlan172
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 3714adc9-ab17-4f96-9df2-48a6c0b64513 |
| mtu                       | 0                                    |
| name                      | vlan172                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan172                              |
| provider:segmentation_id  | 172                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 21419f2f-212b-409a-8021-2b4a2ba6532f |
| tenant_id                 | b18d25d66bbc48b1ad4b855a9c14da70     |
+---------------------------+--------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-show sub-vlan172
+-------------------+------------------------------------------------------------------+
| Field             | Value                                                            |
+-------------------+------------------------------------------------------------------+
| allocation_pools  | {"start": "10.10.57.100", "end": "10.10.57.200"}                 |
| cidr              | 10.10.57.0/24                                                    |
| dns_nameservers   |                                                                  |
| enable_dhcp       | True                                                             |
| gateway_ip        | 10.10.57.1                                                       |
| host_routes       | {"destination": "169.254.169.254/32", "nexthop": "10.10.57.151"} |
| id                | 21419f2f-212b-409a-8021-2b4a2ba6532f                             |
| ip_version        | 4                                                                |
| ipv6_address_mode |                                                                  |
| ipv6_ra_mode      |                                                                  |
| name              | sub-vlan172                                                      |
| network_id        | 3714adc9-ab17-4f96-9df2-48a6c0b64513                             |
| subnetpool_id     |                                                                  |
| tenant_id         | b18d25d66bbc48b1ad4b855a9c14da70                                 |
+-------------------+------------------------------------------------------------------+


**************
Next Step
**************

# modprobe 8021q
# ovs-vsctl add-br br-vlan
# ovs-vsctl add-port br-vlan eth1
# vconfig add br-vlan 157

# ovs-vsctl add-br br-vlan2
# ovs-vsctl add-port br-vlan2 eth2
# vconfig add br-vlan2 172

# ovs-vsctl add-br br-vlan3
# ovs-vsctl add-port br-vlan3 eth3
# vconfig add br-vlan3  200
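
On current CentOS 7 the vconfig utility is deprecated; the same VLAN
sub-interfaces can be created with iproute2 instead (an equivalent sketch):

# ip link add link br-vlan  name br-vlan.157  type vlan id 157
# ip link add link br-vlan2 name br-vlan2.172 type vlan id 172
# ip link add link br-vlan3 name br-vlan3.200 type vlan id 200
# ip link set br-vlan.157 up ; ip link set br-vlan2.172 up ; ip link set br-vlan3.200 up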



******************************
Update l3_agent.ini file
******************************
external_network_bridge =
gateway_external_network_id =


**********************************************
/etc/neutron/plugins/ml2/openvswitch_agent.ini
**********************************************
bridge_mappings = vlan157:br-vlan,vlan172:br-vlan2,vlan200:br-vlan3

*************************************
Update Neutron Configuration
*************************************

# openstack-service restart neutron


*******************************************
Make the configuration persistent across reboots
*******************************************
/etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE="eth1"
ONBOOT=yes
OVS_BRIDGE=br-vlan
TYPE=OVSPort
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan

DEVICE=br-vlan
BOOTPROTO=none
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan.157

BOOTPROTO="none"
DEVICE="br-vlan.157"
ONBOOT="yes"
IPADDR="10.10.10.150"
PREFIX="24"
GATEWAY="10.10.10.1"
DNS1="83.221.202.254"
VLAN=yes
NOZEROCONF=yes
USERCTL=no


/etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE="eth2"
ONBOOT=yes
OVS_BRIDGE=br-vlan2
TYPE=OVSPort
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan2

DEVICE=br-vlan2
BOOTPROTO=none
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan2.172

BOOTPROTO="none"
DEVICE="br-vlan2.172"
ONBOOT="yes"
IPADDR="10.10.57.150"
PREFIX="24"
GATEWAY="10.10.57.1"
DNS1="83.221.202.254"
VLAN=yes
NOZEROCONF=yes


/etc/sysconfig/network-scripts/ifcfg-br-vlan3
DEVICE=br-vlan3
BOOTPROTO=none
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan3.200

BOOTPROTO="none"
DEVICE="br-vlan3.200"
ONBOOT="yes"
IPADDR="10.10.32.150"
PREFIX="24"
GATEWAY="10.10.32.1"
DNS1="83.221.202.254"
VLAN=yes
NOZEROCONF=yes
USERCTL=no

/etc/sysconfig/network-scripts/ifcfg-eth3

DEVICE="eth3"
ONBOOT=yes
OVS_BRIDGE=br-vlan3
TYPE=OVSPort
DEVICETYPE="ovs"
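
To bring the running state in line with the files above without a reboot,
cycle the interfaces (a sketch; `systemctl restart network` achieves the same):

# ifup br-vlan  && ifup br-vlan.157
# ifup br-vlan2 && ifup br-vlan2.172
# ifup br-vlan3 && ifup br-vlan3.200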


********************************************
Routing table on AIO RDO Liberty Node
********************************************
[root@ip-192-169-142-52 ~(keystone_admin)]# ip route
default via 10.10.10.1 dev br-vlan.157
10.10.10.0/24 dev br-vlan.157  proto kernel  scope link  src 10.10.10.150
10.10.32.0/24 dev br-vlan3.200  proto kernel  scope link  src 10.10.32.150
10.10.57.0/24 dev br-vlan2.172  proto kernel  scope link  src 10.10.57.150
169.254.0.0/16 dev eth0  scope link  metric 1002
169.254.0.0/16 dev eth1  scope link  metric 1003
169.254.0.0/16 dev eth2  scope link  metric 1004
169.254.0.0/16 dev eth3  scope link  metric 1005
169.254.0.0/16 dev br-vlan3  scope link  metric 1008
169.254.0.0/16 dev br-vlan2  scope link  metric 1009
169.254.0.0/16 dev br-vlan  scope link  metric 1011
192.169.142.0/24 dev eth0  proto kernel  scope link  src 192.169.142.52

****************************************************************************
Notice that the qrouter namespaces attach their qg-<port-id> interfaces
directly to br-int (external_network_bridge is left empty). No switch to
"enable_isolated_metadata=True" is needed, vs [ 1 ]
*****************************************************************************
[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-list | grep vlan
| 3dc90ff7-b1df-4079-aca1-cceedb23f440 | vlan200   | 60181211-ea36-4e4e-8781-f13f743baa19 10.10.32.0/24 |
| 235c8173-d3f8-407e-ad6a-c1d3d423c763 | vlan172   | c7588239-4941-419b-8d27-ccd970acc4ce 10.10.57.0/24 |
| b41e4d36-9a63-4631-abb0-6436f2f50e2e | vlan157   | bb753fc3-f257-4ce5-aa7c-56648648056b 10.10.10.0/24 |

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-vsctl show
40286423-e174-4714-9c82-32d026ef47ca
    Bridge br-vlan
        Port "eth1"
            Interface "eth1"
        Port br-vlan
            Interface br-vlan
                type: internal
        Port phy-br-vlan
            Interface phy-br-vlan
                type: patch
                options: {peer=int-br-vlan}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge "br-vlan2"
        Port "phy-br-vlan2"
            Interface "phy-br-vlan2"
                type: patch
                options: {peer="int-br-vlan2"}
        Port "eth2"
            Interface "eth2"
        Port "br-vlan2"
            Interface "br-vlan2"
                type: internal
    Bridge "br-vlan3"
        Port "br-vlan3"
            Interface "br-vlan3"
                type: internal
        Port "phy-br-vlan3"
            Interface "phy-br-vlan3"
                type: patch
                options: {peer="int-br-vlan3"}
        Port "eth3"
            Interface "eth3"
    Bridge br-int
        fail_mode: secure
        Port "qr-4e77c7a3-b5"
            tag: 3
            Interface "qr-4e77c7a3-b5"
                type: internal
        Port "int-br-vlan3"
            Interface "int-br-vlan3"
                type: patch
                options: {peer="phy-br-vlan3"}
        Port "tap8e684c78-a3"
            tag: 2
            Interface "tap8e684c78-a3"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvoe2761636-b5"
            tag: 4
            Interface "qvoe2761636-b5"
        Port "tap6cd6fadf-31"
            tag: 1
            Interface "tap6cd6fadf-31"
                type: internal
        Port "qg-02f7ff0d-6d"
            tag: 2
            Interface "qg-02f7ff0d-6d"
                type: internal
        Port "qg-943f7831-46"
            tag: 1
            Interface "qg-943f7831-46"
                type: internal
        Port "tap4ef27b41-be"
            tag: 5
            Interface "tap4ef27b41-be"
                type: internal
        Port "qr-f0fd3793-4e"
            tag: 8
            Interface "qr-f0fd3793-4e"
                type: internal
        Port "tapb1435e62-8b"
            tag: 7
            Interface "tapb1435e62-8b"
                type: internal
        Port "qvo1bb76476-05"
            tag: 3
            Interface "qvo1bb76476-05"
        Port "qvocf68fcd8-68"
            tag: 8
            Interface "qvocf68fcd8-68"
        Port "qvo8605f075-25"
            tag: 4
            Interface "qvo8605f075-25"
        Port "qg-08ccc224-1e"
            tag: 7
            Interface "qg-08ccc224-1e"
                type: internal
        Port "tapbb485628-0b"
            tag: 4
            Interface "tapbb485628-0b"
                type: internal
        Port "int-br-vlan2"
            Interface "int-br-vlan2"
                type: patch
                options: {peer="phy-br-vlan2"}
        Port "tapee030534-da"
            tag: 8
            Interface "tapee030534-da"
                type: internal
        Port "qr-4d679697-39"
            tag: 4
            Interface "qr-4d679697-39"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tap9b38c69e-46"
            tag: 6
            Interface "tap9b38c69e-46"
                type: internal
        Port "tapc166022a-54"
            tag: 3
            Interface "tapc166022a-54"
                type: internal
        Port "qvo66d8f235-d4"
            tag: 8
            Interface "qvo66d8f235-d4"
        Port int-br-vlan
            Interface int-br-vlan
                type: patch
                options: {peer=phy-br-vlan}
    ovs_version: "2.4.0"

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns
qdhcp-e826aa22-dee0-478d-8bd7-721336e3824a
qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b
qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440
qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20
qdhcp-eda69965-c6ee-42be-944f-2d61498e4bea
qdhcp-6768214b-b71c-4178-a0fc-774b2a5d59ef
qdhcp-b41e4d36-9a63-4631-abb0-6436f2f50e2e
qdhcp-03812cc9-69c5-492a-9995-985bf6e1ff13
qdhcp-235c8173-d3f8-407e-ad6a-c1d3d423c763
qdhcp-d958a059-f7bd-4f9f-93a3-3499d20a1fe2
qrouter-c1900dab-447f-4f87-80e7-b4c8ca087d28
qrouter-71237c84-59ca-45dc-a6ec-23eb94c4249d

********************************************************************************
Access to the Nova metadata server is provided via neutron-ns-metadata-proxy
running in the corresponding qrouter namespaces (Neutron L3 configuration)
********************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b netstat -antp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name

tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      12548/python2    

[root@ip-192-169-142-52 ~(keystone_admin)]# ps aux | grep 12548
neutron  12548  0.0  0.4 281028 35992 ?        S    18:34   0:00 /usr/bin/python2 /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b --state_path=/var/lib/neutron --metadata_port=9697 --metadata_proxy_user=990 --metadata_proxy_group=988 --verbose --log-file=neutron-ns-metadata-proxy-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b.log --log-dir=/var/log/neutron
root     32665  0.0  0.0 112644   960 pts/8    S+   19:29   0:00 grep --color=auto 12548
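
*********************************************************************************
Inside the qrouter namespace the L3 agent also installs an iptables NAT rule
redirecting 169.254.169.254:80 to local port 9697, where the proxy above is
listening. A quick way to verify it (a sketch, substitute your router's UUID) :-
*********************************************************************************

# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b \
  iptables -t nat -S | grep 169.254.169.254

Expect a PREROUTING rule ending with "-j REDIRECT --to-ports 9697".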

******************************************************************************
OVS flow verification on br-vlan3 and br-vlan2. On each external VLAN network
(vlan172, vlan200) two VMs are pinging each other
****************************************************************************** 

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
 cookie=0x0, duration=3554.739s, table=0, n_packets=33, n_bytes=2074, idle_age=2137, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
 cookie=0x0, duration=4204.459s, table=0, n_packets=2102, n_bytes=109304, idle_age=1, priority=0 actions=NORMAL
[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
 cookie=0x0, duration=3557.643s, table=0, n_packets=33, n_bytes=2074, idle_age=2140, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
 cookie=0x0, duration=4207.363s, table=0, n_packets=2103, n_bytes=109356, idle_age=2, priority=0 actions=NORMAL
[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
 cookie=0x0, duration=3568.225s, table=0, n_packets=33, n_bytes=2074, idle_age=2151, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
 cookie=0x0, duration=4217.945s, table=0, n_packets=2109, n_bytes=109668, idle_age=0, priority=0 actions=NORMAL


[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan2 | grep NORMAL
 cookie=0x0, duration=4140.528s, table=0, n_packets=11, n_bytes=642, idle_age=695, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:172,NORMAL
 cookie=0x0, duration=4225.918s, table=0, n_packets=2113, n_bytes=109876, idle_age=1, priority=0 actions=NORMAL
[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan2 | grep NORMAL
 cookie=0x0, duration=4143.600s, table=0, n_packets=11, n_bytes=642, idle_age=698, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:172,NORMAL
 cookie=0x0, duration=4228.990s, table=0, n_packets=2115, n_bytes=109980, idle_age=0, priority=0 actions=NORMAL
[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan2 | grep NORMAL
 cookie=0x0, duration=4145.912s, table=0, n_packets=11, n_bytes=642, idle_age=700, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:172,NORMAL
 cookie=0x0, duration=4231.302s, table=0, n_packets=2116, n_bytes=110032, idle_age=0, priority=0 actions=NORMAL

********************************************************************************
Next question: how does local VLAN tag 7 get generated?
Run the following commands :-
********************************************************************************

 [root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-show vlan200
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 3dc90ff7-b1df-4079-aca1-cceedb23f440 |
| mtu                       | 0                                    |
| name                      | vlan200                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan200                              |
| provider:segmentation_id  | 200                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 60181211-ea36-4e4e-8781-f13f743baa19 |
| tenant_id                 | b18d25d66bbc48b1ad4b855a9c14da70     |
+---------------------------+--------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns | grep 3dc90ff7-b1df-4079-aca1-cceedb23f440
qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tapb1435e62-8b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.32.100  netmask 255.255.255.0  broadcast 10.10.32.255
        inet6 fe80::f816:3eff:fee3:19f2  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:e3:19:f2  txqueuelen 0  (Ethernet)
        RX packets 27  bytes 1526 (1.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.10.32.1      0.0.0.0         UG    0      0        0 tapb1435e62-8b
10.10.32.0      0.0.0.0         255.255.255.0   U     0      0        0 tapb1435e62-8b

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-vsctl show | grep b1435e62-8b
        Port "tapb1435e62-8b"
            Interface "tapb1435e62-8b"
*********************************************
Fragment from `ovs-vsctl show`
*********************************************
Port "tapb1435e62-8b"
            tag: 7
            Interface "tapb1435e62-8b"


*************************************************************************
The next appearance of VLAN tag 7, as expected, is on qg-08ccc224-1e,
the outgoing interface of the qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b
namespace.
*************************************************************************
[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-08ccc224-1e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.32.101  netmask 255.255.255.0  broadcast 10.10.32.255
        inet6 fe80::f816:3eff:fed4:e7d  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:d4:0e:7d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 28  bytes 1704 (1.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-f0fd3793-4e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 30.0.0.1  netmask 255.255.255.0  broadcast 30.0.0.255
        inet6 fe80::f816:3eff:fea9:5422  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:a9:54:22  txqueuelen 0  (Ethernet)
        RX packets 68948  bytes 7192868 (6.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 68859  bytes 7185051 (6.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.10.32.1      0.0.0.0         UG    0      0        0 qg-08ccc224-1e
10.10.32.0      0.0.0.0         255.255.255.0   U     0      0        0 qg-08ccc224-1e
30.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 qr-f0fd3793-4e

*******************************************************************************************
Now verify the Neutron router connecting the qrouter namespace, whose interface carries tag 7, with the qdhcp namespace that was created to launch the instances.
*******************************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron router-list | grep RoutesDSA
| a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b | RoutesDSA  | {"network_id": "3dc90ff7-b1df-4079-aca1-cceedb23f440", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "60181211-ea36-4e4e-8781-f13f743baa19", "ip_address": "10.10.32.101"}]} | False       | False |

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns | grep a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b
qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns | grep 3dc90ff7-b1df-4079-aca1-cceedb23f440
qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tapb1435e62-8b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.32.100  netmask 255.255.255.0  broadcast 10.10.32.255
        inet6 fe80::f816:3eff:fee3:19f2  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:e3:19:f2  txqueuelen 0  (Ethernet)
        RX packets 27  bytes 1526 (1.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

**************************
Finally, run :-
**************************
[root@ip-192-169-142-52 ~(keystone_admin)]# neutron router-port-list RoutesDSA
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| 08ccc224-1e23-491a-8eec-c4db0ec00f02 |      | fa:16:3e:d4:0e:7d | {"subnet_id": "60181211-ea36-4e4e-8781-f13f743baa19", "ip_address": "10.10.32.101"} |
| f0fd3793-4e5a-467a-bd3c-e87bc9063d26 |      | fa:16:3e:a9:54:22 | {"subnet_id": "0c962484-3e48-4d86-a17f-16b0b1e5fc4d", "ip_address": "30.0.0.1"}     |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-list | grep 0c962484-3e48-4d86-a17f-16b0b1e5fc4d
| 0c962484-3e48-4d86-a17f-16b0b1e5fc4d |               | 30.0.0.0/24   | {"start": "30.0.0.2", "end": "30.0.0.254"}       |

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-list | grep 60181211-ea36-4e4e-8781-f13f743baa19
| 60181211-ea36-4e4e-8781-f13f743baa19 | sub-vlan200   | 10.10.32.0/24 | {"start": "10.10.32.100", "end": "10.10.32.200"} |



************************************
OVS Flows at br-vlan3
************************************


[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL

cookie=0x0, duration=15793.182s, table=0, n_packets=33, n_bytes=2074, idle_age=14376, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
 cookie=0x0, duration=16442.902s, table=0, n_packets=8221, n_bytes=427492, idle_age=1, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
cookie=0x0, duration=15796.300s, table=0, n_packets=33, n_bytes=2074, idle_age=14379, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
 cookie=0x0, duration=16446.020s, table=0, n_packets=8223, n_bytes=427596, idle_age=0, priority=0 actions=NORMAL
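
**********************************************************************************
Note: the local tag is host-local, assigned by neutron-openvswitch-agent when it
wires a port into br-int; the flows above rewrite local tag 7 to the real
segmentation id 200 on egress. The reverse translation on ingress may be checked
on br-int (a sketch, port numbers will differ) :-
**********************************************************************************

# ovs-ofctl dump-flows br-int | grep "dl_vlan=200"

Expect a flow on the patch port coming from br-vlan3 with
actions=mod_vlan_vid:7,NORMAL.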


****************************************************************
Another OVS flow test on br-int for vlan157
****************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20 ssh -i oskeyvls.pem cirros@10.10.10.101
$ ping -c 5 10.10.10.108
PING 10.10.10.108 (10.10.10.108): 56 data bytes
64 bytes from 10.10.10.108: seq=0 ttl=63 time=0.706 ms
64 bytes from 10.10.10.108: seq=1 ttl=63 time=0.772 ms
64 bytes from 10.10.10.108: seq=2 ttl=63 time=0.734 ms
64 bytes from 10.10.10.108: seq=3 ttl=63 time=0.740 ms
64 bytes from 10.10.10.108: seq=4 ttl=63 time=0.785 ms

--- 10.10.10.108 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.706/0.747/0.785 ms



  
  

******************************************************************************
   Testing VM1<=>VM2 via floating IPs on external vlan net 10.10.10.0/24
*******************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# nova list --all
+--------------------------------------+--------------+----------------------------------+--------+------------+-------------+---------------------------------+
| ID                                   | Name         | Tenant ID                        | Status | Task State | Power State | Networks                        |
+--------------------------------------+--------------+----------------------------------+--------+------------+-------------+---------------------------------+
| a3d5ecf6-0fdb-4aa3-815f-171871eccb77 | CirrOSDevs01 | f16de8f8497d4f92961018ed836dee19 | ACTIVE | -          | Running     | private=40.0.0.17, 10.10.10.101 |
| 1b65f5db-d7d5-4e92-9a7c-60e7866ff8e5 | CirrOSDevs02 | f16de8f8497d4f92961018ed836dee19 | ACTIVE | -          | Running     | private=40.0.0.18, 10.10.10.110 |
| 46b7dad1-3a7d-4d94-8407-a654cca42750 | VF23Devs01   | f16de8f8497d4f92961018ed836dee19 | ACTIVE | -          | Running     | private=40.0.0.19, 10.10.10.111 |
+--------------------------------------+--------------+----------------------------------+--------+------------+-------------+---------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns
qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20
qdhcp-b41e4d36-9a63-4631-abb0-6436f2f50e2e
qrouter-c1900dab-447f-4f87-80e7-b4c8ca087d28

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20 ssh cirros@10.10.10.110
The authenticity of host '10.10.10.110 (10.10.10.110)' can't be established.
RSA key fingerprint is b8:d3:ec:10:70:a7:da:d4:50:13:a8:2d:01:ba:e4:83.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.10.10.110' (RSA) to the list of known hosts.
cirros@10.10.10.110's password:
$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:F1:6E:E5 
          inet addr:40.0.0.18  Bcast:40.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fef1:6ee5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:367 errors:0 dropped:0 overruns:0 frame:0
          TX packets:291 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:36442 (35.5 KiB)  TX bytes:32019 (31.2 KiB)

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$ curl http://169.254.169.254/latest/meta-data/public-ipv4
10.10.10.110$

$ ssh fedora@10.10.10.111
Host '10.10.10.111' is not in the trusted hosts file.
(fingerprint md5 23:c0:fb:fd:74:80:2f:12:d3:09:2f:9e:dd:19:f1:74)
Do you want to continue connecting? (y/n) y
fedora@10.10.10.111's password:
Last login: Sun Dec 13 15:52:43 2015 from 10.10.10.101

[fedora@vf23devs01 ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400
        inet 40.0.0.19  netmask 255.255.255.0  broadcast 40.0.0.255
        inet6 fe80::f816:3eff:fea4:1a52  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:a4:1a:52  txqueuelen 1000  (Ethernet)
        RX packets 283  bytes 30213 (29.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 303  bytes 35022 (34.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[fedora@vf23devs01 ~]$ curl http://169.254.169.254/latest/meta-data/public-ipv4
10.10.10.111[fedora@vf23devs01 ~]$
[fedora@vf23devs01 ~]$ curl http://169.254.169.254/latest/meta-data/instance-id
i-00000009[fedora@vf23devs01 ~]$
[fedora@vf23devs01 ~]$

Running DVR with External network provider (flat) on CentOS 7.2 RDO Liberty

The test below targets two potential problems :-
  1. Creating an HAProxy/Keepalived 3-node Controller in RDO Mitaka with routers
supporting VRRP && DVR at the same time (coming up in the Mitaka release), per
https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md,
in regard to running DVR on the Compute nodes along with dvr_snat on the HA
3-node Controller.
  2. Creating a DVR system working with two flat external networks. Details of
the conversion may be seen in DVR with Two external networks via flat network provider on CentOS 7.2 RDO Liberty.
Core tuning was done per http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/,
a question which was raised several times at ask.openstack.org but was never
properly addressed.

*******************************************************************************
1. Setup Controller/Network + Compute ML2&OVS&VXLAN via answer-file
*******************************************************************************

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_MANILA_INSTALL=n
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_SAHARA_INSTALL=n
CONFIG_HEAT_INSTALL=n
CONFIG_TROVE_INSTALL=n
CONFIG_IRONIC_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=n
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.127

CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_USE_SUBNETS=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_SAHARA_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_ENABLE_RDO_TESTING=n
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_SSL_CACERT_FILE=/etc/pki/tls/certs/selfcert.crt
CONFIG_SSL_CACERT_KEY_FILE=/etc/pki/tls/private/selfkey.key
CONFIG_SSL_CERT_DIR=~/packstackca/
CONFIG_SSL_CACERT_SELFSIGN=y
CONFIG_SELFSIGN_CACERT_SUBJECT_C=--
CONFIG_SELFSIGN_CACERT_SUBJECT_ST=State
CONFIG_SELFSIGN_CACERT_SUBJECT_L=City
CONFIG_SELFSIGN_CACERT_SUBJECT_O=openstack
CONFIG_SELFSIGN_CACERT_SUBJECT_OU=packstack
CONFIG_SELFSIGN_CACERT_SUBJECT_CN=ip-192-169-142-127.ip.secureserver.net
CONFIG_SELFSIGN_CACERT_SUBJECT_MAIL=admin@ip-192-169-142-127.ip.secureserver.net
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_DB_PURGE_ENABLE=True
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_EMAIL=root@localhost
CONFIG_KEYSTONE_ADMIN_USERNAME=admin
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_API_VERSION=v2.0
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_KEYSTONE_IDENTITY_BACKEND=sql
CONFIG_KEYSTONE_LDAP_URL=ldap://192.169.142.127
CONFIG_KEYSTONE_LDAP_USER_DN=
CONFIG_KEYSTONE_LDAP_USER_PASSWORD=
CONFIG_KEYSTONE_LDAP_SUFFIX=
CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one
CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1
CONFIG_KEYSTONE_LDAP_USER_SUBTREE=
CONFIG_KEYSTONE_LDAP_USER_FILTER=
CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1
CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE
CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n
CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN=
CONFIG_KEYSTONE_LDAP_USER_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE=
CONFIG_KEYSTONE_LDAP_GROUP_FILTER=
CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_GROUP_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_USE_TLS=n
CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR=
CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE=
CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_DB_PURGE_ENABLE=True
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=5G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES=
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_MANILA_DB_PW=PW_PLACEHOLDER
CONFIG_MANILA_KS_PW=PW_PLACEHOLDER
CONFIG_MANILA_BACKEND=generic
CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false
CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https
CONFIG_MANILA_NETAPP_LOGIN=admin
CONFIG_MANILA_NETAPP_PASSWORD=
CONFIG_MANILA_NETAPP_SERVER_HOSTNAME=
CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_MANILA_NETAPP_SERVER_PORT=443
CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*)
CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE=
CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root
CONFIG_MANILA_NETAPP_VSERVER=
CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true
CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s
CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares
CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2
CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu
CONFIG_MANILA_SERVICE_INSTANCE_PASSWORD=ubuntu
CONFIG_MANILA_NETWORK_TYPE=neutron
CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY=
CONFIG_MANILA_NETWORK_STANDALONE_NETMASK=
CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID=
CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE=
CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4
CONFIG_MANILA_GLUSTERFS_SERVERS=
CONFIG_MANILA_GLUSTERFS_NATIVE_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_VOLUME_PATTERN=
CONFIG_MANILA_GLUSTERFS_TARGET=
CONFIG_MANILA_GLUSTERFS_MOUNT_POINT_BASE=
CONFIG_MANILA_GLUSTERFS_NFS_SERVER_TYPE=gluster
CONFIG_MANILA_GLUSTERFS_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_GANESHA_SERVER_IP=
CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER
CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER
CONFIG_NOVA_DB_PURGE_ENABLE=True
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_MANAGER=nova.compute.manager.ComputeManager
CONFIG_VNC_SSL_CERT=
CONFIG_VNC_SSL_KEY=
CONFIG_NOVA_COMPUTE_PRIVIF=
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_VPNAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,flat
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_HORIZON_SECRET_KEY=a25b5ece9db24e2aba8d3a2b4d908ca5
CONFIG_HORIZON_SSL_CERT=
CONFIG_HORIZON_SSL_KEY=
CONFIG_HORIZON_SSL_CACERT=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=976496a551b94296
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_IMAGE_NAME=cirros
CONFIG_PROVISION_IMAGE_URL=http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
CONFIG_PROVISION_IMAGE_FORMAT=qcow2
CONFIG_PROVISION_IMAGE_SSH_USER=cirros
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=PW_PLACEHOLDER
CONFIG_PROVISION_TEMPEST_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_OVS_BRIDGE=n
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_CEILOMETER_COORDINATION_BACKEND=redis
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_REDIS_MASTER_HOST=192.169.142.127
CONFIG_REDIS_PORT=6379
CONFIG_REDIS_HA=n
CONFIG_REDIS_SLAVE_HOSTS=
CONFIG_REDIS_SENTINEL_HOSTS=
CONFIG_REDIS_SENTINEL_CONTACT_HOST=
CONFIG_REDIS_SENTINEL_PORT=26379
CONFIG_REDIS_SENTINEL_QUORUM=2
CONFIG_REDIS_MASTER_NAME=mymaster
CONFIG_SAHARA_DB_PW=PW_PLACEHOLDER
CONFIG_SAHARA_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_DB_PW=PW_PLACEHOLDER
CONFIG_TROVE_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_NOVA_USER=trove
CONFIG_TROVE_NOVA_TENANT=services
CONFIG_TROVE_NOVA_PW=PW_PLACEHOLDER
CONFIG_NAGIOS_PW=PW_PLACEHOLDER
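
**********************************************************************
The deployment itself is launched against this answer file in the
usual way (the file name below is illustrative) :-
**********************************************************************

# packstack --answer-file=./answer-file-dvr.txt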

************************************************************************************ 
Three VNICs on each node: MGMT (eth0), VTEPS (eth1), EXT interface (eth2)
************************************************************************************
Eth2 interfaces are attached to the VMs via a libvirt subnet :-

[root@fedora23wks ~]# cat external1.xml
<network>
   <name>external1</name>
   <uuid>d0a7964b-f93d-40c2-b749-b609aed52cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr4' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.10.10.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.10.10.2' end='10.10.10.254' />
     </dhcp>
   </ip>
</network>

# virsh net-define external1.xml
# virsh net-start external1
# virsh net-autostart external1

************************************************
Management network created via
************************************************
[root@fedora23wks ~]# cat openstackvms.xml
<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
</network>
***********************************************
VTEPS network created via
***********************************************
[root@fedora23wks ~]# cat vteps.xml
<network>
   <name>vteps</name>
   <uuid>d2e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='12.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='12.0.0.2' end='12.0.0.254' />
     </dhcp>
   </ip>
 </network>
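
The management and VTEPS networks are defined and started the same way :-

# virsh net-define openstackvms.xml
# virsh net-start openstackvms
# virsh net-autostart openstackvms

# virsh net-define vteps.xml
# virsh net-start vteps
# virsh net-autostart vteps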


*****************************************************************************
2. Tune all nodes to work with external network provider :-
*****************************************************************************
Set external_network_bridge to an empty value in /etc/neutron/l3_agent.ini; this enables the use of external provider networks. The files ml2_conf.ini && openvswitch_agent.ini are already tuned via the answer-file directives. Then run


# openstack-service restart neutron
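
Afterwards it is worth confirming that all agents came back alive
(openvswitch, L3, DHCP, metadata should all report ":-)") :-

# neutron agent-list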

*****************************************************************************
On Controller/Network node create external flat network:-
******************************************************************************
 [root@ip-192-169-142-127 ~(keystone_admin)]#  neutron net-create public1  --provider:network_type flat --provider:physical_network physnet1 --router:external

[root@ip-192-169-142-127 ~(keystone_admin)]#    neutron subnet-create --gateway 10.10.10.1 --allocation-pool start=10.10.10.100,end=10.10.10.150 --disable-dhcp --name public1_subnet public1 10.10.10.0/24
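
If the tenant router does not exist yet, a minimal sketch of creating it and
wiring it up to public1 (router/subnet names here match the ones queried
further below, adjust to your tenant) :-

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron router-create RouterDSA
[root@ip-192-169-142-127 ~(keystone_admin)]# neutron router-gateway-set RouterDSA public1
[root@ip-192-169-142-127 ~(keystone_admin)]# neutron router-interface-add RouterDSA sub_demo_net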

******************
On all nodes 
******************

# cat /etc/sysconfig/network-scripts/ifcfg-br-ex
    DEVICE="br-ex"
    NM_CONTROLLED="no"
    ONBOOT="yes"
    TYPE="OVSIntPort"
    OVS_BRIDGE=br-ex
    DEVICETYPE="ovs"


# cat /etc/sysconfig/network-scripts/ifcfg-eth2
    DEVICE="eth2"
    ONBOOT="yes"
    TYPE="OVSPort"
    DEVICETYPE="ovs"
    OVS_BRIDGE=br-ex
    NM_CONTROLLED=no
    IPV6INIT=no

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart


When done, tune the DVR configs per "RDO Liberty DVR Neutron workflow on CentOS 7.2"  https://www.linux.com/community/blogs/133-general-linux/859376-rdo-liberty-rc2-dvr-neutron-workflow-on-centos-71
and restart the nodes. Make sure the VXLAN tunnels are present.
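
A quick check on any node (a sketch) :-

# ovs-vsctl show | grep -A 3 "type: vxlan"

Every node should show a vxlan port whose options carry local_ip/remote_ip
addresses from the 12.0.0.0/24 VTEPS subnet.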

*********************************
Compute node configuration
*********************************

[root@ip-192-169-142-137 ~]# ip netns
fip-bb5509d1-84a3-489e-847f-c07573b8f6a1
qrouter-8a103913-f272-46ee-95de-38562860c3b1

[root@ip-192-169-142-137 ~]# ip netns exec fip-bb5509d1-84a3-489e-847f-c07573b8f6a1 ip route
default via 10.10.10.1 dev fg-a6949885-91
10.10.10.0/24 dev fg-a6949885-91  proto kernel  scope link  src 10.10.10.102
10.10.10.101 via 169.254.31.28 dev fpr-8a103913-f
10.10.10.103 via 169.254.31.28 dev fpr-8a103913-f
169.254.31.28/31 dev fpr-8a103913-f  proto kernel  scope link  src 169.254.31.29

[root@ip-192-169-142-137 ~]# ip netns exec fip-bb5509d1-84a3-489e-847f-c07573b8f6a1 ip a| grep "inet"
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 169.254.31.29/31 scope global fpr-8a103913-f
    inet6 fe80::44bd:31ff:fed2:b39f/64 scope link
    inet 10.10.10.102/24 brd 10.10.10.255 scope global fg-a6949885-91
    inet6 fe80::f816:3eff:fecf:84a5/64 scope link

[root@ip-192-169-142-137 ~]# ip netns exec qrouter-8a103913-f272-46ee-95de-38562860c3b1 ip a| grep "inet"
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 169.254.31.28/31 scope global rfp-8a103913-f
    inet 10.10.10.101/32 brd 10.10.10.101 scope global rfp-8a103913-f
    inet 10.10.10.103/32 brd 10.10.10.103 scope global rfp-8a103913-f

    inet6 fe80::54be:36ff:fea5:918c/64 scope link
    inet 50.0.0.1/24 brd 50.0.0.255 scope global qr-98432f0d-0c
    inet6 fe80::f816:3eff:fe37:7da7/64 scope link


***************************************************************************************
The outgoing interface fg-a6949885-91 of the fip namespace is now attached to br-int.
The Neutron flow is forwarded from fg-a6949885-91 to br-ex via the patch port pair
{phy-br-ex,int-br-ex} and gets outside through the eth2 interface.
***************************************************************************************
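
This can be confirmed directly; the command below should print br-int,
not br-ex :-

# ovs-vsctl port-to-br fg-a6949885-91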

[root@ip-192-169-142-137 ~]# ovs-vsctl show
6b29bb4b-b7e0-42d7-94ba-662cd321bf82
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex                  <======= patch port pair
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"

    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "qvo997b88c5-a8"
            tag: 1
            Interface "qvo997b88c5-a8"
        Port int-br-ex
            Interface int-br-ex                  <========= patch port pair
                type: patch
                options: {peer=phy-br-ex}
        Port "fg-a6949885-91"
            tag: 2
            Interface "fg-a6949885-91"
                type: internal

        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo2be937c0-cc"
            tag: 1
            Interface "qvo2be937c0-cc"
        Port "qr-98432f0d-0c"
            tag: 1
            Interface "qr-98432f0d-0c"
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0c00007f"
            Interface "vxlan-0c00007f"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="12.0.0.137", out_key=flow, remote_ip="12.0.0.127"}
    ovs_version: "2.4.0"



**************
Controller
**************
[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qdhcp-e722c424-9f72-4236-a81f-77f79e097274 ip route
  default via 50.0.0.1 dev tap7f80c809-9e
  50.0.0.0/24 dev tap7f80c809-9e  proto kernel  scope link  src 50.0.0.10

**************
Compute
**************
[root@ip-192-169-142-137 ~]# ip netns exec qrouter-8a103913-f272-46ee-95de-38562860c3b1 ip route
  50.0.0.0/24 dev qr-98432f0d-0c  proto kernel  scope link  src 50.0.0.1
  169.254.31.28/31 dev rfp-8a103913-f  proto kernel  scope link  src 169.254.31.28

[root@ip-192-169-142-137 ~]# ip netns exec fip-bb5509d1-84a3-489e-847f-c07573b8f6a1 ip route
  default via 10.10.10.1 dev fg-a6949885-91
  10.10.10.0/24 dev fg-a6949885-91  proto kernel  scope link  src 10.10.10.102

  10.10.10.101 via 169.254.31.28 dev fpr-8a103913-f
  10.10.10.103 via 169.254.31.28 dev fpr-8a103913-f
  169.254.31.28/31 dev fpr-8a103913-f  proto kernel  scope link  src    169.254.31.29
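
The glue between the two namespaces is policy routing inside qrouter: per
floating IP the L3 agent adds a "from <fixed_ip>" source rule pointing to an
extra routing table whose default route leaves via rfp-8a103913-f. A way to
inspect it (a sketch, table numbers are assigned by the agent) :-

# ip netns exec qrouter-8a103913-f272-46ee-95de-38562860c3b1 ip rule
# ip netns exec qrouter-8a103913-f272-46ee-95de-38562860c3b1 ip route show table <N>

where <N> is a table number reported by `ip rule`.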



Compare with the same report on Compute nodes in
https://www.linux.com/community/blogs/133-general-linux/859376-rdo-liberty-rc2-dvr-neutron-workflow-on-centos-71,
where the fg-xxxxx interface is attached to bridge br-ex (the case of bridged external networking).

************************ 
On Compute node 
************************ 
[root@ip-192-169-142-137 ~]# ip netns
fip-bb5509d1-84a3-489e-847f-c07573b8f6a1
qrouter-8a103913-f272-46ee-95de-38562860c3b1
 

  Cloud VM VF23Devs01 is downloading 4.0 GB from the Internet while
  `iftop -i eth2` is running on the Compute node console.


   *********************************
   On Controller/Network node
   *********************************
 [root@ip-192-169-142-127 ~(keystone_admin)]# ip netns
    qdhcp-e722c424-9f72-4236-a81f-77f79e097274
    snat-8a103913-f272-46ee-95de-38562860c3b1
    qrouter-8a103913-f272-46ee-95de-38562860c3b1

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron router-port-list RouterDSA
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| 98432f0d-0ce7-4ed0-84e4-427f3b70f359 |      | fa:16:3e:37:7d:a7 | {"subnet_id": "63727bd4-7586-4803-8cb1-c2a8b3cf990e", "ip_address": "50.0.0.1"}     |
| bc854d58-dd9c-4d88-9b9a-10fc69f2fbc4 |      | fa:16:3e:0d:99:24 | {"subnet_id": "a0935f2a-03ef-4ae9-902e-f791b95528fa", "ip_address": "10.10.10.100"} |
| caa27d49-8383-414f-ba29-39f73ac31ea0 |      | fa:16:3e:79:28:c4 | {"subnet_id": "63727bd4-7586-4803-8cb1-c2a8b3cf990e", "ip_address": "50.0.0.11"}    |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron l3-agent-list-hosting-router RouterDSA
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| id                                   | host                                   | admin_state_up | alive | ha_state |
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| 0d1cf08d-0d6c-4004-912f-eff90adc92a1 | ip-192-169-142-137.ip.secureserver.net | True           | :-)   |          |
| c42c97c0-e6a1-43a1-b1ed-f6e6c087b490 | ip-192-169-142-127.ip.secureserver.net | True           | :-)   |          |
+--------------------------------------+----------------------------------------+----------------+-------+----------+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron net-list
+--------------------------------------+----------+----------------------------------------------------+
| id                                   | name     | subnets                                            |
+--------------------------------------+----------+----------------------------------------------------+
| bb5509d1-84a3-489e-847f-c07573b8f6a1 | public1  | a0935f2a-03ef-4ae9-902e-f791b95528fa 10.10.10.0/24 |
| e722c424-9f72-4236-a81f-77f79e097274 | demo_net | 63727bd4-7586-4803-8cb1-c2a8b3cf990e 50.0.0.0/24   |
+--------------------------------------+----------+----------------------------------------------------+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron net-show public1
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | bb5509d1-84a3-489e-847f-c07573b8f6a1 |
| mtu                       | 0                                    |
| name                      | public1                              |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | a0935f2a-03ef-4ae9-902e-f791b95528fa |
| tenant_id                 | 2acd2c3b654f49e9a497dc1ad2807c9a     |
+---------------------------+--------------------------------------+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron subnet-list
+--------------------------------------+----------------+---------------+--------------------------------------------------+
| id                                   | name           | cidr          | allocation_pools                                 |
+--------------------------------------+----------------+---------------+--------------------------------------------------+
| a0935f2a-03ef-4ae9-902e-f791b95528fa | public1_subnet | 10.10.10.0/24 | {"start": "10.10.10.100", "end": "10.10.10.150"} |
| 63727bd4-7586-4803-8cb1-c2a8b3cf990e | sub_demo_net   | 50.0.0.0/24   | {"start": "50.0.0.10", "end": "50.0.0.254"}      |
+--------------------------------------+----------------+---------------+--------------------------------------------------+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron subnet-show public1_subnet
+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| allocation_pools  | {"start": "10.10.10.100", "end": "10.10.10.150"} |
| cidr              | 10.10.10.0/24                                    |
| dns_nameservers   |                                                  |
| enable_dhcp       | False                                            |
| gateway_ip        | 10.10.10.1                                       |
| host_routes       |                                                  |
| id                | a0935f2a-03ef-4ae9-902e-f791b95528fa             |
| ip_version        | 4                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | public1_subnet                                   |
| network_id        | bb5509d1-84a3-489e-847f-c07573b8f6a1             |
| subnetpool_id     |                                                  |
| tenant_id         | 2acd2c3b654f49e9a497dc1ad2807c9a                 |
+-------------------+--------------------------------------------------+


  
  


SNAT download via the Controller/Network node: cloud VM VF23Devs02 is
downloading 1.4 GB from the Internet while `iftop -i eth2` is running on the
Controller node console.




************************************************************
Final configuration on Controller/Network node
************************************************************
[root@ip-192-169-142-127 neutron(keystone_admin)]# cat l3_agent.ini | grep -v ^#|grep -v ^$
[DEFAULT]
debug = False
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = True
external_network_bridge =
gateway_external_network_id =
metadata_port = 9697
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5
enable_metadata_proxy = True
router_delete_namespaces = False
agent_mode = dvr_snat
[AGENT]

[root@ip-192-169-142-127 ml2(keystone_admin)]# cat ml2_conf.ini | grep -v ^#|grep -v ^$
[ml2]
type_drivers = vxlan,flat
tenant_network_types = vxlan
mechanism_drivers =openvswitch,l2population

path_mtu = 0
[ml2_type_flat]
flat_networks =*
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[ml2_type_geneve]
[securitygroup]
enable_security_group = True

[root@ip-192-169-142-127 ml2(keystone_admin)]# cat openvswitch_agent.ini | grep -v ^#|grep -v ^$
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =12.0.0.127
bridge_mappings =physnet1:br-ex
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = True
arp_responder = True
prevent_arp_spoofing = True
enable_distributed_routing = True
drop_flows_on_start=False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

******************************************
Final configuration on Compute node
******************************************

[root@ip-192-169-142-137 neutron]# cat l3_agent.ini | grep -v ^#|grep -v ^$
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge =
gateway_external_network_id =
agent_mode = dvr
[AGENT]

[root@ip-192-169-142-137 ml2]# cat ml2_conf.ini | grep -v ^#|grep -v ^$
[ml2]
type_drivers = vxlan,flat
tenant_network_types = vxlan
mechanism_drivers =openvswitch,l2population
path_mtu = 0
[ml2_type_flat]
flat_networks =*
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[ml2_type_geneve]
[securitygroup]
enable_security_group = True
[agent]
l2_population=True 


[root@ip-192-169-142-137 ml2]# cat openvswitch_agent.ini  | grep -v ^#|grep -v ^$
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =12.0.0.137
bridge_mappings =physnet1:br-ex
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = True
arp_responder = True
prevent_arp_spoofing = True
enable_distributed_routing = True
drop_flows_on_start=False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
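
After editing the configs the Neutron services have to be restarted on both
nodes; the blanket restart used earlier works here as well :-

# openstack-service restart neutron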


DVR with Two external networks via flat network provider on CentOS 7.2 RDO Liberty

This post is actually a step-by-step procedure for building a DVR system working with two external networks created via the flat network provider, a question which was raised several times at ask.openstack.org but was never properly addressed.

************************************************************************************
1. Setup Controller/Network + Compute ML2&OVS&VXLAN via answer-file
************************************************************************************

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_MANILA_INSTALL=n
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_SAHARA_INSTALL=n
CONFIG_HEAT_INSTALL=n
CONFIG_TROVE_INSTALL=n
CONFIG_IRONIC_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=n
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.127

CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_USE_SUBNETS=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_SAHARA_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_ENABLE_RDO_TESTING=n
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_SSL_CACERT_FILE=/etc/pki/tls/certs/selfcert.crt
CONFIG_SSL_CACERT_KEY_FILE=/etc/pki/tls/private/selfkey.key
CONFIG_SSL_CERT_DIR=~/packstackca/
CONFIG_SSL_CACERT_SELFSIGN=y
CONFIG_SELFSIGN_CACERT_SUBJECT_C=--
CONFIG_SELFSIGN_CACERT_SUBJECT_ST=State
CONFIG_SELFSIGN_CACERT_SUBJECT_L=City
CONFIG_SELFSIGN_CACERT_SUBJECT_O=openstack
CONFIG_SELFSIGN_CACERT_SUBJECT_OU=packstack
CONFIG_SELFSIGN_CACERT_SUBJECT_CN=ip-192-169-142-127.ip.secureserver.net
CONFIG_SELFSIGN_CACERT_SUBJECT_MAIL=admin@ip-192-169-142-127.ip.secureserver.net
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_DB_PURGE_ENABLE=True
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_EMAIL=root@localhost
CONFIG_KEYSTONE_ADMIN_USERNAME=admin
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_API_VERSION=v2.0
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_KEYSTONE_IDENTITY_BACKEND=sql
CONFIG_KEYSTONE_LDAP_URL=ldap://192.169.142.127
CONFIG_KEYSTONE_LDAP_USER_DN=
CONFIG_KEYSTONE_LDAP_USER_PASSWORD=
CONFIG_KEYSTONE_LDAP_SUFFIX=
CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one
CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1
CONFIG_KEYSTONE_LDAP_USER_SUBTREE=
CONFIG_KEYSTONE_LDAP_USER_FILTER=
CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1
CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE
CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n
CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN=
CONFIG_KEYSTONE_LDAP_USER_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE=
CONFIG_KEYSTONE_LDAP_GROUP_FILTER=
CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_GROUP_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_USE_TLS=n
CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR=
CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE=
CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_DB_PURGE_ENABLE=True
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=5G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES=
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_MANILA_DB_PW=PW_PLACEHOLDER
CONFIG_MANILA_KS_PW=PW_PLACEHOLDER
CONFIG_MANILA_BACKEND=generic
CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false
CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https
CONFIG_MANILA_NETAPP_LOGIN=admin
CONFIG_MANILA_NETAPP_PASSWORD=
CONFIG_MANILA_NETAPP_SERVER_HOSTNAME=
CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_MANILA_NETAPP_SERVER_PORT=443
CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*)
CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE=
CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root
CONFIG_MANILA_NETAPP_VSERVER=
CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true
CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s
CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares
CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2
CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu
CONFIG_MANILA_SERVICE_INSTANCE_PASSWORD=ubuntu
CONFIG_MANILA_NETWORK_TYPE=neutron
CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY=
CONFIG_MANILA_NETWORK_STANDALONE_NETMASK=
CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID=
CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE=
CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4
CONFIG_MANILA_GLUSTERFS_SERVERS=
CONFIG_MANILA_GLUSTERFS_NATIVE_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_VOLUME_PATTERN=
CONFIG_MANILA_GLUSTERFS_TARGET=
CONFIG_MANILA_GLUSTERFS_MOUNT_POINT_BASE=
CONFIG_MANILA_GLUSTERFS_NFS_SERVER_TYPE=gluster
CONFIG_MANILA_GLUSTERFS_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_GANESHA_SERVER_IP=
CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER
CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER
CONFIG_NOVA_DB_PURGE_ENABLE=True
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_MANAGER=nova.compute.manager.ComputeManager
CONFIG_VNC_SSL_CERT=
CONFIG_VNC_SSL_KEY=
CONFIG_NOVA_COMPUTE_PRIVIF=
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_VPNAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,flat
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_HORIZON_SECRET_KEY=a25b5ece9db24e2aba8d3a2b4d908ca5
CONFIG_HORIZON_SSL_CERT=
CONFIG_HORIZON_SSL_KEY=
CONFIG_HORIZON_SSL_CACERT=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=976496a551b94296
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_IMAGE_NAME=cirros
CONFIG_PROVISION_IMAGE_URL=http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
CONFIG_PROVISION_IMAGE_FORMAT=qcow2
CONFIG_PROVISION_IMAGE_SSH_USER=cirros
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=PW_PLACEHOLDER
CONFIG_PROVISION_TEMPEST_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_OVS_BRIDGE=n
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_CEILOMETER_COORDINATION_BACKEND=redis
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_REDIS_MASTER_HOST=192.169.142.127
CONFIG_REDIS_PORT=6379
CONFIG_REDIS_HA=n
CONFIG_REDIS_SLAVE_HOSTS=
CONFIG_REDIS_SENTINEL_HOSTS=
CONFIG_REDIS_SENTINEL_CONTACT_HOST=
CONFIG_REDIS_SENTINEL_PORT=26379
CONFIG_REDIS_SENTINEL_QUORUM=2
CONFIG_REDIS_MASTER_NAME=mymaster
CONFIG_SAHARA_DB_PW=PW_PLACEHOLDER
CONFIG_SAHARA_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_DB_PW=PW_PLACEHOLDER
CONFIG_TROVE_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_NOVA_USER=trove
CONFIG_TROVE_NOVA_TENANT=services
CONFIG_TROVE_NOVA_PW=PW_PLACEHOLDER
CONFIG_NAGIOS_PW=PW_PLACEHOLDER

************************************************************************************
 Four VNICs on each node: MGMT (eth0), VTEPS (eth1), EXT
 interfaces (eth2, eth3)
************************************************************************************
Interfaces eth2 and eth3 are attached to VMs via the following libvirt networks :-

[root@fedora23wks ~]# cat external1.xml
<network>
   <name>external1</name>
   <uuid>d0a7964b-f93d-40c2-b749-b609aed52cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr4' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.10.10.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.10.10.2' end='10.10.10.254' />
     </dhcp>
   </ip>
</network>


[root@fedora23wks ~]# cat external2.xml
<network>
   <name>external2</name>
   <!-- uuid and mac omitted: libvirt generates unique values; reusing
        external1's uuid here would make virsh treat this file as a
        redefinition of external1 -->
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr5' stp='on' delay='0' />
   <ip address='10.10.50.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.10.50.2' end='10.10.50.254' />
     </dhcp>
   </ip>
</network>

# virsh net-define external1.xml
# virsh net-start external1
# virsh net-autostart external1


# virsh net-define external2.xml
# virsh net-start external2
# virsh net-autostart external2

************************************************
Management network created via
************************************************
[root@fedora23wks ~]# cat openstackvms.xml
<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
</network>
***********************************************
VTEPS network created via
***********************************************
[root@fedora23wks ~]# cat vteps.xml
<network>
   <name>vteps</name>
   <uuid>d2e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='12.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='12.0.0.2' end='12.0.0.254' />
     </dhcp>
   </ip>
 </network>
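
With all four networks defined and started, additional VNICs may be attached
to a cluster VM either in its domain XML or on the fly. A minimal sketch via
virsh (the guest name CentOS72Controller is hypothetical) :-

# virsh attach-interface CentOS72Controller network external1 \
    --model virtio --config
# virsh attach-interface CentOS72Controller network external2 \
    --model virtio --config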

*******************************************
Creating two flat networks as admin
*******************************************

# neutron net-create public1  --provider:network_type flat --provider:physical_network physnet1 --router:external --shared
# neutron net-create public2  --provider:network_type flat --provider:physical_network physnet2 --router:external --shared


# neutron subnet-create --gateway 10.10.10.1 --allocation-pool start=10.10.10.100,end=10.10.10.150 --disable-dhcp --name public1_subnet public1 10.10.10.0/24
# neutron subnet-create --gateway 10.10.50.1 --allocation-pool start=10.10.50.100,end=10.10.50.150 --disable-dhcp --name public2_subnet public2 10.10.50.0/24
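
With the external flat networks in place, a private network and a DVR router
uplinked to public1 can be created. A minimal sketch — the names RouterPub1,
demo_network1, demo_subnet1 and the CIDR are hypothetical :-

# neutron router-create RouterPub1 --distributed True
# neutron router-gateway-set RouterPub1 public1
# neutron net-create demo_network1
# neutron subnet-create --name demo_subnet1 demo_network1 50.0.0.0/24
# neutron router-interface-add RouterPub1 demo_subnet1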

***********************************************************************
On all nodes, l3_agent.ini && openvswitch_agent.ini should
contain the entries shown below
************************************************************************
[root@ip-192-169-142-127 neutron(keystone_admin)]# cat l3_agent.ini
[DEFAULT]
debug = False
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = True
external_network_bridge =
gateway_external_network_id =

metadata_port = 9697
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5
enable_metadata_proxy = True
router_delete_namespaces = False
agent_mode = dvr_snat
#on compute
# agent_mode =dvr 
[AGENT]

[root@ip-192-169-142-127 ml2(keystone_admin)]# cat openvswitch_agent.ini
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =12.0.0.127
network_vlan_ranges = physnet1,physnet2
bridge_mappings =physnet1:br-ex,physnet2:br-ex1

enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = True
arp_responder = True
prevent_arp_spoofing = True
enable_distributed_routing = True
drop_flows_on_start=False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

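A minimal sketch of pushing these settings with crudini (assuming crudini is
installed; on compute nodes set agent_mode to dvr instead) :-

# crudini --set /etc/neutron/l3_agent.ini DEFAULT agent_mode dvr_snat
# crudini --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs \
    bridge_mappings physnet1:br-ex,physnet2:br-ex1
# crudini --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent \
    enable_distributed_routing True
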
********************************************************************************
When the updates above have been committed on all nodes && `openstack-service
restart neutron` has been run, configure the ifcfg-* files on all nodes as
shown below. The sequence of steps matters for the network service to restart
successfully.
********************************************************************************

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-br-ex1
DEVICE="br-ex1"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex1
DEVICETYPE="ovs"

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth3
DEVICE="eth3"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex1
NM_CONTROLLED=no
IPV6INIT=no

*******************************************************************************
Now bring up the two OVS bridges and two OVS ports (eth2 belongs to public1,
eth3 belongs to public2) via the script start.sh
*******************************************************************************
#!/bin/bash -x
chkconfig network on ;
systemctl stop NetworkManager ;
systemctl disable NetworkManager ;
service network restart

When done, tune the DVR configs per "RDO Liberty DVR Neutron workflow on CentOS 7.2"  https://www.linux.com/community/blogs/133-general-linux/859376-rdo-liberty-rc2-dvr-neutron-workflow-on-centos-71
and restart the nodes. Make sure the VXLAN tunnels are present.
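
To confirm the VXLAN tunnels after reboot, something like the following may be
run on each node :-

# ovs-vsctl list-ports br-tun
# ovs-vsctl show | grep -A 1 "type: vxlan"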

*****************
Verification
*****************

[root@ip-192-169-142-137 network-scripts]# ovs-vsctl show
cf8aca0d-1b18-4b88-8bdd-b5bdc5196176
    Bridge "br-ex1"
        Port "phy-br-ex1"                              <==== veth pair
            Interface "phy-br-ex1"
                type: patch
                options: {peer="int-br-ex1"}
        Port "eth3"
            Interface "eth3"
        Port "br-ex1"
            Interface "br-ex1"
                type: internal

    Bridge br-int
        fail_mode: secure
        Port "qr-a0a7d1ec-ad"
            tag: 3
            Interface "qr-a0a7d1ec-ad"
                type: internal
        Port "qvob03fdb03-fc"
            tag: 3
            Interface "qvob03fdb03-fc"
        Port int-br-ex
            Interface int-br-ex                <===== veth pair
                type: patch
                options: {peer=phy-br-ex}

        Port "fg-7b5266a1-03"          <==== outgoing interface of fip-namespace1
            tag: 4
            Interface "fg-7b5266a1-03"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "qvoc7a65c42-82"
            tag: 1
            Interface "qvoc7a65c42-82"
        Port "qr-034141ad-ae"
        tag: 1            Interface "qr-034141ad-ae"
                type: internal
        Port "fg-cfd8afb9-fe"           <==== outgoing interface of fip-namespace2
            tag: 2
            Interface "fg-cfd8afb9-fe"
                type: internal
        Port "int-br-ex1"
            Interface "int-br-ex1"                <====    veth pair                               
                type: patch
                options: {peer="phy-br-ex1"}

        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0c00007f"
            Interface "vxlan-0c00007f"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="12.0.0.137", out_key=flow, remote_ip="12.0.0.127"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex              <==== veth pair
                type: patch
                options: {peer=int-br-ex}
        Port "eth2"
            Interface "eth2"

    ovs_version: "2.4.0"

***************************
Compute Node
***************************
[root@ip-192-169-142-137 ~]# ip netns
fip-393c5f85-e34a-4b0a-b5fc-582e464e9403
qrouter-7e33b300-104c-4415-bfbe-c212e4a45424
fip-15494dea-f35d-4572-a49c-9e8231cb9559
qrouter-83d7e1b2-125b-4d62-8cd7-eaf3cffe39c0

[root@ip-192-169-142-137 ~]# ip netns exec fip-393c5f85-e34a-4b0a-b5fc-582e464e9403 ip route
default via 10.10.10.1 dev fg-7b5266a1-03
10.10.10.0/24 dev fg-7b5266a1-03  proto kernel  scope link  src 10.10.10.102

10.10.10.101 via 169.254.31.238 dev fpr-7e33b300-1
169.254.31.238/31 dev fpr-7e33b300-1  proto kernel  scope link  src 169.254.31.239

[root@ip-192-169-142-137 ~]# ip netns exec fip-15494dea-f35d-4572-a49c-9e8231cb9559 ip route
default via 10.10.50.1 dev fg-cfd8afb9-fe
10.10.50.0/24 dev fg-cfd8afb9-fe  proto kernel  scope link  src 10.10.50.102

10.10.50.101 via 169.254.31.28 dev fpr-83d7e1b2-1
169.254.31.28/31 dev fpr-83d7e1b2-1  proto kernel  scope link  src 169.254.31.29

[root@ip-192-169-142-137 ~]# ip netns exec qrouter-7e33b300-104c-4415-bfbe-c212e4a45424 ip route
70.0.0.0/24 dev qr-a0a7d1ec-ad  proto kernel  scope link  src 70.0.0.1
169.254.31.238/31 dev rfp-7e33b300-1  proto kernel  scope link  src 169.254.31.238

[root@ip-192-169-142-137 ~]# ip netns exec qrouter-83d7e1b2-125b-4d62-8cd7-eaf3cffe39c0 ip route
50.0.0.0/24 dev qr-034141ad-ae  proto kernel  scope link  src 50.0.0.1
169.254.31.28/31 dev rfp-83d7e1b2-1  proto kernel  scope link  src 169.254.31.28

[root@ip-192-169-142-137 ~]# ip netns exec fip-393c5f85-e34a-4b0a-b5fc-582e464e9403 ip a| grep "inet"
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 169.254.31.239/31 scope global fpr-7e33b300-1
    inet6 fe80::6016:cfff:fed9:74c2/64 scope link
    inet 10.10.10.102/24 brd 10.10.10.255 scope global fg-7b5266a1-03
    inet6 fe80::f816:3eff:fe8d:431d/64 scope link

[root@ip-192-169-142-137 ~]# ip netns exec fip-15494dea-f35d-4572-a49c-9e8231cb9559 ip a| grep "inet"
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 169.254.31.29/31 scope global fpr-83d7e1b2-1
    inet6 fe80::d8b1:1bff:fe28:264/64 scope link
    inet 10.10.50.102/24 brd 10.10.50.255 scope global fg-cfd8afb9-fe
    inet6 fe80::f816:3eff:feb5:cb7a/64 scope link

[root@ip-192-169-142-137 ~]# ip netns exec qrouter-7e33b300-104c-4415-bfbe-c212e4a45424  ip a| grep "inet"
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 169.254.31.238/31 scope global rfp-7e33b300-1
    inet 10.10.10.101/32 brd 10.10.10.101 scope global rfp-7e33b300-1
    inet 10.10.10.103/32 brd 10.10.10.103 scope global rfp-7e33b300-1
    inet6 fe80::f6:d3ff:fecf:a5d5/64 scope link
    inet 70.0.0.1/24 brd 70.0.0.255 scope global qr-a0a7d1ec-ad
    inet6 fe80::f816:3eff:fe80:34a7/64 scope link

[root@ip-192-169-142-137 ~]# ip netns exec qrouter-83d7e1b2-125b-4d62-8cd7-eaf3cffe39c0 ip a| grep "inet"
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 169.254.31.28/31 scope global rfp-83d7e1b2-1
    inet 10.10.50.103/32 brd 10.10.50.103 scope global rfp-83d7e1b2-1
    inet 10.10.50.101/32 brd 10.10.50.101 scope global rfp-83d7e1b2-1
    inet6 fe80::4c90:16ff:fe74:ec6d/64 scope link
    inet 50.0.0.1/24 brd 50.0.0.255 scope global qr-034141ad-ae
    inet6 fe80::f816:3eff:feae:1104/64 scope link
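
External reachability can be verified by pinging the libvirt gateways from
inside each fip-namespace :-

# ip netns exec fip-393c5f85-e34a-4b0a-b5fc-582e464e9403 ping -c 2 10.10.10.1
# ip netns exec fip-15494dea-f35d-4572-a49c-9e8231cb9559 ping -c 2 10.10.50.1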

   [ Image: Three node deployment ]

   References
   1. http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/
   2. http://www.linux.com/community/blogs/133-general-linux/874900-running-dvr-with-external-network-provider-flat-on-centos-72-rdo-liberty/

Hackery setting up RDO Kilo on CentOS 7.2 with Mongodb && Nagios up and running as of 01/08/2016

I have noticed several questions (ask.openstack.org, stackoverflow.com [ 1 ],[ 2 ])
regarding an ongoing issue with mongodb-server and nagios when
installing RDO Kilo 2015.1.1 on CentOS 7.2 via packstack. At the moment
I see the hack provided below, which might be applied as a pre-installation
step or as a fix after the initial packstack crash. A bug has been submitted
to bugzilla.redhat.com.

Due to https://bugzilla.redhat.com/show_bug.cgi?id=1296844 ,
to avoid a packstack crash during the mongodb.pp run, perform as root
(steps 4-8 are also consolidated in the script after this list) :-

1. Download rdo-release-kilo-1.noarch.rpm
2. # rpm -iv   rdo-release-kilo-1.noarch.rpm
3. # yum -y install openstack-packstack
4. # yum -y install mongodb-server python-pymongo
5. # cd /etc
6. # rm -f mongodb.conf
7. # touch mongod.conf
8. # ln -s /etc/mongod.conf mongodb.conf
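
A minimal consolidated sketch of steps 4-8 above (run as root) :-

#!/bin/bash -x
yum -y install mongodb-server python-pymongo
cd /etc
rm -f mongodb.conf
touch mongod.conf
ln -s /etc/mongod.conf mongodb.conf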

************************
Verify output :-
************************
[root@ip-192-169-142-57 etc]# ls -l mongo*

lrwxrwxrwx. 1 root root   16 Jan  8 15:57 mongodb.conf -> /etc/mongod.conf
-rw-r--r--. 1 root root 1943 Dec 26  2014 mongodb-shard.conf
-rw-r--r--. 1 root root    0 Jan  8 15:56 mongod.conf

9.  # cd
10. # packstack --gen-answer-file answerAIO.txt

*******************************
11. Update answerAIO.txt
*******************************

      CONFIG_KEYSTONE_SERVICE_NAME=httpd

12. #  packstack --answer-file=./answerAIO.txt

[root@ip-192-169-142-57 ~]# packstack --answer-file=./answerAIO.txt
Welcome to the Packstack setup utility

The installation log file is available at: /var/tmp/packstack/20160108-160051-vRoOZ_/openstack-setup.log

Installing:
Clean Up                                             [ DONE ]
Discovering ip protocol version                      [ DONE ]
Setting up ssh keys                                  [ DONE ]
Preparing servers                                    [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Adding pre install manifest entries                  [ DONE ]
Setting up CACERT                                    [ DONE ]
Adding AMQP manifest entries                         [ DONE ]
Adding MariaDB manifest entries                      [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty[ DONE ]
Adding Keystone manifest entries                     [ DONE ]
Adding Glance Keystone manifest entries              [ DONE ]
Adding Glance manifest entries                       [ DONE ]
Adding Cinder Keystone manifest entries              [ DONE ]
Checking if the Cinder server has a cinder-volumes vg[ DONE ]
Adding Cinder manifest entries                       [ DONE ]
Adding Nova API manifest entries                     [ DONE ]
Adding Nova Keystone manifest entries                [ DONE ]
Adding Nova Cert manifest entries                    [ DONE ]
Adding Nova Conductor manifest entries               [ DONE ]
Creating ssh keys for Nova migration                 [ DONE ]
Gathering ssh host keys for Nova migration           [ DONE ]
Adding Nova Compute manifest entries                 [ DONE ]
Adding Nova Scheduler manifest entries               [ DONE ]
Adding Nova VNC Proxy manifest entries               [ DONE ]
Adding OpenStack Network-related Nova manifest entries[ DONE ]
Adding Nova Common manifest entries                  [ DONE ]
Adding Neutron FWaaS Agent manifest entries          [ DONE ]
Adding Neutron LBaaS Agent manifest entries          [ DONE ]
Adding Neutron API manifest entries                  [ DONE ]
Adding Neutron Keystone manifest entries             [ DONE ]
Adding Neutron L3 manifest entries                   [ DONE ]
Adding Neutron L2 Agent manifest entries             [ DONE ]
Adding Neutron DHCP Agent manifest entries           [ DONE ]
Adding Neutron Metering Agent manifest entries       [ DONE ]
Adding Neutron Metadata Agent manifest entries       [ DONE ]
Checking if NetworkManager is enabled and running    [ DONE ]
Adding OpenStack Client manifest entries             [ DONE ]
Adding Horizon manifest entries                      [ DONE ]
Adding Swift Keystone manifest entries               [ DONE ]
Adding Swift builder manifest entries                [ DONE ]
Adding Swift proxy manifest entries                  [ DONE ]
Adding Swift storage manifest entries                [ DONE ]
Adding Swift common manifest entries                 [ DONE ]
Adding Provisioning Demo manifest entries            [ DONE ]
Adding Provisioning Glance manifest entries          [ DONE ]
Adding MongoDB manifest entries                      [ DONE ]
Adding Redis manifest entries                        [ DONE ]
Adding Ceilometer manifest entries                   [ DONE ]
Adding Ceilometer Keystone manifest entries          [ DONE ]
Adding Nagios server manifest entries                [ DONE ]
Adding Nagios host manifest entries                  [ DONE ]
Adding post install manifest entries                 [ DONE ]
Copying Puppet modules and manifests                 [ DONE ]
Applying 192.169.142.57_prescript.pp
192.169.142.57_prescript.pp:                         [ DONE ]         
Applying 192.169.142.57_amqp.pp
Applying 192.169.142.57_mariadb.pp
192.169.142.57_amqp.pp:                              [ DONE ]       
192.169.142.57_mariadb.pp:                           [ DONE ]       
Applying 192.169.142.57_keystone.pp
Applying 192.169.142.57_glance.pp
Applying 192.169.142.57_cinder.pp
192.169.142.57_keystone.pp:                          [ DONE ]        
192.169.142.57_cinder.pp:                            [ DONE ]        
192.169.142.57_glance.pp:                            [ DONE ]        
Applying 192.169.142.57_api_nova.pp
192.169.142.57_api_nova.pp:                          [ DONE ]        
Applying 192.169.142.57_nova.pp
192.169.142.57_nova.pp:                              [ DONE ]    
Applying 192.169.142.57_neutron.pp
192.169.142.57_neutron.pp:                           [ DONE ]       
Applying 192.169.142.57_osclient.pp
Applying 192.169.142.57_horizon.pp
192.169.142.57_osclient.pp:                          [ DONE ]        
192.169.142.57_horizon.pp:                           [ DONE ]        
Applying 192.169.142.57_ring_swift.pp
192.169.142.57_ring_swift.pp:                        [ DONE ]          
Applying 192.169.142.57_swift.pp
Applying 192.169.142.57_provision_demo.pp
Applying 192.169.142.57_provision_glance
192.169.142.57_swift.pp:                             [ DONE ]              
192.169.142.57_provision_demo.pp:                    [ DONE ]              
192.169.142.57_provision_glance:                     [ DONE ]              
Applying 192.169.142.57_mongodb.pp
Applying 192.169.142.57_redis.pp
192.169.142.57_mongodb.pp:                           [ DONE ]       
192.169.142.57_redis.pp:                             [ DONE ]       
Applying 192.169.142.57_ceilometer.pp
192.169.142.57_ceilometer.pp:                        [ DONE ]          
Applying 192.169.142.57_nagios.pp
Applying 192.169.142.57_nagios_nrpe.pp
192.169.142.57_nagios.pp:                            [ DONE ]           
192.169.142.57_nagios_nrpe.pp:                       [ DONE ]    
Applying 192.169.142.57_postscript.pp
192.169.142.57_postscript.pp:                        [ DONE ]          
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

 **** Installation completed successfully ******

Additional information:
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
 * Warning: NetworkManager is active on 192.169.142.57. OpenStack networking currently does not work on systems that have the Network Manager service enabled.
 * File /root/keystonerc_admin has been created on OpenStack client host 192.169.142.57. To use the command line tools you need to source the file.
 * To access the OpenStack Dashboard browse to http://192.169.142.57/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
 * To use Nagios, browse to http://192.169.142.57/nagios username: nagiosadmin, password: 8596f9100d0d48d0
 * The installation log file is available at: /var/tmp/packstack/20160108-160051-vRoOZ_/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20160108-160051-vRoOZ_/manifests
[root@ip-192-169-142-57 ~]# vi answerAIO.txt
You have mail in /var/spool/mail/root
[root@ip-192-169-142-57 ~]# . keystonerc_admin

[root@ip-192-169-142-57 ~(keystone_admin)]# nova-manage version
2015.1.1-1.el7

[root@ip-192-169-142-57 ~(keystone_admin)]# netstat -antp | grep mongod

tcp        0      0 192.169.142.57:27017    0.0.0.0:*               LISTEN      6557/mongod        
tcp        0      0 192.169.142.57:27017    192.169.142.57:56898    ESTABLISHED 6557/mongod        
tcp        0      0 192.169.142.57:27017    192.169.142.57:56883    ESTABLISHED 6557/mongod        
tcp        0      0 192.169.142.57:27017    192.169.142.57:56897    ESTABLISHED 6557/mongod        
tcp        0      0 192.169.142.57:27017    192.169.142.57:56884    ESTABLISHED 6557/mongod        
tcp        0      0 192.169.142.57:27017    192.169.142.57:56895    ESTABLISHED 6557/mongod        
tcp        0      0 192.169.142.57:27017    192.169.142.57:56896    ESTABLISHED 6557/mongod        
tcp        0      0 192.169.142.57:27017    192.169.142.57:56900    ESTABLISHED 6557/mongod        
tcp        0      0 192.169.142.57:27017    192.169.142.57:56899    ESTABLISHED 6557/mongod        
tcp        0      0 192.169.142.57:27017    192.169.142.57:56906    ESTABLISHED 6557/mongod        
tcp        0      0 192.169.142.57:27017    192.169.142.57:56904    ESTABLISHED 6557/mongod        
tcp        0      0 192.169.142.57:27017    192.169.142.57:56905    ESTABLISHED 6557/mongod
    
[root@ip-192-169-142-57 ~(keystone_admin)]# ps -ef | grep 6557
mongodb   6557     1  0 16:22 ?        00:00:01 /usr/bin/mongod --quiet -f /etc/mongodb.conf run



[root@ip-192-169-142-57 ~(keystone_admin)]# cat /etc/mongodb.conf | grep -v ^#|grep -v ^$
logpath=/var/log/mongodb/mongodb.log
logappend=true
bind_ip = 192.169.142.57
fork=true
dbpath=/var/lib/mongodb
pidfilepath=/var/run/mongodb/mongod.pid
journal = true
noauth=true
smallfiles = true


  

Python API for "boot from image creates new volume" RDO Liberty

The post below addresses several questions posted at ask.openstack.org.
In particular, the code below doesn't require the volume UUID to be hard-coded
to start a server attached to a bootable Cinder LVM volume created from a glance
image whose name is passed to the script via the command line. In the same way,
the cinder volume name and the instance name may be passed to the script via CLI.

Place the following files in the current directory :-

[root@ip-192-169-142-127 api(keystone_admin)]# cat credentials.py
#!/usr/bin/env python
import os

def get_keystone_creds():
    d = {}
    d['username'] = os.environ['OS_USERNAME']
    d['password'] = os.environ['OS_PASSWORD']
    d['auth_url'] = os.environ['OS_AUTH_URL']
    d['tenant_name'] = os.environ['OS_TENANT_NAME']
    return d

def get_nova_creds():
    d = {}
    d['username'] = os.environ['OS_USERNAME']
    d['api_key'] = os.environ['OS_PASSWORD']
    d['auth_url'] = os.environ['OS_AUTH_URL']
    d['project_id'] = os.environ['OS_TENANT_NAME']
    return d
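
Note: the legacy novaclient Client(**creds) constructor takes the password and
tenant under the keyword names api_key and project_id, which is why
get_nova_creds() maps them this way.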

[root@ip-192-169-142-127 api(keystone_admin)]# cat  startServer.py
#!/usr/bin/env python
import sys
import os
import time
from novaclient.v2.client import Client
from credentials import get_nova_creds

total = len(sys.argv)
cmdargs = str(sys.argv)
print ("The total numbers of args passed to the script: %d " % total)
print ("Args list: %s " % cmdargs)
print ("First argument: %s" % str(sys.argv[1]))

creds = get_nova_creds()
nova = Client(**creds)
if not nova.keypairs.findall(name="oskeyadm0302"):
    with open(os.path.expanduser('~/.ssh/id_rsa.pub')) as fpubkey:
        nova.keypairs.create(name="oskeyadm0302", public_key=fpubkey.read())

# Creating bootable volume

image = nova.images.find(name=str(sys.argv[1]))
flavor = nova.flavors.find(name="m1.small")
volume = nova.volumes.create(5,display_name="Ubuntu1510LVM",
               volume_type="lvms", imageRef=image.id )

# Wait until the volume download is done

status = volume.status
while ( status == 'creating' or status == 'downloading'):
    time.sleep(15)
    print "status: %s" % status
    volume = nova.volumes.get(volume.id)
    status = volume.status
print "status: %s" % status

# Select tenant's network

nova.networks.list()
network = nova.networks.find(label="demo_network1")
nics = [{'net-id': network.id}]

block_dev_mapping = {'vda': volume.id }

# Starting nova instance

instance = nova.servers.create(name="Ubuntu1510Devs", image='',
                  flavor=flavor,
                  availability_zone="nova:ip-192-169-142-137.ip.secureserver.net",
                  key_name="oskeyadm0302", nics=nics,       
                  block_device_mapping=block_dev_mapping)

# Poll at 5 second intervals, until the status is no longer 'BUILD'
status = instance.status
while status == 'BUILD':
    time.sleep(5)
    # Retrieve the instance again so the status field updates
    instance = nova.servers.get(instance.id)
    status = instance.status
print "status: %s" % status


[root@ip-192-169-142-127 api(keystone_admin)]# cat assignFIP.py

#!/usr/bin/env python
import os
import time
from novaclient.v2.client import Client
from credentials import get_nova_creds

# Assign floating IP for active instance

creds = get_nova_creds()
nova = Client(**creds)
nova.floating_ip_pools.list()
floating_ip = nova.floating_ips.create(nova.floating_ip_pools.list()[0].name)
instance = nova.servers.find(name="Ubuntu1510Devs")
instance.add_floating_ip(floating_ip)



[root@ip-192-169-142-127 api(keystone_admin)]# /usr/bin/python  \
startServer.pyc  Ubuntu1510Cloud-image

The total numbers of args passed to the script: 2
Args list: ['startServer.pyc', 'Ubuntu1510Cloud-image']


status: creating
status: downloading
status: downloading
status: available
status: ACTIVE

[root@ip-192-169-142-127 api(keystone_admin)]# nova list
+--------------------------------------+----------------+---------+------------+-------------+------------------------------------------+
| ID                                   | Name           | Status  | Task State | Power State | Networks                                 |
+--------------------------------------+----------------+---------+------------+-------------+------------------------------------------+
| e6ffa475-c026-4033-bb83-f3d32e5bf491 | Ubuntu1510Devs | ACTIVE  | -          | Running     | demo_network1=50.0.0.17                  |
| 4ceb4c64-f347-40a8-81a0-d032286cbd16 | VF23Devs0137   | SHUTOFF | -          | Shutdown    | private_admin=70.0.0.15, 192.169.142.181 |
| 6c0e29bb-2936-47f3-b94b-a9b00cc5bee6 | VF23Devs0157   | SHUTOFF | -          | Shutdown    | private_admin=70.0.0.16, 192.169.142.182 |
+--------------------------------------+----------------+---------+------------+-------------+------------------------------------------+

*************************************
Check the ports allocated to the server
*************************************

[root@ip-192-169-142-127 api(keystone_admin)]# neutron port-list --device-id e6ffa475-c026-4033-bb83-f3d32e5bf491
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                        |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| 53d18ed3-2ec1-487d-a975-0150ebcae23b |      | fa:16:3e:af:a1:21 | {"subnet_id": "5a148b53-780e-4282-8cbf-bf5e05624e5c", "ip_address": "50.0.0.17"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+

[root@ip-192-169-142-127 api(keystone_admin)]# python  assignFIP.pyc

[root@ip-192-169-142-127 api(keystone_admin)]# nova list
+--------------------------------------+----------------+---------+------------+-------------+------------------------------------------+
| ID                                   | Name           | Status  | Task State | Power State | Networks                                 |
+--------------------------------------+----------------+---------+------------+-------------+------------------------------------------+
| 5fd95200-4447-4866-868f-244071227640 | Ubuntu1510Devs | ACTIVE  | -          | Running     | demo_network1=50.0.0.20, 192.169.142.190 |
| 4ceb4c64-f347-40a8-81a0-d032286cbd16 | VF23Devs0137   | ACTIVE  | -          | Running     | private_admin=70.0.0.15, 192.169.142.181 |
| 6c0e29bb-2936-47f3-b94b-a9b00cc5bee6 | VF23Devs0157   | SHUTOFF | -          | Shutdown    | private_admin=70.0.0.16, 192.169.142.182 |
+--------------------------------------+----------------+---------+------------+-------------+---

  

 [root@ip-192-169-142-127 api(keystone_admin)]# nova show Ubuntu1510Devs

+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-SRV-ATTR:host                 | ip-192-169-142-137.ip.secureserver.net                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname | ip-192-169-142-137.ip.secureserver.net                  |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000e                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2016-02-04T14:49:56.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2016-02-04T14:49:48Z                                     |
| demo_network1 network                | 50.0.0.20, 192.169.142.190                               |
| flavor                               | m1.small (2)                                             |
| hostId                               | fcbdcecdd81fc89ce3ff23e041c16d119b5860926feb0c6e165791f6 |
| id                                   | 5fd95200-4447-4866-868f-244071227640                     |
| image                                | Attempt to boot from volume - no image supplied          |
| key_name                             | oskeyadm0302                                             |
| metadata                             | {}                                                       |
| name                                 | Ubuntu1510Devs                                           |
| os-extended-volumes:volumes_attached | [{"id": "bbc3f6e7-cc6e-4e7a-8e34-dc329293e157"}]         |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | c90f8ee371d04850afd2eab63628cfca                         |
| updated                              | 2016-02-04T14:49:55Z                                     |
| user_id                              | 4551168e044a43a9ae63cc7c8fe08b94                         |
+--------------------------------------+----------------------------------------------------------+
[root@ip-192-169-142-127 api(keystone_admin)]# cinder list

+--------------------------------------+--------+------------------+---------------+------+-------------+----------+-------------+--------------------------------------+
|                  ID                  | Status | Migration Status |      Name     | Size | Volume Type | Bootable | Multiattach |             Attached to              |
+--------------------------------------+--------+------------------+---------------+------+-------------+----------+-------------+--------------------------------------+
| a154f434-16e7-4ef7-9305-9d557369e819 | in-use |        -         |   VF23LVMS02  |  5   |     lvms    |   true   |    False    | 6c0e29bb-2936-47f3-b94b-a9b00cc5bee6 |
| bbc3f6e7-cc6e-4e7a-8e34-dc329293e157 | in-use |        -         | Ubuntu1510LVM |  5   |     lvms    |   true   |    False    | 5fd95200-4447-4866-868f-244071227640 |
+--------------------------------------+--------+------------------+---------------+------+-------------+----------+-------------+--------------------------------------+

References
1. http://www.ibm.com/developerworks/cloud/library/cl-openstack-pythonapis/
2. http://docs.openstack.org/developer/python-novaclient/api/novaclient.v1_1.volumes.html

Setup Swift as Glance backend on RDO Liberty (CentOS 7.2)

The post below presumes that your testing Swift storage is located somewhere
on the workstation (say /dev/sdb1), is about 25 GB (XFS), and that the following
steps have been done before running packstack (AIO mode for testing) :-


# yum install -y https://www.rdoproject.org/repos/rdo-release.rpm
# yum update -y
# yum install -y openstack-packstack

To get the swift account installed, preinstall the packages; it does no harm to
packstack during runtime

# yum install -y openstack-swift-object openstack-swift-container \
 openstack-swift-account openstack-swift-proxy openstack-utils \
  rsync xfsprogs


mkfs.xfs /dev/sdb1
mkdir -p /srv/node/sdb1
echo "/dev/sdb1 /srv/node/sdb1 xfs defaults 1 2" >> /etc/fstab
mount -a
chown -R swift:swift /srv/node
restorecon -R /srv/node

That way, CONFIG_SWIFT_STORAGES=/dev/sdb1 was set before running packstack.

When done, update /etc/glance/glance-api.conf as shown below.
192.168.1.47 is the IP of my Controller (the Keystone hosting node).

[DEFAULT]
show_image_direct_url=False
bind_host=0.0.0.0
bind_port=9292
workers=4
backlog=4096
image_cache_dir=/var/lib/glance/image-cache
registry_host=0.0.0.0
registry_port=9191
registry_client_protocol=http
debug=False
verbose=True
log_file=/var/log/glance/api.log
log_dir=/var/log/glance
use_syslog=False
syslog_log_facility=LOG_USER
use_stderr=True
notification_driver =messaging
amqp_durable_queues=False
# default_store = swift
# swift_store_auth_address = http://192.168.1.47:5000/v2.0/
# swift_store_user = services:glance
# swift_store_key = 6bc67e33258c4228
# swift_store_create_container_on_put = True
# stores=glance.store.swift.Store
[database]
connection=mysql://glance:c6ce03f4464c45cc@192.168.1.47/glance
idle_timeout=3600

[glance_store]
default_store = swift
stores = glance.store.swift.Store
swift_store_auth_address = http://192.168.1.47:5000/v2.0/
swift_store_user = services:glance
swift_store_key = 6bc67e33258c4228
swift_store_create_container_on_put = True
os_region_name=RegionOne

[image_format]
[keystone_authtoken]
auth_uri=http://192.168.1.47:5000/v2.0
identity_uri=http://192.168.1.47:35357
admin_user=glance
admin_password=6bc67e33258c4228
admin_tenant_name=services
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host=192.168.1.47
rabbit_port=5672
rabbit_hosts=192.168.1.47:5672
rabbit_use_ssl=False
rabbit_userid=guest
rabbit_password=guest
rabbit_virtual_host=/
rabbit_ha_queues=False
heartbeat_timeout_threshold=0
heartbeat_rate=2
rabbit_notification_exchange=glance
rabbit_notification_topic=notifications
[oslo_policy]
[paste_deploy]
flavor=keystone
[store_type_location_strategy]
[task]
[taskflow_executor]

The commented-out lines are suggested by [ 1 ]; however, to succeed we finally
have to follow [ 2 ].

Also, per [ 1 ], run as admin :-

# keystone user-role-add --tenant_id=$UUID_SERVICES_TENANT \
  --user=$UUID_GLANCE_USER --role=$UUID_ResellerAdmin_ROLE

The value 6bc67e33258c4228 is the corresponding CONFIG_GLANCE_KS_PW.

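If the UUIDs are unknown, they can be resolved on the fly; a minimal sketch,
assuming the default tenant/user/role names :-

# UUID_SERVICES_TENANT=$(keystone tenant-list | awk '/ services / {print $2}')
# UUID_GLANCE_USER=$(keystone user-list | awk '/ glance / {print $2}')
# UUID_ResellerAdmin_ROLE=$(keystone role-list | awk '/ ResellerAdmin / {print $2}')
# keystone user-role-add --tenant_id=$UUID_SERVICES_TENANT \
    --user=$UUID_GLANCE_USER --role=$UUID_ResellerAdmin_ROLE
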
***************
Next step is
***************

# openstack-service restart glance

[root@ServerCentOS72 ~(keystone_admin)]# systemctl | grep glance
openstack-glance-api.service                                                        loaded active running   OpenStack Image Service (code-named Glance) API server
openstack-glance-registry.service                                                   loaded active running   OpenStack Image Service (code-named Glance) Registry server


This results in your Swift storage working as your glance backend on RDO Liberty (CentOS 7.2).

Verification of updates done to glance-api.conf

# wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
 
Then run as admin :-

# glance image-create --name ubuntu-trusty-x86_64 --disk-format=qcow2 \
  --container-format=bare --file trusty-server-cloudimg-amd64-disk1.img \
  --progress

Create keystonerc_glance as follows to make sure that glance is now uploading
glance and sahara images to Swift Object Storage
 
# cat  keystonerc_glance
export OS_USERNAME=glance
export OS_PASSWORD=6bc67e33258c4228
export OS_TENANT_NAME=services
export OS_AUTH_URL=http://192.168.1.47:5000/v2.0
export PS1='[\u@\h \W(keystone_glance)]\$ '
export OS_REGION_NAME=RegionOne
The same file, sahara-liberty-vanilla-2.7.1-ubuntu-14.04.qcow2, uploaded
via `glance image-create` on an i7 4790 / 16 GB box, doesn't get fragmented.

[root@ServerCentOS7 ~(keystone_glance)]# glance image-list
+--------------------------------------+--------------------------------------+
| ID                                   | Name                                 |
+--------------------------------------+--------------------------------------+
| 74f3460e-489f-4e5f-8f37-ada1db576c67 | sahara-liberty-vanilla271-ubuntu1404 |
+--------------------------------------+--------------------------------------+
[root@ServerCentOS7 ~(keystone_glance)]# swift list glance
74f3460e-489f-4e5f-8f37-ada1db576c67
 
 

Hackery to get going RDO Kilo on Fedora 23

The sequence of hacks required to get RDO Kilo going on Fedora 23 is caused by the existence of many areas where Nova has a hard dependency on Glance v1. Per https://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/use-glance-v2-api.html
the key areas that still use Glance v1 include:
  • Nova's Image API, which is effectively just a proxy of Glance's v1 API
  • Virt-driver-specific download and upload of images
  • CRUD operations on image metadata
  • Creating and deleting snapshots in common code.
According to https://bugs.launchpad.net/glance/+bug/1476770
I can see that for python-glanceclient the status is "In Progress". So, to get things working right now, only one option is available (for myself, of course): to convert the abandoned commit into an F23 downstream temporary patch and test whether it helps me or not.

Versions of python-urllib3 :-

RDO Kilo F23 -  1.13.1
RDO Kilo CentOS 7 - 1.10.1
RDO Liberty CentOS 7 - 1.10.4

I believe that is the reason why the bug manifests on Fedora 23: 1.13.1 > 1.11.X.
The patch helps on F23, per its description :-

Subject: [PATCH] Convert headers to lower-case when parsing metadata

In urllib3 1.11.0 (and consequently, requests 2.8.0) handling was added
to preserve the casing of headers as they arrived over the wire. This
means that when iterating over a CaseInsensitiveDict from requests (as
we do in _image_meta_from_headers) we no longer get keys that are
already all lowercase. As a result, we need to ensure that the header
key is lowercased before checking its properties.

As of the time of writing, I've got positive results on an AIO RDO Kilo set up on F23.

1. Download, rebuild and install pm-utils-1.4.1-31.fc22.src.rpm on F23
2. Download and rebuild python-glanceclient-0.17.0-3.fc23.src.rpm with the patch,
    bumping 0.17.0-3 up to 0.17.0-4. The RPMs below are supposed to be
    installed after the packstack set up

  [root@ip-192-169-142-54 ~]# ls -l *.rpm
    -rw-r--r--. 1 root root 140174 Feb 28 10:57 python-glanceclient-0.17.0-4.fc23.noarch.rpm
    -rw-r--r--. 1 root root 122010 Feb 28 10:57 python-glanceclient-doc-0.17.0-4.fc23.noarch.rpm

 3. Activate the RDO Kilo repo https://repos.fedorapeople.org/repos/openstack/openstack-kilo/f22/

 4. Run `dnf -y install openstack-packstack`
 5. To be able to use the Swift backend for glance :-

# yum install -y \
openstack-swift-object openstack-swift-container \
openstack-swift-account openstack-swift-proxy openstack-utils \
rsync xfsprogs

mkfs.xfs /dev/vdb1
mkdir -p /srv/node/vdb1
echo "/dev/vdb1 /srv/node/vdb1 xfs defaults 1 2" >> /etc/fstab

mkfs.xfs /dev/vdc1
mkdir -p /srv/node/vdc1
echo "/dev/vdc1 /srv/node/vdc1 xfs defaults 1 2" >> /etc/fstab

mkfs.xfs /dev/vdd1
mkdir -p /srv/node/vdd1
echo "/dev/vdd1 /srv/node/vdd1 xfs defaults 1 2" >> /etc/fstab

mount -a
chown -R swift:swift /srv/node
restorecon -R /srv/node

  6. Apply the patch from https://bugzilla.redhat.com/show_bug.cgi?id=1234042
      mentioned in Comment 6: insert the following as the first line of
      ovs_redhat_el6.rb :-

require File.expand_path(File.join(File.dirname(__FILE__), '.','ovs_redhat.rb'))

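A sketch of applying that one-liner with sed; the puppet module path is an
assumption and should be verified on the target system :-

# F=$(find /usr/share/openstack-puppet/modules -name ovs_redhat_el6.rb)
# sed -i "1i require File.expand_path(File.join(File.dirname(__FILE__), '.','ovs_redhat.rb'))" $F
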
*******************************************
Create answer-file as follows
*******************************************
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_MANILA_INSTALL=n
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_SAHARA_INSTALL=n
CONFIG_TROVE_INSTALL=n
CONFIG_IRONIC_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.54
CONFIG_COMPUTE_HOSTS=192.169.142.54
CONFIG_NETWORK_HOSTS=192.169.142.54
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_USE_SUBNETS=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.54
CONFIG_SAHARA_HOST=192.169.142.54
CONFIG_USE_EPEL=n
CONFIG_REPO=
CONFIG_ENABLE_RDO_TESTING=n
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_SSL_CACERT_FILE=/etc/pki/tls/certs/selfcert.crt
CONFIG_SSL_CACERT_KEY_FILE=/etc/pki/tls/private/selfkey.key
CONFIG_SSL_CERT_DIR=~/packstackca/
CONFIG_SSL_CACERT_SELFSIGN=y
CONFIG_SELFSIGN_CACERT_SUBJECT_C=--
CONFIG_SELFSIGN_CACERT_SUBJECT_ST=State
CONFIG_SELFSIGN_CACERT_SUBJECT_L=City
CONFIG_SELFSIGN_CACERT_SUBJECT_O=openstack
CONFIG_SELFSIGN_CACERT_SUBJECT_OU=packstack
CONFIG_SELFSIGN_CACERT_SUBJECT_CN=ip-192-169-142-54.ip.secureserver.net
CONFIG_SELFSIGN_CACERT_SUBJECT_MAIL=admin@ip-192-169-142-54.ip.secureserver.net
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.54
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.54
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=1d1ac704de2f4d47
CONFIG_KEYSTONE_DB_PW=3be4677750ac4804
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=a1cd54646aa24ce79e900da43816690f
CONFIG_KEYSTONE_ADMIN_EMAIL=root@localhost
CONFIG_KEYSTONE_ADMIN_USERNAME=admin
CONFIG_KEYSTONE_ADMIN_PW=457b511a7ea04c19
CONFIG_KEYSTONE_DEMO_PW=ef3d90d60c1d4578
CONFIG_KEYSTONE_API_VERSION=v2.0
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_KEYSTONE_IDENTITY_BACKEND=sql
CONFIG_KEYSTONE_LDAP_URL=ldap://192.169.142.54
CONFIG_KEYSTONE_LDAP_USER_DN=
CONFIG_KEYSTONE_LDAP_USER_PASSWORD=
CONFIG_KEYSTONE_LDAP_SUFFIX=
CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one
CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1
CONFIG_KEYSTONE_LDAP_USER_SUBTREE=
CONFIG_KEYSTONE_LDAP_USER_FILTER=
CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1
CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE
CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n
CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN=
CONFIG_KEYSTONE_LDAP_USER_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE=
CONFIG_KEYSTONE_LDAP_GROUP_FILTER=
CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_GROUP_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_USE_TLS=n
CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR=
CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE=
CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand
CONFIG_GLANCE_DB_PW=2aec3b4e90674863
CONFIG_GLANCE_KS_PW=1baf404b24cd41cc
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=eec9d502035a48cf
CONFIG_CINDER_KS_PW=918d89afa0d3462a
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=2G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES=
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_MANILA_DB_PW=PW_PLACEHOLDER
CONFIG_MANILA_KS_PW=PW_PLACEHOLDER
CONFIG_MANILA_BACKEND=generic
CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false
CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https
CONFIG_MANILA_NETAPP_LOGIN=admin
CONFIG_MANILA_NETAPP_PASSWORD=
CONFIG_MANILA_NETAPP_SERVER_HOSTNAME=
CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_MANILA_NETAPP_SERVER_PORT=443
CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*)
CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE=
CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root
CONFIG_MANILA_NETAPP_VSERVER=
CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true
CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s
CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares
CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2
CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu
CONFIG_MANILA_SERVICE_INSTANCE_PASSWORD=ubuntu
CONFIG_MANILA_NETWORK_TYPE=neutron
CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY=
CONFIG_MANILA_NETWORK_STANDALONE_NETMASK=
CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID=
CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE=
CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4
CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER
CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER
CONFIG_NOVA_DB_PW=7188910f8c884108
CONFIG_NOVA_KS_PW=72b030a0e0df4982
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_MANAGER=nova.compute.manager.ComputeManager
CONFIG_VNC_SSL_CERT=
CONFIG_VNC_SSL_KEY=
CONFIG_NOVA_COMPUTE_PRIVIF=em2
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=em1
CONFIG_NOVA_NETWORK_PRIVIF=em2
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=432ebd6f06ed49be
CONFIG_NEUTRON_DB_PW=7fab632699934012
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_METADATA_PW=8b15dba4e2b745f3
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=
CONFIG_NEUTRON_ML2_VXLAN_GROUP=
CONFIG_NEUTRON_ML2_VNI_RANGES=10:100
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_IF=
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_HORIZON_SECRET_KEY=dad8835a16e74e2bb86e44cac84ddcd4
CONFIG_HORIZON_SSL_CERT=
CONFIG_HORIZON_SSL_KEY=
CONFIG_HORIZON_SSL_CACERT=
CONFIG_SWIFT_KS_PW=070f50c3263946f1
CONFIG_SWIFT_STORAGES=/dev/vdb1,/dev/vdc1,/dev/vdd1
CONFIG_SWIFT_STORAGE_ZONES=3
CONFIG_SWIFT_STORAGE_REPLICAS=3
CONFIG_SWIFT_STORAGE_FSTYPE=xfs
CONFIG_SWIFT_HASH=c8b8b521f7014c10
CONFIG_SWIFT_STORAGE_SIZE=20G

CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=311f2f0e2e2b48ca
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_PROVISION_DEMO=n
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_IMAGE_NAME=cirros
CONFIG_PROVISION_IMAGE_URL=http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
CONFIG_PROVISION_IMAGE_FORMAT=qcow2
CONFIG_PROVISION_IMAGE_SSH_USER=cirros
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=PW_PLACEHOLDER
CONFIG_PROVISION_TEMPEST_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_CEILOMETER_SECRET=0d7356fd2dae4954
CONFIG_CEILOMETER_KS_PW=f9c1869022ee4295
CONFIG_CEILOMETER_COORDINATION_BACKEND=redis
CONFIG_MONGODB_HOST=192.169.142.54
CONFIG_REDIS_MASTER_HOST=192.169.142.54
CONFIG_REDIS_PORT=6379
CONFIG_REDIS_HA=n
CONFIG_REDIS_SLAVE_HOSTS=
CONFIG_REDIS_SENTINEL_HOSTS=
CONFIG_REDIS_SENTINEL_CONTACT_HOST=
CONFIG_REDIS_SENTINEL_PORT=26379
CONFIG_REDIS_SENTINEL_QUORUM=2
CONFIG_REDIS_MASTER_NAME=mymaster
CONFIG_SAHARA_DB_PW=PW_PLACEHOLDER
CONFIG_SAHARA_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_DB_PW=PW_PLACEHOLDER
CONFIG_TROVE_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_NOVA_USER=trove
CONFIG_TROVE_NOVA_TENANT=services
CONFIG_TROVE_NOVA_PW=PW_PLACEHOLDER
CONFIG_NAGIOS_PW=8d70d9cab47c4add

****************************************
Post installations steps
****************************************
# yum -y install python-glanceclient-0.17.0-4.fc23.noarch.rpm \
     python-glanceclient-doc-0.17.0-4.fc23.noarch.rpm
# openstack-service restart
# cd /etc/sysconfig/network-scripts
**************************************************
Create ifcfg-br-ex and ifcfg-ens3 as follows
**************************************************
# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.54"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex

DEVICETYPE="ovs"

# cat ifcfg-ens3
DEVICE="ens3"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

***************************
Then run script
***************************
#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

Reboot node
****************************************************
Switch to swift back end  for glance as follows :-
****************************************************
Update /etc/glance/glance-api.conf as shown below

[DEFAULT]
show_image_direct_url=False
bind_host=0.0.0.0
bind_port=9292
workers=4
backlog=4096
image_cache_dir=/var/lib/glance/image-cache
registry_host=0.0.0.0
registry_port=9191
registry_client_protocol=http
debug=False
verbose=True
log_file=/var/log/glance/api.log
log_dir=/var/log/glance
use_syslog=False
syslog_log_facility=LOG_USER
use_stderr=True
notification_driver =messaging
amqp_durable_queues=False
# default_store = swift
# swift_store_auth_address = http://192.169.142.54:5000/v2.0/
# swift_store_user = services:glance
# swift_store_key = 6bc67e33258c4228
# swift_store_create_container_on_put = True

[database]
connection=mysql://glance:41264fc52ffd4fe8@192.169.142.54/glance
idle_timeout=3600

[glance_store]
default_store = swift
stores = glance.store.swift.Store
swift_store_auth_address = http://192.169.142.54:5000/v2.0/
swift_store_user = services:glance
swift_store_key = f6a9398960534797
swift_store_create_container_on_put = True
swift_store_large_object_size = 5120
swift_store_large_object_chunk_size = 200
swift_enable_snet = False
os_region_name=RegionOne

[image_format]
[keystone_authtoken]
auth_uri=http://192.169.142.54:5000/v2.0
identity_uri=http://192.169.142.54:35357
admin_user=glance
admin_password=6bc67e33258c4228
admin_tenant_name=services
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host=192.169.142.54
rabbit_port=5672
rabbit_hosts=192.169.142.54:5672
rabbit_use_ssl=False
rabbit_userid=guest
rabbit_password=guest
rabbit_virtual_host=/
rabbit_ha_queues=False
heartbeat_timeout_threshold=0
heartbeat_rate=2
rabbit_notification_exchange=glance
rabbit_notification_topic=notifications
[oslo_policy]
[paste_deploy]
flavor=keystone
[store_type_location_strategy]
[task]
[taskflow_executor]

The commented-out lines are suggested by [ 1 ]; however, to succeed we finally
have to follow [ 2 ]. Also, per [ 1 ], run as admin :-

# keystone user-role-add --tenant_id=$UUID_SERVICES_TENANT \
  --user=$UUID_GLANCE_USER --role=$UUID_ResellerAdmin_ROLE

In this particular case :-

[root@ServerCentOS7 ~(keystone_admin)]# keystone tenant-list | grep services
 | 6bbac43abaf04cefb53c259a6afc285b |   services  |   True  |

[root@ServerCentOS7 ~(keystone_admin)]# keystone user-list | grep glance
| 99ed8b1fbd1c428f917f570d4cae75f4 |   glance   |   True  |   glance@localhost   |

[root@ServerCentOS7 ~(keystone_admin)]# keystone role-list | grep ResellerAdmin
| e229166b8ab24900933c23ea88c8b673 |  ResellerAdmin   |


# keystone user-role-add --tenant_id=6bbac43abaf04cefb53c259a6afc285b  \       
    --user=99ed8b1fbd1c428f917f570d4cae75f4  \
    --role=e229166b8ab24900933c23ea88c8b673


The value f6a9398960534797 is the corresponding CONFIG_GLANCE_KS_PW in the answer-file, i.e. the keystone glance password used for authentication.

***************
Next step is
***************

# openstack-service restart glance

[root@ServerCentOS72 ~(keystone_admin)]# systemctl | grep glance
openstack-glance-api.service                                                        loaded active running   OpenStack Image Service (code-named Glance) API server
openstack-glance-registry.service                                                   loaded active running   OpenStack Image Service (code-named Glance) Registry server


This results in your Swift storage working as your glance backend on RDO Kilo.

********************************************************************************
In case of deployment to a Storage Node, please be aware of
http://silverskysoft.com/open-stack-xwrpr/2015/07/enabling-openstack-swift-object-storage-service/
*********************************************************************************