OpenStack Homelab Installation

Steve Mohr
Feb 2, 2021


OpenStack is a collection of open source projects which together function as a full featured cloud solution. As a home lab project it is an ideal platform for learning new skills and playing with virtualized technology.

There are a number of different methods for deploying OpenStack. I am installing the OpenStack Victoria release on CentOS 8.

With Packstack it is possible to install all of the needed components on a single computer. This is great for home use. However, if you intend to run this in production you should look into TripleO. TripleO has a much more complex installation process. The end result is similar, but with TripleO all of the services are containerized, so troubleshooting and working with the underlying processes is much different.

The standard Packstack installation will set up all of the OpenStack services on a single computer. In my installation I am using two Dell Optiplex 9020s named firefly and reaver.

I launch my installation from firefly, which is where the majority of the services run. Reaver is set up as an additional compute node.

My network is pretty basic. I don't run any VLANs or anything fancy; it is just a single flat network with CIDR block 192.168.1.0/24, like many home networks. I will set up internal networks in OpenStack, and floating IP addresses in 192.168.1.* will be created to access the virtual machines.

Firefly and Reaver both have dual port network cards. The primary NIC is on the 192.168.1.* network. They are directly connected with the second NIC on the 172.168.1.* network.

In firefly I have a second 1TB hard drive. I will set up two partitions on this drive: one for my block storage (Cinder) and one for my object storage (Swift).

Firefly
Dell Optiplex 9020
CPU: Core i7 3.4GHz
RAM: 16GB
HD: 256GB SSD, 1TB HDD
Network: Dual-port 1Gb Ethernet
Public Network Interface: eth0 - 192.168.1.146
Internal Network Interface: eth1 - 172.168.1.2

Reaver
Dell Optiplex 9020
CPU: Core i7 3.4GHz
RAM: 8GB
HD: 500GB HDD
Network: Dual-port 1Gb Ethernet
Public Network Interface: eth0 - 192.168.1.147
Internal Network Interface: eth1 - 172.168.1.3

Initial OS installation

I am using CentOS 8 with a minimal install. For simplicity's sake I make one big XFS partition mounted at /. On reaver I use the entire 500GB hard drive. On firefly I use the 256GB SSD and leave the second hard drive alone for now.

Network Configuration

Being an old guy, I prefer to rename my network interfaces to eth0 and eth1. Edit /etc/default/grub and add net.ifnames=0 biosdevname=0 to the GRUB_CMDLINE_LINUX line.

[root@firefly ~]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto net.ifnames=0 biosdevname=0 resume=UUID=6724d57f-2e5b-4a53-95ad-1c4c64060165 rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true

Regenerate grub to make the changes take effect.

grub2-mkconfig -o /boot/grub2/grub.cfg

Move the network config files in /etc/sysconfig/network-scripts to their new eth* names.

[root@firefly ~]# cd /etc/sysconfig/network-scripts/
[root@firefly network-scripts]# mv ifcfg-p2p1 ifcfg-eth0
[root@firefly network-scripts]# mv ifcfg-p2p2 ifcfg-eth1

Set the network interfaces up with static IP addresses. Eth0 is my external network; eth1 is my internal network for firefly/reaver communication.

[root@firefly network-scripts]# cat ifcfg-eth0
DEFROUTE=yes
NAME=eth0
DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
BOOTPROTO=static
NETMASK=255.255.255.0
HWADDR=00:e0:4c:67:9a:70
IPADDR=192.168.1.146
IPV6INIT=no
GATEWAY=192.168.1.1
DNS1=8.8.8.8
DNS2=8.8.4.4
[root@firefly network-scripts]# cat ifcfg-eth1
DEVICE="eth1"
BOOTPROTO="static"
HWADDR=00:e0:4c:67:9a:71
ONBOOT="yes"
TYPE=Ethernet
NETMASK=255.255.255.0
IPADDR=172.168.1.2
BROADCAST=172.168.1.255
IPV4_FAILURE_FATAL=no
IPV6INIT=no
[root@reaver network-scripts]# cat ifcfg-eth0
DEFROUTE=yes
NAME=eth0
DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
BOOTPROTO=static
NETMASK=255.255.255.0
HWADDR=00:e0:4c:68:d6:54
IPADDR=192.168.1.147
IPV6INIT=no
GATEWAY=192.168.1.1
DNS1=8.8.8.8
DNS2=8.8.4.4
[root@reaver network-scripts]# cat ifcfg-eth1
DEVICE="eth1"
BOOTPROTO="static"
HWADDR=00:e0:4c:68:d6:55
ONBOOT="yes"
TYPE=Ethernet
NETMASK=255.255.255.0
IPADDR=172.168.1.3
BROADCAST=172.168.1.255
IPV4_FAILURE_FATAL=no
IPV6INIT=no

Additional configuration

I know it seems icky, but the official instructions tell you to disable the firewall and NetworkManager and to turn off SELinux. Iptables does get enabled later in the installation process. In CentOS 8 the old network scripts don't come installed by default, so install network-scripts and net-tools and enable the network service. Run this on both firefly and reaver.

# dnf install -y network-scripts net-tools wget
# systemctl disable firewalld
# systemctl stop firewalld
# systemctl disable NetworkManager
# systemctl stop NetworkManager
# systemctl mask NetworkManager
# systemctl enable network
# systemctl start network

Disable SELinux on firefly and reaver. Set SELINUX to disabled in /etc/selinux/config.

[root@firefly ~]# vi /etc/selinux/config
SELINUX=disabled
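If you'd rather not open an editor, the same change can be scripted. A small sketch (SELINUX_CONF is a variable I've added so the path can be overridden; on a real host it resolves to /etc/selinux/config):

```shell
# Flip SELINUX to disabled in the config file (takes effect after reboot).
# SELINUX_CONF defaults to the real path; override it to dry-run elsewhere.
SELINUX_CONF="${SELINUX_CONF:-/etc/selinux/config}"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$SELINUX_CONF"
grep '^SELINUX=' "$SELINUX_CONF"
```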

Set the hostnames on firefly and reaver

# hostnamectl set-hostname firefly
# hostnamectl set-hostname reaver
[root@firefly ~]# cat /etc/hostname
firefly
[root@reaver ~]# cat /etc/hostname
reaver
[root@firefly network-scripts]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.146 firefly firefly.localhost
192.168.1.147 reaver reaver.localhost

Reboot and check settings

Reboot firefly and reaver. When things come back up, check to be sure that SELinux is off, your hostname is set properly, and your network interfaces are correct.

[root@firefly ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@firefly ~]# systemctl status NetworkManager
● NetworkManager.service
   Loaded: masked (Reason: Unit NetworkManager.service is masked.)
   Active: inactive (dead)
[root@firefly ~]# sestatus
SELinux status: disabled
[root@firefly ~]# hostname
firefly
[root@firefly ~]# ifconfig eth0
inet 192.168.1.146
[root@firefly ~]# ifconfig eth1
inet 172.168.1.2

Set up storage on firefly

Our second hard drive is located at /dev/sdb. We will partition this and set it up for our cinder and swift storage.

Using parted, create two partitions.

[root@firefly ~]# parted -s -a optimal -- /dev/sdb mklabel gpt
[root@firefly ~]# parted -s -a optimal -- /dev/sdb mkpart primary 0% 60%
[root@firefly ~]# parted -s -a optimal -- /dev/sdb mkpart primary 60% 100%
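As a sanity check, the 60/40 split above works out to the partition sizes lsblk reports below. Quick shell arithmetic (the 953856 MiB figure is my approximation of the drive's usable 931.5 GiB):

```shell
# Approximate sizes of the 60/40 split on a 931.5 GiB drive.
DISK_MIB=953856                          # ~931.5 GiB in MiB (approximation)
CINDER_MIB=$(( DISK_MIB * 60 / 100 ))    # first partition, 60%
SWIFT_MIB=$(( DISK_MIB - CINDER_MIB ))   # second partition, the remainder
echo "cinder: ${CINDER_MIB} MiB (~$(( CINDER_MIB / 1024 )) GiB)"
echo "swift:  ${SWIFT_MIB} MiB (~$(( SWIFT_MIB / 1024 )) GiB)"
```

This lines up with the ~558.9G and ~372.6G partitions shown by lsblk.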

#Install lvm2 so that we can work with logical volumes.

[root@firefly ~]# yum install lvm2

Configure the partitions

[root@firefly ~]# pvcreate /dev/sdb1
[root@firefly ~]# pvcreate /dev/sdb2
[root@firefly ~]# vgcreate cinder-volumes /dev/sdb1
[root@firefly ~]# vgcreate swift-volumes /dev/sdb2
[root@firefly ~]# lvcreate -n swift-lvs -l 100%FREE swift-volumes
[root@firefly ~]# mkfs.ext4 /dev/swift-volumes/swift-lvs

Check that the disks are set up and everything looks good.

[root@firefly ~]# lsblk
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                             8:0    0 238.5G  0 disk
├─sda1                          8:1    0   1.9G  0 part /boot
├─sda2                          8:2    0 186.3G  0 part /
└─sda3                          8:3    0  14.9G  0 part [SWAP]
sdb                             8:16   0 931.5G  0 disk
├─sdb1                          8:17   0 558.9G  0 part
└─sdb2                          8:18   0 372.6G  0 part
  └─swift--volumes-swift--lvs 253:0    0 372.6G  0 lvm
[root@firefly ~]# vgs
  VG             #PV #LV #SN Attr   VSize    VFree
  cinder-volumes   1   0   0 wz--n- <558.91g <558.91g
  swift-volumes    1   1   0 wz--n-  372.60g       0

Install the OpenStack packages

On firefly install the openstack packages.

[root@firefly ~]# dnf config-manager --enable powertools
[root@firefly ~]# dnf install -y centos-release-openstack-victoria
[root@firefly ~]# dnf update -y
[root@firefly ~]# dnf install -y openstack-packstack

On reaver enable powertools and run an update.

$ sudo dnf config-manager --enable powertools
$ sudo dnf update -y

On firefly generate an answer file. We will configure this file for our network, storage, and compute nodes.

[root@firefly ~]# packstack --gen-answer-file packstack_answers.txt

Make the following changes to your answer file.

#Set the ntp servers

CONFIG_NTP_SERVERS=0.centos.pool.ntp.org,1.centos.pool.ntp.org,2.centos.pool.ntp.org,3.centos.pool.ntp.org

#Set the storage options for your cinder and swift partitions.

CONFIG_CINDER_VOLUMES_SIZE=558G
CONFIG_SWIFT_STORAGE_SIZE=372G
CONFIG_SWIFT_STORAGES=/dev/swift-volumes/swift-lvs

#Disable the demo provisioning. This gives us more control over the cluster, but means the CirrOS image will need to be loaded and networking will need to be configured manually.

CONFIG_PROVISION_DEMO=n

#Configure the networking. We are using a flat network. Once the installation has completed, log into your nodes and run ifconfig; you should see that bridge interfaces have been created.

CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet:br-ex
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth0
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,flat
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan

#List the public IP addresses for the 2 nodes. Configure eth1 for the tunnel communication.

CONFIG_COMPUTE_HOSTS=192.168.1.146,192.168.1.147
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_TUNNEL_SUBNETS=172.168.1.0/24
CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ex
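The answer file is a flat key=value file, so the edits above can be scripted instead of made by hand. A minimal sed sketch (ANSWERS is a variable I've introduced so the path can be overridden; only a few of the keys above are shown):

```shell
# Apply a few of the answer-file changes non-interactively (a sketch).
# ANSWERS can be pointed at a scratch copy for testing.
ANSWERS="${ANSWERS:-/root/packstack_answers.txt}"
sed -i \
  -e 's|^CONFIG_PROVISION_DEMO=.*|CONFIG_PROVISION_DEMO=n|' \
  -e 's|^CONFIG_COMPUTE_HOSTS=.*|CONFIG_COMPUTE_HOSTS=192.168.1.146,192.168.1.147|' \
  -e 's|^CONFIG_NEUTRON_OVS_TUNNEL_IF=.*|CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1|' \
  "$ANSWERS"
grep -E '^CONFIG_(PROVISION_DEMO|COMPUTE_HOSTS|NEUTRON_OVS_TUNNEL_IF)=' "$ANSWERS"
```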

Install!

Gotcha: NTP

When I went to install I got an error that ntpdate could not be found. In CentOS 8 ntpdate has been replaced with chrony; I imagine this hasn't been updated in Packstack since the switch from CentOS 7 to 8. It's bad form to install ntpdate, but it will get things working.

[root@firefly ~]# dnf remove chrony
[root@firefly ~]# dnf install wget
[root@firefly ~]# wget ftp://ftp.pbone.net/mirror/vault.centos.org/7.8.2003/updates/x86_64/Packages/ntpdate-4.2.6p5-29.el7.centos.2.x86_64.rpm
[root@firefly ~]# dnf install /root/ntpdate-4.2.6p5-29.el7.centos.2.x86_64.rpm

Run packstack with your answer file. You will be prompted for the password to reaver. The installer will create SSH keys and configure the node.

[root@firefly ~]# packstack --answer-file=/root/packstack_answers.txt

Post install

If all goes well you should now have a working OpenStack installation. Check the keystonerc_admin file for your credentials.

[root@firefly ~]# cat keystonerc_admin
unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD='********'
export OS_REGION_NAME=RegionOne
export OS_AUTH_URL=http://192.168.1.146:5000/v3
export PS1='[\u@\h \W(keystone_admin)]\$ '
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3

Check your cloud's health

Source the keystone file and run some commands to check your cloud's status.

[root@firefly ~]# . keystonerc_admin
[root@firefly ~(keystone_admin)]#

Here we see that firefly and reaver are listed and the compute service is up and running.

[root@firefly ~(keystone_admin)]# openstack compute service list
+----+----------------+---------+----------+---------+-------+----------------------------+
| ID | Binary         | Host    | Zone     | Status  | State | Updated At                 |
+----+----------------+---------+----------+---------+-------+----------------------------+
|  1 | nova-conductor | firefly | internal | enabled | up    | 2021-02-02T20:55:39.000000 |
|  3 | nova-scheduler | firefly | internal | enabled | up    | 2021-02-02T20:55:33.000000 |
|  8 | nova-compute   | firefly | nova     | enabled | up    | 2021-02-02T20:55:34.000000 |
|  9 | nova-compute   | reaver  | nova     | enabled | up    | 2021-02-02T20:55:30.000000 |
+----+----------------+---------+----------+---------+-------+----------------------------+
[root@firefly ~(keystone_admin)]# openstack catalog list

OOPS! It always seems like something is deprecated, and this brand new installation is no different.

[root@firefly ~(keystone_admin)]# nova-status upgrade check
JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
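This warning is harmless for now, and it names its own fix. Based on the oslo.policy docs linked in the message, the conversion looks something like this (the paths and the nova namespace are illustrative assumptions; verify the flags against the linked docs before running it per service):

```shell
# Illustrative sketch: convert a service's JSON policy file to YAML.
oslopolicy-convert-json-to-yaml --namespace nova \
    --policy-file /etc/nova/policy.json \
    --output-file /etc/nova/policy.yaml
```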

Check the firewall rules to ensure that the proper ports are open to allow communication and database connections between your nodes.

[root@firefly sysconfig]# grep 5672 /etc/sysconfig/iptables
-A INPUT -s 192.168.1.146/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.168.1.146" -j ACCEPT
-A INPUT -s 192.168.1.147/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.168.1.147" -j ACCEPT
[root@firefly sysconfig]# grep mari /etc/sysconfig/iptables
-A INPUT -s 192.168.1.146/32 -p tcp -m multiport --dports 3306 -m comment --comment "001 mariadb incoming mariadb_192.168.1.146" -j ACCEPT
-A INPUT -s 192.168.1.147/32 -p tcp -m multiport --dports 3306 -m comment --comment "001 mariadb incoming mariadb_192.168.1.147" -j ACCEPT

Create a network

We will make a network with CIDR block 10.10.1.0/24. Our virtual machines will run in this internal network.


#Create an internal network

[root@firefly ~(keystone_admin)]# openstack network create internal

#Create a subnet on internal with an internal ip range in 10.10.1.*

[root@firefly ~(keystone_admin)]# openstack subnet create internal --subnet-range 10.10.1.0/24 --dns-nameserver 8.8.8.8 --network internal

Next we create the public network, with a range of IP addresses from 192.168.1.80 to 192.168.1.100. We will create floating IP addresses from this network, which will allow us to access our VMs on the 10.10 network.

#Create a public network
[root@firefly ~(keystone_admin)]# openstack network create public --external --provider-network-type flat --provider-physical-network extnet
#Create a subnet for floating IPs in the 192.168.1.* subnet.
[root@firefly ~(keystone_admin)]# openstack subnet create public --network public --dhcp --allocation-pool start=192.168.1.80,end=192.168.1.100 --dns-nameserver 8.8.8.8 --gateway 192.168.1.148 --subnet-range 192.168.1.0/24
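Worth noting: every externally reachable instance consumes one address from that allocation pool, so the pool size caps how many VMs you can expose. Counting it is simple arithmetic:

```shell
# Floating IPs available in the 192.168.1.80-192.168.1.100 pool.
START=80
END=100
echo "floating IPs in pool: $(( END - START + 1 ))"   # 21 addresses
```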

Create a router and connect the internal and public networks to it. Set the public network as our gateway.

#Create a router
[root@firefly ~(keystone_admin)]# openstack router create zerocool
#Connect the internal network to the router
[root@firefly ~(keystone_admin)]# openstack router add subnet zerocool internal
#Connect the public network
[root@firefly ~(keystone_admin)]# openstack router add subnet zerocool public
#Set the gateway for the router to the public network
[root@firefly ~(keystone_admin)]# neutron router-gateway-set zerocool public

Install OS images

CirrOS is a minimal Linux distribution. It will be used to test our installation and verify that we can start an instance and connect to it.

#Cirros

$ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
$ openstack image create \
    --container-format bare \
    --disk-format qcow2 \
    --file cirros-0.4.0-x86_64-disk.img \
    Cirros-0.4.0-x86_64

Horizon, the OpenStack web interface

Log into Horizon, the OpenStack web interface. Open a web browser and visit the dashboard on the host listed in your keystonerc_admin file (with Packstack the dashboard typically lives at http://<host>/dashboard). Use the username and password listed there.

export OS_USERNAME=admin
export OS_PASSWORD='********'
export OS_AUTH_URL=http://192.168.1.146:5000/v3

Check Network Topology

Click through Project > Network > Network Topology. You will see your public and private networks listed.

Launch an instance

Click Project > Compute > Instances > Launch Instance

Give it a name. In this case “test”.

Select the CirrOS image

Choose the tiny flavor

Choose the internal network

Click Launch Instance

At this point you should see the instance build and enter a running state. Click the Instance name. Click the log tab to see the boot process. Click console to interact with the instance.

You should now be greeted with a login terminal for your Cirros instance. Log in with user cirros and password gocubsgo.

Check the network interface and see if you can ping out to the internet.

Log into the instance remotely

Associate a floating IP

Click Project > Compute > Instances

On the right, under Actions, click Associate Floating IP.

Click the + to create a floating IP and associate it with your instance.

Allow SSH access

Click Project > Network > Security Groups > Create Security Group

Give the security group a name (for example, SSH).

Set the Rule to SSH. This will open port 22.

Return to Project > Compute > Instances

On the far right click Actions > Edit Security Groups.

Add your SSH security group and save.

Now you can log into your instance with the floating IP address assigned to it.

Macintosh:~ stevex0r$ ssh cirros@192.168.1.93
cirros@192.168.1.93's password:
$ hostname
test

Congratulations! You now have your own private cloud! Now what the hell are you gonna do with it?
