/mnt/vm00/sdn_testbed/devstack
/dev/sdb4 is an LVM2 physical volume used by Cinder (volumes).
1. HOST_IP is the IP of the virbr0 interface: 192.168.122.1
git clone https://github.com/openstack-dev/devstack.git devstack
cd devstack
git fetch origin
git checkout -b icehouse origin/stable/icehouse
Now we are working with devstack configured to download OpenStack from the stable Icehouse release.
First run with
./stack.sh
Stop with
./unstack.sh
The devstack scripts start all OpenStack services inside a detached “screen” session rather than as system services. All the screen commands are written to “stack-screenrc”. This file is used by “rejoin-stack.sh” to restart all services after a reboot (without calling unstack.sh).
Note: running stack.sh again will clean the database and set the system up from scratch, so use rejoin-stack.sh to restart OpenStack:
./rejoin-stack.sh
Oops… start httpd first:
systemctl restart httpd
Start mariadb (mysqld). If it is down, Keystone will not work. Page through the screen windows to see any error output from Keystone.
systemctl start mysqld
To detach from the rejoin-stack screen session, use:
Ctrl+a d
Reopen screen:
screen -x
Cycle through the screened services using:
Ctrl+a " (window list), Ctrl+a n (next), Ctrl+a p (previous)
Kill the currently shown screen/service with “Ctrl+C”. Each screen window is just a terminal where the service command runs.
Running
sudo losetup -f /opt/stack/data/stack-volumes-backing-file
before rejoin-stack.sh brings the volume group back online, so cinder-volume starts OK.
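The re-attach step can be wrapped in a small helper script. This is only a sketch, not part of devstack: the script name and /tmp location are my own, and it assumes the default backing-file path above and the stack-volumes VG name that pvscan reports later in these notes.

```shell
# Illustrative helper (hypothetical name/location): re-attach the devstack
# backing file and activate its volume group before running rejoin-stack.sh.
cat > /tmp/reattach-stack-volumes.sh <<'EOF'
#!/bin/sh
# Attach the backing file to the first free loop device (needs root).
losetup -f /opt/stack/data/stack-volumes-backing-file
# Activate the volume group so cinder-volume can start.
vgchange -a y stack-volumes
EOF
chmod +x /tmp/reattach-stack-volumes.sh
```

Run it with sudo once after each reboot, before rejoin-stack.sh.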
By default a loop device is associated with the stack-volumes backing file:
sudo losetup -f /opt/stack/data/stack-volumes-backing-file
Build the Loopback File and Persist It
Start by checking the amount of storage available on your node. This guide assumes the loopback file will be created in the root partition. Become root and then execute the following command:
df -ah /
Take a gander at the result:
root@flop:/home/kord# df -ah /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       2.8T  2.5G  2.6T   1% /
We're good to create a 1 TB file on that partition. Now execute the command to create the file we'll loop:
dd if=/dev/zero of=/cinder-volumes bs=1 count=0 seek=1024G
Results come back instantly:
root@flop:/home/kord# dd if=/dev/zero of=/cinder-volumes bs=1 count=0 seek=1024G
0+0 records in
0+0 records out
0 bytes (0 B) copied, 1.035e-05 s, 0.0 kB/s
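Why is it instant? dd with count=0 just seeks to the target offset and sets the file length, producing a sparse file that allocates no data blocks. A throwaway demo of the same technique (the /tmp path and the 16 MiB size are arbitrary):

```shell
# Create a 16 MiB sparse file the same way as above, then compare
# its apparent size with the disk space it actually consumes.
dd if=/dev/zero of=/tmp/sparse-demo bs=1 count=0 seek=16M 2>/dev/null
stat -c '%s bytes apparent' /tmp/sparse-demo   # 16777216 bytes apparent
du -k /tmp/sparse-demo                         # ~0 KiB actually allocated
```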
Now loop up the file to the loopback device (we'll use loop2 here):
losetup /dev/loop2 /cinder-volumes
Finally, create an rc file to re-attach the loopback file after a reboot:
echo "losetup /dev/loop2 /cinder-volumes; exit 0;" > /etc/init.d/cinder-setup-backing-file
chmod 755 /etc/init.d/cinder-setup-backing-file
ln -s /etc/init.d/cinder-setup-backing-file /etc/rc2.d/S10cinder-setup-backing-file
Physical & Volume Group Creation
The next step is to initialize an LVM volume group named cinder-volumes. That name is used by default for all Cinder installs. It can be changed in the configuration files in /etc/cinder/ after you install it, but we'll assume here we're leaving it as the default. Run the commands in one cut-and-paste frenzy:
sudo pvcreate /dev/loop2
sudo vgcreate cinder-volumes /dev/loop2
Revel in the vast awesomeness of your successful paste job:
root@flop:/home/kord# sudo pvcreate /dev/loop2
  Physical volume "/dev/loop2" successfully created
root@flop:/home/kord# sudo vgcreate cinder-volumes /dev/loop2
  Volume group "cinder-volumes" successfully created
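For reference, this is the setting that controls the group name in cinder.conf (Icehouse-era option; treat the exact section placement as an assumption for your version):

```ini
[DEFAULT]
# Name of the LVM volume group Cinder carves volumes out of.
volume_group = cinder-volumes
```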
Double check your work with pvdisplay and vgdisplay:
root@flop:/home/kord# pvdisplay; vgdisplay
  PV Name               /dev/loop2
  VG Name               cinder-volumes
  PV Size               1.00 TiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              262143
  Free PE               262143
  Allocated PE          0
  PV UUID               XnvA5r-wBw0-WIhd-9FHd-n4lT-71em-uYCLJt

  VG Name               cinder-volumes
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1024.00 GiB
  PE Size               4.00 MiB
  Total PE              262143
  Alloc PE / Size       0 / 0
  Free PE / Size        262143 / 1024.00 GiB
  VG UUID               vabQ1q-mxZ2-ofrn-6640-rriR-jln1-WHm4jm
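A quick sanity check on those numbers: 262143 extents of 4 MiB each lands just short of 1 TiB, because one 4 MiB extent is lost to the "not usable" metadata overhead reported by pvdisplay.

```shell
# 262143 physical extents x 4 MiB per extent, in MiB.
echo $(( 262143 * 4 ))   # 1048572 MiB, i.e. 4 MiB short of 1 TiB (1048576 MiB)
```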
Install Cinder via Vagrant Chef
Move on over to the computer that was used to provision the nodes in your install. If you've halted the Vagrant server, restart it first and then ssh into it and become root:
Beast:texasholdem kord$ cd bluechipstack/
Beast:bluechipstack kord$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider…
…
[default] – /vagrant
Beast:bluechipstack kord$ vagrant ssh
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic x86_64)
…
Welcome to your Vagrant-built virtual machine.
Last login: Wed Sep 4 18:54:12 2013 from 10.0.2.2
vagrant@chef-server:~$ sudo su
root@chef-server:/home/vagrant#
Now check that Chef is still aware of the servers:
knife node list
Time to install Cinder on the nodes. Type the following to add the Cinder storage role to the nodes (making sure to change the node name(s) in the process):
knife node run_list add flop 'role[cinder-volume]'
knife node run_list add turn 'role[cinder-volume]'
Create a volume group on /dev/sdb4 for this purpose:
vgcreate cinder_volume /dev/sdb4
Check with pvscan / pvs:
sudo pvscan
  PV /dev/sda3    VG fedora_dai142   lvm2 [205.08 GiB / 0    free]
  PV /dev/loop0   VG stack-volumes   lvm2 [10.01 GiB / 10.01 GiB free]
  PV /dev/sdb4    VG cinder_volume   lvm2 [375.39 GiB / 375.39 GiB free]
  Total: 3 [590.48 GiB] / in use: 3 [590.48 GiB] / in no VG: 0 [0   ]