Installing OpenStack Mitaka on CentOS 7
Date: 2016-05-08 16:17 Source: 51cto.com Author: youerning
Preface: OpenStack is a massive beast and not easy to fully digest, so once you have a rough picture of it, the next step should be deploying it yourself. One-click installers such as RDO and DevStack exist, but they are best used only as a first taste; after gaining some hands-on experience you should install everything from start to finish by hand, otherwise you will never face the errors and failures calmly. So, once you have a basic understanding of OpenStack, start installing~
Note: the official OpenStack documentation is genuinely excellent, but reading English always slows me down a little, so I wrote these notes on top of the official guide.
Reference: http://docs.openstack.org/mitaka/install-guide-rdo/
The first step is rough planning: how many nodes, which operating system, and how to lay out the network~
Here is my rough plan:
Number of nodes: 2 (one controller node, one compute node)
Operating system: CentOS Linux release 7.2.1511 (Core)
Network layout:
Controller node: 10.0.0.101 192.168.15.101
Compute node: 10.0.0.102 192.168.15.102
Prerequisites:
The official minimum hardware recommendation for a proof-of-concept environment with core services and several CirrOS instances:
Controller Node: 1 processor, 4 GB memory, and 5 GB storage
Compute Node: 1 processor, 2 GB memory, and 10 GB storage
Reference: http://docs.openstack.org/mitaka/install-guide-rdo/environment.html
Note: if you create the operating systems and configure the networks step by step by hand, I will have to look down on you~~ Go study Vagrant: with the configuration file below, a single command spins up both virtual machines with networking already configured. For a short Vagrant tutorial see: http://youerning.blog.51cto.com/10513771/1745102
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
  config.vm.box = "centos7"
  node_servers = { :control => ['10.0.0.101', '192.168.15.101'],
                   :compute => ['10.0.0.102', '192.168.15.102'] }
  node_servers.each do |node_name, node_ip|
    config.vm.define node_name do |node_config|
      node_config.vm.host_name = node_name.to_s
      node_config.vm.network :private_network, ip: node_ip[0]
      node_config.vm.network :private_network, ip: node_ip[1], virtualbox__intnet: true
      config.vm.boot_timeout = 300
      node_config.vm.provider "virtualbox" do |v|
        v.memory = 4096
        v.cpus = 1
      end
    end
  end
end
One vagrant up command and, after a short wait, two piping-hot virtual machines come out of the oven and our environment is ready~~
The environment:
Operating system: CentOS Linux release 7.2.1511 (Core)
Network layout:
Controller node: 10.0.0.101 192.168.15.101
Compute node: 10.0.0.102 192.168.15.102
Note: config.vm.box = "centos7" above requires that you already have a CentOS 7 box.
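If you don't have a box yet, register one first; a minimal sketch, where the file path is only a placeholder for your own box file or URL:
$ vagrant box add centos7 /path/to/centos7.box
$ vagrant box list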
Before starting the deployment, let's walk through the OpenStack installation steps.
First comes the software environment: some common software and package repositories need to be configured, roughly:
NTP server
controller node and other nodes
OpenStack package repository
Common components:
SQL database ===> MariaDB
NoSQL database ==> MongoDB (not needed by the core services)
Message queue ==> RabbitMQ
Memcached
Then come the individual services of the OpenStack framework. The core services:
Identity service ===> Keystone
Image service ===> Glance
Compute service ===> Nova
Networking service ===> Neutron
Dashboard ===> Horizon
Block storage service ===> Cinder
Additional storage services:
Shared file system service ===> Manila
Object storage service ===> Swift
Other services:
Orchestration service ===> Heat
Telemetry service ===> Ceilometer
Database service ===> Trove
Environment preparation
Name resolution:
Edit the hosts file on every node and add:
10.0.0.101 controller
10.0.0.102 compute
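To confirm that name resolution works, a quick check from either node:
# ping -c 2 controller
# ping -c 2 compute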
NTP time service
Controller node
1) Install the chrony package
# yum install chrony
2) Edit the config file /etc/chrony.conf and add the following; 202.108.6.95 can be replaced with any NTP server you prefer.
server 202.108.6.95 iburst
allow 10.0.0.0/24
3) Enable at boot and start
# systemctl enable chronyd.service
# systemctl start chronyd.service
Other nodes
1) Install the chrony package
# yum install chrony
2) Edit the config file /etc/chrony.conf, add the following, and remove or comment out the other server lines (clients do not need an allow rule):
server controller iburst
3) Enable at boot and start
# systemctl enable chronyd.service
# systemctl start chronyd.service
Verification:
Controller node
# chronyc sources
210 Number of sources = 2
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- 192.0.2.11                    2   7    12   137  -2814us[-3000us] +/-   43ms
^* 192.0.2.12                    2   6   177    46    +17us[  -23us] +/-   68ms
Other nodes
# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                    3   9   377   421    +15us[  -87us] +/-   15ms
OpenStack package repository
Install the yum repository for the chosen OpenStack release:
# yum install centos-release-openstack-mitaka
Update the system:
# yum upgrade
Note: if the kernel was updated, reboot.
Install python-openstackclient and openstack-selinux:
# yum install python-openstackclient
# yum install openstack-selinux
Note: if you hit errors like "Package does not match intended download", run yum clean all or simply download the RPM packages and install them directly.
Reference download location: http://ftp.usf.edu/pub/centos/7/cloud/x86_64/openstack-kilo/common/
SQL database
Install:
# yum install mariadb mariadb-server python2-PyMySQL
Create the config file /etc/my.cnf.d/openstack.cnf with the following content:
[mysqld]
# bind to the controller's management IP (10.0.0.101 in this setup)
bind-address = 10.0.0.101
# character set and storage engine settings
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
character-set-server = utf8
Enable at boot and start:
# systemctl enable mariadb.service
# systemctl start mariadb.service
Initialize the database and set the root password as follows:
# mysql_secure_installation
Enter current password for root (enter for none):[Enter]
Set root password? [Y/n] Y
New password: openstack
Re-enter new password:openstack
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] n
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y
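To confirm the new root password works, a quick sanity check (enter the password you set above when prompted):
$ mysql -u root -p -e "SHOW DATABASES;"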
Message queue: RabbitMQ
Install:
# yum install rabbitmq-server
Enable at boot and start:
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
Add the openstack user:
# rabbitmqctl add_user openstack RABBIT_PASS
Grant the openstack user configure, write, and read permissions (the three ".*" patterns, in that order):
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
NoSQL database: MongoDB
Install:
# yum install mongodb-server mongodb
Edit the config file /etc/mongod.conf:
bind_ip = 10.0.0.101
# smallfiles = true is optional
smallfiles = true
Enable at boot and start:
# systemctl enable mongod.service
# systemctl start mongod.service
Memcached
Install:
# yum install memcached python-memcached
Enable at boot and start:
# systemctl enable memcached.service
# systemctl start memcached.service
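To confirm memcached is listening on its default port 11211, a quick sketch:
# ss -tnlp | grep 11211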
At this point the software environment for the OpenStack framework is basically done; next come the individual services.
Installing the services is quite repetitive: apart from Keystone they all follow nearly the same steps, the only real difference being the names used when creating things. The general procedure:
1) Configure the database
CREATE DATABASE xxx;
GRANT ALL PRIVILEGES ON xxx.* TO 'xxxx'@'localhost' IDENTIFIED BY 'XXXX_DBPASS';
GRANT ALL PRIVILEGES ON xxx.* TO 'xxxx'@'%' IDENTIFIED BY 'XXXX_DBPASS';
2) Install
# yum install xxx
3) Edit the config files
connections to the other services, such as the database and RabbitMQ
authentication settings
service-specific settings
4) Sync the database (creates the required tables)
5) Enable at boot and start
# systemctl enable openstack-xxx.service
# systemctl start openstack-xxx.service
6) Create the user, service, endpoints, etc.
openstack user create xxx
openstack service create xxx
openstack endpoint create xxx
7) Verify that the service works
Note: it is recommended to back up each config file before editing it. To save space, config edits are shown in the following form:
[DEFAULT]
...
admin_token = ADMIN_TOKEN
This means: add admin_token = ADMIN_TOKEN under the [DEFAULT] section.
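As an alternative to hand-editing, the crudini tool (available from EPEL; using it here is my own suggestion, not part of the official guide) can apply such a snippet from the command line:
# yum install crudini
# crudini --set /etc/keystone/keystone.conf DEFAULT admin_token ADMIN_TOKEN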
Installing the services
Identity service: Keystone
Configure the database:
$ mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
Install:
# yum install openstack-keystone httpd mod_wsgi
Config file /etc/keystone/keystone.conf
Admin token:
[DEFAULT]
...
admin_token = ADMIN_TOKEN
Database:
[database]
...
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
Token provider:
[token]
...
provider = fernet
Note: ADMIN_TOKEN above can be generated with openssl rand -hex 10, or you can use any custom string.
Sync the database:
# su -s /bin/sh -c "keystone-manage db_sync" keystone
Initialize the fernet keys.
For background on the token providers, see: http://blog.csdn.net/miss_yang_cloud/article/details/49633719
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
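If the setup succeeded, a key repository should now exist; a quick sanity check (0 and 1 are the staged and primary keys):
# ls /etc/keystone/fernet-keys/
0  1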
Configure Apache
Edit /etc/httpd/conf/httpd.conf and change the following:
ServerName controller
Create the config file /etc/httpd/conf.d/wsgi-keystone.conf with the following content:
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>
Enable at boot and start:
# systemctl enable httpd.service
# systemctl start httpd.service
Create the service and API endpoints
To keep things short, put the admin token and endpoint URL into environment variables:
$ export OS_TOKEN=ADMIN_TOKEN
$ export OS_URL=http://controller:35357/v3
$ export OS_IDENTITY_API_VERSION=3
Create the service:
$ openstack service create \
  --name keystone --description "OpenStack Identity" identity
Create the endpoints; there are three in turn: public, internal, admin.
$ openstack endpoint create --region RegionOne \
  identity public http://controller:5000/v3
$ openstack endpoint create --region RegionOne \
  identity internal http://controller:5000/v3
$ openstack endpoint create --region RegionOne \
  identity admin http://controller:35357/v3
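To review what was just registered, a quick sketch (the list should show the identity service with its three endpoints):
$ openstack endpoint list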
Create the domain, projects, users, and roles
Create the domain:
$ openstack domain create --description "Default Domain" default
Create the admin project and user:
$ openstack project create --domain default \
  --description "Admin Project" admin
$ openstack user create --domain default \
  --password-prompt admin
Create the admin role:
$ openstack role create admin
Add the admin role to the admin project and user:
$ openstack role add --project admin --user admin admin
Create the service project:
$ openstack project create --domain default \
  --description "Service Project" service
Create the demo project:
$ openstack project create --domain default \
  --description "Demo Project" demo
Create the demo user:
$ openstack user create --domain default \
  --password-prompt demo
Create the user role:
$ openstack role create user
Add the user role to the demo project and user:
$ openstack role add --project demo --user demo user
Note: remember the passwords you set when creating the users.
Verify the admin user:
$ unset OS_TOKEN OS_URL
$ openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name admin --os-username admin token issue
Password:
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
| expires | 2016-02-12T20:14:07.056119Z |
| id | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
| | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
| | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws |
| project_id | 343d245e850143a096806dfaefa9afdc |
| user_id | ac3377633149401296f6c0d92d79dc16 |
+------------+-----------------------------------------------------------------+
Verify the demo user:
$ openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name demo --os-username demo token issue
Password:
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
| expires | 2016-02-12T20:15:39.014479Z |
| id | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
| | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
| | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U |
| project_id | ed0b60bf607743088218b0a533d5943f |
| user_id | 58126687cbcc4888bfa9ab73a2256f27 |
+------------+-----------------------------------------------------------------+
If output in the above format comes back, verification passed.
Environment variable scripts for the admin and demo users
Normally you would of course put the --os-xxxx parameters into environment variables; to switch between the admin and demo users faster, create environment scripts.
Create admin-openrc:
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Create demo-openrc:
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Verify admin once more
First: . admin-openrc
$ openstack token issue
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
| expires | 2016-02-12T20:44:35.659723Z |
| id | gAAAAABWvjYj-Zjfg8WXFaQnUd1DMYTBVrKw4h3fIagi5NoEmh21U72SrRv2trl |
| | JWFYhLi2_uPR31Igf6A8mH2Rw9kv_bxNo1jbLNPLGzW_u5FC7InFqx0yYtTwa1e |
| | eq2b0f6-18KZyQhs7F3teAta143kJEWuNEYET-y7u29y0be1_64KYkM7E |
| project_id | 343d245e850143a096806dfaefa9afdc |
| user_id | ac3377633149401296f6c0d92d79dc16 |
+------------+-----------------------------------------------------------------+
Image service: Glance
Configure the database:
$ mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
Create the user and role:
$ . admin-openrc
$ openstack user create --domain default --password-prompt glance
$ openstack role add --project service --user glance admin
Create the service and endpoints; the endpoints are public, internal, admin in turn.
$ openstack service create --name glance \
  --description "OpenStack Image" image
$ openstack endpoint create --region RegionOne \
  image public http://controller:9292
$ openstack endpoint create --region RegionOne \
  image internal http://controller:9292
$ openstack endpoint create --region RegionOne \
  image admin http://controller:9292
Install:
# yum install openstack-glance
Config file /etc/glance/glance-api.conf
Database:
[database]
...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
Keystone authentication:
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
...
flavor = keystone
Glance storage:
[glance_store]
...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Config file /etc/glance/glance-registry.conf
Database:
[database]
...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
Keystone authentication:
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
...
flavor = keystone
Sync the database:
# su -s /bin/sh -c "glance-manage db_sync" glance
Enable at boot and start:
# systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
  openstack-glance-registry.service
Verification:
$ . admin-openrc
Download the CirrOS image:
$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Create the image:
$ openstack image create "cirros" \
  --file cirros-0.3.4-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
If the following command shows output like this, it succeeded:
$ openstack image list
+--------------------------------------+--------+
| ID | Name |
+--------------------------------------+--------+
| 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros |
+--------------------------------------+--------+
Compute service: Nova
Controller node
Database:
$ mysql -u root -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Create the service, user, and role:
$ . admin-openrc
$ openstack user create --domain default \
  --password-prompt nova
$ openstack role add --project service --user nova admin
$ openstack service create --name nova \
  --description "OpenStack Compute" compute
Create the endpoints; public, internal, admin in turn:
$ openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1/%\(tenant_id\)s
Install:
# yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler
Config file /etc/nova/nova.conf
Enabled APIs and API database:
[DEFAULT]
...
enabled_apis = osapi_compute,metadata
[api_database]
...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
Database:
[database]
...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
RabbitMQ message queue:
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Keystone authentication:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
Bind IP:
[DEFAULT]
...
my_ip = 10.0.0.101
Enable Neutron support:
[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
VNC settings:
[vnc]
...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
Glance settings:
[glance]
...
api_servers = http://controller:9292
Concurrency lock path:
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
Sync the databases:
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage db sync" nova
Enable at boot and start:
# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
Compute node
Install:
# yum install openstack-nova-compute
Config file /etc/nova/nova.conf
RabbitMQ message queue:
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Keystone authentication:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
Bind IP:
[DEFAULT]
...
my_ip = 10.0.0.102
Enable Neutron support:
[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
VNC settings:
[vnc]
...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
Glance settings:
[glance]
...
api_servers = http://controller:9292
Concurrency lock path:
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
Virtualization driver (qemu, since the node itself runs inside a VM without hardware acceleration):
[libvirt]
...
virt_type = qemu
Enable at boot and start:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
Verification:
$ . admin-openrc
$ openstack compute service list
+----+--------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary | Host | Zone | Status | State | Updated At |
+----+--------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-consoleauth | controller | internal | enabled | up | 2016-02-09T23:11:15.000000 |
| 2 | nova-scheduler | controller | internal | enabled | up | 2016-02-09T23:11:15.000000 |
| 3 | nova-conductor | controller | internal | enabled | up | 2016-02-09T23:11:16.000000 |
| 4 | nova-compute | compute1 | nova | enabled | up | 2016-02-09T23:11:20.000000 |
+----+--------------------+------------+----------+---------+-------+----------------------------+
Networking service: Neutron
Controller node
Database:
$ mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
Create the service, user, and role:
$ . admin-openrc
$ openstack user create --domain default --password-prompt neutron
$ openstack role add --project service --user neutron admin
$ openstack service create --name neutron \
  --description "OpenStack Networking" network
Create the endpoints; public, internal, admin in turn:
$ openstack endpoint create --region RegionOne \
  network public http://controller:9696
$ openstack endpoint create --region RegionOne \
  network internal http://controller:9696
$ openstack endpoint create --region RegionOne \
  network admin http://controller:9696
Here we configure provider networks (networking option 1).
Reference: http://docs.openstack.org/mitaka/install-guide-rdo/neutron-controller-install-option1.html
Install:
# yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables
Config file /etc/neutron/neutron.conf
Database:
[database]
...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
Enable the ML2 core plug-in and disable additional service plug-ins:
[DEFAULT]
...
core_plugin = ml2
service_plugins =
RabbitMQ message queue:
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Keystone authentication:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Concurrency lock path:
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
Config file /etc/neutron/plugins/ml2/ml2_conf.ini
Type drivers:
[ml2]
...
type_drivers = flat,vlan
Disable self-service (tenant) networks:
[ml2]
...
tenant_network_types =
Enable the Linux bridge mechanism:
[ml2]
...
mechanism_drivers = linuxbridge
Enable the port security extension driver:
[ml2]
...
extension_drivers = port_security
Flat networks:
[ml2_type_flat]
...
flat_networks = provider
Enable ipset:
[securitygroup]
...
enable_ipset = True
Config file /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = False
[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Note: PROVIDER_INTERFACE_NAME is the name of the physical network interface, e.g. eth1.
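To see which interfaces the node actually has and pick the one attached to the provider network, a quick sketch:
# ip -o link show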
Config file /etc/neutron/dhcp_agent.ini
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
Config file /etc/neutron/metadata_agent.ini
[DEFAULT]
...
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
Config file /etc/nova/nova.conf
[neutron]
...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
Symlink:
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Sync the database:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart nova-api:
# systemctl restart openstack-nova-api.service
Enable at boot and start:
# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
Note: the L3 agent is only needed for self-service networking (option 2); with provider networks the following two commands can be skipped.
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service
Compute node
Install:
# yum install openstack-neutron-linuxbridge ebtables
Config file /etc/neutron/neutron.conf
RabbitMQ message queue:
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Keystone authentication:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Concurrency lock path:
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
Also configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini on the compute node exactly as on the controller above.
Config file /etc/nova/nova.conf
[neutron]
...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
Restart nova-compute:
# systemctl restart openstack-nova-compute.service
Enable at boot and start:
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service
Verification:
$ . admin-openrc
$ neutron ext-list
+---------------------------+-----------------------------------------------+
| alias | name |
+---------------------------+-----------------------------------------------+
| default-subnetpools | Default Subnetpools |
| network-ip-availability | Network IP Availability |
| network_availability_zone | Network Availability Zone |
| auto-allocated-topology | Auto Allocated Topology Services |
| ext-gw-mode | Neutron L3 Configurable external gateway mode |
| binding | Port Binding |
............
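Additionally, listing the agents should show the metadata, DHCP, and Linux bridge agents (one linuxbridge agent per node) as alive; a quick sketch:
$ neutron agent-list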
Dashboard: Horizon
Note: must be installed on the controller node.
Install:
# yum install openstack-dashboard
Config file /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Shanghai"
Restart the services:
# systemctl restart httpd.service memcached.service
Verification:
Browse to http://controller/dashboard
Block storage: Cinder
Database:
$ mysql -u root -p
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
Create the user and role:
$ . admin-openrc
$ openstack user create --domain default --password-prompt cinder
$ openstack role add --project service --user cinder admin
Note that two services are created here:
$ openstack service create --name cinder \
  --description "OpenStack Block Storage" volume
$ openstack service create --name cinderv2 \
  --description "OpenStack Block Storage" volumev2
Create the endpoints; public, internal, admin in turn.
$ openstack endpoint create --region RegionOne \
  volume public http://controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volume internal http://controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volume admin http://controller:8776/v1/%\(tenant_id\)s
Note that each service gets its own three endpoints:
$ openstack endpoint create --region RegionOne \
  volumev2 public http://controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
Install
Controller node:
# yum install openstack-cinder
Config file /etc/cinder/cinder.conf
Database:
[database]
...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
RabbitMQ message queue:
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Keystone authentication:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
Bind IP (the controller's management IP in this setup):
[DEFAULT]
...
my_ip = 10.0.0.101
Concurrency lock path:
[oslo_concurrency]
...
lock_path = /var/lib/cinder/tmp
Sync the database:
# su -s /bin/sh -c "cinder-manage db sync" cinder
Config file /etc/nova/nova.conf:
[cinder]
os_region_name = RegionOne
Restart nova-api:
# systemctl restart openstack-nova-api.service
Enable at boot and start:
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
Storage node (here we simply add a disk to the compute node)
Note: a second disk is required.
Install:
# yum install lvm2
# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service
Create the physical volume and volume group:
# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
Config file /etc/lvm/lvm.conf:
devices {
...
filter = [ "a/sdb/", "r/.*/" ]
Note: the newly added disk is usually sdb; if you also have sdc, sde, and so on, the filter becomes filter = [ "a/sdb/", "a/sdc/", "a/sde/", "r/.*/" ], and so forth.
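To confirm LVM sees the disk and the volume group, a quick sketch:
# pvs /dev/sdb
# vgs cinder-volumes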
Install:
# yum install openstack-cinder targetcli
Config file /etc/cinder/cinder.conf
Database:
[database]
...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
RabbitMQ message queue:
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Keystone authentication:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
Bind IP:
[DEFAULT]
...
my_ip = 10.0.0.102
Add an [lvm] section with the following content:
[lvm]
...
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
Enable the LVM back end:
[DEFAULT]
...
enabled_backends = lvm
Configure the Glance API:
[DEFAULT]
...
glance_api_servers = http://controller:9292
Concurrency lock path:
[oslo_concurrency]
...
lock_path = /var/lib/cinder/tmp
Enable at boot and start:
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
Verification:
$ . admin-openrc
$ cinder service-list
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled | up | 2014-10-18T01:30:54.000000 | None |
| cinder-volume | block1@lvm | nova | enabled | up | 2014-10-18T01:30:57.000000 | None |
And that's it: the installation is basically complete. In the dashboard you can now create a network as the admin user and then launch an instance.
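The same can be done from the CLI; a minimal sketch based on the official launch-instance steps, where the 192.168.15.x addresses, the DNS server, and the m1.nano flavor are assumptions for this lab:
$ . admin-openrc
$ neutron net-create --shared --provider:physical_network provider \
  --provider:network_type flat provider
$ neutron subnet-create --name provider \
  --allocation-pool start=192.168.15.50,end=192.168.15.200 \
  --dns-nameserver 8.8.8.8 --gateway 192.168.15.1 \
  provider 192.168.15.0/24
$ openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
$ . demo-openrc
$ openstack network list    # note the ID of the provider network
$ openstack server create --flavor m1.nano --image cirros \
  --nic net-id=PROVIDER_NET_ID --security-group default provider-instance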
Postscript: installing a whole stack by hand like this is admittedly a bit extreme, and yum was still doing the heavy lifting~ but it's worth going through manually at least once; after that, use scripts or install tools, because all the copy-pasting made my eyes blur~
The remaining services will get their own article. One thing worth noting: the official documentation is still the best documentation.
This post comes from the "Youerning Notes" blog; please keep the source link: http://youerning.blog.51cto.com/10513771/1769358