
OpenStack Cloud Platform Deployment -- Compute Node Configuration

koniaoer · Published 2024-10-22 · Last updated 2024-12-17

1. Compute Node Configuration

1) Host NIC configuration

Two NICs are recommended (internal/external); a sample configuration follows below.
Internal network segment: 192.168.64.0 (host-only)
External network segment: 192.168.130.0 (NAT)
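
For reference, a static address on the host-only NIC can be set with an ifcfg file like the sketch below (the interface name ens34 and the address are assumptions; adjust them to your own environment):

# /etc/sysconfig/network-scripts/ifcfg-ens34   (assumed host-only NIC)
TYPE=Ethernet
BOOTPROTO=static
NAME=ens34
DEVICE=ens34
ONBOOT=yes
IPADDR=192.168.64.101
PREFIX=24

Apply the change with systemctl restart network.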

2) Configure the hostname and the hosts file

hostnamectl set-hostname compute			# change the hostname; reboot or exit and log back in to refresh
echo "192.168.130.100 controller" >> /etc/hosts	# add name resolution for the controller node
echo "192.168.130.101 compute"  >> /etc/hosts	# add name resolution for this compute node

3) Disable the firewall and the libvirtd service

systemctl stop firewalld && systemctl disable firewalld			# stop and disable the firewall

systemctl stop libvirtd.service && systemctl disable libvirtd.service	# stop and disable the libvirtd service

(SELinux is switched off later, in step 6, together with the openstack-selinux package.)
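
To confirm both services are stopped and will not start at boot:

systemctl is-active firewalld libvirtd
systemctl is-enabled firewalld libvirtd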

4) Point time synchronization at the controller node

vi /etc/chrony.conf
server controller iburst				# "controller" is the controller (management) node's hostname
systemctl enable chronyd
systemctl restart chronyd

If the hostname and hosts entries were not already set in step 2, add them now:

hostnamectl set-hostname compute	# change the compute node's hostname; exit or reboot to refresh

vi /etc/hosts
	[host IP] compute				# add the local name mapping; remember to add this compute node's entry on the controller as well


chronyc sources						# verify the time source
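
Once synchronization has completed, chronyc sources should list controller with a "^*" (selected source) marker; chronyc tracking shows the current offset in more detail:

chronyc tracking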

5) Configure the OpenStack yum repositories

vi /etc/yum.repos.d/openstack.repo

[base]
name=base
baseurl=http://repo.huaweicloud.com/centos/7/os/x86_64/
enabled=1
gpgcheck=0

[extras]
name=extras
baseurl=http://repo.huaweicloud.com/centos/7/extras/x86_64/
enabled=1
gpgcheck=0

[updates]
name=updates
baseurl=http://repo.huaweicloud.com/centos/7/updates/x86_64/
enabled=1
gpgcheck=0

[train]
name=train
baseurl=http://repo.huaweicloud.com/centos/7/cloud/x86_64/openstack-train/
enabled=1
gpgcheck=0

[virt]
name=virt
baseurl=http://repo.huaweicloud.com/centos/7/virt/x86_64/kvm-common/
enabled=1
gpgcheck=0
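
After saving the repo file, refresh the metadata and make sure all five repositories are visible:

yum clean all
yum repolist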

6) Install the OpenStack release package and client

yum list | grep openstack

yum -y install centos-release-openstack-[release name (a~z)].noarch		# the later the letter, the newer the release

yum list | grep openstackclient
yum -y install python3-openstackclient.noarch								# the OpenStack client

openstack --version		# check the installed version

vi /etc/selinux/config
Change the policy to:
SELINUX=disabled
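
SELINUX=disabled only takes effect after a reboot. To stop enforcement in the current session without rebooting, switch to permissive mode:

setenforce 0
getenforce			# should now report Permissive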

Find the package:
yum list | grep openstack-selinux
Install it:
yum -y install openstack-selinux.noarch					# install the version matching your OpenStack release; on Red Hat systems use the corresponding RDO package

The RDO packages for each distribution can be found at:
https://cbs.centos.org/koji/packageinfo?packageID=3212

2. Nova Installation

On the compute node only Nova's compute module, "nova-compute", needs to be installed. Proceed as follows.

• Install the software

• Check that the nova user and group were created

yum -y install openstack-nova-compute
cat /etc/passwd | grep nova
cat /etc/group | grep nova

• Edit the configuration

cp /etc/nova/nova.conf /etc/nova/nova.bak
grep -Ev '^$|#' /etc/nova/nova.bak >/etc/nova/nova.conf
vi /etc/nova/nova.conf
[placement]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 123456
region_name = RegionOne

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = nova
password = 123456

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller:5672
# my_ip is the compute node's host-only IP
my_ip = 192.168.64.101
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
# use the controller node's IP in the noVNC URL
novncproxy_base_url = http://192.168.130.100:6080/vnc_auto.html

[libvirt]
virt_type = qemu
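
Whether qemu (pure software emulation) or kvm is the right virt_type depends on the host; the official installation guide suggests checking for hardware virtualization support first:

egrep -c '(vmx|svm)' /proc/cpuinfo

If this prints 0, the CPU (or the nested-virtualization settings of the hypervisor running this VM) exposes no hardware acceleration and virt_type = qemu is correct; otherwise kvm can be used instead.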

3. Neutron Installation

1 Install the packages
yum -y install openstack-neutron-linuxbridge
cat /etc/passwd | grep neutron
cat /etc/group | grep neutron

2 Edit the configuration files

• Back up the original file

• Strip the blank and comment lines

• Edit the Neutron configuration

cp /etc/neutron/neutron.conf /etc/neutron/neutron.bak
grep -Ev '^$|#' /etc/neutron/neutron.bak>/etc/neutron/neutron.conf
vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@controller:5672
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = project
username = neutron
password = 123456

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Edit the Linux bridge agent configuration

vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[linux_bridge]
# ens33 is the name of the compute node's NAT (provider) NIC
physical_interface_mappings = provider:ens33

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
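
If you are unsure which interface to map to the provider network, list the NICs and pick the one attached to the external (NAT) segment:

ip -o link show
ip -4 addr show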

Edit the Nova configuration

vi /etc/nova/nova.conf 
[DEFAULT]
vif_plugging_is_fatal = false
vif_plugging_timeout = 0

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = project
username = neutron
password = 123456

3 Restart the Nova and Neutron services
systemctl enable openstack-nova-compute
systemctl restart openstack-nova-compute
systemctl status openstack-nova-compute
systemctl enable neutron-linuxbridge-agent
systemctl start neutron-linuxbridge-agent
systemctl status neutron-linuxbridge-agent
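
The new compute host also has to be registered in the cell database. On the controller node (this step comes from the official installation guide and is run there, not on the compute node):

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
openstack compute service list
openstack network agent list

The compute node should show up as nova-compute in state "up", and its neutron-linuxbridge-agent should be reported as alive.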

4. Dashboard Installation

Dashboard provides a web-based management front end for OpenStack. In this project the Dashboard component is installed on the compute node; its main purpose is to let users configure and manage the cloud platform through a web page.

1 Basic workflow

(The original post shows a workflow diagram here.)

2 Install the package
yum -y install openstack-dashboard.noarch

3 Edit the configuration
vi /etc/openstack-dashboard/local_settings

ALLOWED_HOSTS = ['*']
OPENSTACK_HOST = "controller"
TIME_ZONE = "Asia/Shanghai"
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    'enable_auto_allocated_network': False,
    'enable_distributed_router': False,
    'enable_fip_topology_check': False,
    'enable_ha_router': False,
    'enable_ipv6': False,
    'enable_quotas': False,
    'enable_rbac_policy': False,
    'enable_router': False,
}

4 Publish the service with Apache

• Change into the Dashboard directory

cd /usr/share/openstack-dashboard

• Generate the Dashboard web service configuration for Apache

python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf

• Create a symbolic link to the configuration directory

ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf

• Restart the Apache service

systemctl enable httpd
systemctl restart httpd
systemctl status httpd
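
An optional local check confirms Apache is serving the Dashboard before you try it from a browser (a 200 response or a redirect to the login page is expected):

curl -I http://localhost/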

5 Test the service

In a browser, open http://192.168.64.122/ (the IP is the compute node's host-only IP).

In the "Domain" field enter "Default",

in the "User Name" field enter "admin",

and in the "Password" field enter "123456".

5. Adding a Volume

1 Add a disk (10 GB or larger)

Reboot the compute node after adding it.

2 Set up the storage node: create the volume group

• Check how the disks are attached

lsblk

• Create the LVM physical volume

pvcreate /dev/sdb

• Combine the physical volume into a volume group (a quick check follows below)

vgcreate cinder-volumes /dev/sdb
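
To confirm the physical volume and the volume group were created:

pvs
vgs

cinder-volumes should appear in the vgs output with roughly the size of the new disk.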

3 Edit the LVM configuration
vi /etc/lvm/lvm.conf

Add the following line inside the devices section:

filter = [ "a/sdb/", "r/.*/" ]

In the filter, "a" means accept and "r" means reject. If the operating system disk also uses LVM (for example /dev/sda), accept it in the filter as well, e.g. "a/sda/".

4 Restart the LVM metadata service
systemctl enable lvm2-lvmetad
systemctl start lvm2-lvmetad
systemctl status lvm2-lvmetad

5 Install the Cinder packages
yum -y install openstack-cinder targetcli python-keystone

6 Edit the Cinder configuration
cp /etc/cinder/cinder.conf /etc/cinder/cinder.bak
grep -Ev '^$|#' /etc/cinder/cinder.bak > /etc/cinder/cinder.conf
vi /etc/cinder/cinder.conf

[DEFAULT]
auth_strategy = keystone
enabled_backends = lvm
transport_url = rabbit://rabbitmq:123456@controller:5672
glance_api_servers = http://controller:9292


[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = cinder
password = 123456

[database]
connection = mysql+pymysql://cinder:123456@controller/cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm

7 Restart the Cinder services
systemctl enable openstack-cinder-volume target
systemctl start openstack-cinder-volume target
systemctl status openstack-cinder-volume target

Check from the controller node:
openstack volume service list

Create an 8 GB volume:

openstack volume create --size 8 volume1
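
If everything works, the volume appears both in the OpenStack volume list and as a new logical volume inside the cinder-volumes group on the storage node:

openstack volume list
lvs cinder-volumes

The volume should be reported with status "available".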
