
Difference between revisions of "Main Page/cobbler"

Key technical points:

1. Building a LiveCD
 
Introduction: a LiveCD is an image format provided by the Red Hat family of distributions; the resulting ISO can boot directly and can also be installed to a hard disk. Every CentOS and Fedora release now publishes a pre-built LiveCD ISO in its repos, e.g. CentOS-6.4-x86_64-LiveCD.iso, which can also be downloaded from the official website.
 
Steps:
 
(1) Run the following command to install the Red Hat OpenStack RDO yum release package. Pick the rpm that matches the OpenStack version you want to install:
 
yum install -y http://rdo.fedorapeople.org/openstack/openstack-grizzly/rdo-release-grizzly-2.noarch.rpm
 
(2) Prepare a kickstart (ks) file: take a standard CentOS ks file from the internet as a template, then modify it to add the OpenStack rpm packages, so that they are installed automatically when the ISO is built.
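
A minimal sketch of the relevant section of such a ks file (the package names below are illustrative assumptions, not taken from the original ks file; match them to the OpenStack version you install):

    %packages
    @core
    openstack-nova
    openstack-glance
    openstack-keystone
    %end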
 
(3) Install the livecd-tools rpm package.
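
For example, assuming a yum-based system:

    yum install -y livecd-tools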
 
(4) Run the following command to build the ISO:
 
livecd-creator --config=centos-livecd-minimal.ks --fslabel=openstack-LiveCD -d --shell
 
Difficulties:
 
In practice, the biggest difficulty was that we initially started from a trimmed-down CentOS ks file; the resulting OpenStack ISO was missing some rpm packages after installation, which caused all sorts of baffling problems. The eventual solution was to first build a full, untrimmed CentOS OS, verify that OpenStack ran correctly on it, and only then trim the system down.
 
References:
 
https://fedoraproject.org/wiki/How_to_create_and_use_a_Live_CD/zh-cn
 
http://www.centos.org/docs/5/html/Installation_Guide-en-US/s1-kickstart2-file.html
 
http://jgershater.ulitzer.com/node/2701636?page=0,1
 
  
2. Write a script to install the LiveCD onto the hard disk as the OS; the workflow and the system commands used are as follows:

# Find the device the LiveCD filesystem image is attached to, e.g. ext3fs.img on /dev/loop3
 
losetup -a |grep ext3fs.img
 
# Copy the image to the target disk partition, e.g. /dev/vda1

cat /dev/loop3 > /dev/vda1
 
# Install the bootloader (extlinux) so that next time the machine can boot from the hard disk

cat /usr/share/syslinux/mbr.bin > /dev/vda #write the MBR
 
blkid -o value -s UUID /dev/vda1 #get the partition's UUID
 
mkdir /mnt/tmp #create a temporary mount point
 
mount /dev/vda1 /mnt/tmp #mount the root partition on the temporary directory
 
extlinux -i /mnt/tmp/boot/ #install extlinux into the boot directory
 
extlinux --clear-once /mnt/tmp/boot/
 
cp -rf /dev/.initramfs/live/isolinux/* /mnt/tmp/boot/
 
mv /mnt/tmp/boot/isolinux.cfg /mnt/tmp/boot/extlinux.conf
 
sed -i 's/live:CDLABEL=.* /UUID=uuid /' /mnt/tmp/boot/extlinux.conf #replace "uuid" with the UUID obtained above
 
sed -i 's/ ro / /' /mnt/tmp/boot/extlinux.conf
 
sed -i 's/ rd.live.image / /' /mnt/tmp/boot/extlinux.conf
 
umount /dev/vda1
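
Putting the steps together, here is a sketch of a complete install script with the UUID substituted automatically (the device names /dev/loop3, /dev/vda, and /dev/vda1 are the same assumptions as in the examples above):

    #!/bin/bash
    # Sketch: install the running LiveCD image to /dev/vda1 and make it bootable.
    set -e
    img_dev=$(losetup -a | grep ext3fs.img | cut -d: -f1)  # e.g. /dev/loop3
    cat "$img_dev" > /dev/vda1                             # copy the image to the partition
    cat /usr/share/syslinux/mbr.bin > /dev/vda             # write the MBR
    uuid=$(blkid -o value -s UUID /dev/vda1)               # capture the partition UUID
    mkdir -p /mnt/tmp
    mount /dev/vda1 /mnt/tmp
    extlinux -i /mnt/tmp/boot/                             # install the bootloader
    extlinux --clear-once /mnt/tmp/boot/
    cp -rf /dev/.initramfs/live/isolinux/* /mnt/tmp/boot/
    mv /mnt/tmp/boot/isolinux.cfg /mnt/tmp/boot/extlinux.conf
    sed -i "s/live:CDLABEL=[^ ]* /UUID=$uuid /" /mnt/tmp/boot/extlinux.conf
    sed -i 's/ ro / /' /mnt/tmp/boot/extlinux.conf
    sed -i 's/ rd.live.image / /' /mnt/tmp/boot/extlinux.conf
    umount /mnt/tmp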
 
At this point, rebooting the server will boot it from the hard disk.
 
References:
 
http://www.syslinux.org/wiki/index.php/EXTLINUX
 
http://molinux.blog.51cto.com/2536040/548247
 
  
3. Configuring a PXE server to batch-install OpenStack nodes over PXE

Introduction: there are many integrated deployment tools available online, such as xCAT and cobbler, with broadly similar feature sets. Here we chose cobbler.

Steps:

(1) There is plenty of material about cobbler online; following it is enough to complete a basic cobbler setup.

(2) Deploy the LiveCD ISO into cobbler as a repo.

Note: a LiveCD does not itself support PXE loading. To support it, first run the livecd-iso-to-pxeboot tool to generate vmlinuz and initrd0.img files from the ISO, then deploy those two files into cobbler.

The core idea is to package the LiveCD's in-memory filesystem as a ramdisk file, so that the PXE boot flow can boot straight into that in-memory filesystem.

The commands are as follows:
iso="openstack"
 
livecd-iso-to-pxeboot $iso.iso
cobbler distro add --name=$iso --kernel=/var/www/html/iso/$iso/tftpboot/vmlinuz0 --initrd=/var/www/html/iso/$iso/tftpboot/initrd0.img
cobbler distro edit --name=$iso --kopts='root=live:/'$iso.iso' rootfstype=auto rootflags=ro !text !lang !ksdevice installserverip='$serverIp
cobbler profile add --name=$iso --distro=$iso
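
The dhcpd.conf example further below serves per-system boot files, which assumes each host is registered as a cobbler system attached to this profile. A hypothetical registration (the name, MAC, and IP are placeholders) might look like:

    cobbler system add --name=node1 --profile=$iso --mac=00:16:6D:AD:86:33 --ip-address=186.100.8.100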
At this point the OpenStack LiveCD ISO can be booted over PXE. Once it has loaded, you can run the "install the LiveCD to disk" script from section 2 by hand to install the system to the hard disk, or embed an autostart command when building the ISO so that the install script runs automatically.
 
(3) cobbler's default PXE transport is TFTP, which is an unreliable protocol, and the ramdisk file built here is over 200 MB; installing several hosts concurrently is therefore likely to boot slowly or drop connections. After searching around, we found that iPXE can replace plain PXE; its core idea is to download vmlinuz and initrd0.img over HTTP instead of TFTP.

We ran into a number of problems here, recorded below:

(1) Does cobbler support gPXE? According to online sources, cobbler already supports gPXE; the corresponding setting is:
 
    sed -i "s/^enable_gpxe:.*/enable_gpxe: 1/g" /etc/cobbler/settings

(2) PXE installation of virtual machines worked fine, but physical servers never reached the stage of downloading vmlinuz and initrd0.img over HTTP.

    Result: investigation showed that the server's NIC does not support gPXE.

(3) Some NICs support gPXE and others do not; what configuration can detect support automatically and choose between gPXE and plain PXE?

    Result: you can add if...else logic in /etc/dhcp/dhcpd.conf: if the client identifies as iPXE/gPXE, return the HTTP path to vmlinuz and initrd0.img; otherwise return the PXE boot file undionly.kpxe.

    Example:

host generic_735135b1-d72f-47f2-9f1d-13c63e75dc9c {
  hardware ethernet 00:16:6D:AD:86:33;
  if exists user-class and option user-class = "iPXE" {
    filename "http://186.100.8.248/cblr/svc/op/gpxe/system/735135b1-d72f-47f2-9f1d-13c63e75dc9c";
  } else if exists user-class and option user-class = "gPXE" {
    filename "http://186.100.8.248/cblr/svc/op/gpxe/system/735135b1-d72f-47f2-9f1d-13c63e75dc9c";
  } else {
    filename "undionly.kpxe";
  }
  next-server 186.100.8.248;
}
 
For iPXE/gPXE clients, cobbler's script is invoked to generate the configuration that downloads vmlinuz and initrd0.img over HTTP, for example:

#!gpxe
kernel http://186.100.8.248:80/cobbler/images/allinone/vmlinuz0
imgargs vmlinuz0 rootflags=ro root=live:/allinone.iso installserverip=186.100.8.248 rootfstype=auto kssendmac ks=http://186.100.8.248/cblr/svc/op/ks/system/735135b1-d72f-47f2-9f1d-13c63e75dc9c
initrd http://186.100.8.248:80/cobbler/images/allinone/initrd0.img
boot
 
References:
 
http://www.ibm.com/developerworks/cn/linux/l-cobbler/
 

Revision as of 10:33, 3 April 2014

Hi:

I have been running an RDO Havana OpenStack environment for a while.

By chance I was testing the Ceilometer APIs. First I defined a new meter (meter_test), then used the 'ceilometer sample-list -m yjmeter' command to check the result. I found that the query took about 13 seconds; why was it so slow when there was only one record?

Since RDO uses MongoDB as the default database for Ceilometer, I logged into it and found that the meter collection held about 2,700,000 records. Perhaps this mass of records is what slowed the query.

Consider a production environment: suppose there are 100 hosts, each running 30 VMs, with Ceilometer's default configuration of 11 compute pollsters and a pipeline interval of 600s. Then the system will generate 4,752,000 records per day: 11 (meters) * 30 (VMs) * 6 (pollster runs per hour) * 24 (hours per day) * 100 (hosts) = 4,752,000 records.
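
A quick sanity check of that arithmetic in shell:

    echo $((11 * 30 * 6 * 24 * 100))   # prints 4752000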

For such a case I think it is necessary to restrict the number of records by some mechanism; below are my tentative ideas:

1. Cap the number of stored records by age or by sample count, discarding old records.

2. Provide an API to delete samples by resource_id or other conditions, so that a third-party integration system can call it to remove the related samples when a resource is deleted.

3. Run a periodic roll-up task over raw samples: use 1-minute samples to generate 5-minute statistics, use 5-minute statistics to generate 30-minute statistics, and so on. Each period would have its own data table with its own cap on records.

I am not a Ceilometer developer, and I apologize if I am missing something obvious. Could you help me clarify these points and suggest how to implement this requirement?

Thanks

Hi

I have a requirement to monitor VMs: if a VM's meter (e.g. cpu_util) gets too high, the system should generate an alarm for that VM with the meter information.

I have tested Ceilometer's alarm function; below are the commands I used to create alarm objects, with and without a resource id:

ceilometer alarm-threshold-create --name alarm1 --meter-name cpu_util --period 60 --evaluation-periods 1 --statistic avg --comparison-operator gt --threshold 1 -q resource_id=757dadaa-0707-4fad-808d-81edc11438aa

ceilometer alarm-threshold-create --name alarm1 --meter-name cpu_util --period 60 --evaluation-periods 1 --statistic avg --comparison-operator gt --threshold 1

My question: do I have to define an alarm object for every VM and every meter? Taking 100 VMs and 2 meters (cpu_util, memory_util) as an example, I would have to define 100*2 = 200 alarm objects. I believe that if I define an alarm object with only a meter and no VM (resource_id), the alarm evaluator will aggregate over all VMs' values for that meter.

A follow-up question: I know the alarm evaluator processes alarm objects one by one, so too many alarm objects may also cause performance problems.

I am not a Ceilometer developer, and I apologize if I am missing something obvious. Could you help me clarify these points and suggest how to implement this requirement?

Thanks