
KanyunMetering

The monitoring component monitors virtual-machine information on the nova-compute and nova-network nodes, as well as the nodes' network traffic.

Topology

Overview

                +-----------------+
                |    cassandra    |
                +-----------------+
                  |            |
                  |            |
            +-----------+ +-------------+
            |   server  | |  API-server |   <---- statisticer
            +-----------+ +-------------+
                   |
                   |
                 worker
                   |
           ________|_____________________
         /         |                     \
  traffic collector  VM info collector  other collectors

Server


                         +--- <-- Worker's PUSH
                         |
                         |
                   +----------+
                   |   PULL   |     <-- feedback
               +---|==========|
   Client--->  |REP|  Server  |
               +---|==========|
                   |   PUB    |     <-- broadcast
                   +----------+
                         |
                         |
                         +----> Worker's SUB
                         +----> DB



Communication protocol

All data is exchanged in JSON format.

External traffic collector ---> worker

Format:

  {'instanceid1':('IP', time, 'out_bytes'), 'instanceid2':('IP', time, 'out_bytes')}

Example:

{'instance-00000001': ('10.0.0.2', 1332409327, '0')}
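As a sketch of the producer side (the collector is assumed to be Python, matching the rest of the stack; `build_traffic_msg` and its input mapping are illustrative, not the actual module code):

```python
import json
import time

def build_traffic_msg(samples):
    """Build the traffic message: {instance_id: (IP, time, out_bytes)}.

    `samples` maps instance id -> (IP, outbound byte count); in the real
    collector these would come from per-instance traffic accounting.
    """
    now = int(time.time())
    return {iid: (ip, now, str(out_bytes))
            for iid, (ip, out_bytes) in samples.items()}

msg = build_traffic_msg({'instance-00000001': ('10.0.0.2', 0)})
wire = json.dumps(msg)  # JSON encodes the tuple as a list on the wire
```

Note that JSON has no tuple type, so the (IP, time, out_bytes) triple arrives at the worker as a three-element list.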


VM info collector --> worker

Format:

[('cpu', 'total', (utc_time, cpu_usage)), 
('mem', 'total', (utc_time, max, free)), 
('nic', 'vnet8', (utc_time, incoming, outgoing (internal network))), 
('blk', 'vda', (utc_time, read, write)), 
('blk', 'vdb', (utc_time, read, write))],


Example:

{'instance-000000ba@sws-yz-5': 
[('cpu', 'total', (1332400088.149444, 5)),  
('mem', 'total', (1332400088.149444, 8388608L, 8388608L)), 
('nic', 'vnet8', (1332400088.152285, 427360746L, 174445810L)), 
('blk', 'vda', (1332400088.156346, 298522624L, 5908590592L)), 
('blk', 'vdb', (1332400088.158202, 2159616L, 1481297920L))]}
The cpu and mem figures are actual usage.
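A sketch of how such a payload might be assembled (the builder function is illustrative; a real collector would read these counters from the hypervisor, e.g. via libvirt):

```python
import time

def build_vm_info(cpu_usage, mem_max, mem_free, nics, disks):
    """Assemble the [(kind, device, (utc_time, *counters)), ...] payload.

    nics maps device -> (incoming, outgoing); disks maps device -> (read, write).
    """
    now = time.time()
    info = [('cpu', 'total', (now, cpu_usage)),
            ('mem', 'total', (now, mem_max, mem_free))]
    info += [('nic', dev, (now, rx, tx)) for dev, (rx, tx) in sorted(nics.items())]
    info += [('blk', dev, (now, rd, wr)) for dev, (rd, wr) in sorted(disks.items())]
    return info

payload = build_vm_info(5, 8388608, 8388608,
                        {'vnet8': (427360746, 174445810)},
                        {'vda': (298522624, 5908590592),
                         'vdb': (2159616, 1481297920)})
```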


worker-->server

Format:

  [msg_type, msg]
  msg_type values:
    HEART_BEAT = '0'
    LOCAL_INFO = '1'
    TRAFFIC_ACCOUNTING = '2'
    AGENT = '3'


Example:

Heartbeat:
['WORKER1', time.time(), status]
status: 0 means the worker is about to exit normally, and on receiving 0 the server stops monitoring that worker's status; 1 means the worker is running.
Data:
['2', 
    '{"instance-00000001@pyw.novalocal": 
        [
         ["cpu", "total", [1332831360.029795, 53522870000000]], 
         ["mem", "total", [1332831360.029795, 131072, 131072]], 
         ["nic", "vnet0", [1332831360.037399, 21795245, 5775663]], 
         ["blk", "vda", [1332831360.04699, 474624, 4851712]], 
         ["blk", "vdb", [1332831360.049333, 122880, 0]]
        ]
     }'
]
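A sketch of the framing (assuming the msg_type constants above; `frame` is an illustrative helper, not the actual worker code):

```python
import json

TRAFFIC_ACCOUNTING = '2'

def frame(msg_type, payload):
    """Frame a worker->server message as [msg_type, json-encoded payload]."""
    return [msg_type, json.dumps(payload)]

message = frame(TRAFFIC_ACCOUNTING,
                {'instance-00000001@pyw.novalocal':
                 [['cpu', 'total', [1332831360.029795, 53522870000000]],
                  ['mem', 'total', [1332831360.029795, 131072, 131072]]]})
```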


server-->cassandra

Format:

{instance_id, {scf_str: {time: value} }}


Example:

The column family mem_max stores the maximum memory value; mem_free stores the free memory value.
instance_id, {'total': {1332831360: 131072}}
instance_id, {'total': {1332831360: 131072}}
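As a sketch of the mapping the server performs (the helper and its cf_map table are illustrative; the column-family names follow the schema in the Database section):

```python
def to_rows(instance_id, entries):
    """Flatten [(kind, device, (utc_time, *values)), ...] into
    (column_family, row_key, super_column, column, value) tuples
    suitable for the super-column layout above.
    """
    cf_map = {'mem': ('mem_max', 'mem_free'),
              'nic': ('nic_incoming', 'nic_outgoing'),
              'blk': ('blk_read', 'blk_write')}
    rows = []
    for kind, dev, sample in entries:
        ts, values = int(sample[0]), sample[1:]
        if kind == 'cpu':
            rows.append(('cpu', instance_id, dev, ts, str(values[0])))
        else:
            # one column family per counter, e.g. mem -> mem_max and mem_free
            for cf, val in zip(cf_map[kind], values):
                rows.append((cf, instance_id, dev, ts, str(val)))
    return rows

rows = to_rows('instance-00000001@pyw.novalocal',
               [('mem', 'total', (1332831360.029795, 131072, 131072))])
```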


statisticer-->server

Format:

[CMD, row_id, cf_str, scf_str, statistic, period, time_from, time_to]
CMD values:
    STATISTIC = 'S'
    GETBYKEY = 'K'
    GET = 'G'
    LIST = 'L'
statistic values:
    SUM     = 0
    MAXIMUM = 1
    MINIMUM = 2
    AVERAGE = 3
    SAMPLES = 4
period is the aggregation interval in minutes; for example, passing 5 computes statistics over 5-minute buckets.
time_from and time_to are numeric timestamps; a time_to of 0 means the current time.


Example:

Protocol request examples

[u'S', u'instance-00000001@pyw.novalocal', u'cpu', u'total', 0, 5, 1332897600, 0]
[u'K', row_id, "blk_write"]
[u'G', row_id, "cpu", "total"]
[u'L', cf_str]
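A minimal sketch of the aggregation behind an 'S' request: bucket the stored {time: value} samples into period-minute intervals starting at time_from, then reduce each bucket (function and constant names are illustrative, not the actual statisticer code):

```python
SUM, MAXIMUM, MINIMUM, AVERAGE, SAMPLES = range(5)

def statistic(samples, stat, period, time_from, time_to):
    """Aggregate {utc_time: value} samples over `period`-minute buckets.

    time_to == 0 means "up to now"; it is kept explicit here for clarity.
    """
    step = period * 60
    buckets = {}
    for ts, val in samples.items():
        if ts >= time_from and (time_to == 0 or ts <= time_to):
            start = ts - (ts - time_from) % step
            buckets.setdefault(start, []).append(val)
    reduce_fn = {SUM: sum, MAXIMUM: max, MINIMUM: min,
                 AVERAGE: lambda v: sum(v) / len(v),
                 SAMPLES: len}[stat]
    return [{str(float(start)): reduce_fn(vals)}
            for start, vals in sorted(buckets.items())]

result = statistic({1332897660: 4, 1332897720: 6, 1332897960: 2},
                   SUM, 5, 1332897600, 0)
# → [{'1332897600.0': 10}, {'1332897900.0': 2}]
```

The result shape matches the server-->statisticer reply format below: a list of {bucket_start: aggregate} dicts.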


api_client examples

List all instances for which the given data type exists: api_client vmnetwork
Get the data of a given instance: api_client -k instance-0000002
Get data for a given type, instance, and parameter: api_client instance-0000002 vmnetwork 10.0.0.2
	api_client instance-00000012@lx12 cpu
	api_client instance-00000012@lx12 mem mem_free
Query data for a given instance, type, parameter, and statistic kind, aggregated in 5-minute buckets from the given start time up to now, returning the statistics:
api_client instance-0000002 vmnetwork 10.0.0.2 0 5 1332897600 0


server-->statisticer

Format:

[{key:result}]

Example:

[ {"1332897600.0": 10} ]



Database

Structure:

+--------------+
| cf=vmnetwork |
+--------------+-------------------------------------------+
| scf=IP                                                   |
+===================+===========+=======+==================+
|                   | col=time1 | time2 | ...              |
+===================+===========+=======+==================+
| key=instance_id   |   val1    | val2  | ...              |
+==========================================================+

+------------------------------------------------------------------------+
| cf=cpu/mem_max/mem_free/nic_read/nic_write/blk_read/blk_write/...      |
+------------------------------------------------------------------------+
| scf=total/devname(vnet0/vda...)                  |
+=================+==============+=======+=========+
|                 | col=utc_time | time2 | ...     |
+=================+==============+=======+=========+
| key=instance_id | val1(subval) | val2  | ...     |
+==================================================+


Creating the keyspace

Connect locally on the database host with cassandra-cli -h 127.0.0.1 and run the following commands to create the keyspace:

create keyspace data;
use data;

create column family vmnetwork with column_type='Super' and comparator='AsciiType' and subcomparator='IntegerType' and default_validation_class='AsciiType';
create column family cpu with column_type='Super' and comparator='AsciiType' and subcomparator='IntegerType' and default_validation_class='AsciiType';
create column family mem_max with column_type='Super' and comparator='AsciiType' and subcomparator='IntegerType' and default_validation_class='AsciiType';
create column family mem_free with column_type='Super' and comparator='AsciiType' and subcomparator='IntegerType' and default_validation_class='AsciiType';
create column family nic_incoming with column_type='Super' and comparator='AsciiType' and subcomparator='IntegerType' and default_validation_class='AsciiType';
create column family nic_outgoing with column_type='Super' and comparator='AsciiType' and subcomparator='IntegerType' and default_validation_class='AsciiType';
create column family blk_read with column_type='Super' and comparator='AsciiType' and subcomparator='IntegerType' and default_validation_class='AsciiType';
create column family blk_write with column_type='Super' and comparator='AsciiType' and subcomparator='IntegerType' and default_validation_class='AsciiType';

assume vmnetwork keys as ascii;
assume cpu keys as ascii;
assume mem_max keys as ascii;
assume nic_incoming keys as ascii;
assume nic_outgoing keys as ascii;
assume blk_read keys as ascii;
assume blk_write keys as ascii;
assume mem_free keys as ascii;


schema

[default@data] show schema;
create keyspace data
  with placement_strategy = 'NetworkTopologyStrategy'
  and strategy_options = {datacenter1 : 1}
  and durable_writes = true;

use data;

create column family blk_read
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family blk_write
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family cpu
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family mem_free
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family mem_max
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family nic_incoming
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family nic_outgoing
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family nic_read
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family nic_write
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family vmnetwork
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';



Sample configuration file

bin/kanyun.conf
[MySQL]
host: 127.0.0.1
passwd: nova
user: root
db: nova

[server]
handler_host: *
handler_port: 5553
broadcast_host: *
broadcast_port: 5552
feedback_host: *
feedback_port: 5551
db_host: 127.0.0.1

[api]
api_host: *
api_port: 5556
db_host: 127.0.0.1

[worker]
id: worker1
worker_timeout: 60
broadcast_host: 127.0.0.1
broadcast_port: 5552
feedback_host: 127.0.0.1
feedback_port: 5551
log: /tmp/kanyun-worker.log

[client]
api_host: 10.210.228.23
api_port: 5556
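The `key: value` syntax above is accepted directly by Python's stdlib configparser. A sketch of reading the worker section (shown with an inline string so it runs anywhere; in deployment it would read bin/kanyun.conf):

```python
import configparser

SAMPLE = """\
[worker]
id: worker1
worker_timeout: 60
broadcast_host: 127.0.0.1
broadcast_port: 5552
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)          # in deployment: cfg.read('bin/kanyun.conf')

worker_id = cfg.get('worker', 'id')
broadcast = (cfg.get('worker', 'broadcast_host'),
             cfg.getint('worker', 'broadcast_port'))
```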


>>>KanyunMetering(English)