KanyunMetering

KanyunMetering is used to monitor the network traffic and VM information of the virtual machines on nova-compute and nova-network nodes.

Topology

Overview

                +-----------------+
                |    cassandra    |
                +-----------------+
                  |            |
                  |            |
            +-----------+ +-------------+
            |   server  | |  API-server |   <---- statisticer
            +-----------+ +-------------+
                   | 
                   |    
                 worker
                   |   
           ________|___________        
         /         |           \
 network traffic  VM info  other 


Server


                         +--- <-- Worker's PUSH
                         |
                         |
                   +----------+
                   |   PULL   |     <-- feedback
               +---|==========|
   Client--->  |REP|  Server  |
               +---|==========|
                   |   PUB    |     <-- broadcast
                   +----------+
                         |
                         |
                         +----> Worker's SUB
                         +----> DB



Module

Worker

The worker runs on a nova-compute or nova-network node. A worker has plugins to collect different kinds of information, such as network traffic and the CPU usage, free memory, etc. of each virtual machine. The worker runs once every minute, and the collected data is sent to the server.
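
As a rough illustration, the collection loop might look like the sketch below. The plugin registry and the send_to_server helper are hypothetical placeholders; the real worker's plugin interface is not shown on this page.

import time

# Hypothetical plugin registry: each plugin returns one kind of collected data.
PLUGINS = {}

def plugin(name):
    def register(func):
        PLUGINS[name] = func
        return func
    return register

@plugin('vm_info')
def collect_vm_info():
    # Placeholder: a real plugin would query libvirt (see the VM info format below).
    return []

def send_to_server(name, data):
    # Placeholder: the real worker pushes data to the server (see worker-->server below).
    print(name, data)

def run_worker(interval=60):
    # The worker wakes up once per minute, runs every plugin, and ships the results.
    while True:
        for name, collect in PLUGINS.items():
            send_to_server(name, collect())
        time.sleep(interval)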

Server

The server receives the data sent by the workers and saves it into the Cassandra database.
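
The socket names in the server diagram above (PUSH/PULL, PUB/SUB, REP) suggest ZeroMQ. A minimal sketch of that wiring, assuming pyzmq and the ports from the example config file at the end of this page; save_to_db is a hypothetical placeholder for the Cassandra write described under server-->cassandra.

import json
import zmq

def save_to_db(payload):
    # Placeholder for the Cassandra write described under server-->cassandra below.
    pass

def run_server():
    ctx = zmq.Context()

    puller = ctx.socket(zmq.PULL)      # receives the workers' PUSH messages
    puller.bind('tcp://*:5553')        # handler_port in the example config file

    publisher = ctx.socket(zmq.PUB)    # broadcast channel to the workers' SUB sockets
    publisher.bind('tcp://*:5552')     # broadcast_port in the example config file

    feedback = ctx.socket(zmq.REP)     # feedback/request channel
    feedback.bind('tcp://*:5551')      # feedback_port in the example config file

    poller = zmq.Poller()
    poller.register(puller, zmq.POLLIN)
    poller.register(feedback, zmq.POLLIN)

    while True:
        events = dict(poller.poll())
        if puller in events:
            msg = puller.recv_json()        # e.g. ['2', '{"instance-...": [...]}']
            msg_type, body = msg[0], msg[1]
            if msg_type == '2':             # collected data (see worker-->server)
                save_to_db(json.loads(body))
        if feedback in events:
            request = feedback.recv_json()
            feedback.send_json({'status': 'ok'})   # illustrative reply only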

Database

KanyunMetering uses the Cassandra database to store node information.

API-Server

API-Server is KanyunMetering's external interface. It provides data searching and data statistics services. KanyunMetering also ships an api-client tool, which a network administrator can use to view statistics from a Linux terminal.


protocol

All data communications use the JSON format.

External network traffic data collection module --->worker

format:

  {'instanceid1':('IP', time, 'out_bytes'), 'instanceid2':('IP', time, 'out_bytes')}

example:

{'instance-00000001': ('10.0.0.2', 1332409327, '0')}
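
For illustration, a collector emitting this structure might build its message as in the sketch below; the instance ID, address, and byte counter are placeholders, not the project's actual collection code.

import json
import time

def build_traffic_message(counters):
    # counters: {instance_id: (ip_address, out_bytes)} gathered by the collector
    now = int(time.time())
    return dict((iid, (ip, now, str(out_bytes)))
                for iid, (ip, out_bytes) in counters.items())

# Placeholder values in the shape of the example above.
msg = build_traffic_message({'instance-00000001': ('10.0.0.2', 0)})
print(json.dumps(msg))   # {"instance-00000001": ["10.0.0.2", <now>, "0"]}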


VM info collection-->worker

format:

[('cpu', 'total', (utc_time, cpu_usage)), 
('mem', 'total', (utc_time, max, free)), 
('nic', 'vnet8', (utc_time, incoming, outgoing)), 
('blk', 'vda', (utc_time, read, write)), 
('blk', 'vdb', (utc_time, read, write))],


example:

(CPU and memory values are actual usage)

{'instance-000000ba@sws-yz-5': 
[('cpu', 'total', (1332400088.149444, 5)),  
('mem', 'total', (1332400088.149444, 8388608L, 8388608L)), 
('nic', 'vnet8', (1332400088.152285, 427360746L, 174445810L)), 
('blk', 'vda', (1332400088.156346, 298522624L, 5908590592L)), 
('blk', 'vdb', (1332400088.158202, 2159616L, 1481297920L))]}
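
The per-VM numbers above are the kind of values the libvirt API exposes. A minimal sketch of collecting them, assuming the libvirt-python bindings (the project's actual plugin code is not shown on this page):

import time
import libvirt   # libvirt-python bindings

def collect_vm_info(domain_name, nic='vnet8', disk='vda'):
    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName(domain_name)
    now = time.time()

    # dom.info() -> [state, maxMem (KiB), memory (KiB), nrVirtCpu, cpuTime (ns)]
    state, max_mem, mem, ncpu, cpu_time = dom.info()

    # interfaceStats() -> (rx_bytes, rx_packets, rx_errs, rx_drop,
    #                      tx_bytes, tx_packets, tx_errs, tx_drop)
    rx_bytes, _, _, _, tx_bytes, _, _, _ = dom.interfaceStats(nic)

    # blockStats() -> (rd_req, rd_bytes, wr_req, wr_bytes, errs)
    _, rd_bytes, _, wr_bytes, _ = dom.blockStats(disk)

    conn.close()
    # 'mem' here is the value reported by dom.info(); how the real plugin derives
    # the free-memory value and the CPU usage figure is not shown on this page.
    return [('cpu', 'total', (now, cpu_time)),
            ('mem', 'total', (now, max_mem, mem)),
            ('nic', nic, (now, rx_bytes, tx_bytes)),
            ('blk', disk, (now, rd_bytes, wr_bytes))]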


worker-->server

format:

  [msg_type, msg]
  'msg_type' value range:
    HEART_BEAT = '0'
    LOCAL_INFO = '1'
    TRAFFIC_ACCOUNTING = '2'
    AGENT = '3'


example:

heartbeat:
['WORKER1', time.time(), status]
status: 0 means normal exit (when the server receives 0 it cancels the worker's status monitoring); 1 means working
data:
['2', 
    '{"instance-00000001@pyw.novalocal": 
        [
         ["cpu", "total", [1332831360.029795, 53522870000000]], 
         ["mem", "total", [1332831360.029795, 131072, 131072]], 
         ["nic", "vnet0", [1332831360.037399, 21795245, 5775663]], 
         ["blk", "vda", [1332831360.04699, 474624, 4851712]], 
         ["blk", "vdb", [1332831360.049333, 122880, 0]]
        ]
     }'
]
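
Assuming the PUSH/PULL wiring from the server diagram and pyzmq, the worker side of this exchange might look roughly like this; the port is the handler_port from the example config file at the bottom of the page, and the payload reuses the VM info shape shown above.

import json
import time
import zmq

TRAFFIC_ACCOUNTING = '2'

ctx = zmq.Context()
push = ctx.socket(zmq.PUSH)
push.connect('tcp://127.0.0.1:5553')   # the server's handler_port (assumed PULL endpoint)

# The collected data is JSON-encoded and wrapped in the [msg_type, msg] envelope.
payload = {'instance-00000001@pyw.novalocal':
           [['cpu', 'total', [time.time(), 53522870000000]],
            ['mem', 'total', [time.time(), 131072, 131072]]]}
push.send_json([TRAFFIC_ACCOUNTING, json.dumps(payload)])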


server-->cassandra

format:

{instance_id, {scf_str: {time: value} }}


example:

The column family mem_max saves the maximum memory value, and mem_free saves the free memory value

instance_id, {'total': {1332831360: 131072}}
instance_id, {'total': {1332831360: 131072}}
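
A minimal sketch of that write, assuming the pycassa client library (a common Python client for Cassandra at the time) and the keyspace/column families created below; the row key and values are taken from the examples above.

import pycassa

pool = pycassa.ConnectionPool('data', ['127.0.0.1:9160'])
mem_max = pycassa.ColumnFamily(pool, 'mem_max')
mem_free = pycassa.ColumnFamily(pool, 'mem_free')

# {row key: {super column (scf_str): {column (time): value}}}
row_key = 'instance-00000001@pyw.novalocal'
mem_max.insert(row_key, {'total': {1332831360: '131072'}})
mem_free.insert(row_key, {'total': {1332831360: '131072'}})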


statisticer-->server

format:

[CMD, row_id, cf_str, scf_str, statistic, period, time_from, time_to]
CMD value:
    STATISTIC = 'S'
    GETBYKEY = 'K'
    GET = 'G'
    LIST = 'L'
statistic value:
    SUM     = 0
    MAXIMUM = 1
    MINIMUM = 2
    AVERAGE = 3
    SAMPLES = 4
period is the statistics interval in minutes; for example, if 5 is passed, statistics are calculated every 5 minutes
time_from and time_to are integers (IntType). The current time is used if time_to is 0


example:

request data

[u'S', u'instance-00000001@pyw.novalocal', u'cpu', u'total', 0, 5, 1332897600, 0]
[u'K', row_id, "blk_write"]
[u'G', row_id, "cpu", "total"]
[u'L', cf_str]
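
For illustration, such a request could be sent to the API-Server as JSON over a ZeroMQ REQ socket, which is what the REP socket in the server diagram suggests; the host and port below come from the [client] section of the example config file, and the exact transport is an assumption.

import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect('tcp://10.210.228.23:5556')   # api_host / api_port from the example config

# STATISTIC ('S') request: SUM (0) over 5-minute periods from 1332897600 to now (0).
request = ['S', 'instance-00000001@pyw.novalocal', 'cpu', 'total', 0, 5, 1332897600, 0]
req.send_json(request)

print(req.recv_json())   # e.g. [{"1332897600.0": 10}] as shown under server-->statisticer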


api_client example

get the instance list of a specified super column family: api_client vmnetwork
get the data of a specified instance: api_client -k instance-0000002
get the data of a specified parameter: api_client instance-0000002 vmnetwork 10.0.0.2
	api_client instance-00000012@lx12 cpu
	api_client instance-00000012@lx12 mem mem_free

Query the specified instance with the specified type and parameter, aggregated in five-minute intervals, from the specified start time up to the current time, and return the statistical results:
api_client instance-0000002 vmnetwork 10.0.0.2 0 5 1332897600 0


server-->statisticer

format:

[{key:result}]

example:

[ {"1332897600.0": 10} ]



Database

struct:

+--------------+
| cf=vmnetwork |
+--------------+-------------------------------------------+
| scf=IP                                                   |
+===================+===========+=======+==================+
|                   | col=time1 | time2 | ...              |
+===================+===========+=======+==================+
| key=instance_id   |   val1    | val2  | ...              |
+==========================================================+

+------------------------------------------------------------------------+
| cf=cpu/mem_max/mem_free/nic_read/nic_write/blk_read/blk_write/...      |
+------------------------------------------------------------------------+
| scf=total/devname(vnet0/vda...)                  |
+=================+==============+=======+=========+
|                 | col=utc_time | time2 | ...     |
+=================+==============+=======+=========+
| key=instance_id | val1(subval) | val2  | ...     |
+==================================================+
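
Given this layout, reading a time range back out is a column slice inside one super column. A minimal sketch, again assuming pycassa; the row key and timestamps are placeholders.

import pycassa

pool = pycassa.ConnectionPool('data', ['127.0.0.1:9160'])
cpu = pycassa.ColumnFamily(pool, 'cpu')

# All samples for one instance between two timestamps, inside the 'total' super column.
samples = cpu.get('instance-00000001@pyw.novalocal',
                  super_column='total',
                  column_start=1332897600,
                  column_finish=1332901200)

for utc_time, value in samples.items():
    print(utc_time, value)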


create

use "cassandra-cli -h 127.0.0.1" to connect localhost cassandra database, and execute the command to ceate database:

create keyspace data;
use data;

create column family vmnetwork with column_type='Super' and comparator='AsciiType' and subcomparator='IntegerType' and default_validation_class='AsciiType';
create column family cpu with column_type='Super' and comparator='AsciiType' and subcomparator='IntegerType' and default_validation_class='AsciiType';
create column family mem_max with column_type='Super' and comparator='AsciiType' and subcomparator='IntegerType' and default_validation_class='AsciiType';
create column family mem_free with column_type='Super' and comparator='AsciiType' and subcomparator='IntegerType' and default_validation_class='AsciiType';
create column family nic_incoming with column_type='Super' and comparator='AsciiType' and subcomparator='IntegerType' and default_validation_class='AsciiType';
create column family nic_outgoing with column_type='Super' and comparator='AsciiType' and subcomparator='IntegerType' and default_validation_class='AsciiType';
create column family blk_read with column_type='Super' and comparator='AsciiType' and subcomparator='IntegerType' and default_validation_class='AsciiType';
create column family blk_write with column_type='Super' and comparator='AsciiType' and subcomparator='IntegerType' and default_validation_class='AsciiType';

assume vmnetwork keys as ascii;
assume cpu keys as ascii;
assume mem_max keys as ascii;
assume nic_incoming keys as ascii;
assume nic_outgoing keys as ascii;
assume blk_read keys as ascii;
assume blk_write keys as ascii;
assume mem_free keys as ascii;
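
After the schema is created, a quick smoke test can be done from the same cassandra-cli session; the row, super column, timestamp, and value below are only placeholders, and the exact quoting rules for IntegerType subcolumn names may differ between cassandra-cli versions.

set vmnetwork['instance-00000001']['10.0.0.2'][1332831360] = '0';
get vmnetwork['instance-00000001'];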


schema

[default@data] show schema;
create keyspace data
  with placement_strategy = 'NetworkTopologyStrategy'
  and strategy_options = {datacenter1 : 1}
  and durable_writes = true;

use data;

create column family blk_read
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family blk_write
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family cpu
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family mem_free
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family mem_max
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family nic_incoming
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family nic_outgoing
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family nic_read
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family nic_write
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';

create column family vmnetwork
  with column_type = 'Super'
  and comparator = 'AsciiType'
  and subcomparator = 'IntegerType'
  and default_validation_class = 'AsciiType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';



example of config file

bin/kanyun.conf
[MySQL]
host: 127.0.0.1
passwd: nova
user: root
db: nova

[server]
handler_host: *
handler_port: 5553
broadcast_host: *
broadcast_port: 5552
feedback_host: *
feedback_port: 5551
db_host: 127.0.0.1

[api]
api_host: *
api_port: 5556
db_host: 127.0.0.1

[worker]
id: worker1
worker_timeout: 60
broadcast_host: 127.0.0.1
broadcast_port: 5552
feedback_host: 127.0.0.1
feedback_port: 5551
log: /tmp/kanyun-worker.log

[client]
api_host: 10.210.228.23
api_port: 5556
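
The config is plain INI, so it can be read with Python's standard configparser module (the original code base likely used the Python 2 ConfigParser); a minimal sketch, assuming the bin/kanyun.conf path shown above.

import configparser

config = configparser.ConfigParser()
config.read('bin/kanyun.conf')

# Worker-side settings from the [worker] section
worker_id = config.get('worker', 'id')                  # 'worker1'
timeout = config.getint('worker', 'worker_timeout')     # 60
broadcast = 'tcp://%s:%s' % (config.get('worker', 'broadcast_host'),
                             config.get('worker', 'broadcast_port'))
print(worker_id, timeout, broadcast)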


>>>KanyunMetering中文 (Chinese-language version of this page)