MediaWiki API result

This is the HTML representation of the JSON format. HTML is good for debugging, but is unsuitable for application use.

Specify the format parameter to change the output format. To see the non-HTML representation of the JSON format, set format=json.

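A machine-readable version of this result can be fetched directly. Below is a minimal Python sketch, assuming the requests library and a placeholder api.php URL; the query parameters are reconstructed from the response itself (the gapcontinue key suggests generator=allpages, and the "*" content fields suggest prop=revisions with rvprop=content), so treat them as an approximation of the original request rather than a confirmed reproduction.

import requests

# Placeholder endpoint -- substitute the wiki's real api.php URL.
API_URL = "https://wiki.example.org/w/api.php"

# Parameters reconstructed from the response below: "gapcontinue" implies a
# generator=allpages query, and the "*" fields come from prop=revisions
# with rvprop=content.
params = {
    "action": "query",
    "format": "json",        # machine-readable JSON instead of this HTML debug view
    "generator": "allpages",
    "prop": "revisions",
    "rvprop": "content",
}

while True:
    data = requests.get(API_URL, params=params).json()
    for page in data["query"]["pages"].values():
        print(page["pageid"], page["title"])
    if "continue" not in data:
        break                            # no "continue" block: the listing is complete
    params.update(data["continue"])      # e.g. gapcontinue=Rebuildforvms

Merging the returned continue object into the next request's parameters is the standard MediaWiki continuation pattern; here it would resume the page listing at Rebuildforvms.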

{
    "batchcomplete": "",
    "continue": {
        "gapcontinue": "Rebuildforvms",
        "continue": "gapcontinue||"
    },
    "query": {
        "pages": {
            "307": {
                "pageid": 307,
                "ns": 0,
                "title": "ReadDeletedYesOrOnly",
                "revisions": [
                    {
                        "contentformat": "text/x-wiki",
                        "contentmodel": "wikitext",
                        "*": "\n== Introduction ==\n\nThis is a list of places in the Nova code where we check for read_deleted being set to \"yes\" (which means to include both deleted and non-deleted rows) or \"only\" (which means to include only deleted rows).\n\nIf we're going to change the way we track deleted rows, we first need to fix all the places that currently rely on read_deleted.\n\nFor unit tests I only list the file, since the exact method will change when the corresponding tested module does.\n\n= nova/compute/instance_types.py =\n\nget_instance_type sets read_deleted=\"yes\" if param inactive is True\n\nget_instance_type_by_flavor_id has read_deleted param, default \"yes\"\n\n= nova/compute/manager.py =\n\nComputeManager._cleanup_running_deleted_instances read_deleted=\"yes\"\n\n= nova/context.py =\n\nRequestContext.__init__ has read_deleted param, default \"no\"\nRequestContext.read_deleted property\nRequestContext.elevated takes param read_deleted\n\nget_admin_context takes param read_deleted default \"no\"\n\n= nova/db/sqlalchemy/api.py =\n\nmodel_query takes kw param read_deleted, default \"no\"\n\nfixed_ip_get  \"yes\"\n\nfixed_ip_get_all \"yes\"\n\nfixed_ip_get_by_address \"yes\"\n\n(other fixed_ip functions do not have read_deleted \"yes\", which is inconsistent.  Bug?)\n\n_virtual_interface_query param read_deleted, default \"yes\"\n\n_ec2_volume_get_query \"yes\"\n\n_ec2_snapshot_get_query \"yes\"\n\nvolume_get_iscsi_target_num \"yes\"\n\nmigration_get \"yes\"\n\nmigration_get_by_instance_and_status \"yes\"\n\nmigration_get_unconfirmed_by_dest_compute \"yes\"\n\nconsole_get_by_pool_instance \"yes\"\n\nconsole_get_all_by_instance \"yes\"\n\nconsole_get \"yes\"\n\ninstance_type_get_all  \"yes\" if param inactive is True\n\n_instance_type)_access_query \"yes\"\n\nagent_build_destroy \"yes\"\n\nagent_build_update \"yes\"\n\nbw_usage_get \"yes\"\n\nbw_usage_get_by_uuids \"yes\"\n\nbw_usage_update \"yes\"\n\ns3_image_get \"yes\"\n\ns3_image_get_by_uuid \"yes\"\n\naggregate_metadata_get_item \"yes\"\n\naggregate-host_add \"yes\"\n\n_ec2_instance_get_query \"yes\"\n\n_security_group_get_query \"only\"\n\n= nova/network/manager.py =\n\nFloatingIP.deallocate_for_instance \"yes\"\n\nNetworkManager._do_trigger_security_group_members_refresh_for_instance \"yes\"\n\nNetworkManager.deallocate_fixed_ip \"yes\"\n\n= nova/network/quantum/nova_ipam_lib.py =\n\nQuantumNovaIPAMLib.deallocate_ips_by_vif \"yes\"\n\n= nova/notifications.py =\n\nbandwidth_usage \"yes\"\n\n= nova/openstack/common/rpc/common.py =\n\nCommmonRpcContext.elevated  read_deleted param\n\n= nova/tests/api/ec2/test_cinder_cloud.py =\n\n= nova/tests/api/ec2/test_cloud.py =\n\n= nova/tests/api/openstack/compute/contrib/test_flavor_manage.py =\n\n= nova/tests/compute/test_compute.py =\n\n= nova/tests/compute/test_compute_utils.py =\n\n= nova/tests/network/test_manager.py =\n\n= nova/tests/test_context.py =\n\n= nova/tests/test_db_api.py =\n\n= nova/tests/test_instance_types.py =\n\n= nova/virt/baremetal/db/sqlalchemy/api.py =\n\nmodel_query kw param read_deleted\n\n= tools/xenserver/vm_vdi_cleaner.py =\n\nfind_orphaned_instances  \"only\""
                    }
                ]
            },
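The page above catalogues call sites of the read_deleted flag; as a rough illustration of the semantics it describes ("no" filters to live rows, "yes" returns both live and deleted rows, "only" returns deleted rows), here is a minimal SQLAlchemy sketch. The Instance model and the function body are hypothetical stand-ins, not Nova's actual implementation.

from sqlalchemy import Boolean, Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Instance(Base):
    """Hypothetical stand-in for a soft-deleting Nova model."""
    __tablename__ = "instances"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    deleted = Column(Boolean, default=False)

def model_query(session, model, read_deleted="no"):
    """Apply the read_deleted filter described in the page above."""
    query = session.query(model)
    if read_deleted == "no":        # default: live rows only
        query = query.filter(model.deleted == False)
    elif read_deleted == "only":    # soft-deleted rows only
        query = query.filter(model.deleted == True)
    elif read_deleted != "yes":     # "yes" applies no filter at all
        raise ValueError("read_deleted must be 'no', 'yes' or 'only'")
    return query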
            "573": {
                "pageid": 573,
                "ns": 0,
                "title": "RealDeployments",
                "revisions": [
                    {
                        "contentformat": "text/x-wiki",
                        "contentmodel": "wikitext",
                        "*": "\n= Real deployments =\n\nThis page documents details of real OpenStack deployments. \n\n== Mediawiki ==\n\nContact: [http://ryandlane.com Ryan Lane]\n\n=== Documentation  ===\n* http://www.mediawiki.org/wiki/Wikimedia_Labs\n* http://wikitech.wikimedia.org/view/OpenStack\n* http://www.mediawiki.org/wiki/Extension:OpenStackManager\n* http://ryandlane.com/blog/wp-content/uploads/2012/02/Infrastructure-as-an-Open-Source-Project-FOSDEM-publish.odp\n\nThe ODP file of the FOSDEM talk has full notes if you switch to notes view.\n\n=== Deployment scripts ===\n\nPuppet repository, which has OpenStack manifests (for swift and nova) and some scripts used for managing gluster, nfs and\nganglia in a per-project way: \n\n* https://gerrit.wikimedia.org/r/gitweb?p=operations/puppet.git;a=summary\n* https://wikitech.wikimedia.org/wiki/Help:Git#Restrictions_and_Anonymous_access\n\n=== Blog posts ===\n\nRyan Lane's blog about how certain things are handled:\n\n* http://ryandlane.com/blog/2012/04/24/per-project-sudo-policies-using-sudo-ldap-and-puppet/\n* http://ryandlane.com/blog/2011/11/01/sharing-home-directories-to-instances-within-a-project-using-puppet-ldap-autofs-and-nova/\n* http://ryandlane.com/blog/2011/11/02/a-process-for-puppetization-of-a-service-using-nova/\n* http://ryandlane.com/blog/2011/01/24/announcing-openstackmanager-extension-for-mediawiki/\n* http://ryandlane.com/blog/2011/01/02/building-a-test-and-development-infrastructure-using-openstack/\n\nWikimedia blog about design decisions:\n\n* http://blog.wikimedia.org/2012/04/16/introduction-to-wikimedia-labs/\n\n== Argonne National Labs (DOE Magellan) ==\n\n=== Current Diablo environment ===\n\n* ubuntu 10.11 oneiric\n* [http://trac.mcs.anl.gov/projects/bcfg2 Bcfg2] configuration management\n* openstack Diablo via managed IT PPA\n* nova network 10GigE, VLAN manager\n* nova volume serivce using iscsi over ipoib\n* nginx load balancer / HA (frontend for all client API connections)\n* 2 x nova api servers, each with 4 instances\n* glance on gluster (over native ib to compute nodes)\n* keystone\n* dashboard\n* euca2ools via EC2 api\n* 500 compute nodes\n* IBM iDataplex\n* 2 x 2.6 intel nehalem\n* 24GB memory\n* 1GigE NIC\n* QDR infiniband (only used for storage atm)\n* ~100 users spread across ~15 tenants\n\n=== Planned Essex environment ===\n\n* TBD\n\n== [[TryStack]] Dell Region ==\n\nContact: [[JayPipes]]\n\nThe first region established for \n1TryStack features server hardware from Dell. There are 20 servers contained in five (5) Dell C6105s 2U server enclosures. Each server (four (4) in each of the 6105s server enclosures) contains:\n\n* 96GB RAM\n* 2 12-core Intel Xeon processors X5650 or AMD Opteron 4176HE\n* Two (2) 1GB network interface cards\n* ~5 TB usable disk space -- managed in a RAID10 setup\n\nOne (1) server -- freecloud-mgmt -- is used as a management server and runs the following services:\n\n* dnsmasq -- Used by all compute nodes to determine VMs IP addressing\n* chef-server -- http://localhost:4040/ -- The configuration management server used to deploy services into the service nodes user/passwd: admin/openstack\n* munin -- http://localhost:8081/ A networked resource monitoring tool useful in tracking performance and usage of resources. 
user/passwd: munin/openstack\n* nagios -- http://localhost:8082/ -- A different resource monitoring tool that does not need to have an agent installed on the tracked nodes (unlike Munin) user/passwd: munin/openstack\n* jenkins -- http://localhost:8080/ -- A continuous integration and deployment platform, used for running automated tasks\n\nIn addition to the above services, the management server also is responsible for:\n\n* git repositories for:\n  -       The [[TryStack]] Chef cookbooks and recipes are at /root/openstack-chef/\n  -       The canonical repo is available on GitHub: https://github.com/trystack/openstack-chef/\n\n* Doing base operating system deploys into other service nodes\n  -       Done using PXE installs\n\nThe other nineteen (19) servers are used as service nodes and run a variety of [[OpenStack]] servers and services that [[OpenStack]] depends on. These services may include one or more of the following:\n\n* mysql-server -- A MySQL database server\n* rabbitmq-server -- A RabbitMQ message queueing service\n* nova-api -- The [[OpenStack]] Compute API server\n* nova-scheduler -- The [[OpenStack]] Compute instance scheduling service\n* nova-compute -- The [[OpenStack]] Compute VM management service -- listens for messages sent from the nova-scheduler service and is responsible for performing actions such as launching, terminating or rebooting virtual machines\n* nova-network -- The [[OpenStack]] Compute networking service -- responds to messages sent from the nova-scheduler and nova-compute services to handle setting up of networking information for virtual machines\n* keystone -- The [[OpenStack]] Identity API server\n* glance-api -- The [[OpenStack]] Images API server\n* glance-registry -- The [[OpenStack]] Images Registry server\n* dashboard -- The [[OpenStack]] Dashboard server -- web-based console for users and administrators of [[TryStack]]\n\n=== Network Architecture ===\n\nA single Cisco 4948-10GE switch is in use and it is used to route a private management network for the 20 server nodes as well as provide access to the public Internet.\n\nThe freecloud-mgmt server runs a dnsmasq server and publishes a gateway for the rest of the other host machines at 10.0.100.1. The other 19 hosts set their default gateway to 10.0.100.1 and their eth0 interfaces are set to 10.0.100.101 through 10.0.100.118, making the management network. eth1 interfaces are used for the public network addresses of nodes, if any are needed.\n\n=== High Availability (HA) Service Configuration ===\n\nThere are six (6) service nodes that are deployed with heartbeat and DRBD. Three (3) nodes are set as the active servers and three are set as the standby servers. 
Thus, each combination of critical [[OpenStack]] services run on a pair of servers, with heartbeat monitoring the health of the active server and, on failure of the active server, redirects traffic from the IP address of the failed node to the standby node.\n\nThe pairs of active/standby servers act as redundant nodes providing a given set of related services:\n\n    \u201cFront-end Web\u201d -- nova-api, nova-scheduler, keystone, horizon\n    \u201cDatabase and Message Queue Server\u201d -- mysql-server, rabbitmq-server\n    \u201cImage Service\u201d -- glance-api, glance-registry\n\n== CERN ==\n\nContact: Tim Bell (tim.bell@cern.ch)\n\n=== Presentations and Documentation ===\n* San Diego 2012 Summit - http://www.slideshare.net/noggin143/20121017-openstack-accelerating-science\n* Overall project description (including other components) - http://cern.ch/go/N8wp\n* User guide for the facility is at http://clouddocs.web.cern.ch/clouddocs/\n\n=== Deployment ===\n\nThe environment is largely based on Scientific Linux 6, which is Red Hat compatible. We use KVM as our primary hypervisor although tests are ongoing with Hyper-V on Windows Server 2008.\n\nWe use the puppetlabs [[OpenStack]] modules to configure Nova, Glance, Keystone and Horizon. Puppet is used widely within the guest configuration also and Foreman as a GUI for reporting and VM provisioning.\n\nUsers and Groups are managed through Active Directory and imported into Keystone using LDAP.\n\nCLIs are available for Nova and Euca2ools.\n\n=== Areas currently being investigated ===\n\n* Block storage for live migration and Cinder\n* Integration with CERN Single Sign On\n\n=== Current Status ===\n\nWe currently are running around 250 hypervisors with around 1000 VMs."
                    }
                ]
            }
        }
    }
}
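Once the JSON above is parsed, each page's raw wikitext is the "*" member of the first element of its revisions array, as seen in both entries above. A short sketch of extracting it, assuming data holds the parsed response; the filename scheme is hypothetical:

# "data" is assumed to hold the parsed JSON response shown above.
for page in data["query"]["pages"].values():
    wikitext = page["revisions"][0]["*"]                  # raw page source (contentmodel "wikitext")
    filename = f"{page['pageid']}_{page['title']}.wiki"   # hypothetical naming scheme
    with open(filename, "w", encoding="utf-8") as f:
        f.write(wikitext)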