OpenStack wiki, user contributions by David Moreau Simard
Feed: https://wiki.openstack.org/w/api.php?action=feedcontributions&user=David+Moreau+Simard&feedformat=atom (MediaWiki 1.28.2, retrieved 2024-03-29T15:12:50Z)

----
Meetings/InfraTeamMeeting, revision of 2019-09-03T19:01:40Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=172249
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
** OpenStack election season is upon us<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
<br />
* Priority Efforts (Standing meeting agenda items. Please expand if you have subtopics.)<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/update-config-management.html Update Config Management]<br />
*** topic:update-cfg-mgmt<br />
*** Zuul as CD engine<br />
** OpenDev<br />
<br />
* General topics<br />
** Trusty Upgrade Progress (clarkb 20190903)<br />
*** Wiki updates<br />
** static.openstack.org (ianw 20190903)<br />
*** Sign up for tasks at https://etherpad.openstack.org/p/static-services<br />
** AFS mirroring status (ianw 20190903)<br />
*** Did debugging additions help?<br />
*** Do we think rsync updates are to blame? Perhaps in newer rsync on Bionic?<br />
** Project Renaming September 16 (clarkb 20190903)<br />
** PTG Planning (clarkb 20190903)<br />
*** https://etherpad.openstack.org/p/OpenDev-Shanghai-PTG-2019<br />
** Volume of files from ARA html reports is problematic (dmsimard 20190903)<br />
*** https://etherpad.openstack.org/p/Vz5IzxlWFz<br />
<br />
* Open discussion<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
* Rename x/ansible-role-cloud-launcher -> opendev/ansible-role-cloud-launcher, https://review.opendev.org/662530<br />
* Rename x/kayobe{,-config,-config-dev} -> openstack/kayobe{,-config,-config-dev}, https://review.opendev.org/669298<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2019-03-19T15:29:30Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=168881
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
** Clarkb on vacation March 25-28<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
<br />
* Priority Efforts (Standing meeting agenda items. Please expand if you have subtopics.)<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/update-config-management.html Update Config Management]<br />
*** topic:puppet-4 and topic:update-cfg-mgmt<br />
*** Zuul as CD engine<br />
** OpenDev<br />
*** https://storyboard.openstack.org/#!/story/2004627<br />
*** git:// to https:// translation (ianw 20190312 clarkb 20190319)<br />
**** Clarkb wanted to follow up on this and make sure we aren't stuck or confused about what the path forward is.<br />
**** http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003825.html<br />
<br />
* General topics<br />
** Trusty server upgrade progress (clarkb 20190319)<br />
*** https://etherpad.openstack.org/p/201808-infra-server-upgrades-and-cleanup<br />
*** AFS upgrades complete. Still a few more services remaining. Please grab them on the etherpad if you can do the upgrade.<br />
** PTG planning (clarkb 20190319)<br />
*** https://etherpad.openstack.org/2019-denver-ptg-infra-planning<br />
*** [https://www.openstack.org/ptg#tab_schedule Draft schedule] We have Friday and Saturday in a shared room with the QA team.<br />
** Keeping ARA on GitHub (dmsimard 20190319)<br />
*** http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003842.html<br />
*** https://review.openstack.org/#/q/topic:upload-git-mirror<br />
<br />
* Open discussion<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
* AJKavanagh [tinwood] - Rename charm-lxd to charm-nova-lxd, with a project-config change at https://review.openstack.org/644584 .<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2018-05-29T19:03:18Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=161498
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
** https://review.openstack.org/349831 Survey tool spec<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** Modern Config Management<br />
*** https://review.openstack.org/449933 Puppet 4 Infra<br />
*** https://review.openstack.org/469983 Ansible Infra<br />
*** https://review.openstack.org/565550 Containerized infra<br />
<br />
* General topics<br />
<br />
* Open discussion<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2018-05-15T20:09:07Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=161309
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
** Summit/OpenDev next week<br />
** clarkb out May 15-17, dmsimard will chair<br />
** No meeting May 22 due to Summit/OpenDev<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
** https://review.openstack.org/349831 Survey tool spec<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** Modern Config Management<br />
*** https://review.openstack.org/449933 Puppet 4 Infra<br />
*** https://review.openstack.org/469983 Ansible Infra<br />
*** https://review.openstack.org/565550 Containerized infra<br />
<br />
* General topics<br />
** (dmsimard) Gauging interest in an ARA data aggregation idea: https://etherpad.openstack.org/p/ara-aggregation<br />
<br />
* Open discussion<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2018-05-15T19:28:00Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=161307
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
** Summit/OpenDev in two weeks<br />
** clarkb out May 15-17. Will need volunteer to chair meeting.<br />
** No meeting May 22 due to Summit/OpenDev<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
** https://review.openstack.org/349831 Survey tool spec<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** Modern Config Management<br />
*** https://review.openstack.org/449933 Puppet 4 Infra<br />
*** https://review.openstack.org/469983 Ansible Infra<br />
*** https://review.openstack.org/565550 Containerized infra<br />
<br />
* General topics<br />
** (dmsimard) Gauging interest in an ARA data aggregation idea: https://etherpad.openstack.org/p/ara-aggregation<br />
<br />
* Open discussion<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2018-05-15T18:57:46Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=161306
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
** Summit/OpenDev in two weeks<br />
** clarkb out May 15-17. Will need volunteer to chair meeting.<br />
** No meeting May 22 due to Summit/OpenDev<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
** https://review.openstack.org/349831 Survey tool spec<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** Modern Config Management<br />
*** https://review.openstack.org/449933 Puppet 4 Infra<br />
*** https://review.openstack.org/469983 Ansible Infra<br />
*** https://review.openstack.org/565550 Containerized infra<br />
<br />
* General topics<br />
** Rocky virtual sprint (pabelanger)<br />
*** Upgrade control plane servers to Ubuntu Xenial part 2<br />
***https://etherpad.openstack.org/p/infra-sprint-xenial-upgrades-part2<br />
** (dmsimard) Gauging interest in an ARA data aggregation idea: https://etherpad.openstack.org/p/ara-aggregation<br />
<br />
* Open discussion<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2018-05-01T19:37:52Z by David Moreau Simard (edit summary: /* Weekly Project Infrastructure team meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=161052
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** Modern Config Management<br />
*** https://review.openstack.org/449933 Puppet 4 Infra<br />
*** https://review.openstack.org/469983 Ansible Infra<br />
*** https://review.openstack.org/565550 Containerized infra<br />
<br />
* General topics<br />
** Gerrit server replacement scheduled for May 2nd 2018 (pabelanger)<br />
*** http://lists.openstack.org/pipermail/openstack-dev/2018-April/129022.html<br />
*** https://etherpad.openstack.org/p/review01-xenial-upgrade<br />
<br />
* Open discussion<br />
** Impending release of ARA 0.15.0, rc1 is out: https://github.com/openstack/ara/releases/tag/0.15.0.0rc1<br />
*** This will finally allow us to land improvements for os-loganalyze and ara on logs.o.o https://review.openstack.org/#/c/558688/<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2018-05-01T19:36:56Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=161051
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** Modern Config Management<br />
*** https://review.openstack.org/449933 Puppet 4 Infra<br />
*** https://review.openstack.org/469983 Ansible Infra<br />
*** https://review.openstack.org/565550 Containerized infra<br />
<br />
* General topics<br />
** Gerrit server replacement scheduled for May 2nd 2018 (pabelanger)<br />
*** http://lists.openstack.org/pipermail/openstack-dev/2018-April/129022.html<br />
*** https://etherpad.openstack.org/p/review01-xenial-upgrade<br />
<br />
* Open discussion<br />
** Impending release of ARA 0.15.0, rc1 is out: https://github.com/openstack/ara/releases/tag/0.15.0.0rc1<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2018-04-03T19:36:01Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=160634
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
** [https://review.openstack.org/#/c/557772/ Zuul v3 Done]<br />
** [https://review.openstack.org/#/c/555104/ Project Hosting Amendment]<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** We should reevaluate current projects and add a priority effort or two.<br />
*** Gerrit 2.14/2.15 upgrade<br />
*** Control plane operating system upgrades<br />
*** Wiki?<br />
*** Other ideas?<br />
<br />
* General topics<br />
** (dmsimard) Meetbot abuse<br />
*** http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-04-01.log.html#t2018-04-01T16:39:49<br />
*** http://eavesdrop.openstack.org/irclogs/%23openstack-operators/%23openstack-operators.2018-04-01.log.html#t2018-04-01T16:59:48<br />
** Gerrit replacement - (R-17) May 02, 2018 (pabelanger)<br />
*** review01.o.o online<br />
*** ML post: https://etherpad.openstack.org/p/HITPVWQ5Vr<br />
<br />
* Open discussion<br />
** David did a talk about openstack-infra<br />
*** Meta: Video about adding this agenda item to the agenda: https://www.youtube.com/watch?v=6gTsL7E7U7Q&t=1697<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2018-04-03T19:34:37Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=160633
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
** [https://review.openstack.org/#/c/557772/ Zuul v3 Done]<br />
** [https://review.openstack.org/#/c/555104/ Project Hosting Amendment]<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** We should reevaluate current projects and add a priority effort or two.<br />
*** Gerrit 2.14/2.15 upgrade<br />
*** Control plane operating system upgrades<br />
*** Wiki?<br />
*** Other ideas?<br />
<br />
* General topics<br />
** (dmsimard) Meetbot abuse<br />
*** http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-04-01.log.html#t2018-04-01T16:39:49<br />
*** http://eavesdrop.openstack.org/irclogs/%23openstack-operators/%23openstack-operators.2018-04-01.log.html#t2018-04-01T16:59:48<br />
** Gerrit replacement - (R-17) May 02, 2018 (pabelanger)<br />
*** review01.o.o online<br />
*** ML post: https://etherpad.openstack.org/p/HITPVWQ5Vr<br />
<br />
* Open discussion<br />
** David did a talk about openstack-infra<br />
*** Meta: Video about adding this agenda item to the agenda: https://www.youtube.com/watch?v=6gTsL7E7U7Q&t=1630<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2018-04-02T20:57:38Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=160617
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html Zuul v3]<br />
<br />
* General topics<br />
** (dmsimard) Meetbot abuse<br />
*** http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-04-01.log.html#t2018-04-01T16:39:49<br />
*** http://eavesdrop.openstack.org/irclogs/%23openstack-operators/%23openstack-operators.2018-04-01.log.html#t2018-04-01T16:59:48<br />
<br />
* Open discussion<br />
** David did a talk about openstack-infra<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2018-03-27T23:31:12Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=160533
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html Zuul v3]<br />
<br />
* General topics<br />
<br />
* Open discussion<br />
** David did a talk about openstack-infra<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2018-03-13T13:08:53Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=160161
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
** PTG topic brainstorming happening now https://etherpad.openstack.org/p/infra-rocky-ptg<br />
** PTG Schedule at https://ethercalc.openstack.org/cvro305izog2<br />
** No meeting next week, February 27.<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
** Improve IRC discoverability: https://review.openstack.org/#/c/550550/<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html Zuul v3]<br />
<br />
* General topics<br />
** AFS and reprepro (ianw 2018-03-13)<br />
*** we're having significant issues keeping this stable<br />
*** later version of afs client?<br />
*** alternative server implementations<br />
** ARM64 update (ianw 2018-03-13)<br />
*** update on jobs, mirrors, etc<br />
** ARA sqlite middleware once again ready for review (dmsimard 2018-03-13)<br />
*** https://review.openstack.org/#/q/topic:ara-sqlite-middleware<br />
** Meta: Add meeting time/location in #openstack-infra topic ? (dmsimard 2018-03-13)<br />
<br />
* Open discussion<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
* collectd-ceilometer-plugin->collectd-openstack-plugins https://review.openstack.org/#/c/500768<br />
* Something related to sdks/shade (via mordred)<br />
<br />
<br />
Proposed date is March 16; this depends on the release team getting cycle-trailing projects out earlier in the week, so sync up with them first.<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2018-03-06T19:02:41Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=160049
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
** PTG topic brainstorming happening now https://etherpad.openstack.org/p/infra-rocky-ptg<br />
** PTG Schedule at https://ethercalc.openstack.org/cvro305izog2<br />
** No meeting next week, February 27.<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html Zuul v3]<br />
<br />
* General topics<br />
** Github replication issues (ianw 2018-03-06)<br />
*** see notes in http://lists.openstack.org/pipermail/openstack-infra/2018-March/005842.html<br />
*** run recovery (http://lists.openstack.org/pipermail/openstack-dev/2017-June/119166.html) and reindex?<br />
** ARM64 update (ianw 2018-03-06)<br />
*** update on jobs, mirrors, etc<br />
** ARA sqlite middleware once again ready for review<br />
*** https://review.openstack.org/#/q/topic:ara-sqlite-middleware<br />
<br />
* Open discussion<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
* collectd-ceilometer-plugin->collectd-openstack-plugins https://review.openstack.org/#/c/500768<br />
* Something related to sdks/shade (via mordred)<br />
<br />
<br />
Proposed date is March 16; this depends on the release team getting cycle-trailing projects out earlier in the week, so sync up with them first.<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/Kolla, revision of 2018-01-24T15:43:27Z by David Moreau Simard (edit summary: /* Agenda for next meeting (Jan. 17th, 2018) */)
https://wiki.openstack.org/w/index.php?title=Meetings/Kolla&diff=159167
= Weekly Kolla team meeting =<br />
'''MEETING TIME:''' Wednesdays at 16:00 UTC in <code><nowiki>#openstack-meeting-4</nowiki></code>.<br />
<br />
== Agenda for next meeting (Jan. 24th, 2018) ==<br />
<br />
<pre><br />
* Roll-call<br />
* Announcements<br />
* Deploying a cloud with Kolla for the RDO test days (dmsimard)<br />
</pre><br />
<br />
=== Regular agenda ===<br />
Copy/Paste into IRC to kick the meeting off:<br />
<pre><br />
#startmeeting kolla<br />
</pre><br />
<br />
Then, once the bot has caught up and everyone is settled:<br />
<br />
<pre><br />
#topic rollcall<br />
</pre><br />
<br />
Once folks have checked in, run the agenda by the group present:<br />
<br />
<pre><br />
#topic agenda<br />
cut and paste agenda from above<br />
</pre><br />
<br />
=== Copy/Paste for IRC ===<br />
<br />
* https://bugs.launchpad.net/kolla/<br />
* https://blueprints.launchpad.net/kolla<br />
* https://blueprints.launchpad.net/kolla/+spec/multiarch-and-arm64-containers<br />
* https://review.openstack.org/#/c/434946/<br />
* https://review.openstack.org/#/c/504801<br />
<br />
<pre><br />
#link https://bugs.launchpad.net/kolla<br />
#link https://blueprints.launchpad.net/kolla<br />
#link https://blueprints.launchpad.net/kolla/+spec/multiarch-and-arm64-containers<br />
#link https://review.openstack.org/#/c/434946/<br />
</pre><br />
<br />
== Previous meetings ==<br />
<br />
* IRC logs [http://eavesdrop.openstack.org/meetings/kolla]

----
Meetings/InfraTeamMeeting, revision of 2018-01-09T18:51:58Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=158813
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
** Clarkb missing January 23rd meeting due to travel.<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html Zuul v3]<br />
*** Clean up [https://etherpad.openstack.org/p/zuulv3-issues Zuul v3 Issues Etherpad]<br />
<br />
* General topics<br />
** Meltdown/Spectre<br />
** (sshnaidm) Allowing jobs to send data to graphite.openstack.org<br />
<br />
* Open discussion<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
* collectd-ceilometer-plugin->collectd-openstack-plugins https://review.openstack.org/#/c/500768<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/Zuul, revision of 2018-01-08T19:46:40Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/Zuul&diff=158765

= Weekly Zuul meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings about Zuul and Nodepool development in <code><nowiki>#openstack-meeting-alt</nowiki></code>, Mondays at 2200 UTC. Everyone interested in Zuul, Nodepool, and related development is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
* General topics<br />
** Roadmap<br />
*** http://lists.openstack.org/pipermail/openstack-infra/2017-November/005657.html<br />
** RAM governor for the executors<br />
*** executors are generally loaded, either running into OOM kills or swapping out to disk<br />
* Open Discussion<br />
<br />
== Previous meetings ==<br />
<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/zuul/

----
Meetings/InfraTeamMeeting, revision of 2018-01-02T16:22:50Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=158700
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
** Skipping December 26th meeting<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html Zuul v3]<br />
*** Clean up [https://etherpad.openstack.org/p/zuulv3-issues Zuul v3 Issues Etherpad] and move remaining issues to storyboard?<br />
<br />
* General topics<br />
** Freenode IRC spam -- continued (dmsimard)<br />
*** Several spam waves since 2017-12-18 (see https://wiki.openstack.org/wiki/Infrastructure_Status)<br />
<br />
* Open discussion<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
* collectd-ceilometer-plugin->collectd-openstack-plugins https://review.openstack.org/#/c/500768<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2017-12-17T19:42:35Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=158576
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
** [https://review.openstack.org/524024 Top-level project hosting]<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html Zuul v3]<br />
<br />
* General topics<br />
** Writing puppet modules for new projects/deployments? (dmsimard)<br />
** Creating new projects now requires 3 different patches to project-config (dmsimard)<br />
*** https://review.openstack.org/#/c/528375/<br />
** Control Plane Upgrades Sprint (clarkb)<br />
*** How is it going?<br />
*** Boilerplate for adding digits to server names<br />
** Gerrit downtime to fix nova-specs (maybe coupled with project rename below, are we ready for that now?) (clarkb)<br />
<br />
* Open discussion<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
* collectd-ceilometer-plugin->collectd-openstack-plugins https://review.openstack.org/#/c/500768<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2017-12-17T19:42:03Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=158575
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
** [https://review.openstack.org/524024 Top-level project hosting]<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html Zuul v3]<br />
<br />
* General topics<br />
** Writing puppet modules for new projects/deployments? (dmsimard)<br />
** Creating new projects now requires 3 different patches to project-config (dmsimard)<br />
** Control Plane Upgrades Sprint (clarkb)<br />
*** How is it going?<br />
*** Boilerplate for adding digits to server names<br />
** Gerrit downtime to fix nova-specs (maybe coupled with project rename below, are we ready for that now?) (clarkb)<br />
<br />
* Open discussion<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
* collectd-ceilometer-plugin->collectd-openstack-plugins https://review.openstack.org/#/c/500768<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2017-12-15T22:22:46Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=158571
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
** [https://review.openstack.org/524024 Top-level project hosting]<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html Zuul v3]<br />
<br />
* General topics<br />
** Writing puppet modules for new projects/deployments? (dmsimard)<br />
** Control Plane Upgrades Sprint (clarkb)<br />
*** How is it going?<br />
*** Boilerplate for adding digits to server names<br />
** Gerrit downtime to fix nova-specs (maybe coupled with project rename below, are we ready for that now?) (clarkb)<br />
<br />
* Open discussion<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
* collectd-ceilometer-plugin->collectd-openstack-plugins https://review.openstack.org/#/c/500768<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/Zuul, revision of 2017-11-27T22:06:24Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/Zuul&diff=158212

= Weekly Zuul meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings about Zuul and Nodepool development in <code><nowiki>#openstack-meeting-alt</nowiki></code>, Mondays at 2200 UTC. Everyone interested in Zuul, Nodepool, and related development is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
* General topics<br />
** Roadmap<br />
*** http://lists.openstack.org/pipermail/openstack-infra/2017-November/005657.html<br />
* Open Discussion<br />
<br />
== Previous meetings ==<br />
<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/zuul/

----
Meetings/Zuul, revision of 2017-11-27T22:04:04Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/Zuul&diff=158211

= Weekly Zuul meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings about Zuul and Nodepool development in <code><nowiki>#openstack-meeting-alt</nowiki></code>, Mondays at 2200 UTC. Everyone interested in Zuul, Nodepool, and related development is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
* General topics<br />
** Roadmap<br />
*** http://lists.openstack.org/pipermail/openstack-infra/2017-November/005657.html<br />
** Update ARA version on executors<br />
*** https://review.openstack.org/#/c/516740/<br />
* Open Discussion<br />
<br />
== Previous meetings ==<br />
<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/zuul/

----
Meetings/InfraTeamMeeting, revision of 2017-10-20T14:19:32Z by David Moreau Simard (edit summary: /* Agenda for next meeting */)
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=157365
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
<br />
* Actions from last meeting<br />
<br />
* Specs approval<br />
<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html A Task Tracker for OpenStack]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html Zuul v3]<br />
*** Per-project private key backups (jeblair)<br />
*** Safety of modifying v2 scripts for use in v3 jobs.<br />
<br />
* General topics<br />
** Using 'config-core' ping instead of 'project-config-core' (saving 8 keystrokes!)<br />
<br />
* Open discussion<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding project-config rename change in Gerrit)<br />
<br />
* collectd-ceilometer-plugin->collectd-openstack-plugins https://review.openstack.org/#/c/500768<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2016-05-31T13:46:20Z by David Moreau Simard
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=126076
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
{{:Header}}<br />
<br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
* Actions from last meeting<br />
* Specs approval<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/ansible_puppet_apply.html Ansible Puppet Apply]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/dib-nodepool.html Use Diskimage Builder in Nodepool]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/infra-cloud.html Infra-cloud]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/logs-in-swift.html Store Build Logs in Swift]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/maniphest.html maniphest migration]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html Common OpenStack CI Solution]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html Zuul v3]<br />
* [https://specs.openstack.org/openstack-infra/infra-specs/specs/jenkins-job-builder_2.0.0-api-changes.html Jenkins Job Builder v2 API] (waynr)<br />
** Has been rebased onto the JJB master branch and is ready for review<br />
** Next steps:<br />
*** Delete the feature/2.0.0 branch from the jenkins-job-builder repo since it is now targeting master branch.<br />
*** I (Wayne Warren) am still working to flesh out the API docstrings and write unit tests for the API (where possible) but there are currently 25 commits that are ready to be reviewed.<br />
*** Is it okay to begin merging earlier commits in this series if I am still working on documentation and unit tests?<br />
* OpenID implementation (notmorgan, puiterwijk)<br />
** This came up in an internal meeting at Red Hat and there is a general interest in what the timelines are and where resources can be contributed<br />
* (dmsimard) Would appreciate a yay or nay on new project creation at https://review.openstack.org/#/c/321226/ to move forward or consider alternatives ASAP.<br />
* Open discussion<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding change in Gerrit)<br />
<br />
openstack/openstack-ansible-ironic -> openstack/openstack-ansible-os_ironic https://review.openstack.org/299192<br />
<br />
openstack-infra/ansible-puppet -> openstack-infra/ansible-role-puppet<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Meetings/InfraTeamMeeting, revision of 2016-04-08T20:48:11Z by David Moreau Simard
https://wiki.openstack.org/w/index.php?title=Meetings/InfraTeamMeeting&diff=123697
<!-- ## page was renamed from Meetings/CITeamMeeting --><br />
{{:Header}}<br />
<br />
= Weekly Project Infrastructure team meeting =<br />
<br />
The OpenStack Project Infrastructure Team holds public weekly meetings in <code><nowiki>#openstack-meeting</nowiki></code>, Tuesdays at 1900 UTC. Everyone interested in infrastructure and process surrounding automated testing and deployment is encouraged to attend.<br />
<br />
Please feel free to add agenda items (and your IRC nick in parenthesis).<br />
<br />
== Agenda for next meeting ==<br />
<br />
* Announcements<br />
* Actions from last meeting<br />
** [https://etherpad.openstack.org/p/gerrit_server_replacement yolanda draft a maintenance plan for the gerrit server replacement]<br />
** [http://lists.openstack.org/pipermail/openstack-dev/2016-April/091274.html yolanda send maintenance reminder announcement to the mailing list on April 4]<br />
* Specs approval<br />
* Priority Efforts<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/ansible_puppet_apply.html Ansible Puppet Apply]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/dib-nodepool.html Use Diskimage Builder in Nodepool]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/infra-cloud.html Infra-cloud]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/logs-in-swift.html Store Build Logs in Swift]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/maniphest.html maniphest migration]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html Common OpenStack CI Solution]<br />
** [http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html Zuul v3]<br />
* Proposal jobs: (AJaeger) We currently have some reviews for extra proposal jobs; what are our policies for adding them? These run as proposal jobs and thus we have to review them for security:<br />
** https://review.openstack.org/#/c/301375/<br />
** https://review.openstack.org/#/c/267941/<br />
** https://review.openstack.org/#/c/291514/<br />
** https://review.openstack.org/#/c/291517/<br />
* Virtual Machines are provided with inconsistent swap configuration (dmsimard)<br />
** https://review.openstack.org/#/c/300122/<br />
* Open discussion<br />
<br />
== Upcoming Project Renames ==<br />
(any additions should mention original->new full names and link to the corresponding change in Gerrit)<br />
<br />
openstack/openstack-ansible-ironic -> openstack/openstack-ansible-os_ironic https://review.openstack.org/299192<br />
<br />
== Previous meetings ==<br />
Previous meetings, with their notes and logs, can be found at http://eavesdrop.openstack.org/meetings/infra/ and earlier at http://eavesdrop.openstack.org/meetings/ci/

----
Puppet/ceph-blueprint, revision of 2013-11-05T15:38:40Z by David Moreau Simard (edit summary: /* Implementor components */)
https://wiki.openstack.org/w/index.php?title=Puppet/ceph-blueprint&diff=34859

== Overview ==<br />
<br />
This document is intended to capture requirements for a single [https://launchpad.net/puppet-ceph puppet-ceph module].<br />
<br />
Very much like vswitch, Ceph is not exclusively used in the context of OpenStack. There is, however, a significant community relying on puppet to deploy Ceph in the context of OpenStack, as shown in the [[#Related_tools_and_implementations|inventory of the existing efforts]]. Having a puppet-ceph module under the umbrella of the stackforge infrastructure helps federate these efforts while providing a workflow that will improve the overall quality of the module.<br />
<br />
* [https://review.openstack.org/#/q/status:open+project:stackforge/puppet-ceph,n,z gerrit review page]<br />
* [https://launchpad.net/puppet-ceph puppet-ceph at launchpad]<br />
<br />
== Roadmap ==<br />
<br />
Almost every component of this module deserves a discussion, and it would take a long time to agree on everything before getting something useful. The following list sets the order in which each component will be implemented. Each step must be a usable puppet module, unit tested and including integration tests.<br />
<br />
* [[#conf|conf]]<br />
* [[#key|key]]<br />
* [[#mon|mon]]<br />
* [[#osd|osd]]<br />
* [[#pool|pool]]<br />
* [[#rbd|rbd]]<br />
<br />
<br />
== User Stories ==<br />
<br />
=== I want to try this module, heard of ceph, want to see it in action ===<br />
<br />
/node/ { <br />
ceph::conf { auth_enable: false };<br />
ceph::mon; <br />
ceph::osd { '/srv/osd1' }; <br />
ceph::osd { '/srv/osd2' }; <br />
}<br />
<br />
* install puppet,<br />
* paste this into site.pp and replace /node/ with the name of your current node,<br />
* run puppet apply site.pp,<br />
* run ceph -s and see that it works<br />
<br />
=== I want to run benchmarks on three new machines ===<br />
<br />
* There are four machines: three OSD nodes (node1 also runs the MON) and one client machine from which the user runs commands.<br />
* install puppetmaster and create site.pp with:<br />
<br />
/ceph-default/ {<br />
ceph::conf { 'global':<br />
auth_enable => false,<br />
'mon host' => 'node1'<br />
};<br />
}<br />
<br />
/node1/ inherits ceph-default { <br />
ceph::mon; <br />
ceph::osd { disk => 'discover' }; <br />
}<br />
<br />
/node2/, /node3/ inherits ceph-default { <br />
ceph::osd { disk => 'discover' }; <br />
}<br />
<br />
/client/ inherits ceph-default { <br />
ceph::client;<br />
}<br />
<br />
* ssh client<br />
* rados bench <br />
* interpret the results<br />
<br />
=== I want to operate a production cluster ===<br />
<br />
$admin_key = 'AQCTg71RsNIHORAAW+O6FCMZWBjmVfMIPk3MhQ=='<br />
$mon_key = 'AQDesGZSsC7KJBAAw+W/Z4eGSQGAIbxWjxjvfw=='<br />
 $bootstrap_osd_key = 'AQABsWZSgEDmJhAAkAGSOOAJwrMHrM5Pz5On1A=='<br />
<br />
/ceph-default/ {<br />
ceph::conf { 'mon host' => 'mon1,mon2,mon3' }; <br />
}<br />
<br />
/mon[123]/ inherits ceph-default { <br />
ceph::mon { key => $mon_key }<br />
ceph::key { 'client.admin':<br />
secret => $admin_key,<br />
caps_mon => '*',<br />
caps_osd => '*',<br />
inject => true,<br />
}<br />
ceph::key { 'client.bootstrap-osd': <br />
secret => $bootstrap_osd_key,<br />
   caps_mon => 'profile bootstrap-osd',<br />
inject => true,<br />
}<br />
}<br />
<br />
/osd*/ inherits ceph-default { <br />
ceph::osd { disk => 'discover' }; <br />
ceph::key { 'client.bootstrap-osd':<br />
keyring => '/var/lib/ceph/bootstrap-osd/ceph.keyring',<br />
secret => $bootstrap_osd_key,<br />
}<br />
}<br />
<br />
/client/ inherits ceph-default { <br />
ceph::key { 'client.admin':<br />
secret => $admin_key<br />
}<br />
ceph::client;<br />
}<br />
<br />
* the '''osd*''' nodes only contain disks that are used for OSD and using the '''discover''' option to automatically use new disks and provision them as part of the cluster is acceptable, there is no risk of destroying unrelated data.<br />
* when a node dies, all its disks can be placed in another machine and the OSDs will automatically be re-inserted in the cluster, even if an external journal is used<br />
<br />
=== I want to spawn a cluster configured with a puppetmaster as part of a continuous integration effort ===<br />
<br />
''Leveraging vagrant, vagrant-openstack, openstack''<br />
* Ceph is used as a backend storage for various use cases<br />
* There are tests to make sure the Ceph cluster was instantiated properly<br />
* There are tests to make sure various other infrastructure components (or products) can use the Ceph cluster<br />
<br />
== Requirements ==<br />
<br />
=== High level requirements ===<br />
<br />
==== No complex cross host orchestration ====<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. Provided that its dependencies have already been configured and are known, each component should support being added without having to run Puppet on more than one node.<br />
<br />
For example<br />
<br />
* cinder-volume instances should be configured to join a Ceph cluster simply by running Puppet on that node<br />
* OSD instances should be configured to join a cluster simply by running puppet agent on a node and targeting that role.<br />
<br />
The Puppet implementation should only be concerned with:<br />
<br />
* what components need to be defined (where these are implemented as classes)<br />
* what data is required for those components (where that data is passed in as class parameters)<br />
<br />
==== Supporting versions ====<br />
<br />
The Operating System versions supported must be tested with integration tests on the actual operating system. Although it is fairly easy to add support for an Operating System, it is prone to regressions if not tested. The per Operating System support strategy mimics the way the OpenStack modules do it. <br />
<br />
The supported versions of the components that deal with the environment in which Ceph is used ( OpenStack, Cloudstack, Ganeti etc. ) are handled by each component on a case by case basis. There probably is too much heterogeneity to set a rule.<br />
<br />
==== Provide sensible defaults ====<br />
<br />
If the high level components ( osd + mon + mds + rgw for instance ) are included without any parameter, the result must be a functional Ceph cluster.<br />
<br />
==== Architectured to leverage Ceph to its full potential ====<br />
<br />
It means talking to the MON when configuring or modifying the cluster, using ceph-disk as a low level tool to create the storage required for an OSD, and creating a minimal /etc/ceph/ceph.conf to allow a client to connect to the Ceph cluster. The MON exposes a very rich API ( either via the ceph cli or a REST API ) and it offers great flexibility to the system administrator. It is unlikely that the first versions of the puppet module capture all of it, but it should be architectured to allow the casual contributor to add a new feature or a new variation without the need to work around architectural limitations. <br />
<br />
The ceph-deploy utility is developed as part of the ceph project, to help people get up to speed as quickly as possible for tests and POCs. Alfredo Deza made a compelling argument against using ceph-deploy as a helper for a puppet module: it is designed to hide some of the flexibility ceph offers for the sake of simplicity, an inconvenience that is incompatible with the goal of a puppet module designed to accommodate all use cases. <br />
<br />
=== Prefer cli over REST ===<br />
<br />
The ceph cli is preferred because the [http://ceph.com/docs/next/man/8/ceph-rest-api/ rest-api] requires the installation of an additional daemon.<br />
<br />
=== Module versioning ===<br />
<br />
Create a branch for each Ceph release ( stable/cuttlefish, stable/dumpling etc. ) and follow the same pattern as the OpenStack modules<br />
<br />
=== Support Ceph versions from cuttlefish ===<br />
<br />
Do not support Ceph versions released before cuttlefish<br />
<br />
== Integration tests ==<br />
<br />
All scenarios can probably be covered with 2 virtual machines, 2 interfaces and one disk attached to one of the machines. A number of scenarios can be based on a single machine, using directories instead of disks and a single interface.<br />
<br />
* use https://github.com/puppetlabs/rspec-system-puppet and check that it can be used with the vagrant openstack backend https://github.com/cloudbau/vagrant-openstack-plugin<br />
* use openstack by running a script like this with a dedicated tenant to prevent breakage ( see http://ci.openstack.org/third_party.html )<br />
<pre><br />
export OS_PASSWORD=admin_pass<br />
export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/<br />
export OS_USERNAME=admin<br />
export OS_TENANT_NAME=openstack<br />
<br />
ssh -p 29418 review.example.com gerrit stream-events |<br />
while read event ; do<br />
if event is commit ; then<br />
git clone puppet-ceph from gerrit<br />
cd puppet-ceph <br />
bundle exec rake spec:system # https://github.com/puppetlabs/rspec-system-puppet#run-spec-tests<br />
if fail ; then<br />
ssh -p 29418 review.example.com \<br />
gerrit review -m '"Test failed"' --verified=-1 c0ff33<br />
fi<br />
fi<br />
done<br />
</pre><br />
<br />
== Puppet user components ==<br />
<br />
This section outlines the roles as well as the configuration components that are visible to the puppet user. They must be understandable for a system administrator willing to deploy Ceph for the first time.<br />
<br />
=== conf ===<br />
<br />
Although the key separator can either be space or underscore, only '''space''' is allowed to help with consistency. <br />
<br />
::[[User:Xarses|Xarses]] ([[User talk:Xarses|talk]]) hyphen is also a valid separator. From my testing I found that the ini_file provider cannot tell the difference in keys between "auth supported" and "auth_supported", which will lead to duplicate entries in ceph.conf if we aren't careful.<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Dachary|talk]]) : since [https://github.com/ceph/ceph-deploy/commit/53f46a8451dd8e10a3b9e8f2b191044f9863ae83 ceph-deploy enforces the use of _] and for the sake of consistency, it is probably better to use underscore instead of space.<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Dachary|talk]]) : bodep prefers ini, dalgaaf prefers concat, loic does not care but voted ini because there is more expertise, xarses and dmsimard +1 ini, mgagne does not object ini : it is going to be implemented as a provider such as nova_config.<br />
<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) If the ini_file provider has problems with space and underscore as separators, it would be one more reason to use concat since this would force the type of key separator used in the template. <br />
<br />
* '''proposed name''': ceph::conf<br />
* '''purpose''': keeps and writes [http://ceph.com/docs/master/rados/configuration/osd-config-ref/ config] keys and their options for the top level sections of the ceph config (a minimal class sketch follows at the end of this section). This includes these sections:<br />
** [global]<br />
** [mon]<br />
** [osd]<br />
** [mds]<br />
* '''interface''': every key that is needed to write the base config ... the whole list would be too long to write down here<br />
* auth_enable - true or false, enables/disables cephx, defaults to true<br />
::If enable is true, set the following in the [global] section of the conf file:<br />
<br />
auth cluster required = cephx<br />
auth service required = cephx<br />
auth client required = cephx<br />
auth supported = cephx<br />
<br />
::If enable is false, set the following in the [global] section of the conf file:<br />
<br />
auth cluster required = none<br />
auth service required = none<br />
auth client required = none<br />
auth supported = none<br />
<br />
<br />
::It should support [http://ceph.com/docs/master/rados/operations/authentication/#disabling-cephx disabling] or [http://ceph.com/docs/master/rados/operations/authentication/#enabling-cephx enabling] cephx when the values change. If it does not support updating, it must fail when changed on an existing Ceph cluster.<br />
<br />
Using a [https://github.com/puppetlabs/puppetlabs-inifile inifile] child provider ( such as [https://github.com/stackforge/puppet-cinder/blob/master/lib/puppet/provider/cinder_config/ini_setting.rb cinder_config] ) a setting would look like<br />
<br />
ceph_conf {<br />
'GLOBAL/fsid': value => $fsid;<br />
}<br />
<br />
And create '''/etc/ceph/ceph.conf''' such as:<br />
<br />
[global]<br />
fsid = 918340183294812038<br />
<br />
Improvements to be implemented later:<br />
* If a key/value pair is modified in the *mon*, *osd* or *mds* sections, all daemons are [http://ceph.com/docs/master/rados/configuration/ceph-conf/#runtime-changes notified of the change] with ceph {daemon} tell * ....<br />
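<br />
A minimal sketch, assuming the ceph_conf ini provider above, of how a ceph::conf class could tie the auth_enable flag to the four auth keys in [global]; the class parameters follow this blueprint, everything else is illustrative:<br />
<br />
 # minimal sketch only: additional sections and validation omitted<br />
 class ceph::conf (<br />
   $fsid        = undef,<br />
   $auth_enable = true,<br />
   $mon_host    = undef,<br />
 ) {<br />
   # cephx is switched on or off for the whole cluster in [global]<br />
   $auth = $auth_enable ? { true => 'cephx', default => 'none' }<br />
  <br />
   ceph_conf {<br />
     'GLOBAL/fsid':                  value => $fsid;<br />
     'GLOBAL/mon host':              value => $mon_host;<br />
     'GLOBAL/auth cluster required': value => $auth;<br />
     'GLOBAL/auth service required': value => $auth;<br />
     'GLOBAL/auth client required':  value => $auth;<br />
     'GLOBAL/auth supported':        value => $auth;<br />
   }<br />
 }<br />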
<br />
=== osd ===<br />
<br />
* '''proposed name''': ceph::osd <br />
* '''purpose''': configures a ceph OSD using the ceph-disk helper and updates the /etc/ceph/ceph.conf file with [osd.X] sections matching the OSDs found in /var/lib/ceph/osd (a minimal sketch follows at the end of this section)<br />
* '''interface''':<br />
** directory/disk - a disk or a directory to be used as a storage for the OSD.<br />
** bootstrap-osd - the bootstrap-osd secret key (optional if cephx = none )<br />
** dmcrypt - options needed to encrypt disks (optional)<br />
<br />
The generated [osd.X] section must contain the host and disk so that the rc script runs the osd daemon at boot time.<br />
<br />
If the directory/disk is set to '''discover''', ceph-disk list is used to find unknown disks or partitions. All unknown disks are prepared with ceph-disk prepare. That effectively allows someone to say: use whatever disks are not in use for ceph and leave the rest alone. An operator would only have to add new disks and wait for the next puppet client pass to have them integrated in the cluster. If a disk is removed, the OSD is not launched at boot time and there is nothing to do. <br />
<br />
Support [https://github.com/ceph/ceph/blob/v0.61.9/src/ceph-disk#L2027 ceph-disk suppress]<br />
<br />
Here is what should happen on a node with at least one OSD<br />
* common to all OSD on the same node:<br />
** the /etc/ceph/ceph.conf file is setup with the IPs of the monitors<br />
** [https://github.com/ceph/ceph-deploy/blob/v1.2.7/ceph_deploy/osd.py#L47 the /var/lib/ceph/bootstrap-osd/{cluster}.bootstrap-osd.keyring] file contains a user/key that is used to create an OSD. The bootstrap-osd user key is usually the same for all OSDs. For instance:<br />
[client.bootstrap-osd]<br />
key = AQCUg71RYEi7DxAAxlyC1KExxSnNJgim6lmuGA==<br />
** The bootstrap-osd user has this key, with caps to bootstrap an OSD:<br />
$ ceph auth list<br />
...<br />
client.bootstrap-osd<br />
key: AQCUg71RYEi7DxAAxlyC1KExxSnNJgim6lmuGA==<br />
caps: [mon] allow profile bootstrap-osd<br />
* for each OSD <br />
** in the same way [https://github.com/ceph/ceph-deploy/blob/v1.2.7/ceph_deploy/osd.py#L104 ceph-deploy prepares the disk], call ceph-disk-prepare, which will [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L1031 set the magic partition uuid] and trigger [https://github.com/ceph/ceph/blob/master/udev/95-ceph-osd.rules#L11 udev rules] to [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L453 ceph osd create]. When udev settles, the new osd is integrated into the cluster and uses its own key, which is created, registered to the MON and stored locally as a side effect of [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L1301 --mkkey]. The osd daemon is also run as a side effect of udev detecting the disk and calling [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L1414 /etc/init/ceph-osd.conf]. ceph-disk contains a [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L18 high level description of the process]<br />
** '''dmcrypt''' is also handled by the udev logic ( details ??? keys ??? )<br />
<br />
At boot time the [https://github.com/ceph/ceph/blob/master/src/upstart/ceph-osd-all-starter.conf#L14 /var/lib/ceph/osd directory is explored] to discover all OSDs that need to be started. Operating systems for which the same logic is not implemented will need an additional script run at boot time to perform the same exploration until the default script is updated to add this capability.<br />
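<br />
As a rough sketch under the assumptions above (cephx bootstrap key already in place, no dmcrypt), a single-disk ceph::osd could reduce to preparing the disk with ceph-disk and letting the udev logic activate it; the resource names here are illustrative, not part of the blueprint:<br />
<br />
 # minimal sketch: one disk per resource, 'discover' mode not shown<br />
 define ceph::osd (<br />
   $disk = $name,<br />
 ) {<br />
   exec { "ceph-disk-prepare-${disk}":<br />
     command  => "ceph-disk prepare ${disk}",<br />
     path     => ['/usr/sbin', '/usr/bin', '/sbin', '/bin'],<br />
     provider => shell,  # needed for the pipe in the unless check<br />
     # skip disks that already carry a ceph data partition<br />
     unless   => "ceph-disk list | grep '${disk}.*ceph data'",<br />
     require  => Class['ceph::conf'],<br />
   }<br />
 }<br />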
<br />
=== mds ===<br />
<br />
* '''proposed name''': ceph::mds <br />
* '''purpose''': configures a ceph MDS, sets up /etc/ceph/ceph.conf with the MONs' IPs, declares the MDS to the cluster via the MON, optionally sets the key to allow the MDS to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mon ===<br />
<br />
Creates the files and keyring supporting a mon, runs the daemon<br />
<br />
* '''proposed name''': ceph::mon<br />
* '''purpose''': configures a ceph MON<br />
* '''interface''':<br />
** cluster - cluster name ( defaults to ceph )<br />
** id - the id of the mon<br />
** ip_address - the ip addresses of the mon<br />
** key - the mon. user key<br />
<br />
* add a [mon.$id] section to the conf file (depends on ceph::conf to write the base part of the config)<br />
* if auth == cephx<br />
** the mon. key is mandatory and needs to be set by the user to be a valid ceph key. The documentation should contain an example key and explanations about how to [https://github.com/ceph/ceph-deploy/blob/v1.2.7/ceph_deploy/new.py#L20 create an auth key].<br />
** [https://github.com/ceph/ceph-deploy/blob/v1.2.7/ceph_deploy/new.py#L86 writes the keyring] <br />
* installs the packages<br />
* ceph-mon --id 0 --mkfs<br />
* runs the mon daemon<br />
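<br />
A minimal sketch of the steps listed above, assuming Ubuntu with the upstart jobs shipped by the ceph package; the temporary keyring path and the service name are assumptions:<br />
<br />
 # minimal sketch: single mon, cluster name 'ceph', cephx enabled<br />
 class ceph::mon (<br />
   $id  = '0',<br />
   $key = undef,  # the mandatory mon. key<br />
 ) {<br />
   package { 'ceph': ensure => installed }<br />
  <br />
   file { "/var/lib/ceph/tmp/keyring.mon.${id}":<br />
     content => "[mon.]\n\tkey = ${key}\n\tcaps mon = \"allow *\"\n",<br />
     mode    => '0600',<br />
     require => Package['ceph'],<br />
   }<br />
  <br />
   exec { "ceph-mon-mkfs-${id}":<br />
     command => "/usr/bin/ceph-mon --id ${id} --mkfs --keyring /var/lib/ceph/tmp/keyring.mon.${id}",<br />
     creates => "/var/lib/ceph/mon/ceph-${id}",<br />
     require => File["/var/lib/ceph/tmp/keyring.mon.${id}"],<br />
   }<br />
  <br />
   service { 'ceph-mon-all':  # upstart job on Ubuntu<br />
     ensure  => running,<br />
     require => Exec["ceph-mon-mkfs-${id}"],<br />
   }<br />
 }<br />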
<br />
=== rbd ===<br />
<br />
* '''proposed name''': ceph::rbd<br />
* '''purpose''': maps and mounts a rbd image, taking care of dependencies (packages, rbd kernel module, /etc/ceph/rbdmap, fstab)<br />
* '''interface''':<br />
** name - the name of the image<br />
** pool - the pool in which the image is<br />
** mount_point - where the image will be mounted<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
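<br />
A possible minimal sketch (the filesystem type and device path are assumptions, /etc/ceph/rbdmap and fstab handling are left out): map the image with the rbd cli, then mount it:<br />
<br />
 # minimal sketch: assumes the rbd kernel module is loaded and a filesystem already exists on the image<br />
 define ceph::rbd (<br />
   $pool        = 'rbd',<br />
   $mount_point = undef,<br />
   $id          = 'admin',<br />
 ) {<br />
   exec { "rbd-map-${pool}-${name}":<br />
     command => "rbd map ${pool}/${name} --id ${id}",<br />
     path    => ['/usr/bin', '/bin'],<br />
     unless  => "test -b /dev/rbd/${pool}/${name}",<br />
   }<br />
  <br />
   mount { $mount_point:<br />
     ensure  => mounted,<br />
     device  => "/dev/rbd/${pool}/${name}",<br />
     fstype  => 'xfs',<br />
     require => Exec["rbd-map-${pool}-${name}"],<br />
   }<br />
 }<br />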
<br />
=== cephfs ===<br />
<br />
* '''proposed name''': ceph::cephfs<br />
* '''purpose''': mounts a cephfs filesystem, taking care of dependencies (e.g, fstab, packages)<br />
* '''interface''':<br />
** Lots - See http://ceph.com/docs/next/man/8/mount.ceph/<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
<br />
== Implementor components ==<br />
<br />
These components are dependencies of the Puppet user components and can be used by other components. They should be a library of components where the code common to at least two independent components ( think OpenStack and Cloudstack ) is included.<br />
<br />
=== init === <br />
<br />
* '''proposed name''': ceph::init<br />
* '''purpose''': Should ultimately be a small class that takes care of installing/configuring the common dependencies of the other classes.<br />
* '''interface''':<br />
** ?<br />
<br />
=== params === <br />
<br />
* '''proposed name''': ceph::params<br />
* '''purpose''': A class that is used to store variables, likely defaults and/or constants, to be used in various classes<br />
* '''interface''':<br />
** None ?<br />
<br />
=== repository === <br />
<br />
Inspired by [https://github.com/stackforge/puppet-openstack/blob/master/manifests/repo.pp openstack::repo].<br />
<br />
* '''proposed name''': ceph::repo<br />
* '''purpose''': use puppetlabs/apt to configure the official ceph repository so we can install ceph packages<br />
* '''interface''':<br />
** release: target ceph release (cuttlefish, dumpling, etc)<br />
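<br />
A minimal sketch with puppetlabs/apt, assuming Debian/Ubuntu; the key id and key URL shown are illustrative and would need to be confirmed against the official documentation:<br />
<br />
 # minimal sketch: Debian/Ubuntu only<br />
 class ceph::repo (<br />
   $release = 'cuttlefish',<br />
 ) {<br />
   apt::source { 'ceph':<br />
     location   => "http://ceph.com/debian-${release}/",<br />
     release    => $::lsbdistcodename,<br />
     repos      => 'main',<br />
     key        => '17ED316D',  # illustrative key id<br />
     key_source => 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc',<br />
   }<br />
 }<br />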
<br />
=== ceph client implementation ===<br />
<br />
* '''proposed name''': ceph::client <br />
* '''purpose''': setup /etc/ceph/ceph.conf to connect to the Ceph cluster and install the ceph cli <br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** client_id - name of the client to find the correct id for key<br />
** keypath - path to the clients key file<br />
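<br />
A minimal sketch, assuming the ceph_conf provider from the conf section, stdlib's join function and the Debian/Ubuntu package name:<br />
<br />
 # minimal sketch: installs the cli and points it at the monitors<br />
 class ceph::client (<br />
   $monitor_ips = [],<br />
 ) {<br />
   package { 'ceph-common': ensure => installed }<br />
  <br />
   ceph_conf { 'GLOBAL/mon host':<br />
     value   => join($monitor_ips, ','),<br />
     require => Package['ceph-common'],<br />
   }<br />
 }<br />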
<br />
=== key ===<br />
<br />
Keyring management, authentication. It would be a class to create keys for new users (e.g. a user that can create RBDs or use the Objectstore), which may require special access rights. It would also be used by the other classes like ceph::mon or ceph::osd to place e.g. the shared 'client.admin' or 'mon.' keys (a reduced sketch follows after the interface below). <br />
<br />
* '''proposed name''': ceph::key<br />
* '''purpose''': handles ceph keys (cephx), generates keys, creates keyring files, inject keys into or delete keys from the cluster/keyring via ceph and ceph-authtool tools.<br />
* '''interface''':<br />
** secret - key secret<br />
** keyring_path - path to the keyring<br />
** cap_mon/cap_osd/cap_mds - cephx capabilities<br />
** user/group/mode: settings for the keyring file if needed<br />
** inject - options to inject a key into the cluster<br />
<br />
See [https://github.com/TelekomCloud/puppet-ceph/blob/rc/eisbrecher/manifests/key.pp key.pp] for an example implementation of this semantic.<br />
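<br />
For illustration only, a reduced sketch of this semantic using ceph-authtool and the ceph cli; mds caps and user/group/mode handling are left out:<br />
<br />
 # minimal sketch: creates the keyring file, optionally injects the key into the cluster<br />
 define ceph::key (<br />
   $secret,<br />
   $keyring_path = "/etc/ceph/ceph.${name}.keyring",<br />
   $cap_mon      = 'allow r',<br />
   $cap_osd      = 'allow r',<br />
   $inject       = false,<br />
 ) {<br />
   exec { "ceph-key-${name}":<br />
     command => "ceph-authtool ${keyring_path} --create-keyring --name=${name} --add-key='${secret}' --cap mon '${cap_mon}' --cap osd '${cap_osd}'",<br />
     path    => ['/usr/bin', '/bin'],<br />
     creates => $keyring_path,<br />
   }<br />
  <br />
   if $inject {<br />
     exec { "ceph-key-inject-${name}":<br />
       command => "ceph auth add ${name} -i ${keyring_path}",<br />
       path    => ['/usr/bin', '/bin'],<br />
       require => Exec["ceph-key-${name}"],<br />
     }<br />
   }<br />
 }<br />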
<br />
=== pool ===<br />
<br />
* '''proposed name''': ceph::pool<br />
* '''purpose''': manage operations on the pools in the cluster such as: create/delete pools, set PG/PGP number<br />
* '''interface''':<br />
** pool_name - name of the pool<br />
** create - if to create a new pool<br />
** delete - if to delete an existing pool<br />
** pg_num - number of Placement Groups (PGs) for a pool, if the pool already exists this may increase the number of PGs if the current value is lower<br />
** pgp_num - same as for pg_num<br />
** replica_level - increase or decrease the replica level of a pool<br />
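<br />
A minimal sketch in line with the "prefer cli over REST" requirement; create-only, with delete and pgp_num handling left out:<br />
<br />
 # minimal sketch: idempotent create plus optional replica level<br />
 define ceph::pool (<br />
   $pool_name     = $name,<br />
   $pg_num        = 64,<br />
   $replica_level = undef,<br />
 ) {<br />
   exec { "ceph-pool-create-${pool_name}":<br />
     command => "ceph osd pool create ${pool_name} ${pg_num}",<br />
     path    => ['/usr/bin', '/bin'],<br />
     # 'ceph osd pool get' fails when the pool does not exist yet<br />
     unless  => "ceph osd pool get ${pool_name} size",<br />
   }<br />
  <br />
   if $replica_level {<br />
     exec { "ceph-pool-size-${pool_name}":<br />
       command => "ceph osd pool set ${pool_name} size ${replica_level}",<br />
       path    => ['/usr/bin', '/bin'],<br />
       require => Exec["ceph-pool-create-${pool_name}"],<br />
     }<br />
   }<br />
 }<br />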
<br />
== OpenStack components ==<br />
<br />
ceph specific configuration for cinder/glance (already provided by the puppet-cinder and puppet-glance modules in the volume/rbd and backend/rbd classes). RGW Keystone is noted below<br />
<br />
:: --[[User:Xarses|xarses]] ([[User talk:Xarses|talk]]) RGW keystone should be included in the ceph module as RGW is the consumer of the keystone service. Unlike cinder/glance where they are consumers of ceph.<br />
<br />
== RadosGW components ==<br />
<br />
The RadosGW is developed as an integral part of Ceph. It is however not required to deploy a cluster and should be treated as any other client application of the cluster.<br />
<br />
=== rgw ===<br />
<br />
* '''proposed name''': ceph::rgw<br />
* '''purpose''': configures a ceph radosgw, sets up /etc/ceph/ceph.conf with the MONs' IPs, optionally sets the key to allow the radosgw to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers <br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** rgw_data - the path where the radosgw data should be stored<br />
** fcgi_file - path to the fcgi file e.g. /var/www/s3gw.fcgi<br />
:: [[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the monitor_ips are not needed: IMO ceph::conf should provide this information to all others<br />
:: --[[User:Xarses|xarses]] ([[User talk:Xarses|talk]]) agree with Danny; we are missing:<br />
** user - the user to run the radosgw as, as well as to own its files<br />
** host - hostname for this ini section<br />
** keyring_path - path to key file<br />
** log_file - where to write logs to<br />
** rgw_dns_name - dns name (may include wildcard ) to use with s3 api calls<br />
** rgw_socket_path - path to socket file<br />
** rgw_print_continue - (bool) if we are going to send 100 codes to the client<br />
<br />
:: --[[User:Xarses|xarses]] ([[User talk:Xarses|talk]]) also we should include apache magic here to set up the vhost and script server, in which case we should also support a *port* param.<br />
<br />
=== rgw keystone ===<br />
<br />
* '''proposed name''': ceph::rgw::keystone<br />
* '''purpose''': extends radosgw configuration to be able to retrieve auth from keystone tokens and setup keystone endpoint<br />
* '''interface''':<br />
** rgw_keystone_url - the internal or admin url for keystone<br />
** rgw_keystone_admin_token - the admin token for keystone<br />
** rgw_keystone_accepted_roles - which roles should we accept from keystone<br />
** rgw_keystone_token_cache_size - how many tokens to keep cached, not useful if not using PKI as every token is checked <br />
** rgw_keystone_revocation_interval - interval to check for expired tokens, not useful if not using PKI tokens (if not, set to high value)<br />
** use_pki - (bool) to determine if keystone is using token_format = PKI and if so do PKI signing parts<br />
** nss_db_path - path to NSS < - > keystone tokens db files<br />
<br />
<br />
=== rgw_user ===<br />
<br />
* '''proposed name''': ceph::rgw_user<br />
* '''purpose''': create/remove users and Swift users for the RadosGW S3/Swift API<br />
* '''interface''':<br />
** user - username<br />
** key - secret key (could get generated if needed)<br />
** swift_user - username for the Swift API user<br />
** swift_key - secret key for the Swift API user<br />
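<br />
A minimal sketch wrapping radosgw-admin; only the S3 user creation is shown, the Swift subuser and key handling are left out:<br />
<br />
 # minimal sketch: create the user only if radosgw-admin does not know it yet<br />
 define ceph::rgw_user (<br />
   $user = $name,<br />
 ) {<br />
   exec { "rgw-user-${user}":<br />
     command => "radosgw-admin user create --uid=${user} --display-name=${user}",<br />
     path    => ['/usr/bin', '/bin'],<br />
     unless  => "radosgw-admin user info --uid=${user}",<br />
   }<br />
 }<br />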
<br />
== Related tools and implementations ==<br />
<br />
* deploy ceph : ceph-deploy<br />
<br />
for test / POC purposes<br />
https://github.com/ceph/ceph-deploy<br />
<br />
maintainer: Alfredo Deza<br />
<br />
* deploy ceph with puppet : puppet-cephdeploy<br />
<br />
relies on ceph-deploy<br />
https://github.com/dontalton/puppet-cephdeploy/<br />
<br />
maintainer: Don Talton<br />
<br />
* deploy ceph with puppet : puppet-ceph<br />
<br />
developed in 2012 but still useful, upstream<br />
https://github.com/enovance/puppet-ceph<br />
<br />
maintainer: community<br />
<br />
<br />
fork of puppet-ceph, updated recently<br />
https://github.com/TelekomCloud/puppet-ceph/tree/rc/eisbrecher<br />
<br />
maintainer: Deutsche Telekom AG (DTAG)<br />
<br />
* ceph + openstack : ceph docs<br />
<br />
manual integration<br />
http://ceph.com/docs/next/rbd/rbd-openstack/<br />
maintainer: John Wilkins + Josh Durgin<br />
<br />
* ceph + openstack with puppet : stackforge<br />
<br />
https://github.com/stackforge/puppet-glance/blob/stable/grizzly/manifests/backend/rbd.pp<br />
https://github.com/stackforge/puppet-cinder/blob/stable/grizzly/manifests/volume/rbd.pp<br />
<br />
maintainer: community<br />
<br />
* ceph + openstack with puppet : COI<br />
<br />
targeting Cisco use case<br />
https://github.com/CiscoSystems/puppet-coe/tree/grizzly/manifests/ceph<br />
http://docwiki.cisco.com/wiki/OpenStack:Ceph-COI-Installation<br />
<br />
maintainer : Don Talton + Robert Starmer<br />
<br />
* ceph + openstack with puppet : mirantis<br />
<br />
in the context of Fuel<br />
https://github.com/Mirantis/fuel/tree/master/deployment/puppet/ceph<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/cinder/manifests/volume/ceph.pp<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/glance/manifests/backend/ceph.pp<br />
<br />
maintainer : Andrew Woodward<br />
<br />
* openstack with puppet : openstack-installer<br />
<br />
data driven approach to deploy OpenStack<br />
https://github.com/CiscoSystems/openstack-installer/<br />
<br />
maintainer: Robert Starmer + Dan Bode</div>David Moreau Simardhttps://wiki.openstack.org/w/index.php?title=Puppet/ceph-blueprint&diff=33602Puppet/ceph-blueprint2013-10-22T18:12:28Z<p>David Moreau Simard: /* package */</p>
<hr />
<div>== Overview ==<br />
<br />
This document is intended to capture requirements for a single [https://launchpad.net/puppet-ceph puppet-ceph module].<br />
<br />
Very much like vswitch, Ceph is not exclusively used in the context of OpenStack. There is however a significant community relying on puppet to deploy Ceph in the context of OpenStack, as is shown in the [[#Related_tools_and_implementations|inventory of the existing efforts]]. Having a puppet ceph module under the umbrella of the stackforge infrastructure helps federate the efforts while providing a workflow that will improve the overall quality of the module.<br />
<br />
* [https://review.openstack.org/#/q/status:open+project:stackforge/puppet-ceph,n,z gerrit review page]<br />
* [https://launchpad.net/puppet-ceph puppet-ceph at launchpad]<br />
<br />
== Roadmap ==<br />
<br />
Almost every component of this module deserves a discussion and it would take a long time to agree on everything before getting something useful. The following list sets the order in which each component is going to be implemented. Each step must be a usable puppet module, unit tested and including integration tests.<br />
<br />
* [[#conf|conf]]<br />
* [[#key|key]]<br />
* [[#mon|mon]]<br />
* [[#osd|osd]]<br />
* [[#pool|pool]]<br />
* [[#rbd|rbd]]<br />
<br />
<br />
== User Stories ==<br />
<br />
=== I want to try this module, heard of ceph, want to see it in action ===<br />
<br />
/node/ { <br />
ceph::conf { auth_enable: false };<br />
ceph::mon; <br />
ceph::osd { '/srv/osd1' }; <br />
ceph::osd { '/srv/osd2' }; <br />
}<br />
<br />
* install puppet, <br />
* paste this in site.pp and replace /node/ with the name of your current node, <br />
* puppet apply site.pp , <br />
* ceph -s and see that it works<br />
<br />
=== I want to run benchmarks on three new machines ===<br />
<br />
* There are four machines, 3 OSD, 1 MON and one machine that is the client from which the user runs commands.<br />
* install puppetmaster and create site.pp with:<br />
<br />
/ceph-default/ {<br />
ceph::conf { auth_enable: false };<br />
ceph::conf { 'mon host': 'node1' }; <br />
}<br />
<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) Is it really the plan to have to call ceph::conf for each parameter? Would be very redundant! Is this the reason that there is no ordering of the keys?<br />
<br />
/node1/ inherits ceph-default { <br />
ceph::mon; <br />
ceph::osd { disk: 'discover' }; <br />
}<br />
<br />
/node2/, /node3/ inherits ceph-default { <br />
ceph::osd { disk: 'discover' }; <br />
}<br />
<br />
/client/ inherits ceph-default { <br />
ceph::client;<br />
}<br />
<br />
* ssh client<br />
* rados bench <br />
* interpret the results<br />
<br />
=== I want to operate a production cluster ===<br />
<br />
$admin_key = 'AQCTg71RsNIHORAAW+O6FCMZWBjmVfMIPk3MhQ=='<br />
$mon_key = 'AQDesGZSsC7KJBAAw+W/Z4eGSQGAIbxWjxjvfw=='<br />
 $bootstrap_osd_key = 'AQABsWZSgEDmJhAAkAGSOOAJwrMHrM5Pz5On1A=='<br />
<br />
/ceph-default/ {<br />
ceph::conf { 'mon host': 'mon1,mon2,mon3' }; <br />
}<br />
<br />
/mon[123]/ inherits ceph-default { <br />
ceph::mon { key: $mon_key }<br />
ceph::key { 'client.admin':<br />
secret: $admin_key,<br />
caps_mon: '*',<br />
caps_osd: '*',<br />
}<br />
ceph::key { 'client.bootstrap-osd': <br />
secret: $bootstrap_osd_key,<br />
caps_mon: 'profile bootstrap-osd'<br />
}<br />
}<br />
<br />
/osd*/ inherits ceph-default { <br />
ceph::osd { disk: 'discover' }; <br />
ceph::key { 'client.bootstrap-osd':<br />
keyring: '/var/lib/ceph/bootstrap-osd/ceph.keyring',<br />
secret: $bootstrap_osd_key,<br />
}<br />
}<br />
<br />
/client/ inherits ceph-default { <br />
ceph::key { 'client.admin':<br />
secret: $admin_key<br />
}<br />
ceph::client;<br />
}<br />
<br />
* the '''osd*''' nodes only contain disks that are used for OSD and using the '''discover''' option to automatically use new disks and provision them as part of the cluster is acceptable, there is no risk of destroying unrelated data.<br />
* when a node dies, all its disks can be placed in another machine and the OSDs will automatically be re-inserted in the cluster, even if an external journal is used<br />
<br />
=== I want to spawn a cluster configured with a puppetmaster as part of a continuous integration effort ===<br />
<br />
''Leveraging vagrant, vagrant-openstack, openstack''<br />
* Ceph is used as a backend storage for various use cases<br />
* There are tests to make sure the Ceph cluster was instantiated properly<br />
* There are tests to make sure various other infrastructure components (or products) can use the Ceph cluster<br />
<br />
== Requirements ==<br />
<br />
=== High level requirements ===<br />
<br />
==== No complex cross host orchestration ====<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. Provided that its dependencies have already been configured and are known, each component should support being added without having to run Puppet on more than one node.<br />
<br />
For example<br />
<br />
* cinder-volume instances should be configured to join a Ceph cluster simply by running Puppet on that node<br />
* OSD instances should be configured to join a cluster simply by running puppet agent on a node and targeting that role.<br />
<br />
The Puppet implementation should only be concerned with:<br />
<br />
* what components need to be defined (where these are implemented as classes)<br />
* what data is required for those components (where that data is passed in as class parameters)<br />
<br />
==== Supporting versions ====<br />
<br />
The Operating System versions supported must be tested with integration tests on the actual operating system. Although it is fairly easy to add support for an Operating System, it is prone to regressions if not tested. The per Operating System support strategy mimics the way the OpenStack modules do it. <br />
<br />
The supported versions of the components that deal with the environment in which Ceph is used ( OpenStack, Cloudstack, Ganeti etc. ) are handled by each component on a case by case basis. There probably is too much heterogeneity to set a rule.<br />
<br />
==== Provide sensible defaults ====<br />
<br />
If the high level components ( osd + mon + mds + rgw for instance ) are included without any parameter, the result must be a functional Ceph cluster.<br />
<br />
==== Architectured to leverage Ceph to its full potential ====<br />
<br />
It means talking to the MON when configuring or modifying the cluster, using ceph-disk as a low level tool to create the storage required for an OSD, and creating a minimal /etc/ceph/ceph.conf to allow a client to connect to the Ceph cluster. The MON exposes a very rich API ( either via the ceph cli or a REST API ) and it offers great flexibility to the system administrator. It is unlikely that the first versions of the puppet module capture all of it, but it should be architectured to allow the casual contributor to add a new feature or a new variation without the need to work around architectural limitations. <br />
<br />
The ceph-deploy utility is developed as part of the ceph project, to help people get up to speed as quickly as possible for tests and POCs. Alfredo Deza made a compelling argument against using ceph-deploy as a helper for a puppet module: it is designed to hide some of the flexibility ceph offers for the sake of simplicity, an inconvenience that is incompatible with the goal of a puppet module designed to accommodate all use cases. <br />
<br />
=== Prefer cli over REST ===<br />
<br />
The ceph cli is preferred because the [http://ceph.com/docs/next/man/8/ceph-rest-api/ rest-api] requires the installation of an additional daemon.<br />
<br />
=== Module versioning ===<br />
<br />
Create a branch for each Ceph release ( stable/cuttlefish, stable/dumpling etc. ) and follow the same pattern as the OpenStack modules<br />
<br />
=== Support Ceph versions from cuttlefish ===<br />
<br />
Do not support Ceph versions released before cuttlefish<br />
<br />
== Integration tests ==<br />
<br />
All scenarios can probably be covered with 2 virtual machines, 2 interfaces and one disk attached to one of the machines. A number of scenarios can be based on a single machine, using directories instead of disks and a single interface.<br />
<br />
Use https://github.com/puppetlabs/rspec-system-puppet<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Dachary|talk]]) : I mean it would have to somehow leverage things ( what ? how ? ) in the openstack-ci infrastructure ?<br />
<br />
== Puppet user components ==<br />
<br />
This section outlines the roles as well as the configuration components that are visible to the puppet user. They must be understandable for a system administrator willing to deploy Ceph for the first time.<br />
<br />
=== conf ===<br />
<br />
Although the key separator can either be space or underscore, only '''space''' is allowed to help with consistency. <br />
<br />
::[[User:Xarses|Xarses]] ([[User talk:Xarses|talk]]) hyphen is also a valid separator. From my testing I found that the ini_file provider cannot tell the difference in keys between "auth supported" and "auth_supported", which will lead to duplicate entries in ceph.conf if we aren't careful.<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Dachary|talk]]) : since [https://github.com/ceph/ceph-deploy/commit/53f46a8451dd8e10a3b9e8f2b191044f9863ae83 ceph-deploy enforces the use of _] and for the sake of consistency, it is probably better to use underscore instead of space.<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Dachary|talk]]) : bodep prefers ini, dalgaaf prefers concat, loic does not care but voted ini because there is more expertise, xarses and dmsimard +1 ini, mgagne does not object ini : it is going to be implemented as a provider such as nova_config.<br />
<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) If the ini_file provider has problems with space and underscore as separators, it would be one more reason to use concat since this would force the type of key separator used in the template. <br />
<br />
* '''proposed name''': ceph::conf<br />
* '''purpose''': keeps and writes [http://ceph.com/docs/master/rados/configuration/osd-config-ref/ config] and their options for the top level sections of the ceph config. This includes these sections:<br />
** [global]<br />
** [mon]<br />
** [osd]<br />
** [mds]<br />
* '''interface''': every key that is needed to write the base config ... the whole list would be too long to write down here<br />
* auth_enable - true or false, enables/disables cephx, defaults to true<br />
::If enable is true, set the following in the [global] section of the conf file:<br />
<br />
auth cluster required = cephx<br />
auth service required = cephx<br />
auth client required = cephx<br />
auth supported = cephx<br />
<br />
::If enable is false, set the following in the [global] section of the conf file:<br />
<br />
auth cluster required = none<br />
auth service required = none<br />
auth client required = none<br />
auth supported = none<br />
<br />
<br />
::It should support [http://ceph.com/docs/master/rados/operations/authentication/#disabling-cephx disabling] or [http://ceph.com/docs/master/rados/operations/authentication/#enabling-cephx enabling] cephx when the values change. If it does not support updating, it must fail when changed on an existing Ceph cluster.<br />
<br />
Using a [https://github.com/puppetlabs/puppetlabs-inifile inifile] child provider ( such as [https://github.com/stackforge/puppet-cinder/blob/master/lib/puppet/provider/cinder_config/ini_setting.rb cinder_config] ) a setting would look like<br />
<br />
ceph_conf {<br />
'GLOBAL/fsid': value => $fsid;<br />
}<br />
<br />
And create '''/etc/ceph/ceph.conf''' such as:<br />
<br />
[global]<br />
fsid = 918340183294812038<br />
<br />
Improvements to be implemented later:<br />
* If a key/value pair is modified in the *mon*, *osd* or *mds* sections, all daemons are [http://ceph.com/docs/master/rados/configuration/ceph-conf/#runtime-changes notified of the change] with ceph {daemon} tell * ....<br />
<br />
=== osd ===<br />
<br />
* '''proposed name''': ceph::osd <br />
* '''purpose''': configures a ceph OSD using the ceph-disk helper<br />
* '''interface''':<br />
** directory/disk - a disk or a directory to be used as a storage for the OSD.<br />
** bootstrap-osd - the bootstrap-osd secret key (optional if cephx = none )<br />
** dmcrypt - options needed to encrypt disks (optional)<br />
<br />
If the directory/disk is set to '''discover''', ceph-disk list is used to find unknown disks or partitions. All unknown disks are prepared with ceph-disk prepare. That effectively allows someone to say: use whatever disks are not in use for ceph and leave the rest alone. An operator would only have to add new disks and wait for the next puppet client pass to have them integrated in the cluster. If a disk is removed, the OSD is not launched at boot time and there is nothing to do. <br />
<br />
Here is what should happen on a node with at least one OSD<br />
* common to all OSD on the same node:<br />
** the /etc/ceph/ceph.conf file is setup with the IPs of the monitors<br />
** [https://github.com/ceph/ceph-deploy/blob/v1.2.7/ceph_deploy/osd.py#L47 the /var/lib/ceph/bootstrap-osd/{cluster}.bootstrap-osd.keyring] file contains a user/key that is used to create an OSD. The bootstrap-osd user key is usually the same for all OSDs. For instance:<br />
[client.bootstrap-osd]<br />
key = AQCUg71RYEi7DxAAxlyC1KExxSnNJgim6lmuGA==<br />
** The bootstrap-osd user has this key, with caps to bootstrap an OSD:<br />
$ ceph auth list<br />
...<br />
client.bootstrap-osd<br />
key: AQCUg71RYEi7DxAAxlyC1KExxSnNJgim6lmuGA==<br />
caps: [mon] allow profile bootstrap-osd<br />
* for each OSD <br />
** in the same way [https://github.com/ceph/ceph-deploy/blob/v1.2.7/ceph_deploy/osd.py#L104 ceph-deploy prepares the disk], call ceph-disk-prepare, which will [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L1031 set the magic partition uuid] and trigger [https://github.com/ceph/ceph/blob/master/udev/95-ceph-osd.rules#L11 udev rules] to [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L453 ceph osd create]. When udev settles, the new osd is integrated into the cluster and uses its own key, which is created, registered to the MON and stored locally as a side effect of [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L1301 --mkkey]. The osd daemon is also run as a side effect of udev detecting the disk and calling [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L1414 /etc/init/ceph-osd.conf]. ceph-disk contains a [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L18 high level description of the process]<br />
** '''dmcrypt''' is also handled by the udev logic ( details ??? keys ??? )<br />
<br />
At boot time the [https://github.com/ceph/ceph/blob/master/src/upstart/ceph-osd-all-starter.conf#L14 /var/lib/ceph/osd directory is explored] to discover all OSDs that need to be started. Operating systems for which the same logic is not implemented will need an additional script run at boot time to perform the same exploration until the default script is updated to add this capability.<br />
<br />
=== mds ===<br />
<br />
* '''proposed name''': ceph::mds <br />
* '''purpose''': configures a ceph MDS, sets up /etc/ceph/ceph.conf with the MONs' IPs, declares the MDS to the cluster via the MON, optionally sets the key to allow the MDS to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mon ===<br />
<br />
Creates the files and keyring supporting a mon, runs the daemon<br />
<br />
* '''proposed name''': ceph::mon<br />
* '''purpose''': configures a ceph MON<br />
* '''interface''':<br />
** cluster - cluster name ( defaults to ceph )<br />
** id - the id of the mon<br />
** ip_address - the ip addresses of the mon<br />
** key - the mon. user key<br />
<br />
* add a [mon.$id] section to the conf file (depends on ceph::conf to write the base part of the config)<br />
* if auth == cephx<br />
** the mon. key is mandatory and needs to be set by the user to be a valid ceph key. The documentation should contain an example key and explanations about how to [https://github.com/ceph/ceph-deploy/blob/v1.2.7/ceph_deploy/new.py#L20 create an auth key].<br />
** [https://github.com/ceph/ceph-deploy/blob/v1.2.7/ceph_deploy/new.py#L86 writes the keyring] <br />
* installs the packages<br />
* ceph-mon --id 0 --mkfs<br />
* runs the mon daemon<br />
<br />
=== rbd ===<br />
<br />
* '''proposed name''': ceph::rbd<br />
* '''purpose''': maps and mounts a rbd image, taking care of dependencies (packages, rbd kernel module, /etc/ceph/rbdmap, fstab)<br />
* '''interface''':<br />
** name - the name of the image<br />
** pool - the pool in which the image is<br />
** mount_point - where the image will be mounted<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
<br />
=== cephfs ===<br />
<br />
* '''proposed name''': ceph::cephfs<br />
* '''purpose''': mounts a cephfs filesystem, taking care of dependencies (e.g, fstab, packages)<br />
* '''interface''':<br />
** Lots - See http://ceph.com/docs/next/man/8/mount.ceph/<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
<br />
== Implementor components ==<br />
<br />
These components are dependencies of the Puppet user components and can be used by other components. They should be a library of components where the code common to at least two independent components ( think OpenStack and Cloudstack ) is included.<br />
<br />
=== repository === <br />
<br />
* '''proposed name''': ceph::repo<br />
* '''purpose''': use puppetlabs/apt to configure the official ceph repository so we can install ceph packages<br />
* '''interface''':<br />
** release: target ceph release (cuttlefish, dumpling, etc)<br />
<br />
=== ceph client implementation ===<br />
<br />
* '''proposed name''': ceph::client <br />
* '''purpose''': setup /etc/ceph/ceph.conf to connect to the Ceph cluster and install the ceph cli <br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** client_id - name of the client to find the correct id for key<br />
** keypath - path to the clients key file<br />
<br />
=== key ===<br />
<br />
Keyring management, authentication. It would be a class to create keys for new users (e.g. a user that can create RBDs or use the Objectstore), which may require special access rights. It would also be used by the other classes like ceph::mon or ceph::osd to place e.g. the shared 'client.admin' or 'mon.' keys. All key related tasks would be handled here. <br />
<br />
* '''proposed name''': ceph::key<br />
* '''purpose''': handles ceph keys (cephx), generates keys, creates keyring files, inject keys into or delete keys from the cluster/keyring via ceph and ceph-authtool tools.<br />
* '''interface''':<br />
** secret - key secret<br />
** keyring_path - path to the keyring<br />
** cap_mon/cap_osd/cap_mds - cephx capabilities<br />
** user/group/mode: settings for the keyring file if needed<br />
** inject - options to inject a key into the cluster<br />
<br />
=== pool ===<br />
<br />
* '''proposed name''': ceph::pool<br />
* '''purpose''': manage operations on the pools in the cluster such as: create/delete pools, set PG/PGP number<br />
* '''interface''':<br />
** pool_name - name of the pool<br />
** create - if to create a new pool<br />
** delete - if to delete an existing pool<br />
** pg_num - number of Placement Groups (PGs) for a pool, if the pool already exists this may increase the number of PGs if the current value is lower<br />
** pgp_num - same as for pg_num<br />
** replica_level - increase or decrease the replica level of a pool<br />
<br />
== OpenStack components ==<br />
<br />
ceph specific configuration for cinder/glance (already provided by the puppet-cinder and puppet-glance modules in the volume/rbd and backend/rbd classes)<br />
<br />
== RadosGW components ==<br />
<br />
The RadosGW is developed as an integral part of Ceph. It is however not required to deploy a cluster and should be treated as any other client application of the cluster.<br />
<br />
=== rgw ===<br />
<br />
* '''proposed name''': ceph::rgw<br />
* '''purpose''': configures a ceph radosgw, sets up /etc/ceph/ceph.conf with the MONs' IPs, optionally sets the key to allow the radosgw to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers <br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** rgw_data - the path where the radosgw data should be stored<br />
** fcgi_file - path to the fcgi file e.g. /var/www/s3gw.fcgi<br />
:: [[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the monitor_ips are not needed: IMO ceph::conf should provide this information to all others<br />
<br />
=== rgw_user ===<br />
<br />
* '''proposed name''': ceph::rgw_user<br />
* '''purpose''': create/remove users and Swift users for the RadosGW S3/Swift API<br />
* '''interface''':<br />
** user - username<br />
** key - secret key (could get generated if needed)<br />
** swift_user - username for the Swift API user<br />
** swift_key - secret key for the Swift API user<br />
<br />
<br />
== Related tools and implementations ==<br />
<br />
* deploy ceph : ceph-deploy<br />
<br />
for test / POC purposes<br />
https://github.com/ceph/ceph-deploy<br />
<br />
maintainer: Alfredo Deza<br />
<br />
* deploy ceph with puppet : puppet-cephdeploy<br />
<br />
relies on ceph-deploy<br />
https://github.com/dontalton/puppet-cephdeploy/<br />
<br />
maintainer: Don Talton<br />
<br />
* deploy ceph with puppet : puppet-ceph<br />
<br />
developed in 2012 but still useful, upstream<br />
https://github.com/enovance/puppet-ceph<br />
<br />
maintainer: community<br />
<br />
<br />
fork of puppet-ceph, updated recently<br />
https://github.com/TelekomCloud/puppet-ceph/tree/rc/eisbrecher<br />
<br />
maintainer: Deutsche Telekom AG (DTAG)<br />
<br />
* ceph + openstack : ceph docs<br />
<br />
manual integration<br />
http://ceph.com/docs/next/rbd/rbd-openstack/<br />
maintainer: John Wilkins + Josh Durgin<br />
<br />
* ceph + openstack with puppet : stackforge<br />
<br />
https://github.com/stackforge/puppet-glance/blob/stable/grizzly/manifests/backend/rbd.pp<br />
https://github.com/stackforge/puppet-cinder/blob/stable/grizzly/manifests/volume/rbd.pp<br />
<br />
maintainer: community<br />
<br />
* ceph + openstack with puppet : COI<br />
<br />
targeting Cisco use case<br />
https://github.com/CiscoSystems/puppet-coe/tree/grizzly/manifests/ceph<br />
http://docwiki.cisco.com/wiki/OpenStack:Ceph-COI-Installation<br />
<br />
maintainer : Don Talton + Robert Starmer<br />
<br />
* ceph + openstack with puppet : mirantis<br />
<br />
in the context of Fuel<br />
https://github.com/Mirantis/fuel/tree/master/deployment/puppet/ceph<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/cinder/manifests/volume/ceph.pp<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/glance/manifests/backend/ceph.pp<br />
<br />
maintainer : Andrew Woodward<br />
<br />
* openstack with puppet : openstack-installer<br />
<br />
data driven approach to deploy OpenStack<br />
https://github.com/CiscoSystems/openstack-installer/<br />
<br />
maintainer: Robert Starmer + Dan Bode</div>David Moreau Simardhttps://wiki.openstack.org/w/index.php?title=Puppet/ceph-blueprint&diff=33334Puppet/ceph-blueprint2013-10-21T14:39:43Z<p>David Moreau Simard: /* I want to spawn a cluster and configure it through a puppetmaster as part of a continuous integration effort */</p>
<hr />
<div>== Overview ==<br />
<br />
This document is intended to capture requirements for a single [https://launchpad.net/puppet-ceph puppet-ceph module].<br />
<br />
Very much like vswitch, Ceph is not exclusively used in the context of OpenStack. There is however a significant community relying on puppet to deploy Ceph in the context of OpenStack, as is shown in the [[#Related_tools_and_implementations|inventory of the existing efforts]]. Having a puppet ceph module under the umbrella of the stackforge infrastructure helps federate the efforts while providing a workflow that will improve the overall quality of the module.<br />
<br />
* [https://review.openstack.org/#/q/status:open+project:stackforge/puppet-ceph,n,z gerrit review page]<br />
* [https://launchpad.net/puppet-ceph puppet-ceph at launchpad]<br />
<br />
== Roadmap ==<br />
<br />
Almost every component of this module deserves a discussion and it would take a long time to agree on everything before getting something useful. The following list sets the order in which each component is going to be implemented. Each step must be a usable puppet module, unit tested and including integration tests.<br />
<br />
* [[#conf|conf]]<br />
* [[#key|key]]<br />
* [[#mon|mon]]<br />
* [[#osd|osd]]<br />
* [[#pool|pool]]<br />
<br />
== User Stories ==<br />
<br />
=== I want to try this module, heard of ceph, want to see it in action ===<br />
<br />
/node/ { <br />
ceph::conf { auth_enable: false };<br />
ceph::mon; <br />
ceph::osd { '/srv/osd1' }; <br />
ceph::osd { '/srv/osd2' }; <br />
}<br />
<br />
* install puppet, <br />
* paste this in site.pp and replace /node/ with the name of your current node, <br />
* puppet apply site.pp , <br />
* ceph -s and see that it works<br />
<br />
=== I want to run benchmarks on three new machines ===<br />
<br />
* There are four machines, 3 OSD, 1 MON and one machine that is the client from which the user runs commands.<br />
* install puppetmaster and create site.pp with:<br />
<br />
/ceph-default/ {<br />
ceph::conf { auth_enable: false };<br />
ceph::conf { 'mon host': 'node1' }; <br />
}<br />
<br />
/node1/ inherits ceph-default { <br />
ceph::mon; <br />
ceph::osd { '/dev/sdb' }; <br />
}<br />
<br />
/node2/, /node3/ inherits ceph-default { <br />
ceph::osd { '/dev/sdb' }; <br />
ceph::osd { '/dev/sdc' }; <br />
}<br />
<br />
/client/ inherits ceph-default { <br />
ceph::client;<br />
}<br />
<br />
* ssh client<br />
* rados bench <br />
* interpret the results<br />
<br />
=== I want to spawn a cluster configured with a puppetmaster as part of a continuous integration effort ===<br />
<br />
''Leveraging vagrant, vagrant-openstack, openstack''<br />
* Ceph is used as a backend storage for various use cases<br />
* There are tests to make sure the Ceph cluster was instantiated properly<br />
* There are tests to make sure various other infrastructure components (or products) can use the Ceph cluster<br />
<br />
== Requirements ==<br />
<br />
=== High level requirements ===<br />
<br />
==== No complex cross host orchestration ====<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. Provided that its dependencies have already been configured and are known, each component should support being added without having to run Puppet on more than one node.<br />
<br />
For example<br />
<br />
* cinder-volume instances should be configured to join a Ceph cluster simply by running Puppet on that node<br />
* OSD instances should be configured to join a cluster simply by running puppet agent on a node and targeting that role.<br />
<br />
The Puppet implementation should only be concerned with:<br />
<br />
* what components need to be defined (where these are implemented as classes)<br />
* what data is required for those components (where that data is passed in as class parameters)<br />
<br />
==== Supporting versions ====<br />
<br />
The Operating System versions supported must be tested with integration tests on the actual operating system. Although it is fairly easy to add support for an Operating System, it is prone to regressions if not tested. The per Operating System support strategy mimics the way the OpenStack modules do it. <br />
<br />
The supported versions of the components that deal with the environment in which Ceph is used ( OpenStack, Cloudstack, Ganeti etc. ) are handled by each component on a case by case basis. There probably is too much heterogeneity to set a rule.<br />
<br />
==== Provide sensible defaults ====<br />
<br />
If the high level components ( osd + mon + mds + rgw for instance ) are included without any parameter, the result must be a functional Ceph cluster.<br />
<br />
==== Architectured to leverage Ceph to its full potential ====<br />
<br />
It means talking to the MON when configuring or modifying the cluster, using ceph-disk as a low level tool to create the storage required for an OSD, and creating a minimal /etc/ceph/ceph.conf to allow a client to connect to the Ceph cluster. The MON exposes a very rich API ( either via the ceph cli or a REST API ) and it offers great flexibility to the system administrator. It is unlikely that the first versions of the puppet module capture all of it, but it should be architectured to allow the casual contributor to add a new feature or a new variation without the need to work around architectural limitations. <br />
<br />
The ceph-deploy utility is developed as part of the ceph project, to help people get up to speed as quickly as possible for tests and POCs. Alfredo Deza made a compelling argument against using ceph-deploy as a helper for a puppet module: it is designed to hide some of the flexibility ceph offers for the sake of simplicity, an inconvenience that is incompatible with the goal of a puppet module designed to accommodate all use cases. <br />
<br />
=== Prefer cli over REST ===<br />
<br />
The ceph cli is preferred because the [http://ceph.com/docs/next/man/8/ceph-rest-api/ rest-api] requires the installation of an additional daemon.<br />
<br />
=== Module versioning ===<br />
<br />
Create a branch for each Ceph release ( stable/cuttlefish, stable/dumpling etc. ) and follow the same pattern as the OpenStack modules<br />
<br />
=== Support Ceph versions from cuttlefish ===<br />
<br />
Do not support Ceph versions released before cuttlefish<br />
<br />
== Integration tests ==<br />
<br />
All scenarios can probably be covered with 2 virtual machines, 2 interfaces and one disk attached to one of the machines. A number of scenarios can be based on a single machine, using directories instead of disks and a single interface.<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Dachary|talk]]) : how are the integration test resource provisioned ? Where to look to learn more ?<br />
<br />
== Puppet user components ==<br />
<br />
This section outlines the roles as well as the configuration components that are visible to the puppet user. They must be understandable to a system administrator willing to deploy Ceph for the first time.<br />
<br />
=== conf ===<br />
<br />
* '''proposed name''': ceph::conf<br />
* '''purpose''': keeps and writes the [http://ceph.com/docs/master/rados/configuration/osd-config-ref/ config] settings and their options for the top level sections of the ceph config. This includes these sections:<br />
** [global]<br />
** [mon]<br />
** [osd]<br />
** [mds]<br />
* '''interface''': every key that is needed to write the base config ... the whole list would be too long to write down here<br />
* auth_enable - true or false, enables/disables cephx, defaults to true<br />
::If enable is true, set the following in the [global] section of the conf file:<br />
<br />
auth cluster required = cephx<br />
auth service required = cephx<br />
auth client required = cephx<br />
auth supported = cephx<br />
<br />
::If enable is false, set the following in the [global] section of the conf file:<br />
<br />
auth cluster required = none<br />
auth service required = none<br />
auth client required = none<br />
auth supported = none<br />
<br />
<br />
::It should support [http://ceph.com/docs/master/rados/operations/authentication/#disabling-cephx disabling] or [http://ceph.com/docs/master/rados/operations/authentication/#enabling-cephx enabling] cephx when the values change. If it does not support updating, it must fail when changed on an existing Ceph cluster.<br />
<br />
<br />
Using an [https://github.com/puppetlabs/puppetlabs-inifile inifile] child provider ( such as [https://github.com/stackforge/puppet-cinder/blob/master/lib/puppet/provider/cinder_config/ini_setting.rb cinder_config] ), a setting would look like<br />
<br />
ceph_conf {<br />
'GLOBAL/fsid': value => $fsid;<br />
}<br />
<br />
And create '''/etc/ceph/ceph.conf''' such as:<br />
<br />
[global]<br />
fsid = 918340183294812038<br />
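<br />
A sketch of how a ceph::conf class could wrap such ceph_conf resources; the class name matches the proposal above, but the parameter list is illustrative only:<br />
<br />
 class ceph::conf (<br />
   $fsid,<br />
   $auth_enable = true,<br />
 ) {<br />
   # translate the boolean into the cephx/none values documented above<br />
   if $auth_enable {<br />
     $auth = 'cephx'<br />
   } else {<br />
     $auth = 'none'<br />
   }<br />
 <br />
   ceph_conf {<br />
     'GLOBAL/fsid':                  value => $fsid;<br />
     'GLOBAL/auth cluster required': value => $auth;<br />
     'GLOBAL/auth service required': value => $auth;<br />
     'GLOBAL/auth client required':  value => $auth;<br />
     'GLOBAL/auth supported':        value => $auth;<br />
   }<br />
 }<br />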
<br />
Improvements to be implemented later:<br />
* If a key/value pair is modified in the ''[mon]'', ''[osd]'' or ''[mds]'' sections, all daemons are [http://ceph.com/docs/master/rados/configuration/ceph-conf/#runtime-changes notified of the change] with ''ceph {daemon} tell ...''.<br />
<br />
<br />
=== osd ===<br />
<br />
* '''proposed name''': ceph::osd <br />
* '''purpose''': configures a ceph OSD using the ceph-disk helper<br />
* '''interface''':<br />
** directory/disk - a disk or a directory to be used as a storage for the OSD<br />
** bootstrap-osd - the bootstrap-osd secret key<br />
** dmcrypt - options needed to encrypt disks<br />
<br />
Here is what should happen on a node with at least one OSD<br />
* common to all OSD on the same node:<br />
** the /etc/ceph/ceph.conf file is setup with the IPs of the monitors<br />
** [https://github.com/ceph/ceph-deploy/blob/v1.2.7/ceph_deploy/osd.py#L47 the /var/lib/ceph/bootstrap-osd/{cluster}.bootstrap-osd.keyring] file contains a user/key that is used to create an OSD. The bootstrap-osd user key is usually the same for all OSDs. For instance:<br />
[client.bootstrap-osd]<br />
key = AQCUg71RYEi7DxAAxlyC1KExxSnNJgim6lmuGA==<br />
** The bootstrap-osd user has this key, with caps allowing it to bootstrap an OSD:<br />
$ ceph auth list<br />
...<br />
client.bootstrap-osd<br />
key: AQCUg71RYEi7DxAAxlyC1KExxSnNJgim6lmuGA==<br />
caps: [mon] allow profile bootstrap-osd<br />
* for each OSD <br />
** in the same way [https://github.com/ceph/ceph-deploy/blob/v1.2.7/ceph_deploy/osd.py#L104 ceph-deploy prepares the disk], call ceph-disk-prepare, which will [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L1031 set the magic partition uuid] and trigger [https://github.com/ceph/ceph/blob/master/udev/95-ceph-osd.rules#L11 udev rules] that run [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L453 ceph osd create]. When udev settles, the new OSD is integrated into the cluster and uses its own key, which is created, registered with the MON and stored locally as a side effect of [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L1301 --mkkey]. The osd daemon is also started as a side effect of udev detecting the disk and calling [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L1414 /etc/init/ceph-osd.conf]. ceph-disk contains a [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L18 high level description of the process]<br />
** '''dmcrypt''' is also handled by the udev logic ( details ??? keys ??? )<br />
<br />
At boot time the [https://github.com/ceph/ceph/blob/master/src/upstart/ceph-osd-all-starter.conf#L14 /var/lib/ceph/osd directory is explored] to discover all OSDs that need to be started. Operating systems for which the same logic is not implemented will need an additional script run at boot time to perform the same exploration until the default script is updated to add this capability.<br />
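<br />
A minimal sketch of what a ceph::osd define could look like, following the flow above; idempotence guards, dmcrypt and the bootstrap-osd keyring handling are deliberately elided:<br />
<br />
 define ceph::osd () {<br />
   # $name is the disk or directory to prepare, e.g. /dev/sdb or /srv/osd1<br />
   exec { "ceph-disk-prepare ${name}":<br />
     command => "ceph-disk-prepare ${name}",<br />
     path    => ['/usr/bin', '/usr/sbin', '/bin', '/sbin'],<br />
     require => Class['ceph::conf'],<br />
   }<br />
 }<br />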
<br />
=== mds ===<br />
<br />
* '''proposed name''': ceph::mds <br />
* '''purpose''': configures a ceph MDS: sets up /etc/ceph/ceph.conf with the MONs' IPs, declares the MDS to the cluster via the MON, and optionally sets the key to allow the MDS to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mon ===<br />
<br />
Creates the files and keyring supporting a mon, runs the daemon<br />
<br />
* '''proposed name''': ceph::mon<br />
* '''purpose''': configures a ceph MON<br />
* '''interface''':<br />
** cluster - cluster name ( defaults to ceph )<br />
** id - the id of the mon<br />
** ip_address - the ip addresses of the mon<br />
** key - the mon. user key<br />
<br />
* add a [mon.$id] section to the conf file (depends on ceph::conf to write the base part of the config)<br />
* if auth == cephx<br />
** the mon. key is mandatory and needs to be set by the user to be a valid ceph key. The documentation should contain an example key and explanations about how to [https://github.com/ceph/ceph-deploy/blob/v1.2.7/ceph_deploy/new.py#L20 create an auth key].<br />
** [https://github.com/ceph/ceph-deploy/blob/v1.2.7/ceph_deploy/new.py#L86 writes the keyring] <br />
* installs the packages<br />
* ceph-mon --id 0 --mkfs<br />
* runs the mon daemon<br />
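<br />
A hedged sketch of these steps as a ceph::mon class; writing the mon. keyring with ceph-authtool and starting the daemon are only outlined in comments:<br />
<br />
 class ceph::mon (<br />
   $id,<br />
   $key,<br />
   $cluster = 'ceph',<br />
 ) {<br />
   # package installation and the [mon.$id] ceph_conf entries are assumed<br />
   # to be handled by the package component and ceph::conf.<br />
   # The mon. keyring would be written with ceph-authtool using $key.<br />
   exec { "ceph-mon --mkfs ${id}":<br />
     command => "ceph-mon --cluster ${cluster} --id ${id} --mkfs --keyring /etc/ceph/${cluster}.mon.keyring",<br />
     creates => "/var/lib/ceph/mon/${cluster}-${id}",<br />
     path    => ['/usr/bin', '/bin'],<br />
   }<br />
   # finally run the mon daemon (upstart/sysvinit specifics elided)<br />
 }<br />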
<br />
=== rbd ===<br />
<br />
* '''proposed name''': ceph::rbd<br />
* '''purpose''': maps and mounts a rbd image, taking care of dependencies (packages, rbd kernel module, /etc/ceph/rbdmap, fstab)<br />
* '''interface''':<br />
** name - the name of the image<br />
** pool - the pool in which the image is<br />
** mount_point - where the image will be mounted<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
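<br />
A usage sketch of the proposed interface; the image, pool, mount point and user shown are purely illustrative:<br />
<br />
 ceph::rbd { 'images':<br />
   pool        => 'rbd',<br />
   mount_point => '/mnt/images',<br />
   id          => 'admin',<br />
   key         => $admin_key,<br />
 }<br />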
<br />
=== cephfs ===<br />
<br />
* '''proposed name''': ceph::cephfs<br />
* '''purpose''': mounts a cephfs filesystem, taking care of dependencies (e.g, fstab, packages)<br />
* '''interface''':<br />
** Lots - See http://ceph.com/docs/next/man/8/mount.ceph/<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
<br />
== Implementor components ==<br />
<br />
These components are dependencies of the Puppet user components and can be used by other components. They should be a library of components where the code common to at least two independent components ( think OpenStack and Cloudstack ) is included.<br />
<br />
=== package === <br />
<br />
Although some distributions include packages for Ceph, it is recommended to install from the packages available from ceph.com http://ceph.com/docs/next/install/. It is not recommended to install the Ceph package provided by the standard repositories. This will change over time and the need to use the repository provided by ceph.com will gradually become less common. <br />
<br />
The use of the repositories from ceph.com should be documented in the README.md with an apt {} based example to clarify that the user is expected to get up-to-date package repositories.<br />
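<br />
For instance, such a README example could look like the following, assuming the puppetlabs-apt module; the release codename, key id and URLs shown here are illustrative placeholders:<br />
<br />
 apt::source { 'ceph':<br />
   location   => 'http://ceph.com/debian-dumpling/',<br />
   release    => $::lsbdistcodename,<br />
   repos      => 'main',<br />
   key        => '17ED316D',<br />
   key_source => 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc',<br />
 }<br />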
<br />
* Mimic https://github.com/stackforge/puppet-nova/blob/master/manifests/params.pp#L8 for cross distribution package name<br />
<br />
=== ceph client implementation ===<br />
<br />
* '''proposed name''': ceph::client <br />
* '''purpose''': sets up /etc/ceph/ceph.conf to connect to the Ceph cluster and installs the ceph cli <br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** client_id - name of the client to find the correct id for key<br />
** keypath - path to the clients key file<br />
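<br />
For illustration, using the interface above (monitor addresses and paths are placeholders):<br />
<br />
 class { 'ceph::client':<br />
   monitor_ips => ['192.0.2.10', '192.0.2.11'],<br />
   client_id   => 'admin',<br />
   keypath     => '/etc/ceph/ceph.client.admin.keyring',<br />
 }<br />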
<br />
=== key ===<br />
<br />
Keyring management, authentication. It would be a class to create keys for new users (e.g. a user that can create RBDs or use the Objectstore) which may need special access rights. It would also be used by the other classes like ceph::mon or ceph::osd to place e.g. the shared 'client.admin' or 'mon.' keys. All key related tasks would be handled here. <br />
<br />
* '''proposed name''': ceph::key<br />
* '''purpose''': handles ceph keys (cephx), generates keys, creates keyring files, injects keys into or deletes keys from the cluster/keyring via the ceph and ceph-authtool tools.<br />
* '''interface''':<br />
** secret - key secret<br />
** keyring_path - path to the keyring<br />
** cap_mon/cap_osd/cap_mds - cephx capabilities<br />
** user/group/mode: settings for the keyring file if needed<br />
** inject - options to inject a key into the cluster<br />
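<br />
A hedged sketch of such a define around ceph-authtool and the ceph cli; the keyring path, default caps and the $name convention (e.g. client.volumes) are assumptions, and only the mon capability is shown:<br />
<br />
 define ceph::key (<br />
   $secret,<br />
   $keyring_path = "/etc/ceph/ceph.${name}.keyring",<br />
   $cap_mon      = 'allow r',<br />
   $inject       = false,<br />
 ) {<br />
   # write the keyring file locally with the given secret and caps<br />
   exec { "ceph-key-${name}":<br />
     command => "ceph-authtool --create-keyring ${keyring_path} --name ${name} --add-key ${secret} --cap mon '${cap_mon}'",<br />
     creates => $keyring_path,<br />
     path    => ['/usr/bin', '/bin'],<br />
   }<br />
   # optionally register the key with the cluster via the MON<br />
   if $inject {<br />
     exec { "ceph-key-inject-${name}":<br />
       command => "ceph auth add ${name} --in-file ${keyring_path}",<br />
       path    => ['/usr/bin', '/bin'],<br />
       require => Exec["ceph-key-${name}"],<br />
     }<br />
   }<br />
 }<br />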
<br />
=== pool ===<br />
<br />
* '''proposed name''': ceph::pool<br />
* '''purpose''': manage operations on the pools in the cluster such as: create/delete pools, set PG/PGP number<br />
* '''interface''':<br />
** pool_name - name of the pool<br />
** create - whether to create a new pool<br />
** delete - whether to delete an existing pool<br />
** pg_num - number of Placement Groups (PGs) for a pool, if the pool already exists this may increase the number of PGs if the current value is lower<br />
** pgp_num - same as for pg_num<br />
** replica_level - increase or decrease the replica level of a pool<br />
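<br />
A minimal sketch of the create path through the ceph cli (deletion and replica level changes are left out; the defaults are arbitrary):<br />
<br />
 define ceph::pool (<br />
   $pg_num  = 64,<br />
   $pgp_num = 64,<br />
 ) {<br />
   exec { "create-pool-${name}":<br />
     command => "ceph osd pool create ${name} ${pg_num} ${pgp_num}",<br />
     unless  => "ceph osd lspools | grep -w ${name}",<br />
     path    => ['/usr/bin', '/bin'],<br />
   }<br />
 }<br />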
<br />
== OpenStack components ==<br />
<br />
ceph specific configuration for cinder/glance (already provided by the puppet-cinder and puppet-glance modules in the volume/rbd and backend/rbd classes)<br />
<br />
== RadosGW components ==<br />
<br />
The RadosGW is developed as an integral part of Ceph. It is however not required to deploy a cluster and should be treated like any other client application of the cluster.<br />
<br />
=== rgw ===<br />
<br />
* '''proposed name''': ceph::rgw<br />
* '''purpose''': configures a ceph radosgw: sets up /etc/ceph/ceph.conf with the MONs' IPs and optionally sets the key to allow the radosgw to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers <br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** rgw_data - the path where the radosgw data should be stored<br />
** fcgi_file - path to the fcgi file e.g. /var/www/s3gw.fcgi<br />
:: [[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the monitor_ips are not needed: IMO ceph::conf should provide this information to all others<br />
<br />
=== rgw_user ===<br />
<br />
* '''proposed name''': ceph::rgw_user<br />
* '''purpose''': create/remove users and Swift users for the RadosGW S3/Swift API<br />
* '''interface''':<br />
** user - username<br />
** key - secret key (could get generated if needed)<br />
** swift_user - username for the Swift API user<br />
** swift_key - secret key for the Swift API user<br />
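<br />
A hedged sketch of this define around radosgw-admin, covering only user creation and an optional Swift subuser; the display name and access level are assumptions:<br />
<br />
 define ceph::rgw_user (<br />
   $swift_user = undef,<br />
 ) {<br />
   exec { "rgw-user-${name}":<br />
     command => "radosgw-admin user create --uid=${name} --display-name=${name}",<br />
     unless  => "radosgw-admin user info --uid=${name}",<br />
     path    => ['/usr/bin', '/bin'],<br />
   }<br />
   if $swift_user {<br />
     exec { "rgw-swift-user-${swift_user}":<br />
       command => "radosgw-admin subuser create --uid=${name} --subuser=${name}:${swift_user} --access=full --key-type=swift",<br />
       require => Exec["rgw-user-${name}"],<br />
       path    => ['/usr/bin', '/bin'],<br />
     }<br />
   }<br />
 }<br />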
<br />
<br />
== Related tools and implementations ==<br />
<br />
* deploy ceph : ceph-deploy<br />
<br />
for test / POC purposes<br />
https://github.com/ceph/ceph-deploy<br />
<br />
maintainer: Alfredo Deza<br />
<br />
* deploy ceph with puppet : puppet-cephdeploy<br />
<br />
relies on ceph-deploy<br />
https://github.com/dontalton/puppet-cephdeploy/<br />
<br />
maintainer: Don Talton<br />
<br />
* deploy ceph with puppet : puppet-ceph<br />
<br />
developed in 2012 but still useful, upstream<br />
https://github.com/enovance/puppet-ceph<br />
<br />
maintainer: community<br />
<br />
<br />
fork of puppet-ceph, updated recently<br />
https://github.com/TelekomCloud/puppet-ceph/tree/rc/eisbrecher<br />
<br />
maintainer: Deutsche Telekom AG (DTAG)<br />
<br />
* ceph + openstack : ceph docs<br />
<br />
manual integration<br />
http://ceph.com/docs/next/rbd/rbd-openstack/<br />
maintainer: John Wilkins + Josh Durgin<br />
<br />
* ceph + openstack with puppet : stackforge<br />
<br />
https://github.com/stackforge/puppet-glance/blob/stable/grizzly/manifests/backend/rbd.pp<br />
https://github.com/stackforge/puppet-cinder/blob/stable/grizzly/manifests/volume/rbd.pp<br />
<br />
maintainer: community<br />
<br />
* ceph + openstack with puppet : COI<br />
<br />
targeting Cisco use case<br />
https://github.com/CiscoSystems/puppet-coe/tree/grizzly/manifests/ceph<br />
http://docwiki.cisco.com/wiki/OpenStack:Ceph-COI-Installation<br />
<br />
maintainer : Don Talton + Robert Starmer<br />
<br />
* ceph + openstack with puppet : mirantis<br />
<br />
in the context of Fuel<br />
https://github.com/Mirantis/fuel/tree/master/deployment/puppet/ceph<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/cinder/manifests/volume/ceph.pp<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/glance/manifests/backend/ceph.pp<br />
<br />
maintainer : Andrew Woodward<br />
<br />
* openstack with puppet : openstack-installer<br />
<br />
data driven approach to deploy OpenStack<br />
https://github.com/CiscoSystems/openstack-installer/<br />
<br />
maintainer: Robert Starmer + Dan Bode</div>David Moreau Simardhttps://wiki.openstack.org/w/index.php?title=Puppet/ceph-blueprint&diff=33333Puppet/ceph-blueprint2013-10-21T14:38:58Z<p>David Moreau Simard: /* User Stories */</p>
<hr />
<div>== Overview ==<br />
<br />
This document is intended to capture requirements for a single [https://launchpad.net/puppet-ceph puppet-ceph module].<br />
<br />
Very much like vswitch, Ceph is not exclusively used in the context of OpenStack. There is however a significant community relying on puppet to deploy Ceph in the context of OpenStack, as is shown in the [[#Related_tools_and_implementations|inventory of the existing efforts]]. Having a puppet ceph module under the umbrella of the stackforge infrastructure helps federate the efforts while providing a workflow that will improve the overall quality of the module.<br />
<br />
* [https://review.openstack.org/#/q/status:open+project:stackforge/puppet-ceph,n,z gerrit review page]<br />
* [https://launchpad.net/puppet-ceph puppet-ceph at launchpad]<br />
<br />
== Roadmap ==<br />
<br />
Almost every component of this module deserves a discussion and it would take a long time to agree on everything before getting something useful. The following list sets the order in which each component is going to be implemented. Each step must be a usable puppet module, unit tested and including integration tests.<br />
<br />
* [[#conf|conf]]<br />
* [[#key|key]]<br />
* [[#mon|mon]]<br />
* [[#osd|osd]]<br />
* [[#pool|pool]]<br />
<br />
== User Stories ==<br />
<br />
=== I want to try this module, heard of ceph, want to see it in action ===<br />
<br />
/node/ { <br />
ceph::conf { auth_enable: false };<br />
ceph::mon; <br />
ceph::osd { '/srv/osd1' }; <br />
ceph::osd { '/srv/osd2' }; <br />
}<br />
<br />
* install puppet, <br />
* paste this in site.pp and replace /node/ with the name of your current node, <br />
* puppet apply site.pp , <br />
* ceph -s and see that it works<br />
<br />
=== I want to run benchmarks on three new machines ===<br />
<br />
* There are four machines, 3 OSD, 1 MON and one machine that is the client from which the user runs commands.<br />
* install puppetmaster and create site.pp with:<br />
<br />
/ceph-default/ {<br />
ceph::conf { auth_enable: false };<br />
ceph::conf { 'mon host': 'node1' }; <br />
}<br />
<br />
/node1/ inherits ceph-default { <br />
ceph::mon; <br />
ceph::osd { '/dev/sdb' }; <br />
}<br />
<br />
/node2/, /node3/ inherits ceph-default { <br />
ceph::osd { '/dev/sdb' }; <br />
ceph::osd { '/dev/sdc' }; <br />
}<br />
<br />
/client/ inherits ceph-default { <br />
ceph::client;<br />
}<br />
<br />
* ssh client<br />
* rados bench <br />
* interpret the results<br />
<br />
=== I want to spawn a cluster and configure it through a puppetmaster as part of a continuous integration effort ===<br />
<br />
''Leveraging vagrant, vagrant-openstack, openstack''<br />
* Ceph is used as a backend storage for various use cases<br />
* There are tests to make sure the Ceph cluster was instantiated properly<br />
* There are tests to make sure various other infrastructure components (or products) can use the Ceph cluster<br />
<br />
== Requirements ==<br />
<br />
=== High level requirements ===<br />
<br />
==== No complex cross host orchestration ====<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. Provided that its dependencies have already been configured and are known, each component should support being added without having to run Puppet on more than one node.<br />
<br />
For example<br />
<br />
* cinder-volume instances should be configured to join a Ceph cluster simply by running Puppet on that node<br />
* OSD instances should be configured to join a cluster simply by running puppet agent on a node and targeting that role.<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. The Puppet implementation should only be concerned with<br />
<br />
* what components need to be defined (where these are implemented as classes)<br />
* what data is required for those components (where that data is passed in as class parameters)<br />
<br />
==== Supporting versions ====<br />
<br />
The supported Operating System versions must be covered by integration tests running on the actual operating system. Although it is fairly easy to add support for an Operating System, doing so is prone to regressions if it is not tested. The per Operating System support strategy mimics the way the OpenStack modules do it. <br />
<br />
The supported versions of the components that deal with the environment in which Ceph is used ( OpenStack, Cloudstack, Ganeti etc. ) are handled by each component on a case by case basis. There probably is too much heterogeneity to set a rule.<br />
<br />
==== Provide sensible defaults ====<br />
<br />
If the high level components ( osd + mon + mds + rgw for instance ) are included without any parameter, the result must be a functional Ceph cluster.<br />
<br />
==== Architected to leverage Ceph to its full potential ====<br />
<br />
It means talking to the MON when configuring or modifying the cluster, using ceph-disk as a low level tool to create the storage required for an OSD, and creating a minimal /etc/ceph/ceph.conf to allow a client to connect to the Ceph cluster. The MON exposes a very rich API ( either via the ceph cli or a REST API ) and offers great flexibility to the system administrator. It is unlikely that the first versions of the puppet module capture all of it, but it should be architected so that the casual contributor can add a new feature or a new variation without having to work around architectural limitations. <br />
<br />
The ceph-deploy utility is developed as part of the ceph project to help people get up to speed as quickly as possible for tests and POCs. Alfredo Deza made a compelling argument against using ceph-deploy as a helper for a puppet module: it is designed to hide some of the flexibility ceph offers for the sake of simplicity, an inconvenience that is incompatible with the goal of a puppet module designed to accommodate all use cases. <br />
<br />
=== Prefer cli over REST ===<br />
<br />
The ceph cli is preferred because the [http://ceph.com/docs/next/man/8/ceph-rest-api/ rest-api] requires the installation of an additional daemon.<br />
<br />
=== Module versioning ===<br />
<br />
Create a branch for each Ceph release ( stable/cuttlefish, stable/dumpling etc. ) and follow the same pattern as the OpenStack modules<br />
<br />
=== Support Ceph versions from cuttlefish ===<br />
<br />
Do not support Ceph versions released before cuttlefish<br />
<br />
== Integration tests ==<br />
<br />
All scenarios can probably be covered with 2 virtual machines, 2 interfaces and one disk attached to one of the machines. A number of scenarios can be based on a single machine, using directories instead of disks and a single interface.<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Dachary|talk]]) : how are the integration test resource provisioned ? Where to look to learn more ?<br />
<br />
== Puppet user components ==<br />
<br />
This section outlines the roles as well as the configuration components that are visible to the puppet user. They must be understandable to a system administrator willing to deploy Ceph for the first time.<br />
<br />
=== conf ===<br />
<br />
* '''proposed name''': ceph::conf<br />
* '''purpose''': keeps and writes [http://ceph.com/docs/master/rados/configuration/osd-config-ref/ config] and their options for the top level sections of the ceph config. This includes these sections:<br />
** [global]<br />
** [mon]<br />
** [osd]<br />
** [mds]<br />
* '''interface''': every key that is needed to write the base config ... the whole list would be too long to write down here<br />
* auth_enable - true or false, enables/disables cephx, defaults to true<br />
::If enable is true, set the following in the [global] section of the conf file:<br />
<br />
auth cluster required = cephx<br />
auth service required = cephx<br />
auth client required = cephx<br />
auth supported = cephx<br />
<br />
::If enable is false, set the following in the [global] section of the conf file:<br />
<br />
auth cluster required = none<br />
auth service required = none<br />
auth client required = none<br />
auth supported = none<br />
<br />
<br />
::It should support [http://ceph.com/docs/master/rados/operations/authentication/#disabling-cephx disabling] or [http://ceph.com/docs/master/rados/operations/authentication/#enabling-cephx enabling] cephx when the values change. If it does not support updating, it must fail when changed on an existing Ceph cluster.<br />
<br />
<br />
Using a [https://github.com/puppetlabs/puppetlabs-inifile inifile] child provider ( such as [https://github.com/stackforge/puppet-cinder/blob/master/lib/puppet/provider/cinder_config/ini_setting.rb cinder_config] ) a setting would look like<br />
<br />
ceph_conf {<br />
'GLOBAL/fsid': value => $fsid;<br />
}<br />
<br />
And create '''/etc/ceph/ceph.conf''' such as:<br />
<br />
[global]<br />
fsid = 918340183294812038<br />
<br />
Improvements to be implemented later:<br />
* If a key/value pair is modified in the *mon*, *osd* or *mds* sections, all daemons are [http://ceph.com/docs/master/rados/configuration/ceph-conf/#runtime-changes notified of the change] with *ceph {daemon} tell * ...*.<br />
<br />
<br />
=== osd ===<br />
<br />
* '''proposed name''': ceph::osd <br />
* '''purpose''': configures a ceph OSD using the ceph-disk helper<br />
* '''interface''':<br />
** directory/disk - a disk or a directory to be used as a storage for the OSD<br />
** bootstrap-osd - the bootstrap-osd secret key<br />
** dmcrypt - options needed to encrypt disks<br />
<br />
Here is what should happen on a node with at least one OSD<br />
* common to all OSD on the same node:<br />
** the /etc/ceph/ceph.conf file is setup with the IPs of the monitors<br />
** [https://github.com/ceph/ceph-deploy/blob/v1.2.7/ceph_deploy/osd.py#L47 the /var/lib/ceph/bootstrap-osd/{cluster}.bootstrap-osd.keyring] file contains a user/key that is used to to create an OSD. The bootstrap-osd user key is usually the same for all OSD. For instance:<br />
[client.bootstrap-osd]<br />
key = AQCUg71RYEi7DxAAxlyC1KExxSnNJgim6lmuGA==<br />
** The user bootstrap-osd with this key with caps to bootstrap an OSD:<br />
$ ceph auth list<br />
...<br />
client.bootstrap-osd<br />
key: AQCUg71RYEi7DxAAxlyC1KExxSnNJgim6lmuGA==<br />
caps: [mon] allow profile bootstrap-osd<br />
* for each OSD <br />
** in the same way [https://github.com/ceph/ceph-deploy/blob/v1.2.7/ceph_deploy/osd.py#L104 ceph-deploy prepare the disk] call ceph-disk-prepare that will [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L1031 set magic partition uuid] and trigger [https://github.com/ceph/ceph/blob/master/udev/95-ceph-osd.rules#L11 udev rules] to [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L453 ceph osd create]. When udev settles, the new osd is integrated into the cluster and uses its own key, created, registered to the MON and stored locally as a side effect of [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L1301 --mkkey]. The osd daemon is also run as a side effect of udev detecting the disk and calling [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L1414 /etc/init/ceph-osd.conf]. ceph-disk contains a [https://github.com/ceph/ceph/blob/master/src/ceph-disk#L18 high level description of the process]<br />
** '''dmcrypt''' is also handled by the udev logic ( details ??? keys ??? )<br />
<br />
At boot time the [https://github.com/ceph/ceph/blob/master/src/upstart/ceph-osd-all-starter.conf#L14 /var/lib/ceph/osd directory is explored] to discover all OSDs that need to be started. Operating systems for which the same logic is not implemented will need an additional script run at boot time to perform the same exploration until the default script is updated to add this capability.<br />
<br />
=== mds ===<br />
<br />
* '''proposed name''': ceph::mds <br />
* '''purpose''': configures a ceph MDS, setup /etc/ceph/ceph.conf with the MONs IPs, declare the MDS to the cluster via the MON, optionaly set the key to allow the MDS to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mon ===<br />
<br />
Creates the files and keyring supporting a mon, runs the daemon<br />
<br />
* '''proposed name''': ceph::mon<br />
* '''purpose''': configures a ceph MON<br />
* '''interface''':<br />
** cluster - cluster name ( defaults to ceph )<br />
** id - the id of the mon<br />
** ip_address - the ip addresses of the mon<br />
** key - the mon. user key<br />
<br />
* add a [mon.$id] section to the conf file (depends on ceph::conf to write the base part of the config)<br />
* if auth == cephx<br />
** the mon. key is mandatory and need to be set by the user to be a valid ceph key. The documentation should contain an example key and explanations about how to [https://github.com/ceph/ceph-deploy/blob/v1.2.7/ceph_deploy/new.py#L20 create an auth key].<br />
** [https://github.com/ceph/ceph-deploy/blob/v1.2.7/ceph_deploy/new.py#L86 writes the keyring] <br />
* installs the packages<br />
* ceph-mon --id 0 --mkfs<br />
* runs the mon daemon<br />
<br />
=== rbd ===<br />
<br />
* '''proposed name''': ceph::rbd<br />
* '''purpose''': maps and mounts a rbd image, taking care of dependencies (packages, rbd kernel module, /etc/ceph/rbdmap, fstab)<br />
* '''interface''':<br />
** name - the name of the image<br />
** pool - the pool in which the image is<br />
** mount_point - where the image will be mounted<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
<br />
=== cephfs ===<br />
<br />
* '''proposed name''': ceph::cephfs<br />
* '''purpose''': mounts a cephfs filesystem, taking care of dependencies (e.g, fstab, packages)<br />
* '''interface''':<br />
** Lots - See http://ceph.com/docs/next/man/8/mount.ceph/<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
<br />
== Implementor components ==<br />
<br />
These components are dependencies of the Puppet user components and can be used by other components. They should be a library of components where the code common to at least two independant components ( think OpenStack and Cloudstack ) is included.<br />
<br />
=== package === <br />
<br />
Although some distributions include packages for Ceph, it is recommended to install from the packages available from ceph.com http://ceph.com/docs/next/install/. It is not recommended to install the Ceph package provided by the standard repositories. This will change over time and the need to use the repository provided by ceph.com will gradually become less common. <br />
<br />
The use of the repositories from ceph.com should be documented in the README.md with a apt {} based example to clarify that the user is expected to get up to date packages repositories.<br />
<br />
* Mimic https://github.com/stackforge/puppet-nova/blob/master/manifests/params.pp#L8 for cross distribution package name<br />
<br />
=== ceph client implementation ===<br />
<br />
* '''proposed name''': ceph::client <br />
* '''purpose''': setup /etc/ceph/ceph.conf to connect to the Ceph cluster and install the ceph cli <br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** client_id - name of the client to find the correct id for key<br />
** keypath - path to the clients key file<br />
<br />
=== key ===<br />
<br />
Keyring management, authentication. It would be a class to create keys for new users (e.g. a user that can create RBDs or use the Objectstore) which may need special access rights. It would also be used by the other classes like ceph::mon or ceph::osd to place e.g. the shared 'client.admin' or 'mon.' keys. All key related tasks would be handled here. <br />
<br />
* '''proposed name''': ceph::key<br />
* '''purpose''': handles ceph keys (cephx), generates keys, creates keyring files, injects keys into or deletes keys from the cluster/keyring via the ceph and ceph-authtool tools.<br />
* '''interface''':<br />
** secret - key secret<br />
** keyring_path - path to the keyring<br />
** cap_mon/cap_osd/cap_mds - cephx capabilities<br />
** user/group/mode: settings for the keyring file if needed<br />
** inject - options to inject a key into the cluster<br />
<br />
=== pool ===<br />
<br />
* '''proposed name''': ceph::pool<br />
* '''purpose''': manage operations on the pools in the cluster such as: create/delete pools, set PG/PGP number<br />
* '''interface''':<br />
** pool_name - name of the pool<br />
** create - if to create a new pool<br />
** delete - if to delete an existing pool<br />
** pg_num - number of Placement Groups (PGs) for a pool, if the pool already exists this may increase the number of PGs if the current value is lower<br />
** pgp_num - same as for pg_num<br />
** replica_level - increase or decrease the replica level of a pool<br />
<br />
== OpenStack components ==<br />
<br />
ceph specific configuration for cinder/glance (already provided by the puppet-cinder and puppet-glance modules in the volume/rbd and backend/rbd classes)<br />
<br />
== RadosGW components ==<br />
<br />
The RadosGW is developped as an integral part of Ceph. It is however not required to deploy a cluster and should be treated as any client application of the cluster.<br />
<br />
=== rgw ===<br />
<br />
* '''proposed name''': ceph::rgw<br />
* '''purpose''': configures a ceph radosgw , setup /etc/ceph/ceph.conf with the MONs IPs, optionaly set the key to allow the OSD to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers <br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** rgw_data - the path where the radosgw data should be stored<br />
** fcgi_file - path to the fcgi file e.g. /var/www/s3gw.fcgi<br />
:: [[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the monitor_ips are not needed: IMO ceph::conf should provide these information to all other<br />
<br />
=== rgw_user ===<br />
<br />
* '''proposed name''': ceph::rgw_user<br />
* '''purpose''': create/remove users and Swift users for the RadosGW S3/Swift API<br />
* '''interface''':<br />
** user - username<br />
** key - secret key (could get generated if needed)<br />
** swift_user - username for the Swift API user<br />
** swift_key - secret key for the Swift API user<br />
<br />
<br />
== Related tools and implementations ==<br />
<br />
* deploy ceph : ceph-deploy<br />
<br />
for test / POC purposes<br />
https://github.com/ceph/ceph-deploy<br />
<br />
maintainer: Alfredo Deza<br />
<br />
* deploy ceph with puppet : puppet-cephdeploy<br />
<br />
relies on ceph-deploy<br />
https://github.com/dontalton/puppet-cephdeploy/<br />
<br />
maintainer: Don Talton<br />
<br />
* deploy ceph with puppet : puppet-ceph<br />
<br />
developped in 2012 but still useful, upstream<br />
https://github.com/enovance/puppet-ceph<br />
<br />
maintainer: community<br />
<br />
<br />
fork of puppet-ceph, updated recently<br />
https://github.com/TelekomCloud/puppet-ceph/tree/rc/eisbrecher<br />
<br />
maintainer: Deutsche Telekom AG (DTAG)<br />
<br />
* ceph + openstack : ceph docs<br />
<br />
manual integration<br />
http://ceph.com/docs/next/rbd/rbd-openstack/<br />
maintainer: John Wilkins + Josh Durgin<br />
<br />
* ceph + openstack with puppet : stackforge<br />
<br />
https://github.com/stackforge/puppet-glance/blob/stable/grizzly/manifests/backend/rbd.pp<br />
https://github.com/stackforge/puppet-cinder/blob/stable/grizzly/manifests/volume/rbd.pp<br />
<br />
maintainer: community<br />
<br />
* ceph + openstack with puppet : COI<br />
<br />
targeting Cisco use case<br />
https://github.com/CiscoSystems/puppet-coe/tree/grizzly/manifests/ceph<br />
http://docwiki.cisco.com/wiki/OpenStack:Ceph-COI-Installation<br />
<br />
maintainer : Don Talton + Robert Starmer<br />
<br />
* ceph + openstack with puppet : mirantis<br />
<br />
in the context of Fuel<br />
https://github.com/Mirantis/fuel/tree/master/deployment/puppet/ceph<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/cinder/manifests/volume/ceph.pp<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/glance/manifests/backend/ceph.pp<br />
<br />
maintainer : Andrew Woodward<br />
<br />
* openstack with puppet : openstack-installer<br />
<br />
data driven approach to deploy OpenStack<br />
https://github.com/CiscoSystems/openstack-installer/<br />
<br />
maintainer: Robert Starmer + Dan Bode</div>David Moreau Simardhttps://wiki.openstack.org/w/index.php?title=Puppet/ceph-blueprint&diff=33162Puppet/ceph-blueprint2013-10-18T21:27:00Z<p>David Moreau Simard: /* rbd */</p>
<hr />
<div>== Overview ==<br />
<br />
This document is intended to capture requirements for a single puppet-ceph module that is going to be written and contributed as a part of the stackforge modules. <br />
<br />
Very much like vswitch, Ceph is not exclusively used in the context of OpenStack. There is however a significant community relying on puppet to deploy Ceph in the context of OpenStack, as is shown in the inventory of the existing efforts in this blueprint. Having a puppet ceph module under the umbrella of the stackforge infrastructure would help federate the efforts while providing a workflow that will improve the overall quality of the module.<br />
<br />
== Requirements ==<br />
<br />
=== High level requirements ===<br />
<br />
==== No complex cross host orchestration ====<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. Provided that its dependencies have already been configured and are known, each component should support being added without having to run Puppet on more than one node.<br />
<br />
For example<br />
<br />
* cinder-volume instances should be configured to join a Ceph cluster simply by running Puppet on that node<br />
* OSD instances should be configured to join a cluster simply by running puppet agent on a node and targeting that role.<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. The Puppet implementation should only be concerned with<br />
<br />
* what components need to be defined (where these are implemented as classes)<br />
* what data is required for those components (where that data is passed in as class parameters)<br />
<br />
==== Supporting versions of everything ====<br />
<br />
It is worth capturing the supported versions of openstack, ceph, and what distros/versions we will be targeting with this work.<br />
<br />
==== Provide sensible defaults ====<br />
<br />
If the high level components ( osd + mon + mds + rgw for instance ) are included without any parameter, the result must be a functional Ceph cluster.<br />
<br />
==== Architectured to leverage Ceph to its full potential ====<br />
<br />
It means talking to the MON when configuring or modifying the cluster, using ceph-disk as a low level tool to create the storage required for an OSD, creating a minimal /etc/ceph/ceph.conf to allow a client to connect to the Ceph cluster. The MON exposes a very rich API ( either via the ceph cli or a REST API ) and it offers a great flexibility to the system administrator. It is unlikely that the first versions of the puppet module captures all of it. But it should be architectured to allow the casual contributor to add a new feature or a new variation without the need to workaround architectural limitations. <br />
<br />
The ceph-deploy utility is developed as part of the ceph project to help people get up to speed as quickly as possible for tests and POCs. Alfredo Deza made a compelling argument against using ceph-deploy as a helper for a puppet module: it is designed to hide some of the flexibility ceph offers for the sake of simplicity, an inconvenience that is incompatible with the goal of a puppet module designed to accommodate all use cases. <br />
<br />
== Puppet user components ==<br />
<br />
This section outlines the roles as well as the configuration components that are visible to the puppet user. They must be understandable to a system administrator willing to deploy Ceph for the first time.<br />
<br />
=== conf ===<br />
<br />
* '''proposed name''': ceph::conf<br />
* '''purpose''': keeps and writes [http://ceph.com/docs/master/rados/configuration/osd-config-ref/ config] and their options for the top level sections of the ceph config. This includes these sections:<br />
** [global]<br />
** [mon]<br />
** [osd]<br />
** [mds]<br />
* '''interface''': every key that is needed to write the base config ... the whole list would be too long to write down here<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) use inject args in addition to config to update running daemons dynamically<br />
<br />
=== osd ===<br />
<br />
* '''proposed name''': ceph::osd <br />
* '''purpose''': configures a ceph OSD using the ceph-disk helper, setup /etc/ceph/ceph.conf with the MONs IPs, declare the OSD to the cluster via the MON, optionally set the key to allow the OSD to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the OSD<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** dmcrypt - options needed to encrypt disks<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) Not sure if we need an option for the osd keys. This is very well handled by the ceph tools itself since each OSD gets an own key by default.<br />
<br />
=== mds ===<br />
<br />
* '''proposed name''': ceph::mds <br />
* '''purpose''': configures a ceph MDS, setup /etc/ceph/ceph.conf with the MONs IPs, declare the MDS to the cluster via the MON, optionaly set the key to allow the MDS to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mon ===<br />
<br />
* '''proposed name''': ceph::mon<br />
* '''purpose''': configures a ceph MON setup /etc/ceph/ceph.conf with the MONs IPs, optionaly set the key to allow the MON to connect to the other MONs or initialize the MON if it is the first in the cluster<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the MON<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the ID of the user wouldn't be needed since the MONs have all the same ID for cephx ('mon.')<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the directory/disk information could be obtained via ceph::conf<br />
<br />
=== rbd ===<br />
<br />
* '''proposed name''': ceph::rbd<br />
* '''purpose''': maps and mounts a rbd image, taking care of dependencies (packages, rbd kernel module, /etc/ceph/rbdmap, fstab)<br />
* '''interface''':<br />
** name - the name of the image<br />
** pool - the pool in which the image is<br />
** mount_point - where the image will be mounted<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
<br />
=== cephfs ===<br />
<br />
* '''proposed name''': ceph::cephfs<br />
* '''purpose''': mounts a cephfs filesystem, taking care of dependencies (e.g, fstab, packages)<br />
* '''interface''':<br />
** Lots - See http://ceph.com/docs/next/man/8/mount.ceph/<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
<br />
== Implementor components ==<br />
<br />
These components are dependencies of the Puppet user components and can be used by other components. They should be a library of components where the code common to at least two independant components ( think OpenStack and Cloudstack ) is included.<br />
<br />
=== rest-api ===<br />
run the [http://ceph.com/docs/next/man/8/ceph-rest-api/ rest-api] daemon<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) on the same host as the first mon by default ?<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) I wouldn't try to define a 'first monitor' this would lead only to trouble in the setup (e.g. dependencies). ATM it's possible to startup all MONs in parallel.<br />
<br />
=== package management === <br />
<br />
* '''proposed name''': ceph::package<br />
* '''purpose''': configures the the necessary repositories/sources, installs required common packages<br />
* '''interface''':<br />
** release - target ceph release (cuttlefish, dumpling, etc)<br />
<br />
=== ceph client implementation ===<br />
<br />
* '''proposed name''': ceph::client <br />
* '''purpose''': configures the openstack roles as clients by configuring /etc/ceph/ceph.conf<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** client_id - name of the client to find the correct id for key<br />
** keypath - path to the clients key file<br />
<br />
=== key ===<br />
<br />
keyring management, authentication<br />
<br />
* '''proposed name''': ceph::key<br />
* '''purpose''': handles ceph keys (cephx), generates keys, creates keyring files, injects keys into or deletes keys from the cluster/keyring via the ceph and ceph-authtool tools.<br />
* '''interface''':<br />
** secret - key secret<br />
** keyring_path - path to the keyring<br />
** cap_mon/cap_osd/cap_mds - cephx capabilities<br />
** user/group/mode: settings for the keyring file if needed<br />
** inject - options to inject a key into the cluster<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) idealy the puppet user would only need to activate authentication ( i.e. ceph::conf auth_supported = cephx ) and all key management will happen in the backend. Is it realistic ?<br />
<br />
=== pool ===<br />
<br />
* '''proposed name''': ceph::pool<br />
* '''purpose''': manage operations on the pools in the cluster such as: create/delete pools, set PG/PGP number<br />
* '''interface''':<br />
** pool_name - name of the pool<br />
** create - if to create a new pool<br />
** delete - if to delete an existing pool<br />
** pg_num - number of Placement Groups (PGs) for a pool, if the pool already exists this may increase the number of PGs if the current value is lower<br />
** pgp_num - same as for pg_num<br />
** replica_level - increase or decrease the replica level of a pool<br />
<br />
=== image ===<br />
<br />
* '''proposed name''': ceph::image<br />
* '''purpose''': manage operations on the images of a specified pool in the cluster such as: create/delete images<br />
* '''interface''':<br />
** image_name - name of the image<br />
** pool_name - pool in which the image operation is to take place<br />
** create - if to create a new image<br />
** delete - if to delete an existing image<br />
** size - size of the image<br />
<br />
== OpenStack components ==<br />
<br />
ceph specific configuration for cinder/glance (already provided by the puppet-cinder and puppet-glance modules in the volume/rbd and backend/rbd classes)<br />
<br />
== RadosGW components ==<br />
<br />
The RadosGW is developped as an integral part of Ceph. It is however not required to deploy a cluster and should be treated as any client application of the cluster.<br />
<br />
=== rgw ===<br />
<br />
* '''proposed name''': ceph::rgw<br />
* '''purpose''': configures a ceph radosgw , setup /etc/ceph/ceph.conf with the MONs IPs, optionaly set the key to allow the OSD to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers <br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** rgw_data - the path where the radosgw data should be stored<br />
** fcgi_file - path to the fcgi file e.g. /var/www/s3gw.fcgi<br />
:: [[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the monitor_ips are not needed: IMO ceph::conf should provide these information to all other<br />
<br />
=== rgw_user ===<br />
<br />
* '''proposed name''': ceph::rgw_user<br />
* '''purpose''': create/remove users and Swift users for the RadosGW S3/Swift API<br />
* '''interface''':<br />
** user - username<br />
** key - secret key (could get generated if needed)<br />
** swift_user - username for the Swift API user<br />
** swift_key - secret key for the Swift API user<br />
<br />
<br />
== Related tools and implementations ==<br />
<br />
* deploy ceph : ceph-deploy<br />
<br />
for test / POC purposes<br />
https://github.com/ceph/ceph-deploy<br />
<br />
maintainer: Alfredo Deza<br />
<br />
* deploy ceph with puppet : puppet-cephdeploy<br />
<br />
relies on ceph-deploy<br />
https://github.com/dontalton/puppet-cephdeploy/<br />
<br />
maintainer: Don Talton<br />
<br />
* deploy ceph with puppet : puppet-ceph<br />
<br />
developped in 2012 but still useful, upstream<br />
https://github.com/enovance/puppet-ceph<br />
<br />
maintainer: community<br />
<br />
<br />
fork of puppet-ceph, updated recently<br />
https://github.com/TelekomCloud/puppet-ceph/tree/rc/eisbrecher<br />
<br />
maintainer: Deutsche Telekom AG (DTAG)<br />
<br />
* ceph + openstack : ceph docs<br />
<br />
manual integration<br />
http://ceph.com/docs/next/rbd/rbd-openstack/<br />
maintainer: John Wilkins + Josh Durgin<br />
<br />
* ceph + openstack with puppet : stackforge<br />
<br />
https://github.com/stackforge/puppet-glance/blob/stable/grizzly/manifests/backend/rbd.pp<br />
https://github.com/stackforge/puppet-cinder/blob/stable/grizzly/manifests/volume/rbd.pp<br />
<br />
maintainer: community<br />
<br />
* ceph + openstack with puppet : COI<br />
<br />
targeting Cisco use case<br />
https://github.com/CiscoSystems/puppet-coe/tree/grizzly/manifests/ceph<br />
http://docwiki.cisco.com/wiki/OpenStack:Ceph-COI-Installation<br />
<br />
maintainer : Don Talton + Robert Starmer<br />
<br />
* ceph + openstack with puppet : mirantis<br />
<br />
in the context of Fuel<br />
https://github.com/Mirantis/fuel/tree/master/deployment/puppet/ceph<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/cinder/manifests/volume/ceph.pp<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/glance/manifests/backend/ceph.pp<br />
<br />
maintainer : Andrew Woodward<br />
<br />
* openstack with puppet : openstack-installer<br />
<br />
data driven approach to deploy OpenStack<br />
https://github.com/CiscoSystems/openstack-installer/<br />
<br />
maintainer: Robert Starmer + Dan Bode</div>David Moreau Simardhttps://wiki.openstack.org/w/index.php?title=Puppet/ceph-blueprint&diff=33161Puppet/ceph-blueprint2013-10-18T21:26:14Z<p>David Moreau Simard: /* Implementor components */</p>
<hr />
<div>== Overview ==<br />
<br />
This document is intended to capture requirements for a single puppet-ceph module that is going to be written and contributed as a part of the stackforge modules. <br />
<br />
Very much like vswitch, Ceph is not exclusively used in the context of OpenStack. There is however a significant community relying on puppet to deploy Ceph in the context of OpenStack, as is shown in the inventory of the existing efforts in this blueprint. Having a puppet ceph module under the umbrella of the stackforge infrastructure would help federate the efforts while providing a workflow that will improve the overall quality of the module.<br />
<br />
== Requirements ==<br />
<br />
=== High level requirements ===<br />
<br />
==== No complex cross host orchestration ====<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. Provided that its dependencies have already been configured and are known, each component should support being added without having to run Puppet on more than one node.<br />
<br />
For example<br />
<br />
* cinder-volume instances should be configured to join a Ceph cluster simply by running Puppet on that node<br />
* OSD instances should be configured to join a cluster simply by running puppet agent on a node and targeting that role.<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. The Puppet implementation should only be concerned with<br />
<br />
* what components need to be defined (where these are implemented as classes)<br />
* what data is required for those components (where that data is passed in as class parameters)<br />
<br />
==== Supporting versions of everything ====<br />
<br />
It is worth capturing the supported versions of openstack, ceph, and what distros/versions we will be targeting with this work.<br />
<br />
==== Provide sensible defaults ====<br />
<br />
If the high level components ( osd + mon + mds + rgw for instance ) are included without any parameter, the result must be a functional Ceph cluster.<br />
<br />
==== Architectured to leverage Ceph to its full potential ====<br />
<br />
It means talking to the MON when configuring or modifying the cluster, using ceph-disk as a low level tool to create the storage required for an OSD, creating a minimal /etc/ceph/ceph.conf to allow a client to connect to the Ceph cluster. The MON exposes a very rich API ( either via the ceph cli or a REST API ) and it offers a great flexibility to the system administrator. It is unlikely that the first versions of the puppet module captures all of it. But it should be architectured to allow the casual contributor to add a new feature or a new variation without the need to workaround architectural limitations. <br />
<br />
The ceph-deploy utility is developed as part of the ceph project, to help people get up to speed as quickly as possible for test and POCs. Alfredo Deza made a compeling argument against using ceph-deploy as a helper for a puppet module. Because it is designed to hide some of the flexibility ceph offers for the sake of simplicity. An inconvenience that is incompatible with the goal of a puppet module designed to accommodate all use cases. <br />
<br />
== Puppet user components ==<br />
<br />
This section outlines the roles and well as configuration components that are visible to the puppet user. They must be understandable for the system administrator willing to deploy Ceph for the first time.<br />
<br />
=== conf ===<br />
<br />
* '''proposed name''': ceph::conf<br />
* '''purpose''': keeps and writes [http://ceph.com/docs/master/rados/configuration/osd-config-ref/ config] and their options for the top level sections of the ceph config. This includes these sections:<br />
** [global]<br />
** [mon]<br />
** [osd]<br />
** [mds]<br />
* '''interface''': every key that is needed to write the base config ... the whole list would be to long to written down here<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) use inject args in addition to config to update running daemons dynamically<br />
<br />
=== osd ===<br />
<br />
* '''proposed name''': ceph::osd <br />
* '''purpose''': configures a ceph OSD using the ceph-disk helper, setup /etc/ceph/ceph.conf with the MONs IPs, declare the OSD to the cluster via the MON, optionally set the key to allow the OSD to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the OSD<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** dmcrypt - options needed to encrypt disks<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) Not sure if we need an option for the osd keys. This is very well handled by the ceph tools itself since each OSD gets an own key by default.<br />
<br />
=== mds ===<br />
<br />
* '''proposed name''': ceph::mds <br />
* '''purpose''': configures a ceph MDS, setup /etc/ceph/ceph.conf with the MONs IPs, declare the MDS to the cluster via the MON, optionaly set the key to allow the MDS to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mon ===<br />
<br />
* '''proposed name''': ceph::mon<br />
* '''purpose''': configures a ceph MON setup /etc/ceph/ceph.conf with the MONs IPs, optionaly set the key to allow the MON to connect to the other MONs or initialize the MON if it is the first in the cluster<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the MON<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the ID of the user wouldn't be needed since the MONs have all the same ID for cephx ('mon.')<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the directory/disk information could be obtained via ceph::conf<br />
<br />
=== rbd ===<br />
<br />
* '''proposed name''': ceph::rbd<br />
* '''purpose''': maps and mounts a rbd image, taking care of dependencies (packages, rbd kernel module, /etc/ceph/rbdmap, fstab)<br />
* '''interface''':<br />
** name - the name of the image<br />
** pool - the pool in which the image is<br />
** mount_point - where the image will be mounted<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ? Also, should this be taking care of formatting the rbd before mounting it ?<br />
<br />
=== cephfs ===<br />
<br />
* '''proposed name''': ceph::cephfs<br />
* '''purpose''': mounts a cephfs filesystem, taking care of dependencies (e.g, fstab, packages)<br />
* '''interface''':<br />
** Lots - See http://ceph.com/docs/next/man/8/mount.ceph/<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
<br />
== Implementor components ==<br />
<br />
These components are dependencies of the Puppet user components and can be used by other components. They should be a library of components where the code common to at least two independent components (think OpenStack and CloudStack) is included.<br />
<br />
=== rest-api ===<br />
run the [http://ceph.com/docs/next/man/8/ceph-rest-api/ rest-api] daemon<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) on the same host as the first mon by default ?<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) I wouldn't try to define a 'first monitor' this would lead only to trouble in the setup (e.g. dependencies). ATM it's possible to startup all MONs in parallel.<br />
<br />
=== package management === <br />
<br />
* '''proposed name''': ceph::package<br />
* '''purpose''': configures the necessary repositories/sources and installs the required common packages<br />
* '''interface''':<br />
** release - target ceph release (cuttlefish, dumpling, etc)<br />
<br />
=== ceph client implementation ===<br />
<br />
* '''proposed name''': ceph::client <br />
* '''purpose''': configures the openstack roles as clients by configuring /etc/ceph/ceph.conf<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** client_id - name of the client to find the correct id for key<br />
** keypath - path to the clients key file<br />
<br />
=== key ===<br />
<br />
keyring management, authentication<br />
<br />
* '''proposed name''': ceph::key<br />
* '''purpose''': handles ceph keys (cephx): generates keys, creates keyring files, and injects keys into or deletes keys from the cluster/keyring via the ceph and ceph-authtool tools.<br />
* '''interface''':<br />
** secret - key secret<br />
** keyring_path - path to the keyring<br />
** cap_mon/cap_osd/cap_mds - cephx capabilities<br />
** user/group/mode: settings for the keyring file if needed<br />
** inject - options to inject a key into the cluster<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) ideally the puppet user would only need to activate authentication ( i.e. ceph::conf auth_supported = cephx ) and all key management would happen in the backend. Is it realistic ?<br />
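<br />
A hypothetical usage sketch of the proposed ceph::key interface, assuming a defined type with the key name as the resource title; the capability strings and file ownership are placeholder examples, and inject is shown as a simple boolean although the blueprint describes it as a set of options:<br />
<br />
 ceph::key { 'client.glance':<br />
   secret       => 'REPLACE_WITH_SECRET',<br />
   keyring_path => '/etc/ceph/ceph.client.glance.keyring',<br />
   cap_mon      => 'allow r',<br />
   cap_osd      => 'allow rwx pool=images',<br />
   user         => 'glance',<br />
   group        => 'glance',<br />
   mode         => '0640',<br />
   inject       => true,<br />
 }<br />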
<br />
=== pool ===<br />
<br />
* '''proposed name''': ceph::pool<br />
* '''purpose''': manage operations on the pools in the cluster such as: create/delete pools, set PG/PGP number<br />
* '''interface''':<br />
** pool_name - name of the pool<br />
** create - whether to create a new pool<br />
** delete - whether to delete an existing pool<br />
** pg_num - number of Placement Groups (PGs) for a pool, if the pool already exists this may increase the number of PGs if the current value is lower<br />
** pgp_num - same as for pg_num<br />
** replica_level - increase or decrease the replica level of a pool<br />
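<br />
A hypothetical usage sketch of the proposed ceph::pool interface, assuming a defined type with the pool name as the resource title; values are placeholders:<br />
<br />
 ceph::pool { 'volumes':<br />
   create        => true,<br />
   pg_num        => 128,<br />
   pgp_num       => 128,<br />
   replica_level => 3,<br />
 }<br />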
<br />
=== image ===<br />
<br />
* '''proposed name''': ceph::image<br />
* '''purpose''': manage operations on the images of a specified pool in the cluster such as: create/delete images<br />
* '''interface''':<br />
** image_name - name of the image<br />
** pool_name - pool in which the image operation is to take place<br />
** create - whether to create a new image<br />
** delete - whether to delete an existing image<br />
** size - size of the image<br />
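<br />
A hypothetical usage sketch of the proposed ceph::image interface, assuming a defined type with the image name as the resource title; the size notation is a placeholder since the blueprint does not specify a format:<br />
<br />
 ceph::image { 'test-image':<br />
   pool_name => 'volumes',<br />
   create    => true,<br />
   size      => '10G',<br />
 }<br />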
<br />
== OpenStack components ==<br />
<br />
Ceph-specific configuration for cinder/glance (already provided by the puppet-cinder and puppet-glance modules in the volume/rbd and backend/rbd classes)<br />
<br />
== RadosGW components ==<br />
<br />
The RadosGW is developed as an integral part of Ceph. It is however not required to deploy a cluster and should be treated as any other client application of the cluster.<br />
<br />
=== rgw ===<br />
<br />
* '''proposed name''': ceph::rgw<br />
* '''purpose''': configures a ceph radosgw, sets up /etc/ceph/ceph.conf with the MONs' IPs, and optionally sets the key that allows the RadosGW to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers <br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** rgw_data - the path where the radosgw data should be stored<br />
** fcgi_file - path to the fcgi file e.g. /var/www/s3gw.fcgi<br />
:: [[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the monitor_ips are not needed: IMO ceph::conf should provide this information to all the others<br />
<br />
=== rgw_user ===<br />
<br />
* '''proposed name''': ceph::rgw_user<br />
* '''purpose''': create/remove users and Swift users for the RadosGW S3/Swift API<br />
* '''interface''':<br />
** user - username<br />
** key - secret key (could get generated if needed)<br />
** swift_user - username for the Swift API user<br />
** swift_key - secret key for the Swift API user<br />
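<br />
A hypothetical usage sketch of the proposed ceph::rgw_user interface, assuming a defined type with the username as the resource title; the swift subuser naming is a placeholder convention, not something this blueprint prescribes:<br />
<br />
 ceph::rgw_user { 'johndoe':<br />
   key        => 'REPLACE_OR_OMIT_TO_AUTOGENERATE',<br />
   swift_user => 'johndoe:swift',<br />
   swift_key  => 'REPLACE_WITH_SWIFT_SECRET',<br />
 }<br />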
<br />
<br />
== Related tools and implementations ==<br />
<br />
* deploy ceph : ceph-deploy<br />
<br />
for test / POC purposes<br />
https://github.com/ceph/ceph-deploy<br />
<br />
maintainer: Alfredo Deza<br />
<br />
* deploy ceph with puppet : puppet-cephdeploy<br />
<br />
relies on ceph-deploy<br />
https://github.com/dontalton/puppet-cephdeploy/<br />
<br />
maintainer: Don Talton<br />
<br />
* deploy ceph with puppet : puppet-ceph<br />
<br />
developed in 2012 but still useful, upstream<br />
https://github.com/enovance/puppet-ceph<br />
<br />
maintainer: community<br />
<br />
<br />
fork of puppet-ceph, updated recently<br />
https://github.com/TelekomCloud/puppet-ceph/tree/rc/eisbrecher<br />
<br />
maintainer: Deutsche Telekom AG (DTAG)<br />
<br />
* ceph + openstack : ceph docs<br />
<br />
manual integration<br />
http://ceph.com/docs/next/rbd/rbd-openstack/<br />
maintainer: John Wilkins + Josh Durgin<br />
<br />
* ceph + openstack with puppet : stackforge<br />
<br />
https://github.com/stackforge/puppet-glance/blob/stable/grizzly/manifests/backend/rbd.pp<br />
https://github.com/stackforge/puppet-cinder/blob/stable/grizzly/manifests/volume/rbd.pp<br />
<br />
maintainer: community<br />
<br />
* ceph + openstack with puppet : COI<br />
<br />
targeting Cisco use case<br />
https://github.com/CiscoSystems/puppet-coe/tree/grizzly/manifests/ceph<br />
http://docwiki.cisco.com/wiki/OpenStack:Ceph-COI-Installation<br />
<br />
maintainer : Don Talton + Robert Starmer<br />
<br />
* ceph + openstack with puppet : mirantis<br />
<br />
in the context of Fuel<br />
https://github.com/Mirantis/fuel/tree/master/deployment/puppet/ceph<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/cinder/manifests/volume/ceph.pp<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/glance/manifests/backend/ceph.pp<br />
<br />
maintainer : Andrew Woodward<br />
<br />
* openstack with puppet : openstack-installer<br />
<br />
data driven approach to deploy OpenStack<br />
https://github.com/CiscoSystems/openstack-installer/<br />
<br />
maintainer: Robert Starmer + Dan Bode</div>David Moreau Simardhttps://wiki.openstack.org/w/index.php?title=Puppet/ceph-blueprint&diff=33160Puppet/ceph-blueprint2013-10-18T21:26:05Z<p>David Moreau Simard: /* rbd */</p>
<hr />
<div>== Overview ==<br />
<br />
This document is intended to capture requirements for a single puppet-ceph module that is going to be written and contributed as a part of the stackforge modules. <br />
<br />
Very much like vswitch, Ceph is not exclusively used in the context of OpenStack. There is however a significant community relying on puppet to deploy Ceph in the context of OpenStack, as is shown in the inventory of the existing efforts in this blueprint. Having a puppet ceph module under the umbrella of the stackforge infrastructure would help federate the efforts while providing a workflow that will improve the overall quality of the module.<br />
<br />
== Requirements ==<br />
<br />
=== High level requirements ===<br />
<br />
==== No complex cross host orchestration ====<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. Provided that its dependencies have already been configured and are known, each component should support being added without having to run Puppet on more than one node.<br />
<br />
For example<br />
<br />
* cinder-volume instances should be configured to join a Ceph cluster simply by running Puppet on that node<br />
* OSD instances should be configured to join a cluster simply by running puppet agent on a node and targeting that role.<br />
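<br />
As a hypothetical illustration of the two examples above, each node would only need its own catalog applied; the class and parameter names follow the components proposed later in this document and all values are placeholders:<br />
<br />
 node 'osd01.example.com' {<br />
   # joins the existing cluster on the next puppet run of this node only<br />
   ceph::osd { '/dev/sdb':<br />
     monitor_ips => ['192.0.2.10', '192.0.2.11', '192.0.2.12'],<br />
   }<br />
 }<br />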
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. The Puppet implementation should only be concerned with<br />
<br />
* what components need to be defined (where these are implemented as classes)<br />
* what data is required for those components (where that data is passed in a class parameters)<br />
<br />
==== Supporting versions of everything ====<br />
<br />
It is worth capturing the supported versions of openstack, ceph, and what distros/versions we will be targeting with this work.<br />
<br />
==== Provide sensible defaults ====<br />
<br />
If the high level components ( osd + mon + mds + rgw for instance ) are included without any parameter, the result must be a functional Ceph cluster.<br />
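<br />
In other words, something as small as the following hypothetical manifest should yield a working single node cluster with default settings, assuming the high level components are plain classes that can be included without parameters:<br />
<br />
 node 'ceph-allinone.example.com' {<br />
   include ceph::mon<br />
   include ceph::osd<br />
   include ceph::mds<br />
   include ceph::rgw<br />
 }<br />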
<br />
==== Architected to leverage Ceph to its full potential ====<br />
<br />
It means talking to the MON when configuring or modifying the cluster, using ceph-disk as a low level tool to create the storage required for an OSD, and creating a minimal /etc/ceph/ceph.conf to allow a client to connect to the Ceph cluster. The MON exposes a very rich API ( either via the ceph cli or a REST API ) and it offers great flexibility to the system administrator. It is unlikely that the first versions of the puppet module capture all of it. But it should be architected to allow the casual contributor to add a new feature or a new variation without the need to work around architectural limitations.<br />
<br />
The ceph-deploy utility is developed as part of the ceph project to help people get up to speed as quickly as possible for tests and POCs. Alfredo Deza made a compelling argument against using ceph-deploy as a helper for a puppet module: it is designed to hide some of the flexibility ceph offers for the sake of simplicity, an inconvenience that is incompatible with the goal of a puppet module designed to accommodate all use cases.<br />
<br />
== Puppet user components ==<br />
<br />
This section outlines the roles as well as the configuration components that are visible to the puppet user. They must be understandable for a system administrator deploying Ceph for the first time.<br />
<br />
=== conf ===<br />
<br />
* '''proposed name''': ceph::conf<br />
* '''purpose''': keeps and writes [http://ceph.com/docs/master/rados/configuration/osd-config-ref/ config] and their options for the top level sections of the ceph config. This includes these sections:<br />
** [global]<br />
** [mon]<br />
** [osd]<br />
** [mds]<br />
* '''interface''': every key that is needed to write the base config ... the whole list would be too long to write down here<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) use inject args in addition to config to update running daemons dynamically<br />
<br />
=== osd ===<br />
<br />
* '''proposed name''': ceph::osd <br />
* '''purpose''': configures a ceph OSD using the ceph-disk helper, sets up /etc/ceph/ceph.conf with the MONs' IPs, declares the OSD to the cluster via the MON, and optionally sets the key that allows the OSD to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the OSD<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** dmcrypt - options needed to encrypt disks<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) Not sure if we need an option for the osd keys. This is very well handled by the ceph tools themselves, since each OSD gets its own key by default.<br />
<br />
=== mds ===<br />
<br />
* '''proposed name''': ceph::mds <br />
* '''purpose''': configures a ceph MDS, sets up /etc/ceph/ceph.conf with the MONs' IPs, declares the MDS to the cluster via the MON, and optionally sets the key that allows the MDS to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mon ===<br />
<br />
* '''proposed name''': ceph::mon<br />
* '''purpose''': configures a ceph MON, sets up /etc/ceph/ceph.conf with the MONs' IPs, optionally sets the key that allows the MON to connect to the other MONs, or initializes the MON if it is the first one in the cluster<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the MON<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the ID of the user wouldn't be needed since the MONs have all the same ID for cephx ('mon.')<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the directory/disk information could be obtained via ceph::conf<br />
<br />
=== rbd ===<br />
<br />
* '''proposed name''': ceph::rbd<br />
* '''purpose''': maps and mounts a rbd image, taking care of dependencies (packages, rbd kernel module, /etc/ceph/rbdmap, fstab)<br />
* '''interface''':<br />
** name - the name of the image<br />
** pool - the pool in which the image is<br />
** mount_point - where the image will be mounted<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ? Also, should this be taking care of formatting the rbd before mounting it ?<br />
<br />
=== cephfs ===<br />
<br />
* '''proposed name''': ceph::cephfs<br />
* '''purpose''': mounts a cephfs filesystem, taking care of dependencies (e.g, fstab, packages)<br />
* '''interface''':<br />
** Lots - See http://ceph.com/docs/next/man/8/mount.ceph/<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
<br />
== Implementor components ==<br />
<br />
These components are dependencies of the Puppet user components and can be used by other components. They should be a library of components where the code common to at least two independent components (think OpenStack and CloudStack) is included.<br />
<br />
=== rest-api ===<br />
run the [http://ceph.com/docs/next/man/8/ceph-rest-api/ rest-api] daemon<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) on the same host as the first mon by default ?<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) I wouldn't try to define a 'first monitor' this would lead only to trouble in the setup (e.g. dependencies). ATM it's possible to startup all MONs in parallel.<br />
<br />
=== package management === <br />
<br />
* '''proposed name''': ceph::package<br />
* '''purpose''': configures the necessary repositories/sources and installs the required common packages<br />
* '''interface''':<br />
** release - target ceph release (cuttlefish, dumpling, etc)<br />
<br />
=== ceph client implementation ===<br />
<br />
* '''proposed name''': ceph::client <br />
* '''purpose''': configures the openstack roles as clients by configuring /etc/ceph/ceph.conf<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** client_id - name of the client to find the correct id for key<br />
** keypath - path to the clients key file<br />
<br />
=== key ===<br />
<br />
keyring management, authentication<br />
<br />
* '''proposed name''': ceph::key<br />
* '''purpose''': handles ceph keys (cephx): generates keys, creates keyring files, and injects keys into or deletes keys from the cluster/keyring via the ceph and ceph-authtool tools.<br />
* '''interface''':<br />
** secret - key secret<br />
** keyring_path - path to the keyring<br />
** cap_mon/cap_osd/cap_mds - cephx capabilities<br />
** user/group/mode: settings for the keyring file if needed<br />
** inject - options to inject a key into the cluster<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) ideally the puppet user would only need to activate authentication ( i.e. ceph::conf auth_supported = cephx ) and all key management would happen in the backend. Is it realistic ?<br />
<br />
=== pool ===<br />
<br />
* '''proposed name''': ceph::pool<br />
* '''purpose''': manage operations on the pools in the cluster as: create/delete pools, set PG/PGP number<br />
* '''interface''':<br />
** pool_name - name of the pool<br />
** create - if to create a new pool<br />
** delete - if to delete an existing pool<br />
** pg_num - number of Placement Groups (PGs) for a pool, if the pool already exists this may increase the number of PGs if the current value is lower<br />
** pgp_num - same as for pg_num<br />
** replica_level - increase or decrease the replica level of a pool<br />
<br />
== OpenStack components ==<br />
<br />
ceph specific configuration for cinder/glance (already provided by the puppet-cinder and puppet-glance modules in the volume/rdb and backend/rdb classes)<br />
<br />
== RadosGW components ==<br />
<br />
The RadosGW is developed as an integral part of Ceph. It is however not required to deploy a cluster and should be treated as any other client application of the cluster.<br />
<br />
=== rgw ===<br />
<br />
* '''proposed name''': ceph::rgw<br />
* '''purpose''': configures a ceph radosgw, sets up /etc/ceph/ceph.conf with the MONs' IPs, and optionally sets the key that allows the RadosGW to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers <br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** rgw_data - the path where the radosgw data should be stored<br />
** fcgi_file - path to the fcgi file e.g. /var/www/s3gw.fcgi<br />
:: [[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the monitor_ips are not needed: IMO ceph::conf should provide these information to all other<br />
<br />
=== rgw_user ===<br />
<br />
* '''proposed name''': ceph::rgw_user<br />
* '''purpose''': create/remove users and Swift users for the RadosGW S3/Swift API<br />
* '''interface''':<br />
** user - username<br />
** key - secret key (could get generated if needed)<br />
** swift_user - username for the Swift API user<br />
** swift_key - secret key for the Swift API user<br />
<br />
<br />
== Related tools and implementations ==<br />
<br />
* deploy ceph : ceph-deploy<br />
<br />
for test / POC purposes<br />
https://github.com/ceph/ceph-deploy<br />
<br />
maintainer: Alfredo Deza<br />
<br />
* deploy ceph with puppet : puppet-cephdeploy<br />
<br />
relies on ceph-deploy<br />
https://github.com/dontalton/puppet-cephdeploy/<br />
<br />
maintainer: Don Talton<br />
<br />
* deploy ceph with puppet : puppet-ceph<br />
<br />
developed in 2012 but still useful, upstream<br />
https://github.com/enovance/puppet-ceph<br />
<br />
maintainer: community<br />
<br />
<br />
fork of puppet-ceph, updated recently<br />
https://github.com/TelekomCloud/puppet-ceph/tree/rc/eisbrecher<br />
<br />
maintainer: Deutsche Telekom AG (DTAG)<br />
<br />
* ceph + openstack : ceph docs<br />
<br />
manual integration<br />
http://ceph.com/docs/next/rbd/rbd-openstack/<br />
maintainer: John Wilkins + Josh Durgin<br />
<br />
* ceph + openstack with puppet : stackforge<br />
<br />
https://github.com/stackforge/puppet-glance/blob/stable/grizzly/manifests/backend/rbd.pp<br />
https://github.com/stackforge/puppet-cinder/blob/stable/grizzly/manifests/volume/rbd.pp<br />
<br />
maintainer: community<br />
<br />
* ceph + openstack with puppet : COI<br />
<br />
targeting Cisco use case<br />
https://github.com/CiscoSystems/puppet-coe/tree/grizzly/manifests/ceph<br />
http://docwiki.cisco.com/wiki/OpenStack:Ceph-COI-Installation<br />
<br />
maintainer : Don Talton + Robert Starmer<br />
<br />
* ceph + openstack with puppet : mirantis<br />
<br />
in the context of Fuel<br />
https://github.com/Mirantis/fuel/tree/master/deployment/puppet/ceph<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/cinder/manifests/volume/ceph.pp<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/glance/manifests/backend/ceph.pp<br />
<br />
maintainer : Andrew Woodward<br />
<br />
* openstack with puppet : openstack-installer<br />
<br />
data driven approach to deploy OpenStack<br />
https://github.com/CiscoSystems/openstack-installer/<br />
<br />
maintainer: Robert Starmer + Dan Bode</div>David Moreau Simardhttps://wiki.openstack.org/w/index.php?title=Puppet/ceph-blueprint&diff=33159Puppet/ceph-blueprint2013-10-18T21:17:02Z<p>David Moreau Simard: /* rbd */</p>
<hr />
<div>== Overview ==<br />
<br />
This document is intended to capture requirements for a single puppet-ceph module that is going to be written and contributed as a part of the stackforge modules. <br />
<br />
Very much like vswitch, Ceph is not exclusively used in the context of OpenStack. There is however a significant community relying on puppet to deploy Ceph in the context of OpenStack, as is shown in the inventory of the existing efforts in this blueprint. Having a puppet ceph module under the umbrella of the stackforge infrastructure would help federate the efforts while providing a workflow that will improve the overall quality of the module.<br />
<br />
== Requirements ==<br />
<br />
=== High level requirements ===<br />
<br />
==== No complex cross host orchestration ====<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. Provided that its dependencies have already been configured and are known, each component should support being added without having to run Puppet on more than one node.<br />
<br />
For example<br />
<br />
* cinder-volume instances should be configured to join a Ceph cluster simply by running Puppet on that node<br />
* OSD instances should be configured to join a cluster simply by running puppet agent on a node and targeting that role.<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. The Puppet implementation should only be concerned with<br />
<br />
* what components need to be defined (where these are implemented as classes)<br />
* what data is required for those components (where that data is passed in a class parameters)<br />
<br />
==== Supporting versions of everything ====<br />
<br />
It is worth capturing the supported versions of openstack, ceph, and what distros/versions we will be targeting with this work.<br />
<br />
==== Provide sensible defaults ====<br />
<br />
If the high level components ( osd + mon + mds + rgw for instance ) are included without any parameter, the result must be a functional Ceph cluster.<br />
<br />
==== Architectured to leverage Ceph to its full potential ====<br />
<br />
It means talking to the MON when configuring or modifying the cluster, using ceph-disk as a low level tool to create the storage required for an OSD, creating a minimal /etc/ceph/ceph.conf to allow a client to connect to the Ceph cluster. The MON exposes a very rich API ( either via the ceph cli or a REST API ) and it offers a great flexibility to the system administrator. It is unlikely that the first versions of the puppet module captures all of it. But it should be architectured to allow the casual contributor to add a new feature or a new variation without the need to workaround architectural limitations. <br />
<br />
The ceph-deploy utility is developed as part of the ceph project to help people get up to speed as quickly as possible for tests and POCs. Alfredo Deza made a compelling argument against using ceph-deploy as a helper for a puppet module: it is designed to hide some of the flexibility ceph offers for the sake of simplicity, an inconvenience that is incompatible with the goal of a puppet module designed to accommodate all use cases.<br />
<br />
== Puppet user components ==<br />
<br />
This section outlines the roles as well as the configuration components that are visible to the puppet user. They must be understandable for a system administrator deploying Ceph for the first time.<br />
<br />
=== conf ===<br />
<br />
* '''proposed name''': ceph::conf<br />
* '''purpose''': keeps and writes [http://ceph.com/docs/master/rados/configuration/osd-config-ref/ config] and their options for the top level sections of the ceph config. This includes these sections:<br />
** [global]<br />
** [mon]<br />
** [osd]<br />
** [mds]<br />
* '''interface''': every key that is needed to write the base config ... the whole list would be too long to write down here<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) use inject args in addition to config to update running daemons dynamically<br />
<br />
=== osd ===<br />
<br />
* '''proposed name''': ceph::osd <br />
* '''purpose''': configures a ceph OSD using the ceph-disk helper, setup /etc/ceph/ceph.conf with the MONs IPs, declare the OSD to the cluster via the MON, optionally set the key to allow the OSD to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the OSD<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** dmcrypt - options needed to encrypt disks<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) Not sure if we need an option for the osd keys. This is very well handled by the ceph tools itself since each OSD gets an own key by default.<br />
<br />
=== mds ===<br />
<br />
* '''proposed name''': ceph::mds <br />
* '''purpose''': configures a ceph MDS, setup /etc/ceph/ceph.conf with the MONs IPs, declare the MDS to the cluster via the MON, optionaly set the key to allow the MDS to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mon ===<br />
<br />
* '''proposed name''': ceph::mon<br />
* '''purpose''': configures a ceph MON setup /etc/ceph/ceph.conf with the MONs IPs, optionaly set the key to allow the MON to connect to the other MONs or initialize the MON if it is the first in the cluster<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the MON<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the ID of the user wouldn't be needed since the MONs have all the same ID for cephx ('mon.')<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the directory/disk information could be obtained via ceph::conf<br />
<br />
=== rbd ===<br />
<br />
* '''proposed name''': ceph::rbd<br />
* '''purpose''': maps and mounts a rbd image, taking care of dependencies (packages, rbd kernel module, /etc/ceph/rbdmap, fstab)<br />
* '''interface''':<br />
** name - the name of the image<br />
** pool - the pool in which the image is<br />
** mount_point - where the image will be mounted<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
<br />
=== cephfs ===<br />
<br />
* '''proposed name''': ceph::cephfs<br />
* '''purpose''': mounts a cephfs filesystem, taking care of dependencies (e.g, fstab, packages)<br />
* '''interface''':<br />
** Lots - See http://ceph.com/docs/next/man/8/mount.ceph/<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
<br />
== Implementor components ==<br />
<br />
These components are dependencies of the Puppet user components and can be used by other components. They should be a library of components where the code common to at least two independent components (think OpenStack and CloudStack) is included.<br />
<br />
=== rest-api ===<br />
run the [http://ceph.com/docs/next/man/8/ceph-rest-api/ rest-api] daemon<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) on the same host as the first mon by default ?<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) I wouldn't try to define a 'first monitor' this would lead only to trouble in the setup (e.g. dependencies). ATM it's possible to startup all MONs in parallel.<br />
<br />
=== package management === <br />
<br />
* '''proposed name''': ceph::package<br />
* '''purpose''': configures the necessary repositories/sources and installs the required common packages<br />
* '''interface''':<br />
** release - target ceph release (cuttlefish, dumpling, etc)<br />
<br />
=== ceph client implementation ===<br />
<br />
* '''proposed name''': ceph::client <br />
* '''purpose''': configures the openstack roles as clients by configuring /etc/ceph/ceph.conf<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** client_id - name of the client to find the correct id for key<br />
** keypath - path to the clients key file<br />
<br />
=== key ===<br />
<br />
keyring management, authentication<br />
<br />
* '''proposed name''': ceph::key<br />
* '''purpose''': handles ceph keys (cephx): generates keys, creates keyring files, and injects keys into or deletes keys from the cluster/keyring via the ceph and ceph-authtool tools.<br />
* '''interface''':<br />
** secret - key secret<br />
** keyring_path - path to the keyring<br />
** cap_mon/cap_osd/cap_mds - cephx capabilities<br />
** user/group/mode: settings for the keyring file if needed<br />
** inject - options to inject a key into the cluster<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) ideally the puppet user would only need to activate authentication ( i.e. ceph::conf auth_supported = cephx ) and all key management would happen in the backend. Is it realistic ?<br />
<br />
=== pool ===<br />
<br />
* '''proposed name''': ceph::pool<br />
* '''purpose''': manage operations on the pools in the cluster as: create/delete pools, set PG/PGP number<br />
* '''interface''':<br />
** pool_name - name of the pool<br />
** create - if to create a new pool<br />
** delete - if to delete an existing pool<br />
** pg_num - number of Placement Groups (PGs) for a pool, if the pool already exists this may increase the number of PGs if the current value is lower<br />
** pgp_num - same as for pg_num<br />
** replica_level - increase or decrease the replica level of a pool<br />
<br />
== OpenStack components ==<br />
<br />
ceph specific configuration for cinder/glance (already provided by the puppet-cinder and puppet-glance modules in the volume/rdb and backend/rdb classes)<br />
<br />
== RadosGW components ==<br />
<br />
The RadosGW is developed as an integral part of Ceph. It is however not required to deploy a cluster and should be treated as any other client application of the cluster.<br />
<br />
=== rgw ===<br />
<br />
* '''proposed name''': ceph::rgw<br />
* '''purpose''': configures a ceph radosgw , setup /etc/ceph/ceph.conf with the MONs IPs, optionaly set the key to allow the OSD to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers <br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** rgw_data - the path where the radosgw data should be stored<br />
** fcgi_file - path to the fcgi file e.g. /var/www/s3gw.fcgi<br />
:: [[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the monitor_ips are not needed: IMO ceph::conf should provide these information to all other<br />
<br />
=== rgw_user ===<br />
<br />
* '''proposed name''': ceph::rgw_user<br />
* '''purpose''': create/remove users and Swift users for the RadosGW S3/Swift API<br />
* '''interface''':<br />
** user - username<br />
** key - secret key (could get generated if needed)<br />
** swift_user - username for the Swift API user<br />
** swift_key - secret key for the Swift API user<br />
<br />
<br />
== Related tools and implementations ==<br />
<br />
* deploy ceph : ceph-deploy<br />
<br />
for test / POC purposes<br />
https://github.com/ceph/ceph-deploy<br />
<br />
maintainer: Alfredo Deza<br />
<br />
* deploy ceph with puppet : puppet-cephdeploy<br />
<br />
relies on ceph-deploy<br />
https://github.com/dontalton/puppet-cephdeploy/<br />
<br />
maintainer: Don Talton<br />
<br />
* deploy ceph with puppet : puppet-ceph<br />
<br />
developed in 2012 but still useful, upstream<br />
https://github.com/enovance/puppet-ceph<br />
<br />
maintainer: community<br />
<br />
<br />
fork of puppet-ceph, updated recently<br />
https://github.com/TelekomCloud/puppet-ceph/tree/rc/eisbrecher<br />
<br />
maintainer: Deutsche Telekom AG (DTAG)<br />
<br />
* ceph + openstack : ceph docs<br />
<br />
manual integration<br />
http://ceph.com/docs/next/rbd/rbd-openstack/<br />
maintainer: John Wilkins + Josh Durgin<br />
<br />
* ceph + openstack with puppet : stackforge<br />
<br />
https://github.com/stackforge/puppet-glance/blob/stable/grizzly/manifests/backend/rbd.pp<br />
https://github.com/stackforge/puppet-cinder/blob/stable/grizzly/manifests/volume/rbd.pp<br />
<br />
maintainer: community<br />
<br />
* ceph + openstack with puppet : COI<br />
<br />
targeting Cisco use case<br />
https://github.com/CiscoSystems/puppet-coe/tree/grizzly/manifests/ceph<br />
http://docwiki.cisco.com/wiki/OpenStack:Ceph-COI-Installation<br />
<br />
maintainer : Don Talton + Robert Starmer<br />
<br />
* ceph + openstack with puppet : mirantis<br />
<br />
in the context of Fuel<br />
https://github.com/Mirantis/fuel/tree/master/deployment/puppet/ceph<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/cinder/manifests/volume/ceph.pp<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/glance/manifests/backend/ceph.pp<br />
<br />
maintainer : Andrew Woodward<br />
<br />
* openstack with puppet : openstack-installer<br />
<br />
data driven approach to deploy OpenStack<br />
https://github.com/CiscoSystems/openstack-installer/<br />
<br />
maintainer: Robert Starmer + Dan Bode</div>David Moreau Simardhttps://wiki.openstack.org/w/index.php?title=Puppet/ceph-blueprint&diff=33158Puppet/ceph-blueprint2013-10-18T21:16:48Z<p>David Moreau Simard: /* rbd */</p>
<hr />
<div>== Overview ==<br />
<br />
This document is intended to capture requirements for a single puppet-ceph module that is going to be written and contributed as a part of the stackforge modules. <br />
<br />
Very much like vswitch, Ceph is not exclusively used in the context of OpenStack. There is however a significant community relying on puppet to deploy Ceph in the context of OpenStack, as is shown in the inventory of the existing efforts in this blueprint. Having a puppet ceph module under the umbrella of the stackforge infrastructure would help federate the efforts while providing a workflow that will improve the overall quality of the module.<br />
<br />
== Requirements ==<br />
<br />
=== High level requirements ===<br />
<br />
==== No complex cross host orchestration ====<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. Provided that its dependencies have already been configured and are known, each component should support being added without having to run Puppet on more than one node.<br />
<br />
For example<br />
<br />
* cinder-volume instances should be configured to join a Ceph cluster simply by running Puppet on that node<br />
* OSD instances should be configured to join a cluster simply by running puppet agent on a node and targeting that role.<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. The Puppet implementation should only be concerned with<br />
<br />
* what components need to be defined (where these are implemented as classes)<br />
* what data is required for those components (where that data is passed in a class parameters)<br />
<br />
==== Supporting versions of everything ====<br />
<br />
It is worth capturing the supported versions of openstack, ceph, and what distros/versions we will be targeting with this work.<br />
<br />
==== Provide sensible defaults ====<br />
<br />
If the high level components ( osd + mon + mds + rgw for instance ) are included without any parameter, the result must be a functional Ceph cluster.<br />
<br />
==== Architectured to leverage Ceph to its full potential ====<br />
<br />
It means talking to the MON when configuring or modifying the cluster, using ceph-disk as a low level tool to create the storage required for an OSD, creating a minimal /etc/ceph/ceph.conf to allow a client to connect to the Ceph cluster. The MON exposes a very rich API ( either via the ceph cli or a REST API ) and it offers a great flexibility to the system administrator. It is unlikely that the first versions of the puppet module captures all of it. But it should be architectured to allow the casual contributor to add a new feature or a new variation without the need to workaround architectural limitations. <br />
<br />
The ceph-deploy utility is developed as part of the ceph project to help people get up to speed as quickly as possible for tests and POCs. Alfredo Deza made a compelling argument against using ceph-deploy as a helper for a puppet module: it is designed to hide some of the flexibility ceph offers for the sake of simplicity, an inconvenience that is incompatible with the goal of a puppet module designed to accommodate all use cases.<br />
<br />
== Puppet user components ==<br />
<br />
This section outlines the roles as well as the configuration components that are visible to the puppet user. They must be understandable for a system administrator deploying Ceph for the first time.<br />
<br />
=== conf ===<br />
<br />
* '''proposed name''': ceph::conf<br />
* '''purpose''': keeps and writes [http://ceph.com/docs/master/rados/configuration/osd-config-ref/ config] and their options for the top level sections of the ceph config. This includes these sections:<br />
** [global]<br />
** [mon]<br />
** [osd]<br />
** [mds]<br />
* '''interface''': every key that is needed to write the base config ... the whole list would be too long to write down here<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) use inject args in addition to config to update running daemons dynamically<br />
<br />
=== osd ===<br />
<br />
* '''proposed name''': ceph::osd <br />
* '''purpose''': configures a ceph OSD using the ceph-disk helper, setup /etc/ceph/ceph.conf with the MONs IPs, declare the OSD to the cluster via the MON, optionally set the key to allow the OSD to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the OSD<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** dmcrypt - options needed to encrypt disks<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) Not sure if we need an option for the osd keys. This is very well handled by the ceph tools itself since each OSD gets an own key by default.<br />
<br />
=== mds ===<br />
<br />
* '''proposed name''': ceph::mds <br />
* '''purpose''': configures a ceph MDS, setup /etc/ceph/ceph.conf with the MONs IPs, declare the MDS to the cluster via the MON, optionaly set the key to allow the MDS to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mon ===<br />
<br />
* '''proposed name''': ceph::mon<br />
* '''purpose''': configures a ceph MON setup /etc/ceph/ceph.conf with the MONs IPs, optionaly set the key to allow the MON to connect to the other MONs or initialize the MON if it is the first in the cluster<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the MON<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the ID of the user wouldn't be needed since the MONs have all the same ID for cephx ('mon.')<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the directory/disk information could be obtained via ceph::conf<br />
<br />
=== rbd ===<br />
<br />
* '''proposed name''': ceph::rbd<br />
* '''purpose''': maps and mounts a rbd image, taking care of dependencies (rbd kernel module, /etc/ceph/rbdmap, fstab)<br />
* '''interface''':<br />
** name - the name of the image<br />
** pool - the pool in which the image is<br />
** mount_point - where the image will be mounted<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
<br />
=== cephfs ===<br />
<br />
* '''proposed name''': ceph::cephfs<br />
* '''purpose''': mounts a cephfs filesystem, taking care of dependencies (e.g, fstab, packages)<br />
* '''interface''':<br />
** Lots - See http://ceph.com/docs/next/man/8/mount.ceph/<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
<br />
== Implementor components ==<br />
<br />
These components are dependencies of the Puppet user components and can be used by other components. They should be a library of components where the code common to at least two independent components (think OpenStack and CloudStack) is included.<br />
<br />
=== rest-api ===<br />
run the [http://ceph.com/docs/next/man/8/ceph-rest-api/ rest-api] daemon<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) on the same host as the first mon by default ?<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) I wouldn't try to define a 'first monitor' this would lead only to trouble in the setup (e.g. dependencies). ATM it's possible to startup all MONs in parallel.<br />
<br />
=== package management === <br />
<br />
* '''proposed name''': ceph::package<br />
* '''purpose''': configures the necessary repositories/sources and installs the required common packages<br />
* '''interface''':<br />
** release - target ceph release (cuttlefish, dumpling, etc)<br />
<br />
=== ceph client implementation ===<br />
<br />
* '''proposed name''': ceph::client <br />
* '''purpose''': configures the openstack roles as clients by configuring /etc/ceph/ceph.conf<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** client_id - name of the client to find the correct id for key<br />
** keypath - path to the clients key file<br />
<br />
=== key ===<br />
<br />
keyring management, authentication<br />
<br />
* '''proposed name''': ceph::key<br />
* '''purpose''': handles ceph keys (cephx): generates keys, creates keyring files, and injects keys into or deletes keys from the cluster/keyring via the ceph and ceph-authtool tools.<br />
* '''interface''':<br />
** secret - key secret<br />
** keyring_path - path to the keyring<br />
** cap_mon/cap_osd/cap_mds - cephx capabilities<br />
** user/group/mode: settings for the keyring file if needed<br />
** inject - options to inject a key into the cluster<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) ideally the puppet user would only need to activate authentication ( i.e. ceph::conf auth_supported = cephx ) and all key management would happen in the backend. Is it realistic ?<br />
<br />
=== pool ===<br />
<br />
* '''proposed name''': ceph::pool<br />
* '''purpose''': manage operations on the pools in the cluster as: create/delete pools, set PG/PGP number<br />
* '''interface''':<br />
** pool_name - name of the pool<br />
** create - if to create a new pool<br />
** delete - if to delete an existing pool<br />
** pg_num - number of Placement Groups (PGs) for a pool, if the pool already exists this may increase the number of PGs if the current value is lower<br />
** pgp_num - same as for pg_num<br />
** replica_level - increase or decrease the replica level of a pool<br />
<br />
== OpenStack components ==<br />
<br />
ceph specific configuration for cinder/glance (already provided by the puppet-cinder and puppet-glance modules in the volume/rdb and backend/rdb classes)<br />
<br />
== RadosGW components ==<br />
<br />
The RadosGW is developed as an integral part of Ceph. It is however not required to deploy a cluster and should be treated as any other client application of the cluster.<br />
<br />
=== rgw ===<br />
<br />
* '''proposed name''': ceph::rgw<br />
* '''purpose''': configures a ceph radosgw , setup /etc/ceph/ceph.conf with the MONs IPs, optionaly set the key to allow the OSD to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers <br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** rgw_data - the path where the radosgw data should be stored<br />
** fcgi_file - path to the fcgi file e.g. /var/www/s3gw.fcgi<br />
:: [[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the monitor_ips are not needed: IMO ceph::conf should provide these information to all other<br />
<br />
=== rgw_user ===<br />
<br />
* '''proposed name''': ceph::rgw_user<br />
* '''purpose''': create/remove users and Swift users for the RadosGW S3/Swift API<br />
* '''interface''':<br />
** user - username<br />
** key - secret key (could get generated if needed)<br />
** swift_user - username for the Swift API user<br />
** swift_key - secret key for the Swift API user<br />
<br />
<br />
== Related tools and implementations ==<br />
<br />
* deploy ceph : ceph-deploy<br />
<br />
for test / POC purposes<br />
https://github.com/ceph/ceph-deploy<br />
<br />
maintainer: Alfredo Deza<br />
<br />
* deploy ceph with puppet : puppet-cephdeploy<br />
<br />
relies on ceph-deploy<br />
https://github.com/dontalton/puppet-cephdeploy/<br />
<br />
maintainer: Don Talton<br />
<br />
* deploy ceph with puppet : puppet-ceph<br />
<br />
developed in 2012 but still useful, upstream<br />
https://github.com/enovance/puppet-ceph<br />
<br />
maintainer: community<br />
<br />
<br />
fork of puppet-ceph, updated recently<br />
https://github.com/TelekomCloud/puppet-ceph/tree/rc/eisbrecher<br />
<br />
maintainer: Deutsche Telekom AG (DTAG)<br />
<br />
* ceph + openstack : ceph docs<br />
<br />
manual integration<br />
http://ceph.com/docs/next/rbd/rbd-openstack/<br />
maintainer: John Wilkins + Josh Durgin<br />
<br />
* ceph + openstack with puppet : stackforge<br />
<br />
https://github.com/stackforge/puppet-glance/blob/stable/grizzly/manifests/backend/rbd.pp<br />
https://github.com/stackforge/puppet-cinder/blob/stable/grizzly/manifests/volume/rbd.pp<br />
<br />
maintainer: community<br />
<br />
* ceph + openstack with puppet : COI<br />
<br />
targeting Cisco use case<br />
https://github.com/CiscoSystems/puppet-coe/tree/grizzly/manifests/ceph<br />
http://docwiki.cisco.com/wiki/OpenStack:Ceph-COI-Installation<br />
<br />
maintainer : Don Talton + Robert Starmer<br />
<br />
* ceph + openstack with puppet : mirantis<br />
<br />
in the context of Fuel<br />
https://github.com/Mirantis/fuel/tree/master/deployment/puppet/ceph<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/cinder/manifests/volume/ceph.pp<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/glance/manifests/backend/ceph.pp<br />
<br />
maintainer : Andrew Woodward<br />
<br />
* openstack with puppet : openstack-installer<br />
<br />
data driven approach to deploy OpenStack<br />
https://github.com/CiscoSystems/openstack-installer/<br />
<br />
maintainer: Robert Starmer + Dan Bode</div>David Moreau Simardhttps://wiki.openstack.org/w/index.php?title=Puppet/ceph-blueprint&diff=33157Puppet/ceph-blueprint2013-10-18T21:16:19Z<p>David Moreau Simard: /* cephfs */</p>
<hr />
<div>== Overview ==<br />
<br />
This document is intended to capture requirements for a single puppet-ceph module that is going to be written and contributed as a part of the stackforge modules. <br />
<br />
Very much like vswitch, Ceph is not exclusively used in the context of OpenStack. There is however a significant community relying on puppet to deploy Ceph in the context of OpenStack, as is shown in the inventory of the existing efforts in this blueprint. Having a puppet ceph module under the umbrella of the stackforge infrastructure would help federate the efforts while providing a workflow that will improve the overall quality of the module.<br />
<br />
== Requirements ==<br />
<br />
=== High level requirements ===<br />
<br />
==== No complex cross host orchestration ====<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. Provided that its dependencies have already been configured and are known, each component should support being added without having to run Puppet on more than one node.<br />
<br />
For example<br />
<br />
* cinder-volume instances should be configured to join a Ceph cluster simply by running Puppet on that node<br />
* OSD instances should be configured to join a cluster simply by running puppet agent on a node and targeting that role.<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. The Puppet implementation should only be concerned with<br />
<br />
* what components need to be defined (where these are implemented as classes)<br />
* what data is required for those components (where that data is passed in a class parameters)<br />
<br />
==== Supporting versions of everything ====<br />
<br />
It is worth capturing the supported versions of openstack, ceph, and what distros/versions we will be targeting with this work.<br />
<br />
==== Provide sensible defaults ====<br />
<br />
If the high level components ( osd + mon + mds + rgw for instance ) are included without any parameter, the result must be a functional Ceph cluster.<br />
<br />
==== Architectured to leverage Ceph to its full potential ====<br />
<br />
It means talking to the MON when configuring or modifying the cluster, using ceph-disk as a low level tool to create the storage required for an OSD, creating a minimal /etc/ceph/ceph.conf to allow a client to connect to the Ceph cluster. The MON exposes a very rich API ( either via the ceph cli or a REST API ) and it offers a great flexibility to the system administrator. It is unlikely that the first versions of the puppet module captures all of it. But it should be architectured to allow the casual contributor to add a new feature or a new variation without the need to workaround architectural limitations. <br />
<br />
The ceph-deploy utility is developed as part of the ceph project to help people get up to speed as quickly as possible for tests and POCs. Alfredo Deza made a compelling argument against using ceph-deploy as a helper for a puppet module: it is designed to hide some of the flexibility ceph offers for the sake of simplicity, an inconvenience that is incompatible with the goal of a puppet module designed to accommodate all use cases.<br />
<br />
== Puppet user components ==<br />
<br />
This section outlines the roles as well as the configuration components that are visible to the puppet user. They must be understandable for a system administrator deploying Ceph for the first time.<br />
<br />
=== conf ===<br />
<br />
* '''proposed name''': ceph::conf<br />
* '''purpose''': keeps and writes [http://ceph.com/docs/master/rados/configuration/osd-config-ref/ config] and their options for the top level sections of the ceph config. This includes these sections:<br />
** [global]<br />
** [mon]<br />
** [osd]<br />
** [mds]<br />
* '''interface''': every key that is needed to write the base config ... the whole list would be too long to write down here<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) use inject args in addition to config to update running daemons dynamically<br />
<br />
=== osd ===<br />
<br />
* '''proposed name''': ceph::osd <br />
* '''purpose''': configures a ceph OSD using the ceph-disk helper, setup /etc/ceph/ceph.conf with the MONs IPs, declare the OSD to the cluster via the MON, optionally set the key to allow the OSD to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the OSD<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** dmcrypt - options needed to encrypt disks<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) Not sure if we need an option for the osd keys. This is very well handled by the ceph tools itself since each OSD gets an own key by default.<br />
<br />
=== mds ===<br />
<br />
* '''proposed name''': ceph::mds <br />
* '''purpose''': configures a ceph MDS, setup /etc/ceph/ceph.conf with the MONs IPs, declare the MDS to the cluster via the MON, optionaly set the key to allow the MDS to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mon ===<br />
<br />
* '''proposed name''': ceph::mon<br />
* '''purpose''': configures a ceph MON setup /etc/ceph/ceph.conf with the MONs IPs, optionaly set the key to allow the MON to connect to the other MONs or initialize the MON if it is the first in the cluster<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the MON<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the ID of the user wouldn't be needed since the MONs have all the same ID for cephx ('mon.')<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the directory/disk information could be obtained via ceph::conf<br />
<br />
=== rbd ===<br />
<br />
* '''proposed name''': ceph::rbd<br />
* '''purpose''': mount a rbd image, taking care of dependencies (rbd kernel module, /etc/ceph/rbdmap, fstab)<br />
* '''interface''':<br />
** name - the name of the image<br />
** pool - the pool in which the image is<br />
** mount_point - where the image will be mounted<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
<br />
=== cephfs ===<br />
<br />
* '''proposed name''': ceph::cephfs<br />
* '''purpose''': mounts a cephfs filesystem, taking care of dependencies (e.g. fstab, packages); see the sketch below<br />
* '''interface''':<br />
** Lots - See http://ceph.com/docs/next/man/8/mount.ceph/<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
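<br />
A sketch of what this class might manage underneath, expressed with a plain Puppet mount resource (the layout is an assumption; the mount options come from the mount.ceph man page linked above):<br />
<br />
<pre><br />
# Hypothetical equivalent of what ceph::cephfs would declare.<br />
mount { '/mnt/cephfs':<br />
  ensure  => 'mounted',<br />
  device  => '192.0.2.10:6789:/',<br />
  fstype  => 'ceph',<br />
  options => 'name=admin,secretfile=/etc/ceph/admin.secret,noatime',<br />
  atboot  => true,<br />
}<br />
</pre><br />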
<br />
== Implementor components ==<br />
<br />
These components are dependencies of the Puppet user components and can be used by other components. They should form a library of components containing the code common to at least two independent components (think OpenStack and CloudStack).<br />
<br />
=== rest-api ===<br />
run the [http://ceph.com/docs/next/man/8/ceph-rest-api/ rest-api] daemon<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) on the same host as the first mon by default ?<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) I wouldn't try to define a 'first monitor'; this would only lead to trouble in the setup (e.g. dependencies). ATM it's possible to start up all MONs in parallel.<br />
<br />
=== package management === <br />
<br />
* '''proposed name''': ceph::package<br />
* '''purpose''': configures the necessary repositories/sources and installs the required common packages (see the sketch below)<br />
* '''interface''':<br />
** release - target ceph release (cuttlefish, dumpling, etc)<br />
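<br />
A hypothetical usage sketch of the class described above:<br />
<br />
<pre><br />
# Pin the cluster to a named release; repository/source handling happens inside the class.<br />
class { 'ceph::package':<br />
  release => 'dumpling',<br />
}<br />
</pre><br />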
<br />
=== ceph client implementation ===<br />
<br />
* '''proposed name''': ceph::client <br />
* '''purpose''': configures the OpenStack roles as clients by configuring /etc/ceph/ceph.conf (see the sketch below)<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** client_id - name of the client to find the correct id for key<br />
** keypath - path to the clients key file<br />
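<br />
A hypothetical usage sketch for an OpenStack node acting as a Ceph client (values are illustrative):<br />
<br />
<pre><br />
class { 'ceph::client':<br />
  monitor_ips => ['192.0.2.10', '192.0.2.11', '192.0.2.12'],<br />
  client_id   => 'volumes',<br />
  keypath     => '/etc/ceph/ceph.client.volumes.keyring',<br />
}<br />
</pre><br />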
<br />
=== key ===<br />
<br />
keyring management, authentication<br />
<br />
* '''proposed name''': ceph::key<br />
* '''purpose''': handles ceph keys (cephx): generates keys, creates keyring files, and injects keys into or deletes keys from the cluster/keyring via the ceph and ceph-authtool tools (see the sketch below).<br />
* '''interface''':<br />
** secret - key secret<br />
** keyring_path - path to the keyring<br />
** cap_mon/cap_osd/cap_mds - cephx capabilities<br />
** user/group/mode: settings for the keyring file if needed<br />
** inject - options to inject a key into the cluster<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) ideally the puppet user would only need to activate authentication (i.e. ceph::conf auth_supported = cephx) and all key management would happen in the backend. Is that realistic?<br />
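<br />
A hypothetical usage sketch, assuming ceph::key is a defined type named after the cephx entity (the secret below is a placeholder, not a real key):<br />
<br />
<pre><br />
ceph::key { 'client.volumes':<br />
  secret       => 'AQD...placeholder...==',<br />
  keyring_path => '/etc/ceph/ceph.client.volumes.keyring',<br />
  cap_mon      => 'allow r',<br />
  cap_osd      => 'allow rwx pool=volumes',<br />
  inject       => true,<br />
}<br />
</pre><br />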
<br />
=== pool ===<br />
<br />
* '''proposed name''': ceph::pool<br />
* '''purpose''': manages operations on the pools in the cluster, such as creating/deleting pools and setting the PG/PGP numbers (see the sketch below)<br />
* '''interface''':<br />
** pool_name - name of the pool<br />
** create - whether to create a new pool<br />
** delete - whether to delete an existing pool<br />
** pg_num - number of Placement Groups (PGs) for a pool, if the pool already exists this may increase the number of PGs if the current value is lower<br />
** pgp_num - same as for pg_num<br />
** replica_level - increase or decrease the replica level of a pool<br />
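<br />
A hypothetical usage sketch (parameter names follow the interface above; values are illustrative):<br />
<br />
<pre><br />
ceph::pool { 'volumes':<br />
  create        => true,<br />
  pg_num        => 128,<br />
  pgp_num       => 128,<br />
  replica_level => 3,<br />
}<br />
</pre><br />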
<br />
== OpenStack components ==<br />
<br />
Ceph-specific configuration for cinder/glance (already provided by the puppet-cinder and puppet-glance modules in the volume/rbd and backend/rbd classes)<br />
<br />
== RadosGW components ==<br />
<br />
The RadosGW is developed as an integral part of Ceph. It is, however, not required to deploy a cluster and should be treated like any other client application of the cluster.<br />
<br />
=== rgw ===<br />
<br />
* '''proposed name''': ceph::rgw<br />
* '''purpose''': configures a ceph radosgw, sets up /etc/ceph/ceph.conf with the MON IPs and optionally sets the key to allow the radosgw to connect to the MONs (see the sketch below)<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers <br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** rgw_data - the path where the radosgw data should be stored<br />
** fcgi_file - path to the fcgi file e.g. /var/www/s3gw.fcgi<br />
:: [[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the monitor_ips are not needed: IMO ceph::conf should provide this information to all the others<br />
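<br />
A hypothetical usage sketch; monitor_ips is left out here, following the comment above that ceph::conf could provide it, and the paths are illustrative:<br />
<br />
<pre><br />
class { 'ceph::rgw':<br />
  rgw_data  => '/var/lib/ceph/radosgw/ceph-radosgw.gateway',<br />
  fcgi_file => '/var/www/s3gw.fcgi',<br />
}<br />
</pre><br />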
<br />
=== rgw_user ===<br />
<br />
* '''proposed name''': ceph::rgw_user<br />
* '''purpose''': creates/removes users and Swift users for the RadosGW S3/Swift API (see the sketch below)<br />
* '''interface''':<br />
** user - username<br />
** key - secret key (could get generated if needed)<br />
** swift_user - username for the Swift API user<br />
** swift_key - secret key for the Swift API user<br />
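<br />
A hypothetical usage sketch; underneath, such a type would presumably wrap radosgw-admin user and subuser creation:<br />
<br />
<pre><br />
ceph::rgw_user { 'johndoe':<br />
  swift_user => 'johndoe:swift',<br />
}<br />
</pre><br />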
<br />
<br />
== Related tools and implementations ==<br />
<br />
* deploy ceph : ceph-deploy<br />
<br />
for test / POC purposes<br />
https://github.com/ceph/ceph-deploy<br />
<br />
maintainer: Alfredo Deza<br />
<br />
* deploy ceph with puppet : puppet-cephdeploy<br />
<br />
relies on ceph-deploy<br />
https://github.com/dontalton/puppet-cephdeploy/<br />
<br />
maintainer: Don Talton<br />
<br />
* deploy ceph with puppet : puppet-ceph<br />
<br />
developed in 2012 but still useful, upstream<br />
https://github.com/enovance/puppet-ceph<br />
<br />
maintainer: community<br />
<br />
<br />
fork of puppet-ceph, updated recently<br />
https://github.com/TelekomCloud/puppet-ceph/tree/rc/eisbrecher<br />
<br />
maintainer: Deutsche Telekom AG (DTAG)<br />
<br />
* ceph + openstack : ceph docs<br />
<br />
manual integration<br />
http://ceph.com/docs/next/rbd/rbd-openstack/<br />
maintainer: John Wilkins + Josh Durgin<br />
<br />
* ceph + openstack with puppet : stackforge<br />
<br />
https://github.com/stackforge/puppet-glance/blob/stable/grizzly/manifests/backend/rbd.pp<br />
https://github.com/stackforge/puppet-cinder/blob/stable/grizzly/manifests/volume/rbd.pp<br />
<br />
maintainer: community<br />
<br />
* ceph + openstack with puppet : COI<br />
<br />
targeting Cisco use case<br />
https://github.com/CiscoSystems/puppet-coe/tree/grizzly/manifests/ceph<br />
http://docwiki.cisco.com/wiki/OpenStack:Ceph-COI-Installation<br />
<br />
maintainer : Don Talton + Robert Starmer<br />
<br />
* ceph + openstack with puppet : mirantis<br />
<br />
in the context of Fuel<br />
https://github.com/Mirantis/fuel/tree/master/deployment/puppet/ceph<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/cinder/manifests/volume/ceph.pp<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/glance/manifests/backend/ceph.pp<br />
<br />
maintainer : Andrew Woodward<br />
<br />
* openstack with puppet : openstack-installer<br />
<br />
data driven approach to deploy OpenStack<br />
https://github.com/CiscoSystems/openstack-installer/<br />
<br />
maintainer: Robert Starmer + Dan Bode</div>David Moreau Simardhttps://wiki.openstack.org/w/index.php?title=Puppet/ceph-blueprint&diff=33151Puppet/ceph-blueprint2013-10-18T21:02:26Z<p>David Moreau Simard: /* rbd */</p>
<hr />
<div>== Overview ==<br />
<br />
This document is intended to capture requirements for a single puppet-ceph module that is going to be written and contributed as a part of the stackforge modules. <br />
<br />
Very much like vswitch, Ceph is not exclusively used in the context of OpenStack. There is, however, a significant community relying on puppet to deploy Ceph in the context of OpenStack, as shown in the inventory of the existing efforts in this blueprint. Having a puppet ceph module under the umbrella of the stackforge infrastructure would help federate these efforts while providing a workflow that will improve the overall quality of the module.<br />
<br />
== Requirements ==<br />
<br />
=== High level requirements ===<br />
<br />
==== No complex cross host orchestration ====<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. Provided that its dependencies have already been configured and are known, each component should support being added without having to run Puppet on more than one node.<br />
<br />
For example<br />
<br />
* cinder-volume instances should be configured to join a Ceph cluster simply by running Puppet on that node<br />
* OSD instances should be configured to join a cluster simply by running puppet agent on a node and targeting that role (a sketch follows below).<br />
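<br />
For illustration, a hypothetical node manifest under that assumption (the class/type and parameter names are taken from the components proposed below; their exact shape is an assumption):<br />
<br />
<pre><br />
# Running puppet on this node alone is enough to add an OSD,<br />
# provided the monitor addresses are already known.<br />
node 'osd01.example.com' {<br />
  ceph::osd { '/dev/sdb':<br />
    monitor_ips => ['192.0.2.10', '192.0.2.11', '192.0.2.12'],<br />
  }<br />
}<br />
</pre><br />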
<br />
The Puppet implementation should only be concerned with:<br />
<br />
* what components need to be defined (where these are implemented as classes)<br />
* what data is required for those components (where that data is passed in as class parameters)<br />
<br />
==== Supporting versions of everything ====<br />
<br />
It is worth capturing the supported versions of openstack, ceph, and what distros/versions we will be targeting with this work.<br />
<br />
==== Provide sensible defaults ====<br />
<br />
If the high level components ( osd + mon + mds + rgw for instance ) are included without any parameter, the result must be a functional Ceph cluster.<br />
<br />
==== Architected to leverage Ceph to its full potential ====<br />
<br />
It means talking to the MON when configuring or modifying the cluster, using ceph-disk as a low level tool to create the storage required for an OSD, and creating a minimal /etc/ceph/ceph.conf to allow a client to connect to the Ceph cluster. The MON exposes a very rich API (either via the ceph cli or a REST API) and offers great flexibility to the system administrator. It is unlikely that the first versions of the puppet module will capture all of it, but it should be architected to allow the casual contributor to add a new feature or a new variation without the need to work around architectural limitations.<br />
<br />
The ceph-deploy utility is developed as part of the ceph project, to help people get up to speed as quickly as possible for tests and POCs. Alfredo Deza made a compelling argument against using ceph-deploy as a helper for a puppet module: it is designed to hide some of the flexibility Ceph offers for the sake of simplicity, an inconvenience that is incompatible with the goal of a puppet module designed to accommodate all use cases.<br />
<br />
== Puppet user components ==<br />
<br />
This section outlines the roles as well as the configuration components that are visible to the puppet user. They must be understandable to a system administrator willing to deploy Ceph for the first time.<br />
<br />
=== conf ===<br />
<br />
* '''proposed name''': ceph::conf<br />
* '''purpose''': keeps and writes [http://ceph.com/docs/master/rados/configuration/osd-config-ref/ config] and their options for the top level sections of the ceph config. This includes these sections:<br />
** [global]<br />
** [mon]<br />
** [osd]<br />
** [mds]<br />
* '''interface''': every key that is needed to write the base config; the whole list would be too long to write down here<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) use inject args in addition to config to update running daemons dynamically<br />
<br />
=== osd ===<br />
<br />
* '''proposed name''': ceph::osd <br />
* '''purpose''': configures a ceph OSD using the ceph-disk helper, sets up /etc/ceph/ceph.conf with the MON IPs, declares the OSD to the cluster via the MON and optionally sets the key to allow the OSD to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the OSD<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** dmcrypt - options needed to encrypt disks<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) Not sure if we need an option for the osd keys. This is very well handled by the ceph tools itself since each OSD gets an own key by default.<br />
<br />
=== mds ===<br />
<br />
* '''proposed name''': ceph::mds <br />
* '''purpose''': configures a ceph MDS, sets up /etc/ceph/ceph.conf with the MON IPs, declares the MDS to the cluster via the MON and optionally sets the key to allow the MDS to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mon ===<br />
<br />
* '''proposed name''': ceph::mon<br />
* '''purpose''': configures a ceph MON, sets up /etc/ceph/ceph.conf with the MON IPs, optionally sets the key to allow the MON to connect to the other MONs, or initializes the MON if it is the first in the cluster<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the MON<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the ID of the user wouldn't be needed since the MONs have all the same ID for cephx ('mon.')<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the directory/disk information could be obtained via ceph::conf<br />
<br />
=== rbd ===<br />
<br />
* '''proposed name''': ceph::rbd<br />
* '''purpose''': mount a rbd image, taking care of dependencies (rbd kernel module, /etc/ceph/rbdmap, fstab)<br />
* '''interface''':<br />
** name - the name of the image<br />
** pool - the pool in which the image is<br />
** mount_point - where the image will be mounted<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Should ceph::client be a dependency ?<br />
<br />
=== cephfs ===<br />
<br />
* '''proposed name''': ceph::cephfs<br />
* '''purpose''': add a line in /etc/fstab to mount the file system at a given point <br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) is it even necessary ?<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Albeit a simple class, essentially consisting of mounting a resource, I believe this is necessary for the sake of consistency and ensure that ceph::mds is set as a dependency.<br />
<br />
== Implementor components ==<br />
<br />
These components are dependencies of the Puppet user components and can be used by other components. They should form a library of components containing the code common to at least two independent components (think OpenStack and CloudStack).<br />
<br />
=== rest-api ===<br />
run the [http://ceph.com/docs/next/man/8/ceph-rest-api/ rest-api] daemon<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) on the same host as the first mon by default ?<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) I wouldn't try to define a 'first monitor' this would lead only to trouble in the setup (e.g. dependencies). ATM it's possible to startup all MONs in parallel.<br />
<br />
=== package management === <br />
<br />
* '''proposed name''': ceph::package<br />
* '''purpose''': configures the necessary repositories/sources and installs the required common packages<br />
* '''interface''':<br />
** release - target ceph release (cuttlefish, dumpling, etc)<br />
<br />
=== ceph client implementation ===<br />
<br />
* '''proposed name''': ceph::client <br />
* '''purpose''': configures the openstack roles as clients by configuring /etc/ceph/ceph.conf<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** client_id - name of the client to find the correct id for key<br />
** keypath - path to the clients key file<br />
<br />
=== key ===<br />
<br />
keyring management, authentication<br />
<br />
* '''proposed name''': ceph::key<br />
* '''purpose''': handles ceph keys (cephx): generates keys, creates keyring files, and injects keys into or deletes keys from the cluster/keyring via the ceph and ceph-authtool tools.<br />
* '''interface''':<br />
** secret - key secret<br />
** keyring_path - path to the keyring<br />
** cap_mon/cap_osd/cap_mds - cephx capabilities<br />
** user/group/mode: settings for the keyring file if needed<br />
** inject - options to inject a key into the cluster<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) ideally the puppet user would only need to activate authentication (i.e. ceph::conf auth_supported = cephx) and all key management would happen in the backend. Is that realistic?<br />
<br />
=== pool ===<br />
<br />
* '''proposed name''': ceph::pool<br />
* '''purpose''': manages operations on the pools in the cluster, such as creating/deleting pools and setting the PG/PGP numbers<br />
* '''interface''':<br />
** pool_name - name of the pool<br />
** create - whether to create a new pool<br />
** delete - whether to delete an existing pool<br />
** pg_num - number of Placement Groups (PGs) for a pool, if the pool already exists this may increase the number of PGs if the current value is lower<br />
** pgp_num - same as for pg_num<br />
** replica_level - increase or decrease the replica level of a pool<br />
<br />
== OpenStack components ==<br />
<br />
Ceph-specific configuration for cinder/glance (already provided by the puppet-cinder and puppet-glance modules in the volume/rbd and backend/rbd classes)<br />
<br />
== RadosGW components ==<br />
<br />
The RadosGW is developed as an integral part of Ceph. It is, however, not required to deploy a cluster and should be treated like any other client application of the cluster.<br />
<br />
=== rgw ===<br />
<br />
* '''proposed name''': ceph::rgw<br />
* '''purpose''': configures a ceph radosgw, sets up /etc/ceph/ceph.conf with the MON IPs and optionally sets the key to allow the radosgw to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers <br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** rgw_data - the path where the radosgw data should be stored<br />
** fcgi_file - path to the fcgi file e.g. /var/www/s3gw.fcgi<br />
:: [[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the monitor_ips are not needed: IMO ceph::conf should provide these information to all other<br />
<br />
=== rgw_user ===<br />
<br />
* '''proposed name''': ceph::rgw_user<br />
* '''purpose''': create/remove users and Swift users for the RadosGW S3/Swift API<br />
* '''interface''':<br />
** user - username<br />
** key - secret key (could get generated if needed)<br />
** swift_user - username for the Swift API user<br />
** swift_key - secret key for the Swift API user<br />
<br />
<br />
== Related tools and implementations ==<br />
<br />
* deploy ceph : ceph-deploy<br />
<br />
for test / POC purposes<br />
https://github.com/ceph/ceph-deploy<br />
<br />
maintainer: Alfredo Deza<br />
<br />
* deploy ceph with puppet : puppet-cephdeploy<br />
<br />
relies on ceph-deploy<br />
https://github.com/dontalton/puppet-cephdeploy/<br />
<br />
maintainer: Don Talton<br />
<br />
* deploy ceph with puppet : puppet-ceph<br />
<br />
developed in 2012 but still useful, upstream<br />
https://github.com/enovance/puppet-ceph<br />
<br />
maintainer: community<br />
<br />
<br />
fork of puppet-ceph, updated recently<br />
https://github.com/TelekomCloud/puppet-ceph/tree/rc/eisbrecher<br />
<br />
maintainer: Deutsche Telekom AG (DTAG)<br />
<br />
* ceph + openstack : ceph docs<br />
<br />
manual integration<br />
http://ceph.com/docs/next/rbd/rbd-openstack/<br />
maintainer: John Wilkins + Josh Durgin<br />
<br />
* ceph + openstack with puppet : stackforge<br />
<br />
https://github.com/stackforge/puppet-glance/blob/stable/grizzly/manifests/backend/rbd.pp<br />
https://github.com/stackforge/puppet-cinder/blob/stable/grizzly/manifests/volume/rbd.pp<br />
<br />
maintainer: community<br />
<br />
* ceph + openstack with puppet : COI<br />
<br />
targeting Cisco use case<br />
https://github.com/CiscoSystems/puppet-coe/tree/grizzly/manifests/ceph<br />
http://docwiki.cisco.com/wiki/OpenStack:Ceph-COI-Installation<br />
<br />
maintainer : Don Talton + Robert Starmer<br />
<br />
* ceph + openstack with puppet : mirantis<br />
<br />
in the context of Fuel<br />
https://github.com/Mirantis/fuel/tree/master/deployment/puppet/ceph<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/cinder/manifests/volume/ceph.pp<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/glance/manifests/backend/ceph.pp<br />
<br />
maintainer : Andrew Woodward<br />
<br />
* openstack with puppet : openstack-installer<br />
<br />
data driven approach to deploy OpenStack<br />
https://github.com/CiscoSystems/openstack-installer/<br />
<br />
maintainer: Robert Starmer + Dan Bode</div>David Moreau Simardhttps://wiki.openstack.org/w/index.php?title=Puppet/ceph-blueprint&diff=33150Puppet/ceph-blueprint2013-10-18T21:00:44Z<p>David Moreau Simard: /* rbd */</p>
<hr />
<div>== Overview ==<br />
<br />
This document is intended to capture requirements for a single puppet-ceph module that is going to be written and contributed as a part of the stackforge modules. <br />
<br />
Very much like vswitch, Ceph is not exclusively used in the context of OpenStack. There is, however, a significant community relying on puppet to deploy Ceph in the context of OpenStack, as shown in the inventory of the existing efforts in this blueprint. Having a puppet ceph module under the umbrella of the stackforge infrastructure would help federate these efforts while providing a workflow that will improve the overall quality of the module.<br />
<br />
== Requirements ==<br />
<br />
=== High level requirements ===<br />
<br />
==== No complex cross host orchestration ====<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. Provided that its dependencies have already been configured and are known, each component should support being added without having to run Puppet on more than one node.<br />
<br />
For example<br />
<br />
* cinder-volume instances should be configured to join a Ceph cluster simply by running Puppet on that node<br />
* OSD instances should be configured to join a cluster simply by running puppet agent on a node and targeting that role.<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. The Puppet implementation should only be concerned with<br />
<br />
* what components need to be defined (where these are implemented as classes)<br />
* what data is required for those components (where that data is passed in a class parameters)<br />
<br />
==== Supporting versions of everything ====<br />
<br />
It is worth capturing the supported versions of openstack, ceph, and what distros/versions we will be targeting with this work.<br />
<br />
==== Provide sensible defaults ====<br />
<br />
If the high level components ( osd + mon + mds + rgw for instance ) are included without any parameter, the result must be a functional Ceph cluster.<br />
<br />
==== Architected to leverage Ceph to its full potential ====<br />
<br />
It means talking to the MON when configuring or modifying the cluster, using ceph-disk as a low level tool to create the storage required for an OSD, and creating a minimal /etc/ceph/ceph.conf to allow a client to connect to the Ceph cluster. The MON exposes a very rich API (either via the ceph cli or a REST API) and offers great flexibility to the system administrator. It is unlikely that the first versions of the puppet module will capture all of it, but it should be architected to allow the casual contributor to add a new feature or a new variation without the need to work around architectural limitations.<br />
<br />
The ceph-deploy utility is developed as part of the ceph project, to help people get up to speed as quickly as possible for tests and POCs. Alfredo Deza made a compelling argument against using ceph-deploy as a helper for a puppet module: it is designed to hide some of the flexibility Ceph offers for the sake of simplicity, an inconvenience that is incompatible with the goal of a puppet module designed to accommodate all use cases.<br />
<br />
== Puppet user components ==<br />
<br />
This section outlines the roles as well as the configuration components that are visible to the puppet user. They must be understandable to a system administrator willing to deploy Ceph for the first time.<br />
<br />
=== conf ===<br />
<br />
* '''proposed name''': ceph::conf<br />
* '''purpose''': keeps and writes [http://ceph.com/docs/master/rados/configuration/osd-config-ref/ config] and their options for the top level sections of the ceph config. This includes these sections:<br />
** [global]<br />
** [mon]<br />
** [osd]<br />
** [mds]<br />
* '''interface''': every key that is needed to write the base config; the whole list would be too long to write down here<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) use inject args in addition to config to update running daemons dynamically<br />
<br />
=== osd ===<br />
<br />
* '''proposed name''': ceph::osd <br />
* '''purpose''': configures a ceph OSD using the ceph-disk helper, setup /etc/ceph/ceph.conf with the MONs IPs, declare the OSD to the cluster via the MON, optionally set the key to allow the OSD to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the OSD<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** dmcrypt - options needed to encrypt disks<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) Not sure if we need an option for the osd keys. This is very well handled by the ceph tools itself since each OSD gets an own key by default.<br />
<br />
=== mds ===<br />
<br />
* '''proposed name''': ceph::mds <br />
* '''purpose''': configures a ceph MDS, sets up /etc/ceph/ceph.conf with the MON IPs, declares the MDS to the cluster via the MON and optionally sets the key to allow the MDS to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mon ===<br />
<br />
* '''proposed name''': ceph::mon<br />
* '''purpose''': configures a ceph MON, sets up /etc/ceph/ceph.conf with the MON IPs, optionally sets the key to allow the MON to connect to the other MONs, or initializes the MON if it is the first in the cluster<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the MON<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the ID of the user wouldn't be needed since the MONs have all the same ID for cephx ('mon.')<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the directory/disk information could be obtained via ceph::conf<br />
<br />
=== rbd ===<br />
<br />
* '''proposed name''': ceph::rbd<br />
* '''purpose''': mount a rbd image, taking care of dependencies (rbd kernel module, /etc/ceph/rbdmap, fstab)<br />
* '''interface''':<br />
** name - the name of the image<br />
** pool - the pool in which the image is<br />
** mount_point - where the image will be mounted<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== cephfs ===<br />
<br />
* '''proposed name''': ceph::cephfs<br />
* '''purpose''': add a line in /etc/fstab to mount the file system at a given point <br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) is it even necessary ?<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Albeit a simple class, essentially consisting of mounting a resource, I believe this is necessary for the sake of consistency and ensure that ceph::mds is set as a dependency.<br />
<br />
== Implementor components ==<br />
<br />
These components are dependencies of the Puppet user components and can be used by other components. They should form a library of components containing the code common to at least two independent components (think OpenStack and CloudStack).<br />
<br />
=== rest-api ===<br />
run the [http://ceph.com/docs/next/man/8/ceph-rest-api/ rest-api] daemon<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) on the same host as the first mon by default ?<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) I wouldn't try to define a 'first monitor' this would lead only to trouble in the setup (e.g. dependencies). ATM it's possible to startup all MONs in parallel.<br />
<br />
=== package management === <br />
<br />
* '''proposed name''': ceph::package<br />
* '''purpose''': configures the necessary repositories/sources and installs the required common packages<br />
* '''interface''':<br />
** release - target ceph release (cuttlefish, dumpling, etc)<br />
<br />
=== ceph client implementation ===<br />
<br />
* '''proposed name''': ceph::client <br />
* '''purpose''': configures the openstack roles as clients by configuring /etc/ceph/ceph.conf<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** client_id - name of the client to find the correct id for key<br />
** keypath - path to the clients key file<br />
<br />
=== key ===<br />
<br />
keyring management, authentication<br />
<br />
* '''proposed name''': ceph::key<br />
* '''purpose''': handles ceph keys (cephx): generates keys, creates keyring files, and injects keys into or deletes keys from the cluster/keyring via the ceph and ceph-authtool tools.<br />
* '''interface''':<br />
** secret - key secret<br />
** keyring_path - path to the keyring<br />
** cap_mon/cap_osd/cap_mds - cephx capabilities<br />
** user/group/mode: settings for the keyring file if needed<br />
** inject - options to inject a key into the cluster<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) ideally the puppet user would only need to activate authentication (i.e. ceph::conf auth_supported = cephx) and all key management would happen in the backend. Is that realistic?<br />
<br />
=== pool ===<br />
<br />
* '''proposed name''': ceph::pool<br />
* '''purpose''': manages operations on the pools in the cluster, such as creating/deleting pools and setting the PG/PGP numbers<br />
* '''interface''':<br />
** pool_name - name of the pool<br />
** create - whether to create a new pool<br />
** delete - whether to delete an existing pool<br />
** pg_num - number of Placement Groups (PGs) for a pool, if the pool already exists this may increase the number of PGs if the current value is lower<br />
** pgp_num - same as for pg_num<br />
** replica_level - increase or decrease the replica level of a pool<br />
<br />
== OpenStack components ==<br />
<br />
Ceph-specific configuration for cinder/glance (already provided by the puppet-cinder and puppet-glance modules in the volume/rbd and backend/rbd classes)<br />
<br />
== RadosGW components ==<br />
<br />
The RadosGW is developed as an integral part of Ceph. It is, however, not required to deploy a cluster and should be treated like any other client application of the cluster.<br />
<br />
=== rgw ===<br />
<br />
* '''proposed name''': ceph::rgw<br />
* '''purpose''': configures a ceph radosgw, sets up /etc/ceph/ceph.conf with the MON IPs and optionally sets the key to allow the radosgw to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers <br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** rgw_data - the path where the radosgw data should be stored<br />
** fcgi_file - path to the fcgi file e.g. /var/www/s3gw.fcgi<br />
:: [[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the monitor_ips are not needed: IMO ceph::conf should provide these information to all other<br />
<br />
=== rgw_user ===<br />
<br />
* '''proposed name''': ceph::rgw_user<br />
* '''purpose''': create/remove users and Swift users for the RadosGW S3/Swift API<br />
* '''interface''':<br />
** user - username<br />
** key - secret key (could get generated if needed)<br />
** swift_user - username for the Swift API user<br />
** swift_key - secret key for the Swift API user<br />
<br />
<br />
== Related tools and implementations ==<br />
<br />
* deploy ceph : ceph-deploy<br />
<br />
for test / POC purposes<br />
https://github.com/ceph/ceph-deploy<br />
<br />
maintainer: Alfredo Deza<br />
<br />
* deploy ceph with puppet : puppet-cephdeploy<br />
<br />
relies on ceph-deploy<br />
https://github.com/dontalton/puppet-cephdeploy/<br />
<br />
maintainer: Don Talton<br />
<br />
* deploy ceph with puppet : puppet-ceph<br />
<br />
developed in 2012 but still useful, upstream<br />
https://github.com/enovance/puppet-ceph<br />
<br />
maintainer: community<br />
<br />
<br />
fork of puppet-ceph, updated recently<br />
https://github.com/TelekomCloud/puppet-ceph/tree/rc/eisbrecher<br />
<br />
maintainer: Deutsche Telekom AG (DTAG)<br />
<br />
* ceph + openstack : ceph docs<br />
<br />
manual integration<br />
http://ceph.com/docs/next/rbd/rbd-openstack/<br />
maintainer: John Wilkins + Josh Durgin<br />
<br />
* ceph + openstack with puppet : stackforge<br />
<br />
https://github.com/stackforge/puppet-glance/blob/stable/grizzly/manifests/backend/rbd.pp<br />
https://github.com/stackforge/puppet-cinder/blob/stable/grizzly/manifests/volume/rbd.pp<br />
<br />
maintainer: community<br />
<br />
* ceph + openstack with puppet : COI<br />
<br />
targeting Cisco use case<br />
https://github.com/CiscoSystems/puppet-coe/tree/grizzly/manifests/ceph<br />
http://docwiki.cisco.com/wiki/OpenStack:Ceph-COI-Installation<br />
<br />
maintainer : Don Talton + Robert Starmer<br />
<br />
* ceph + openstack with puppet : mirantis<br />
<br />
in the context of Fuel<br />
https://github.com/Mirantis/fuel/tree/master/deployment/puppet/ceph<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/cinder/manifests/volume/ceph.pp<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/glance/manifests/backend/ceph.pp<br />
<br />
maintainer : Andrew Woodward<br />
<br />
* openstack with puppet : openstack-installer<br />
<br />
data driven approach to deploy OpenStack<br />
https://github.com/CiscoSystems/openstack-installer/<br />
<br />
maintainer: Robert Starmer + Dan Bode</div>David Moreau Simardhttps://wiki.openstack.org/w/index.php?title=Puppet/ceph-blueprint&diff=32947Puppet/ceph-blueprint2013-10-17T17:24:28Z<p>David Moreau Simard: /* package management */</p>
<hr />
<div>== Overview ==<br />
<br />
This document is intended to capture requirements for a single puppet-ceph module that is going to be written and contributed as a part of the stackforge modules. <br />
<br />
Very much like vswitch, Ceph is not exclusively used in the context of OpenStack. There is, however, a significant community relying on puppet to deploy Ceph in the context of OpenStack, as shown in the inventory of the existing efforts in this blueprint. Having a puppet ceph module under the umbrella of the stackforge infrastructure would help federate these efforts while providing a workflow that will improve the overall quality of the module.<br />
<br />
== Requirements ==<br />
<br />
=== High level requirements ===<br />
<br />
==== No complex cross host orchestration ====<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. Provided that its dependencies have already been configured and are known, each component should support being added without having to run Puppet on more than one node.<br />
<br />
For example<br />
<br />
* cinder-volume instances should be configured to join a Ceph cluster simply by running Puppet on that node<br />
* OSD instances should be configured to join a cluster simply by running puppet agent on a node and targeting that role.<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. The Puppet implementation should only be concerned with<br />
<br />
* what components need to be defined (where these are implemented as classes)<br />
* what data is required for those components (where that data is passed in a class parameters)<br />
<br />
==== Supporting versions of everything ====<br />
<br />
It is worth capturing the supported versions of openstack, ceph, and what distros/versions we will be targeting with this work.<br />
<br />
==== Provide sensible defaults ====<br />
<br />
If the high level components ( osd + mon + mds + rgw for instance ) are included without any parameter, the result must be a functional Ceph cluster.<br />
<br />
==== Architected to leverage Ceph to its full potential ====<br />
<br />
It means talking to the MON when configuring or modifying the cluster, using ceph-disk as a low level tool to create the storage required for an OSD, and creating a minimal /etc/ceph/ceph.conf to allow a client to connect to the Ceph cluster. The MON exposes a very rich API (either via the ceph cli or a REST API) and offers great flexibility to the system administrator. It is unlikely that the first versions of the puppet module will capture all of it, but it should be architected to allow the casual contributor to add a new feature or a new variation without the need to work around architectural limitations.<br />
<br />
The ceph-deploy utility is developed as part of the ceph project, to help people get up to speed as quickly as possible for tests and POCs. Alfredo Deza made a compelling argument against using ceph-deploy as a helper for a puppet module: it is designed to hide some of the flexibility Ceph offers for the sake of simplicity, an inconvenience that is incompatible with the goal of a puppet module designed to accommodate all use cases.<br />
<br />
== Puppet user components ==<br />
<br />
This section outlines the roles as well as the configuration components that are visible to the puppet user. They must be understandable to a system administrator willing to deploy Ceph for the first time.<br />
<br />
=== conf ===<br />
<br />
* '''proposed name''': ceph::conf<br />
* '''purpose''': keeps and writes [http://ceph.com/docs/master/rados/configuration/osd-config-ref/ config] and their options for the top level sections of the ceph config. This includes these sections:<br />
** [global]<br />
** [mon]<br />
** [osd]<br />
** [mds]<br />
* '''interface''': every key that is needed to write the base config; the whole list would be too long to write down here<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) use inject args in addition to config to update running daemons dynamically<br />
<br />
=== osd ===<br />
<br />
* '''proposed name''': ceph::osd <br />
* '''purpose''': configures a ceph OSD using the ceph-disk helper, setup /etc/ceph/ceph.conf with the MONs IPs, declare the OSD to the cluster via the MON, optionally set the key to allow the OSD to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the OSD<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** dmcrypt - options needed to encrypt disks<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) Not sure if we need an option for the osd keys. This is very well handled by the ceph tools itself since each OSD gets an own key by default.<br />
<br />
=== mds ===<br />
<br />
* '''proposed name''': ceph::mds <br />
* '''purpose''': configures a ceph MDS, sets up /etc/ceph/ceph.conf with the MON IPs, declares the MDS to the cluster via the MON and optionally sets the key to allow the MDS to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mon ===<br />
<br />
* '''proposed name''': ceph::mon<br />
* '''purpose''': configures a ceph MON, sets up /etc/ceph/ceph.conf with the MON IPs, optionally sets the key to allow the MON to connect to the other MONs, or initializes the MON if it is the first in the cluster<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the MON<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the ID of the user wouldn't be needed since the MONs have all the same ID for cephx ('mon.')<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the directory/disk information could be obtained via ceph::conf<br />
<br />
=== krbd ===<br />
<br />
* '''proposed name''': ceph::krbd<br />
* '''purpose''': configures a rbd (kernel module) to be used on the host, update /etc/ceph/rbdmap ...<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) not sure about the parameters<br />
<br />
=== cephfs ===<br />
<br />
* '''proposed name''': ceph::cephfs<br />
* '''purpose''': add a line in /etc/fstab to mount the file system at a given point <br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) is it even necessary ?<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Albeit a simple class, essentially consisting of mounting a resource, I believe this is necessary for the sake of consistency and ensure that ceph::mds is set as a dependency.<br />
<br />
== Implementor components ==<br />
<br />
These components are dependencies of the Puppet user components and can be used by other components. They should form a library of components containing the code common to at least two independent components (think OpenStack and CloudStack).<br />
<br />
=== rest-api ===<br />
run the [http://ceph.com/docs/next/man/8/ceph-rest-api/ rest-api] daemon<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) on the same host as the first mon by default ?<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) I wouldn't try to define a 'first monitor' this would lead only to trouble in the setup (e.g. dependencies). ATM it's possible to startup all MONs in parallel.<br />
<br />
=== package management === <br />
<br />
* '''proposed name''': ceph::package<br />
* '''purpose''': configures the necessary repositories/sources and installs the required common packages<br />
* '''interface''':<br />
** release - target ceph release (cuttlefish, dumpling, etc)<br />
<br />
=== ceph client implementation ===<br />
<br />
* '''proposed name''': ceph::client <br />
* '''purpose''': configures the openstack roles as clients by configuring /etc/ceph/ceph.conf<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** client_id - name of the client to find the correct id for key<br />
** keypath - path to the clients key file<br />
<br />
=== key ===<br />
<br />
keyring management, authentication<br />
<br />
* '''proposed name''': ceph::key<br />
* '''purpose''': handles ceph keys (cephx): generates keys, creates keyring files, and injects keys into or deletes keys from the cluster/keyring via the ceph and ceph-authtool tools.<br />
* '''interface''':<br />
** secret - key secret<br />
** keyring_path - path to the keyring<br />
** cap_mon/cap_osd/cap_mds - cephx capabilities<br />
** user/group/mode: settings for the keyring file if needed<br />
** inject - options to inject a key into the cluster<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) ideally the puppet user would only need to activate authentication (i.e. ceph::conf auth_supported = cephx) and all key management would happen in the backend. Is that realistic?<br />
<br />
=== pool ===<br />
<br />
* '''proposed name''': ceph::pool<br />
* '''purpose''': manages operations on the pools in the cluster, such as creating/deleting pools and setting the PG/PGP numbers<br />
* '''interface''':<br />
** pool_name - name of the pool<br />
** create - whether to create a new pool<br />
** delete - whether to delete an existing pool<br />
** pg_num - number of Placement Groups (PGs) for a pool, if the pool already exists this may increase the number of PGs if the current value is lower<br />
** pgp_num - same as for pg_num<br />
** replica_level - increase or decrease the replica level of a pool<br />
<br />
== OpenStack components ==<br />
<br />
Ceph-specific configuration for cinder/glance (already provided by the puppet-cinder and puppet-glance modules in the volume/rbd and backend/rbd classes)<br />
<br />
== RadosGW components ==<br />
<br />
The RadosGW is developed as an integral part of Ceph. It is, however, not required to deploy a cluster and should be treated like any other client application of the cluster.<br />
<br />
=== rgw ===<br />
<br />
* '''proposed name''': ceph::rgw<br />
* '''purpose''': configures a ceph radosgw, sets up /etc/ceph/ceph.conf with the MON IPs and optionally sets the key to allow the radosgw to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers <br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** rgw_data - the path where the radosgw data should be stored<br />
** fcgi_file - path to the fcgi file e.g. /var/www/s3gw.fcgi<br />
:: [[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the monitor_ips are not needed: IMO ceph::conf should provide these information to all other<br />
<br />
=== rgw_user ===<br />
<br />
* '''proposed name''': ceph::rgw_user<br />
* '''purpose''': create/remove users and Swift users for the RadosGW S3/Swift API<br />
* '''interface''':<br />
** user - username<br />
** key - secret key (could get generated if needed)<br />
** swift_user - username for the Swift API user<br />
** swift_key - secret key for the Swift API user<br />
<br />
<br />
== Related tools and implementations ==<br />
<br />
* deploy ceph : ceph-deploy<br />
<br />
for test / POC purposes<br />
https://github.com/ceph/ceph-deploy<br />
<br />
maintainer: Alfredo Deza<br />
<br />
* deploy ceph with puppet : puppet-cephdeploy<br />
<br />
relies on ceph-deploy<br />
https://github.com/dontalton/puppet-cephdeploy/<br />
<br />
maintainer: Don Talton<br />
<br />
* deploy ceph with puppet : puppet-ceph<br />
<br />
developed in 2012 but still useful, upstream<br />
https://github.com/enovance/puppet-ceph<br />
<br />
maintainer: community<br />
<br />
<br />
fork of puppet-ceph, updated recently<br />
https://github.com/TelekomCloud/puppet-ceph/tree/rc/eisbrecher<br />
<br />
maintainer: Deutsche Telekom AG (DTAG)<br />
<br />
* ceph + openstack : ceph docs<br />
<br />
manual integration<br />
http://ceph.com/docs/next/rbd/rbd-openstack/<br />
maintainer: John Wilkins + Josh Durgin<br />
<br />
* ceph + openstack with puppet : stackforge<br />
<br />
https://github.com/stackforge/puppet-glance/blob/stable/grizzly/manifests/backend/rbd.pp<br />
https://github.com/stackforge/puppet-cinder/blob/stable/grizzly/manifests/volume/rbd.pp<br />
<br />
maintainer: community<br />
<br />
* ceph + openstack with puppet : COI<br />
<br />
targeting Cisco use case<br />
https://github.com/CiscoSystems/puppet-coe/tree/grizzly/manifests/ceph<br />
http://docwiki.cisco.com/wiki/OpenStack:Ceph-COI-Installation<br />
<br />
maintainer : Don Talton + Robert Starmer<br />
<br />
* ceph + openstack with puppet : mirantis<br />
<br />
in the context of Fuel<br />
https://github.com/Mirantis/fuel/tree/master/deployment/puppet/ceph<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/cinder/manifests/volume/ceph.pp<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/glance/manifests/backend/ceph.pp<br />
<br />
maintainer : Andrew Woodward<br />
<br />
* openstack with puppet : openstack-installer<br />
<br />
data driven approach to deploy OpenStack<br />
https://github.com/CiscoSystems/openstack-installer/<br />
<br />
maintainer: Robert Starmer + Dan Bode</div>David Moreau Simardhttps://wiki.openstack.org/w/index.php?title=Puppet/ceph-blueprint&diff=32945Puppet/ceph-blueprint2013-10-17T16:50:31Z<p>David Moreau Simard: /* cephfs */</p>
<hr />
<div>== Overview ==<br />
<br />
This document is intended to capture requirements for a single puppet-ceph module that is going to be written and contributed as a part of the stackforge modules. <br />
<br />
Very much like vswitch, Ceph is not exclusively used in the context of OpenStack. There is, however, a significant community relying on puppet to deploy Ceph in the context of OpenStack, as shown in the inventory of the existing efforts in this blueprint. Having a puppet ceph module under the umbrella of the stackforge infrastructure would help federate these efforts while providing a workflow that will improve the overall quality of the module.<br />
<br />
== Requirements ==<br />
<br />
=== High level requirements ===<br />
<br />
==== No complex cross host orchestration ====<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. Provided that its dependencies have already been configured and are known, each component should support being added without having to run Puppet on more than one node.<br />
<br />
For example<br />
<br />
* cinder-volume instances should be configured to join a Ceph cluster simply by running Puppet on that node<br />
* OSD instances should be configured to join a cluster simply by running puppet agent on a node and targeting that role.<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. The Puppet implementation should only be concerned with<br />
<br />
* what components need to be defined (where these are implemented as classes)<br />
* what data is required for those components (where that data is passed in a class parameters)<br />
<br />
==== Supporting versions of everything ====<br />
<br />
It is worth capturing the supported versions of openstack, ceph, and what distros/versions we will be targeting with this work.<br />
<br />
==== Provide sensible defaults ====<br />
<br />
If the high level components ( osd + mon + mds + rgw for instance ) are included without any parameter, the result must be a functional Ceph cluster.<br />
<br />
==== Architected to leverage Ceph to its full potential ====<br />
<br />
It means talking to the MON when configuring or modifying the cluster, using ceph-disk as a low level tool to create the storage required for an OSD, and creating a minimal /etc/ceph/ceph.conf to allow a client to connect to the Ceph cluster. The MON exposes a very rich API (either via the ceph cli or a REST API) and offers great flexibility to the system administrator. It is unlikely that the first versions of the puppet module will capture all of it, but it should be architected to allow the casual contributor to add a new feature or a new variation without the need to work around architectural limitations.<br />
<br />
The ceph-deploy utility is developed as part of the ceph project, to help people get up to speed as quickly as possible for tests and POCs. Alfredo Deza made a compelling argument against using ceph-deploy as a helper for a puppet module: it is designed to hide some of the flexibility Ceph offers for the sake of simplicity, an inconvenience that is incompatible with the goal of a puppet module designed to accommodate all use cases.<br />
<br />
== Puppet user components ==<br />
<br />
This section outlines the roles as well as the configuration components that are visible to the puppet user. They must be understandable to a system administrator willing to deploy Ceph for the first time.<br />
<br />
=== conf ===<br />
<br />
* '''proposed name''': ceph::conf<br />
* '''purpose''': keeps and writes [http://ceph.com/docs/master/rados/configuration/osd-config-ref/ config] and their options for the top level sections of the ceph config. This includes these sections:<br />
** [global]<br />
** [mon]<br />
** [osd]<br />
** [mds]<br />
* '''interface''': every key that is needed to write the base config; the whole list would be too long to write down here<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) use inject args in addition to config to update running daemons dynamically<br />
<br />
=== osd ===<br />
<br />
* '''proposed name''': ceph::osd <br />
* '''purpose''': configures a ceph OSD using the ceph-disk helper, setup /etc/ceph/ceph.conf with the MONs IPs, declare the OSD to the cluster via the MON, optionally set the key to allow the OSD to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the OSD<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** dmcrypt - options needed to encrypt disks<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) Not sure if we need an option for the osd keys. This is very well handled by the ceph tools itself since each OSD gets an own key by default.<br />
<br />
=== mds ===<br />
<br />
* '''proposed name''': ceph::mds <br />
* '''purpose''': configures a ceph MDS, sets up /etc/ceph/ceph.conf with the MON IPs, declares the MDS to the cluster via the MON and optionally sets the key to allow the MDS to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mon ===<br />
<br />
* '''proposed name''': ceph::mon<br />
* '''purpose''': configures a ceph MON, sets up /etc/ceph/ceph.conf with the MON IPs, optionally sets the key to allow the MON to connect to the other MONs, or initializes the MON if it is the first in the cluster<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the MON<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the ID of the user wouldn't be needed since the MONs have all the same ID for cephx ('mon.')<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the directory/disk information could be obtained via ceph::conf<br />
<br />
=== krbd ===<br />
<br />
* '''proposed name''': ceph::krbd<br />
* '''purpose''': configures a rbd (kernel module) to be used on the host, update /etc/ceph/rbdmap ...<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) not sure about the parameters<br />
<br />
=== cephfs ===<br />
<br />
* '''proposed name''': ceph::cephfs<br />
* '''purpose''': add a line in /etc/fstab to mount the file system at a given point <br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) is it even necessary ?<br />
::[[User:dmsimard|David Moreau Simard]] ([[User talk:David Moreau Simard|talk]]) Albeit a simple class, essentially consisting of mounting a resource, I believe this is necessary for the sake of consistency and ensure that ceph::mds is set as a dependency.<br />
<br />
== Implementor components ==<br />
<br />
These components are dependencies of the Puppet user components and can be used by other components. They should form a library of components containing the code common to at least two independent components (think OpenStack and CloudStack).<br />
<br />
=== rest-api ===<br />
run the [http://ceph.com/docs/next/man/8/ceph-rest-api/ rest-api] daemon<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) on the same host as the first mon by default?<br />
::[[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) I wouldn't try to define a 'first monitor'; this would only lead to trouble in the setup (e.g. dependencies). At the moment it's possible to start up all MONs in parallel.<br />
<br />
=== package management === <br />
components for package management (repo/source setup, basic package installation ...)<br />
<br />
=== ceph client implementation ===<br />
<br />
* '''proposed name''': ceph::client <br />
* '''purpose''': configures the OpenStack roles as Ceph clients by managing /etc/ceph/ceph.conf (a usage sketch follows below)<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** client_id - name of the client, used to find the correct id for the key<br />
** keypath - path to the client's key file<br />
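<br />
A sketch of the proposed ceph::client class on an OpenStack node; parameter names follow the interface above and all values are placeholders:<br />
<pre><br />
# Hypothetical declaration of the proposed ceph::client class on a cinder-volume node.<br />
class { 'ceph::client':<br />
  monitor_ips => ['192.168.0.10', '192.168.0.11', '192.168.0.12'],<br />
  client_id   => 'volumes',<br />
  keypath     => '/etc/ceph/ceph.client.volumes.keyring',<br />
}<br />
</pre><br />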
<br />
=== key ===<br />
<br />
keyring management, authentication<br />
<br />
* '''proposed name''': ceph::key<br />
* '''purpose''': handles ceph keys (cephx): generates keys, creates keyring files, and injects keys into or deletes keys from the cluster/keyring via the ceph and ceph-authtool tools (a usage sketch follows below)<br />
* '''interface''':<br />
** secret - key secret<br />
** keyring_path - path to the keyring<br />
** cap_mon/cap_osd/cap_mds - cephx capabilities<br />
** user/group/mode - settings for the keyring file if needed<br />
** inject - options to inject a key into the cluster<br />
<br />
::[[User:Dachary|Loic Dachary]] ([[User talk:Loic Dachary|talk]]) ideally the puppet user would only need to activate authentication (i.e. ceph::conf auth_supported = cephx) and all key management would happen in the backend. Is that realistic?<br />
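<br />
A sketch of how ceph::key might be used; a defined type (rather than a class) is assumed here because several keys usually coexist, and the key name, capability strings and ownership values are illustrative only:<br />
<pre><br />
# Hypothetical ceph::key defined type creating and injecting a client key.<br />
ceph::key { 'client.volumes':<br />
  secret       => 'REPLACE_WITH_CEPHX_KEY',<br />
  keyring_path => '/etc/ceph/ceph.client.volumes.keyring',<br />
  cap_mon      => 'allow r',<br />
  cap_osd      => 'allow rwx pool=volumes',<br />
  user         => 'cinder',<br />
  group        => 'cinder',<br />
  mode         => '0640',<br />
  inject       => true,<br />
}<br />
</pre><br />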
<br />
=== pool ===<br />
<br />
* '''proposed name''': ceph::pool<br />
* '''purpose''': manages operations on the pools in the cluster, such as creating/deleting pools and setting the PG/PGP numbers (a usage sketch follows below)<br />
* '''interface''':<br />
** pool_name - name of the pool<br />
** create - whether to create a new pool<br />
** delete - whether to delete an existing pool<br />
** pg_num - number of Placement Groups (PGs) for a pool, if the pool already exists this may increase the number of PGs if the current value is lower<br />
** pgp_num - same as for pg_num<br />
** replica_level - increase or decrease the replica level of a pool<br />
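<br />
A sketch assuming ceph::pool is a defined type keyed on the pool name; parameter names follow the interface above and the values are placeholders:<br />
<pre><br />
# Hypothetical ceph::pool defined type creating a pool with 128 PGs and 3 replicas.<br />
ceph::pool { 'volumes':<br />
  create        => true,<br />
  pg_num        => 128,<br />
  pgp_num       => 128,<br />
  replica_level => 3,<br />
}<br />
</pre><br />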
<br />
== OpenStack components ==<br />
<br />
ceph-specific configuration for cinder/glance (already provided by the puppet-cinder and puppet-glance modules in the volume/rbd and backend/rbd classes)<br />
<br />
== RadosGW components ==<br />
<br />
The RadosGW is developed as an integral part of Ceph. It is, however, not required to deploy a cluster and should be treated like any other client application of the cluster.<br />
<br />
=== rgw ===<br />
<br />
* '''proposed name''': ceph::rgw<br />
* '''purpose''': configures a ceph radosgw, sets up /etc/ceph/ceph.conf with the MONs' IPs, and optionally sets the key to allow the radosgw to connect to the MONs (a usage sketch follows below)<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers <br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
** rgw_data - the path where the radosgw data should be stored<br />
** fcgi_file - path to the fcgi file e.g. /var/www/s3gw.fcgi<br />
:: [[User:Danny Al-Gaaf|Danny Al-Gaaf]] ([[User talk:Danny Al-Gaaf|talk]]) the monitor_ips are not needed: IMO ceph::conf should provide this information to all other components<br />
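<br />
A sketch of the proposed ceph::rgw class; parameter names follow the interface above, while the id, paths and key are placeholders:<br />
<pre><br />
# Hypothetical declaration of the proposed ceph::rgw class.<br />
class { 'ceph::rgw':<br />
  monitor_ips => ['192.168.0.10', '192.168.0.11', '192.168.0.12'],<br />
  id          => 'radosgw.gateway',<br />
  key         => 'REPLACE_WITH_CEPHX_KEY',<br />
  rgw_data    => '/var/lib/ceph/radosgw',<br />
  fcgi_file   => '/var/www/s3gw.fcgi',<br />
}<br />
</pre><br />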
<br />
=== rgw_user ===<br />
<br />
* '''proposed name''': ceph::rgw_user<br />
* '''purpose''': creates/removes users and Swift users for the RadosGW S3/Swift API (a usage sketch follows below)<br />
* '''interface''':<br />
** user - username<br />
** key - secret key (can be generated if needed)<br />
** swift_user - username for the Swift API user<br />
** swift_key - secret key for the Swift API user<br />
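<br />
A sketch assuming ceph::rgw_user is a defined type keyed on the username; the Swift parameters would only be set when a Swift API user is wanted, and all values are placeholders:<br />
<pre><br />
# Hypothetical ceph::rgw_user defined type creating an S3 user plus a Swift user.<br />
ceph::rgw_user { 'johndoe':<br />
  key        => 'REPLACE_WITH_S3_SECRET',<br />
  swift_user => 'johndoe:swift',<br />
  swift_key  => 'REPLACE_WITH_SWIFT_SECRET',<br />
}<br />
</pre><br />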
<br />
<br />
== Related tools and implementations ==<br />
<br />
* deploy ceph : ceph-deploy<br />
<br />
for test / POC purposes<br />
https://github.com/ceph/ceph-deploy<br />
<br />
maintainer: Alfredo Deza<br />
<br />
* deploy ceph with puppet : puppet-cephdeploy<br />
<br />
relies on ceph-deploy<br />
https://github.com/dontalton/puppet-cephdeploy/<br />
<br />
maintainer: Don Talton<br />
<br />
* deploy ceph with puppet : puppet-ceph<br />
<br />
developed in 2012 but still useful, upstream<br />
https://github.com/enovance/puppet-ceph<br />
<br />
maintainer: community<br />
<br />
<br />
fork of puppet-ceph, updated recently<br />
https://github.com/TelekomCloud/puppet-ceph/tree/rc/eisbrecher<br />
<br />
maintainer: Deutsche Telekom AG (DTAG)<br />
<br />
* ceph + openstack : ceph docs<br />
<br />
manual integration<br />
http://ceph.com/docs/next/rbd/rbd-openstack/<br />
maintainer: John Wilkins + Josh Durgin<br />
<br />
* ceph + openstack with puppet : stackforge<br />
<br />
https://github.com/stackforge/puppet-glance/blob/stable/grizzly/manifests/backend/rbd.pp<br />
https://github.com/stackforge/puppet-cinder/blob/stable/grizzly/manifests/volume/rbd.pp<br />
<br />
maintainer: community<br />
<br />
* ceph + openstack with puppet : COI<br />
<br />
targeting Cisco use case<br />
https://github.com/CiscoSystems/puppet-coe/tree/grizzly/manifests/ceph<br />
http://docwiki.cisco.com/wiki/OpenStack:Ceph-COI-Installation<br />
<br />
maintainer: Don Talton + Robert Starmer<br />
<br />
* ceph + openstack with puppet : mirantis<br />
<br />
in the context of Fuel<br />
https://github.com/Mirantis/fuel/tree/master/deployment/puppet/ceph<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/cinder/manifests/volume/ceph.pp<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/glance/manifests/backend/ceph.pp<br />
<br />
maintainer: Andrew Woodward<br />
<br />
* openstack with puppet : openstack-installer<br />
<br />
data-driven approach to deploying OpenStack<br />
https://github.com/CiscoSystems/openstack-installer/<br />
<br />
maintainer: Robert Starmer + Dan Bode</div>David Moreau Simardhttps://wiki.openstack.org/w/index.php?title=Puppet/ceph-blueprint&diff=32658Puppet/ceph-blueprint2013-10-15T21:02:44Z<p>David Moreau Simard: /* Related tools and implementations */</p>
<hr />
<div>== Overview ==<br />
<br />
This document is intended to capture requirements for a single puppet-ceph module that is going to be written and contributed as a part of the stackforge modules.<br />
<br />
== Requirements ==<br />
<br />
=== High level requirements ===<br />
<br />
==== No complex cross host orchestration ====<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. Provided that its dependencies have already been configured and are known, each component should support being added without having to run Puppet on more than one node.<br />
<br />
For example<br />
<br />
* cinder-volume instances should be configured to join a Ceph cluster simply by running Puppet on that node<br />
* OSD instances should be configured to join a cluster simply by running puppet agent on a node and targeting that role.<br />
<br />
The Puppet implementation should only be concerned with (see the sketch after this list)<br />
<br />
* what components need to be defined (where these are implemented as classes)<br />
* what data is required for those components (where that data is passed in as class parameters)<br />
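<br />
A sketch, under the assumptions above, of what this pattern looks like for the cinder-volume example: all the data the component needs is passed in as class parameters, so a single Puppet run on that node is enough. The class and parameter names are only the proposed ones, not an existing API:<br />
<pre><br />
# Hypothetical node definition: joins the Ceph cluster with one puppet run,<br />
# no cross host orchestration, everything passed in as class parameters.<br />
node 'cinder-volume01.example.com' {<br />
  class { 'ceph::client':<br />
    monitor_ips => ['192.168.0.10', '192.168.0.11', '192.168.0.12'],<br />
  }<br />
}<br />
</pre><br />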
<br />
=== Architecture to leverage Ceph to its full potential ===<br />
<br />
It means talking to the MON when configuring or modifying the cluster, using ceph-disk as a low-level tool to create the storage required for an OSD, and creating a minimal /etc/ceph/ceph.conf to allow a client to connect to the Ceph cluster. The MON exposes a very rich API (either via the ceph cli or a REST API) and offers great flexibility to the system administrator. It is unlikely that the first versions of the puppet module will capture all of it, but it should be architected to allow the casual contributor to add a new feature or a new variation without the need to work around architectural limitations.<br />
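<br />
As an illustration of the "minimal /etc/ceph/ceph.conf" idea (not part of the proposed module interface), a client really only needs the monitor addresses to reach the cluster; something like the following would be enough, with the addresses being placeholders:<br />
<pre><br />
# Sketch of a minimal client-side ceph.conf managed by Puppet.<br />
file { '/etc/ceph/ceph.conf':<br />
  ensure  => file,<br />
  content => "[global]\nmon_host = 192.168.0.10,192.168.0.11,192.168.0.12\nauth_supported = cephx\n",<br />
}<br />
</pre><br />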
<br />
The ceph-deploy utility is developed as part of the ceph project to help people get up to speed as quickly as possible for tests and POCs. Alfredo Deza made a compelling argument against using ceph-deploy as a helper for a puppet module: it is designed to hide some of the flexibility ceph offers for the sake of simplicity, an inconvenience that is incompatible with the goal of a puppet module designed to accommodate all use cases.<br />
<br />
== Components ==<br />
<br />
This section outlines the roles as well as the configuration components that are needed:<br />
<br />
=== backend components ===<br />
ceph-specific configuration for cinder/glance (already provided by the puppet-cinder and puppet-glance modules in the volume/rbd and backend/rbd classes)<br />
<br />
=== ceph client implementation ===<br />
<br />
* '''proposed name''': ceph::client <br />
* '''purpose''': configures the OpenStack roles as Ceph clients by managing /etc/ceph/ceph.conf<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
<br />
=== osd ===<br />
<br />
* '''proposed name''': ceph::osd <br />
* '''purpose''': configures a ceph OSD using the ceph-disk helper, sets up /etc/ceph/ceph.conf with the MONs' IPs, declares the OSD to the cluster via the MON, and optionally sets the key to allow the OSD to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the OSD<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== rgw ===<br />
<br />
* '''proposed name''': ceph::rgw<br />
* '''purpose''': configures a ceph radosgw, sets up /etc/ceph/ceph.conf with the MONs' IPs, and optionally sets the key to allow the radosgw to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mds ===<br />
<br />
* '''proposed name''': ceph::mds <br />
* '''purpose''': configures a ceph MDS, sets up /etc/ceph/ceph.conf with the MONs' IPs, declares the MDS to the cluster via the MON, and optionally sets the key to allow the MDS to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mon ===<br />
<br />
* '''proposed name''': ceph::mon<br />
* '''purpose''': configures a ceph MON, sets up /etc/ceph/ceph.conf with the MONs' IPs, optionally sets the key to allow the MON to connect to the other MONs, or initializes the MON if it is the first one in the cluster<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the MON<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
== Related tools and implementations ==<br />
<br />
* deploy ceph : ceph-deploy<br />
<br />
for test / POC purposes<br />
https://github.com/ceph/ceph-deploy<br />
<br />
maintainer: Alfredo Deza<br />
<br />
* deploy ceph with puppet : puppet-cephdeploy<br />
<br />
relies on ceph-deploy<br />
https://github.com/dontalton/puppet-cephdeploy/<br />
<br />
maintainer: Don Talton<br />
<br />
* deploy ceph with puppet : puppet-ceph<br />
<br />
developed in 2012 but still useful, upstream<br />
https://github.com/enovance/puppet-ceph<br />
<br />
maintainer: community<br />
<br />
<br />
fork of puppet-ceph, updated recently<br />
https://github.com/TelekomCloud/puppet-ceph/tree/rc/eisbrecher<br />
<br />
maintainer: Deutsche Telekom AG (DTAG)<br />
<br />
* ceph + openstack : ceph docs<br />
<br />
manual integration<br />
http://ceph.com/docs/next/rbd/rbd-openstack/<br />
maintainer: John Wilkins + Josh Durgin<br />
<br />
* ceph + openstack with puppet : stackforge<br />
<br />
https://github.com/stackforge/puppet-glance/blob/stable/grizzly/manifests/backend/rbd.pp<br />
https://github.com/stackforge/puppet-cinder/blob/stable/grizzly/manifests/volume/rbd.pp<br />
<br />
maintainer: community<br />
<br />
* ceph + openstack with puppet : COI<br />
<br />
targeting Cisco use case<br />
https://github.com/CiscoSystems/puppet-coe/tree/grizzly/manifests/ceph<br />
http://docwiki.cisco.com/wiki/OpenStack:Ceph-COI-Installation<br />
<br />
maintainer: Don Talton + Robert Starmer<br />
<br />
* ceph + openstack with puppet : mirantis<br />
<br />
in the context of Fuel<br />
https://github.com/Mirantis/fuel/tree/master/deployment/puppet/ceph<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/cinder/manifests/volume/ceph.pp<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/glance/manifests/backend/ceph.pp<br />
<br />
maintainer: Andrew Woodward<br />
<br />
* openstack with puppet : openstack-installer<br />
<br />
data-driven approach to deploying OpenStack<br />
https://github.com/CiscoSystems/openstack-installer/<br />
<br />
maintainer: Robert Starmer + Dan Bode</div>David Moreau Simardhttps://wiki.openstack.org/w/index.php?title=Puppet/ceph-blueprint&diff=32657Puppet/ceph-blueprint2013-10-15T20:56:45Z<p>David Moreau Simard: /* Related tools and implementations */</p>
<hr />
<div>== Overview ==<br />
<br />
This document is intended to capture requirements for a single puppet-ceph module that is going to be written and contributed as a part of the stackforge modules.<br />
<br />
== Requirements ==<br />
<br />
=== High level requirements ===<br />
<br />
==== No complex cross host orchestration ====<br />
<br />
All cross host orchestration should be assumed to be managed outside of Puppet. Provided that its dependencies have already been configured and are known, each component should support being added without having to run Puppet on more than one node.<br />
<br />
For example<br />
<br />
* cinder-volume instances should be configured to join a Ceph cluster simply by running Puppet on that node<br />
* OSD instances should be configured to join a cluster simply by running puppet agent on a node and targeting that role.<br />
<br />
The Puppet implementation should only be concerned with<br />
<br />
* what components need to be defined (where these are implemented as classes)<br />
* what data is required for those components (where that data is passed in as class parameters)<br />
<br />
=== Architecture to leverage Ceph to its full potential ===<br />
<br />
It means talking to the MON when configuring or modifying the cluster, using ceph-disk as a low-level tool to create the storage required for an OSD, and creating a minimal /etc/ceph/ceph.conf to allow a client to connect to the Ceph cluster. The MON exposes a very rich API (either via the ceph cli or a REST API) and offers great flexibility to the system administrator. It is unlikely that the first versions of the puppet module will capture all of it, but it should be architected to allow the casual contributor to add a new feature or a new variation without the need to work around architectural limitations.<br />
<br />
The ceph-deploy utility is developed as part of the ceph project to help people get up to speed as quickly as possible for tests and POCs. Alfredo Deza made a compelling argument against using ceph-deploy as a helper for a puppet module: it is designed to hide some of the flexibility ceph offers for the sake of simplicity, an inconvenience that is incompatible with the goal of a puppet module designed to accommodate all use cases.<br />
<br />
== Components ==<br />
<br />
This section outlines the roles as well as the configuration components that are needed:<br />
<br />
=== backend components ===<br />
ceph-specific configuration for cinder/glance (already provided by the puppet-cinder and puppet-glance modules in the volume/rbd and backend/rbd classes)<br />
<br />
=== ceph client implementation ===<br />
<br />
* '''proposed name''': ceph::client <br />
* '''purpose''': configures the OpenStack roles as Ceph clients by managing /etc/ceph/ceph.conf<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
<br />
=== osd ===<br />
<br />
* '''proposed name''': ceph::osd <br />
* '''purpose''': configures a ceph OSD using the ceph-disk helper, sets up /etc/ceph/ceph.conf with the MONs' IPs, declares the OSD to the cluster via the MON, and optionally sets the key to allow the OSD to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the OSD<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== rgw ===<br />
<br />
* '''proposed name''': ceph::rgw<br />
* '''purpose''': configures a ceph radosgw, sets up /etc/ceph/ceph.conf with the MONs' IPs, and optionally sets the key to allow the radosgw to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mds ===<br />
<br />
* '''proposed name''': ceph::mds <br />
* '''purpose''': configures a ceph MDS, sets up /etc/ceph/ceph.conf with the MONs' IPs, declares the MDS to the cluster via the MON, and optionally sets the key to allow the MDS to connect to the MONs<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
=== mon ===<br />
<br />
* '''proposed name''': ceph::mon<br />
* '''purpose''': configures a ceph MON, sets up /etc/ceph/ceph.conf with the MONs' IPs, optionally sets the key to allow the MON to connect to the other MONs, or initializes the MON if it is the first one in the cluster<br />
* '''interface''':<br />
** monitor_ips - list of ip addresses used to connect to the monitor servers<br />
** directory/disk - a disk or a directory to be used as a storage for the MON<br />
** key - the secret key for the id user<br />
** id - the id of the user<br />
<br />
== Related tools and implementations ==<br />
<br />
* deploy ceph : ceph-deploy<br />
<br />
for test / POC purposes<br />
https://github.com/ceph/ceph-deploy<br />
<br />
maintainer: Alfredo Deza<br />
<br />
* deploy ceph with puppet : puppet-cephdeploy<br />
<br />
relies on ceph-deploy<br />
https://github.com/dontalton/puppet-cephdeploy/<br />
<br />
maintainer: Don Talton<br />
<br />
* deploy ceph with puppet : puppet-ceph<br />
<br />
developed in 2012 but still useful, upstream<br />
https://github.com/enovance/puppet-ceph<br />
<br />
maintainer: community<br />
--<br />
fork of puppet-ceph, updated recently<br />
https://github.com/TelekomCloud/puppet-ceph/tree/rc/eisbrecher<br />
<br />
maintainer: Deutsche Telekom AG (DTAG)<br />
<br />
* ceph + openstack : ceph docs<br />
<br />
manual integration<br />
http://ceph.com/docs/next/rbd/rbd-openstack/<br />
maintainer: John Wilkins + Josh Durgin<br />
<br />
* ceph + openstack with puppet : stackforge<br />
<br />
https://github.com/stackforge/puppet-glance/blob/stable/grizzly/manifests/backend/rbd.pp<br />
https://github.com/stackforge/puppet-cinder/blob/stable/grizzly/manifests/volume/rbd.pp<br />
<br />
maintainer: community<br />
<br />
* ceph + openstack with puppet : COI<br />
<br />
targeting Cisco use case<br />
https://github.com/CiscoSystems/puppet-coe/tree/grizzly/manifests/ceph<br />
http://docwiki.cisco.com/wiki/OpenStack:Ceph-COI-Installation<br />
<br />
maintainer: Don Talton + Robert Starmer<br />
<br />
* ceph + openstack with puppet : mirantis<br />
<br />
in the context of Fuel<br />
https://github.com/Mirantis/fuel/tree/master/deployment/puppet/ceph<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/cinder/manifests/volume/ceph.pp<br />
https://github.com/Mirantis/fuel/blob/master/deployment/puppet/glance/manifests/backend/ceph.pp<br />
<br />
maintainer: Andrew Woodward<br />
<br />
* openstack with puppet : openstack-installer<br />
<br />
data driven approach to deploy OpenStack<br />
https://github.com/CiscoSystems/openstack-installer/<br />
<br />
maintainer: Robert Starmer + Dan Bode</div>David Moreau Simard