Revision as of 21:31, 4 June 2014
ETC Cloud API Prototype project
- Project Lead Sean Roberts (sarob)
purpose
The Entertainment Technology Center @ the University of Southern California (ETC) formally launched “Production in the Cloud,” a new project that brings together a core group of key media and cloud-resource leaders to develop guidelines and accelerate innovation and adoption of next-generation cloud-based content creation, production, and distribution tools and processes. Senior executives from the six major studios, in coordination with Rackspace, EMC, EVault, Front Porch Digital, DAX, Google, and other cloud companies, convened recently to serve as a governing body to collectively guide this process. The project is looking at the life cycle of film and media production, from pre-production collaboration through production and post-production to archiving.
This specific part of the effort covers:
- Bringing together various competitive organizations to work on a common goal
- Developing an interoperable cloud framework
- First steps of socializing the idea, already well underway
- The next important step, creating a prototype, discussed in October
- Now we need to execute on design and implementation
group structure
- mailing list milk-dev@etcusc.org
- mailing list history
- developer IRC on freenode.net channel #milk-dev
- weekly meetings agenda
- weekly meeting iCal schedule through openstack IRC meeting list
- meeting history Past IRC Meetings
- gerrit review group ACLs here
design
High Level Goals
- Ingesting data from the many changes throughout the media pipeline
- Capturing metadata from the stored data and from pipeline activities
- URL/URN marker identification for assets
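One way to picture the URL/URN marker goal: derive a stable identifier for each asset from its project and a content hash, so every service in the pipeline can reference the same asset. This is a minimal sketch; the `urn:etc-cloud` namespace and the URN layout are assumptions for illustration, not part of the project's published design.

```python
import hashlib

def make_asset_urn(project_id: str, data: bytes) -> str:
    """Build an illustrative asset URN from a project ID and a content hash.

    The "etc-cloud" namespace and truncated-digest layout are assumptions;
    the wiki only calls for URL/URN marker identification of assets.
    """
    digest = hashlib.sha256(data).hexdigest()
    # A short content-derived suffix keeps the URN stable across re-ingests
    # of identical data while remaining human-scannable.
    return f"urn:etc-cloud:{project_id}:{digest[:16]}"

urn = make_asset_urn("proj-001", b"frame data")
```

Because the marker is derived from the content, two services that ingest the same bytes independently arrive at the same URN without coordinating.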
What the Services Will Do
- Client Ingest and Server Ingest that use REST API
- Create metadata on data ingest (e.g. Exif)
- Create additional metadata on post-ingest (e.g. hashed ID)
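The two metadata steps above can be sketched as a pair of functions: one recording basic fields at ingest time, and a post-ingest pass that attaches a hashed ID derived from the content. The field names here are assumptions; the wiki only gives Exif and a hashed ID as examples.

```python
import hashlib
from datetime import datetime, timezone

def ingest_metadata(project_id: str, data: bytes, data_type: str) -> dict:
    """Record basic metadata at ingest time (field names are assumed)."""
    return {
        "project_id": project_id,
        "data_type": data_type,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "size_bytes": len(data),
    }

def add_post_ingest_metadata(meta: dict, data: bytes) -> dict:
    """Post-ingest step: attach a hashed ID derived from the content."""
    enriched = dict(meta)  # leave the ingest-time record untouched
    enriched["hashed_id"] = hashlib.sha256(data).hexdigest()
    return enriched
```

Splitting the steps mirrors the service design: the ingest path stays fast, and heavier derivations run afterwards without blocking the upload.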
Sprint Work Ideas
- document IO, String REST APIs
- build IO client and daemon
- build State daemon
- build Transform daemon
- build String client and daemons
| API / daemon | Function |
|---|---|
| IO API client | generates a checksum; client POSTs project ID, data type, original date/time, data binary, auth, and an extension for client-defined metadata in XML format |
| IO API | authenticates through the Keystone API, then passes the request on to the IO daemon |
| IO daemon | supports 15-20 data types, generates a reference URN, listens to the API, puts raw binary into the Swift API |
| state daemon | ZooKeeper storage; monitors workflow; triggers alerts on workflow timeouts and errors |
| transform daemon | triggers on IO Swift change; checks the data type and, if it matches certain data types, executes a transform (FFmbc); supports collaborative standards such as scripted XML transforms |
| String metadata API | registers a service against an asset ID, registers metadata (?), registers user/service provider (for now; use Keystone later), registers services, auth, looks up asset IDs, adds parent/peer String services, configures the synchronization set |
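The IO API client row can be sketched as a function that assembles the fields to be POSTed: checksum, project ID, data type, original date/time, the binary payload, an auth token, and the client-defined metadata extension as XML. The field names, the base64 wrapping of the binary, and the `<metadata>` XML shape are assumptions for illustration; the table above only lists what the client posts.

```python
import base64
import hashlib
import xml.etree.ElementTree as ET

def build_ingest_request(project_id, data_type, original_dt, data,
                         auth_token, client_meta=None):
    """Assemble the fields an IO API client would POST (a sketch).

    Field names and the XML metadata shape are assumptions; the wiki
    table only enumerates the information the client sends.
    """
    # Checksum is computed client-side so the server can verify the upload.
    checksum = hashlib.sha256(data).hexdigest()
    req = {
        "project_id": project_id,
        "data_type": data_type,
        "original_datetime": original_dt,
        "checksum": checksum,
        "data": base64.b64encode(data).decode("ascii"),
        "auth": auth_token,
    }
    if client_meta:
        # Client-defined metadata extension, serialized as XML per the table.
        root = ET.Element("metadata")
        for key, value in client_meta.items():
            ET.SubElement(root, key).text = str(value)
        req["metadata_xml"] = ET.tostring(root, encoding="unicode")
    return req
```

In the design above, this payload would go to the IO API, which authenticates the `auth` token against Keystone before handing the request to the IO daemon for storage in Swift.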