

Perform NVP operations in an appropriate driver



1) More maintainable code, as all the code needed to interact with the underlying NVP platform is moved to the driver
2) A relatively easy transition to the asynchronous plugin model (put link to blueprint)
3) More scalable code, thanks to shorter SQL transactions
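To illustrate point 1, the following is a minimal, purely hypothetical sketch (the class and method names are invented, not the actual plugin code): all NVP interaction is concentrated in a driver object, so the plugin method only performs its DB work and then delegates the backend call.

```python
# Hypothetical sketch of the driver-based layout described above.
# FakeNvpApiClient stands in for the real NVP API client.

class FakeNvpApiClient:
    """Records NVP API calls instead of contacting a real controller."""
    def __init__(self):
        self.calls = []

    def create_lswitch(self, name):
        self.calls.append(('create_lswitch', name))
        return {'uuid': 'fake-uuid', 'display_name': name}


class NvpDriver:
    """All code interacting with the NVP platform lives here."""
    def __init__(self, api_client):
        self._client = api_client

    def create_network(self, network):
        # Single place translating Neutron models into NVP API calls
        return self._client.create_lswitch(network['name'])


class Plugin:
    """The plugin no longer talks to NVP directly."""
    def __init__(self, driver):
        self._driver = driver

    def create_network(self, network):
        # 1) a short DB transaction would commit here (elided) ...
        # 2) ... then the NVP operation is delegated to the driver
        return self._driver.create_network(network)


plugin = Plugin(NvpDriver(FakeNvpApiClient()))
result = plugin.create_network({'name': 'net-1'})
print(result['display_name'])  # -> net-1
```

Because the SQL transaction commits before the driver call, it no longer spans the (slow) NVP round trip, which is what yields the shorter transactions of point 3.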

API changes

No API change: the behaviour of the API will remain the same. However, we are considering introducing a 'SYNCHRONIZING' status. This is because the greenthread handling the API operation might now yield between the DB operation and the NVP operation. During that lag, the data can be read by another API operation, which will return the updated info for the object even though the corresponding NVP operation has not yet been executed. A status of SYNCHRONIZING will let the caller know that the operation is still in progress. At the moment we are not considering locking the object until the transaction is completed: locking at the DB layer is something that should be avoided in the NVP plugin architecture.
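The proposed status flow can be sketched as follows. This is an illustrative toy (the dict-backed "database" and function names are invented for this example): the record is committed with status SYNCHRONIZING before the NVP call, so a concurrent reader sees the object exists but is not yet ready, and the status flips to ACTIVE only once the backend operation returns.

```python
# Toy illustration of the SYNCHRONIZING status described above.
db = {}  # stand-in for the Neutron database

def create_network(net_id, nvp_call):
    # DB transaction commits immediately, marking the object in-progress
    db[net_id] = {'id': net_id, 'status': 'SYNCHRONIZING'}
    # The greenthread may yield here; other API operations can already
    # read the row committed above.
    nvp_call(net_id)  # slow NVP backend operation
    db[net_id]['status'] = 'ACTIVE'

statuses_seen = []

def fake_nvp_call(net_id):
    # A concurrent GET issued while NVP is working would observe this:
    statuses_seen.append(db[net_id]['status'])

create_network('net-1', fake_nvp_call)
statuses_seen.append(db['net-1']['status'])
print(statuses_seen)  # -> ['SYNCHRONIZING', 'ACTIVE']
```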

Data Model Changes

Implementation details

Relationship with ML2

As the ML2 plugin adopts a driver-based model too, it is legitimate to ask whether the Nicira plugin might fit into the ML2 driver model as well. The answer is no, but yes. (No, I'm not Vicky Pollard from Little Britain.) It's NO because the Nicira plugin needs a driver capable of handling many more features than Layer 2: namely L3, security groups, QoS queues, L2 network gateways, and others. It's also NO because NVP virtualized networks cannot simply be 'bridged' with segments managed by other drivers or employing different segmentation techniques. However, it's YES because the NVP infrastructure provides some means for bridging domains of different natures by mapping distinct segments. Nevertheless, this is outside the scope of this blueprint and most likely outside the scope of Havana.