[cinder] Target classes in Cinder

[cinder] Target classes in Cinder

John Griffith-2
Hey Everyone,

So quite a while back we introduced a new model for dealing with target management in the drivers (i.e. initialize_connection, ensure_export, etc.).

Just to summarize a bit:  The original model was that all of the target related stuff lived in a base class of the base drivers.  Folks would inherit from said base class and off they'd go.  This wasn't very flexible, and it's why we ended up with things like two drivers per backend in the case of Fibre Channel support.  So instead of just, say, having "driver-foo", we ended up with "driver-foo-iscsi" and "driver-foo-fc", each with their own CI, configs etc.  Kind of annoying.
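
Roughly, the old pattern looked like this (a sketch only -- the "Foo" vendor classes are made up for illustration; ISCSIDriver/FibreChannelDriver are the kind of base classes I mean):

    # Sketch of the old pattern (vendor class names are made up): the
    # transport plumbing comes from the base class you inherit, so every
    # transport means another driver class, another CI job, another
    # config reference.
    from cinder.volume import driver

    class FooCommon(object):
        """Backend logic shared by both transports."""
        def _create_volume_on_backend(self, volume):
            pass  # talk to the foo array

    class FooISCSIDriver(driver.ISCSIDriver, FooCommon):
        """'driver-foo-iscsi' -- inherits the iSCSI export/attach bits."""

    class FooFCDriver(driver.FibreChannelDriver, FooCommon):
        """'driver-foo-fc' -- same backend, separate driver just for FC."""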

So we introduced this new model for targets, independent connectors or fabrics so to speak, that live in `cinder/volume/targets`.  The idea being that drivers were no longer locked in to inheriting from a base class to get the transport layer they wanted; instead, the targets class was decoupled, and your driver could just instantiate whichever type it needed and use it.  This was great in theory for folks like me: if I ever did FC, rather than create a second driver (the pattern of three classes: common, iSCSI and FC), it would just be a config option for my driver, and I'd use whichever one you selected in config (or both).
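
In code, the composition looks roughly like this (a simplified sketch, close in spirit to what the LVM driver does but not copied verbatim; the target class paths are the real ones under cinder/volume/targets, the helper option has been renamed between releases, and the FooDriver/local_path wiring is illustrative):

    # Sketch of the target-class model: one driver class, and the
    # target/transport object is chosen by config and composed in,
    # rather than inherited.
    from oslo_utils import importutils

    from cinder.volume import driver

    TARGET_MAPPING = {
        'tgtadm': 'cinder.volume.targets.tgt.TgtAdm',
        'lioadm': 'cinder.volume.targets.lio.LioAdm',
    }

    class FooDriver(driver.VolumeDriver):
        def __init__(self, *args, **kwargs):
            super(FooDriver, self).__init__(*args, **kwargs)
            # e.g. iscsi_helper (target_helper in newer releases) = lioadm
            target_cls = TARGET_MAPPING[
                self.configuration.safe_get('iscsi_helper')]
            self.target_driver = importutils.import_object(
                target_cls,
                configuration=self.configuration,
                db=self.db,
                executor=self._execute)

        def ensure_export(self, context, volume):
            # Export/attach work is delegated to the composed target
            # object instead of living in the driver's inheritance chain.
            volume_path = self.local_path(volume)  # driver-specific lookup
            return self.target_driver.ensure_export(context, volume,
                                                    volume_path)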

Anyway, I won't go too far into the details around the concept (unless somebody wants to hear more), but the reality is it's been a couple of years now, and currently it looks like there are a total of 4 out of the 80+ drivers in Cinder using this design: blockdevice, solidfire, lvm and drbd (and I implemented 3 of them I think... so that's not good).

What I'm wondering is, even though I certainly think this is a FAR SUPERIOR design to what we had, I don't like having both code-paths and designs in the code base.  Should we consider reverting the drivers that are using the new model and removing cinder/volume/targets?  Or should we start flagging those new drivers that don't use the new model during review?  Also, what about the legacy/burden of all the other drivers that are already in place?

Like I said, I'm biased and I think the new approach is much better in a number of ways, but that's a different debate.  I'd be curious to see what others think and what might be the best way to move forward.

Thanks,
John


Re: [cinder] Target classes in Cinder

Kendall Nelson
I personally agree that the target classes route is a much cleaner and more efficient way of doing it.  I also don't think it makes sense to have all the code duplication needed to support doing it both ways.

If other people agree with that, maybe we can start with not taking new drivers that do it the common/iscsi/fc way?  And then pick a release to refactor drivers and make that the focus, kind of like we did with Ocata being a stabilization release?  Assuming that asking the larger number of drivers to switch formats isn't asking the impossible.  I dunno, just a thought :)

-Kendall (diablo_rojo)


Re: [cinder] Target classes in Cinder

Clay Gerrard
In reply to this post by John Griffith-2


On Fri, Jun 2, 2017 at 12:47 PM, John Griffith <[hidden email]> wrote:


What I'm wondering is, even though I certainly think this is a FAR SUPERIOR design to what we had, I don't like having both code-paths and designs in the code base. 

Might be useful to enumerate those?  Perhaps drawing attention to the benefits would spur some driver maintainers that haven't made the switch to think they could leverage the work into something impactful?
 
Should we consider reverting the drivers that are using the new model and removing cinder/volume/targets?

Probably not anytime soon if it means dropping 76 of 80 drivers?  Or at least that's a different discussion ;)
 
Or should we start flagging those new drivers that don't use the new model during review?

Seems like a reasonable social construct to promote going forward - at least it puts a tourniquet on it.  Perhaps there is some in-tree development documentation that could be updated to point people in the right direction, or some warnings that could be placed around the legacy patterns to keep people from stumbling on bad examples?
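
e.g. even something as blunt as this (totally illustrative, nothing like this exists in-tree; the shim class and message are hypothetical):

    # Totally illustrative: a nudge that could sit in the legacy
    # transport base class so anyone copying the old pattern notices.
    import warnings

    from cinder.volume import driver

    class LegacyISCSIDriver(driver.ISCSIDriver):  # hypothetical shim
        def __init__(self, *args, **kwargs):
            super(LegacyISCSIDriver, self).__init__(*args, **kwargs)
            warnings.warn(
                "Inheriting transport support from ISCSIDriver is the "
                "legacy pattern; new drivers should compose a target "
                "class from cinder/volume/targets instead.",
                DeprecationWarning)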
 
Also, what about the legacy/burden of all the other drivers that are already in place?


What indeed... but that's down the road, right - for the moment it's just figuring out how to give things a bit of a kick in the pants?  Or maybe admitting that, without a kick in the pants, living with the cruft is the plan of record?

I'm curious to see how this goes.  Swift has some plugin interfaces that have been exposed through the ages, and the one constant with interface patterns is that the cruft builds up...

Good Luck!

-Clay



Re: [cinder] Target classes in Cinder

Patrick East
In reply to this post by John Griffith-2

On Fri, Jun 2, 2017 at 12:47 PM, John Griffith <[hidden email]> wrote:
What I'm wondering is, even though I certainly think this is a FAR SUPERIOR design to what we had, I don't like having both code-paths and designs in the code base.  Should we consider reverting the drivers that are using the new model and removing cinder/volume/targets?  Or should we start flagging those new drivers that don't use the new model during review?  Also, what about the legacy/burden of all the other drivers that are already in place?

My guess is that trying to push all the drivers into the model would almost certainly ensure that both code paths stay alive and require maintenance for years to come.  Trying to get everyone moved over would be a pretty large effort and (unless we get real harsh about it) would take a looong time to get everyone on board.  After the transition we would probably end up with shims all over the place to support the older driver class naming, too.  Either that, or we would end up with the same top-level driver classes we have now, and maybe internally they use a target instance, but not in the configurable pick-and-choose way that the model was intended for, and the whole exercise wouldn't really do much other than have more drivers implement targets and cause some code churn.

IMO the target stuff is a nice architecture for drivers to follow, but I don't think it's really something we need to do.  I could see this being much more important to push on if we had plans to split up the driver APIs into a provisioner and target kinda thing that the volume manager knows about, but as long as they are all sent through a single driver class API, it's all just implementation details behind that.
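
(Something like this, purely hypothetical -- every name below is made up, nothing in-tree is split this way today:)

    # Purely hypothetical sketch of the provisioner/target split mentioned
    # above -- Cinder does not actually look like this.
    import abc

    class Provisioner(metaclass=abc.ABCMeta):
        @abc.abstractmethod
        def create_volume(self, volume):
            """Carve space out of the backend."""

    class Target(metaclass=abc.ABCMeta):
        @abc.abstractmethod
        def create_export(self, context, volume, volume_path):
            """Expose the volume over some transport (iSCSI, FC, ...)."""

    class HypotheticalVolumeManager(object):
        """In this world the manager wires the two pieces together
        itself instead of calling one monolithic driver class."""

        def __init__(self, provisioner, target):
            self.provisioner = provisioner
            self.target = target

        def create_and_export(self, context, volume, volume_path):
            self.provisioner.create_volume(volume)
            return self.target.create_export(context, volume, volume_path)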
 


Re: [cinder] Target classes in Cinder

Eric Harney
In reply to this post by John Griffith-2
On 06/02/2017 03:47 PM, John Griffith wrote:

> Hey Everyone,
>
> So quite a while back we introduced a new model for dealing with target
> management in the drivers (i.e. initialize_connection, ensure_export, etc.).
>
> Just to summarize a bit:  The original model was that all of the target
> related stuff lived in a base class of the base drivers.  Folks would
> inherit from said base class and off they'd go.  This wasn't very flexible,
> and it's why we ended up with things like two drivers per backend in the
> case of Fibre Channel support.  So instead of just, say, having "driver-foo",
> we ended up with "driver-foo-iscsi" and "driver-foo-fc", each with their
> own CI, configs etc.  Kind of annoying.

We'd need separate CI jobs for the different target classes too.


Some perspective from my side here:  before reading this mail, I had a
bit different idea of what the target_drivers were actually for.

The LVM, block_device, and DRBD drivers use this target_driver system
because they manage "local" storage and then layer an iSCSI target on
top of it.  (scsi-target-utils, or LIO, etc.)  This makes sense from the
original POV of the LVM driver, which was doing this to work on multiple
different distributions that had to pick scsi-target-utils or LIO to
function at all.  The important detail here is that the
scsi-target-utils/LIO code could also then be applied to different
volume drivers.
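
Roughly, the reuse looks like this (the driver names below are made up, the wiring is simplified; the LIO class path is the real one under cinder/volume/targets):

    # Sketch of the reuse described above: the same LIO (or tgt) helper
    # class can be composed into any driver that manages local storage
    # and then needs an iSCSI target layered on top of it.
    from cinder.volume import driver
    from cinder.volume.targets import lio

    class SomeLocalStorageDriver(driver.VolumeDriver):
        def __init__(self, *args, **kwargs):
            super(SomeLocalStorageDriver, self).__init__(*args, **kwargs)
            self.target_driver = lio.LioAdm(configuration=self.configuration,
                                            db=self.db,
                                            executor=self._execute)

    class AnotherLocalStorageDriver(driver.VolumeDriver):
        def __init__(self, *args, **kwargs):
            super(AnotherLocalStorageDriver, self).__init__(*args, **kwargs)
            # Same helper class, completely different volume driver.
            self.target_driver = lio.LioAdm(configuration=self.configuration,
                                            db=self.db,
                                            executor=self._execute)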

The Solidfire driver is doing something different here, and using the
target_driver classes as an interface upon which it defines its own
target driver.  In this case, this splits up the code within the driver
itself, but doesn't enable plugging in other target drivers to the
Solidfire driver.  So the fact that it's tied to this defined
target_driver class interface doesn't change much.

The question, I think, mostly comes down to whether you get better code,
or better deployment configurability, by a) defining a few target
classes for your driver or b) defining a few volume driver classes for
your driver.   (See coprhd or Pure for some examples.)

I'm not convinced there is any difference in the outcome, so I can't see
why we would enforce any policy around this.  The main difference is in
which cinder.conf fields you set during deployment, the rest pretty much
ends up the same in either scheme.
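
i.e. roughly this difference in cinder.conf (sketch only; exact option names vary a bit by release, and iscsi_helper has since been renamed target_helper):

    # (a) one driver class, target class picked by config:
    [lvm-1]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    iscsi_helper = lioadm

    # (b) transport picked by choosing a different volume driver class:
    [pure-iscsi]
    volume_driver = cinder.volume.drivers.pure.PureISCSIDriver

    [pure-fc]
    volume_driver = cinder.volume.drivers.pure.PureFCDriver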


Re: [cinder] Target classes in Cinder

Jay Bryant
I had forgotten that we added this and am guessing that other cores did as well.  As a result, it likely was not enforced in driver reviews.

I need to better understand the benefit.  I don't think there is a hurry to remove this right now.  Can we put it on the agenda for Denver?

Jay

Re: [cinder] Target classes in Cinder

John Griffith-2
In reply to this post by Eric Harney


On Fri, Jun 2, 2017 at 3:11 PM, Eric Harney <[hidden email]> wrote:

Some perspective from my side here:  before reading this mail, I had a
bit different idea of what the target_drivers were actually for.

The LVM, block_device, and DRBD drivers use this target_driver system
because they manage "local" storage and then layer an iSCSI target on
top of it.  (scsi-target-utils, or LIO, etc.)  This makes sense from the
original POV of the LVM driver, which was doing this to work on multiple
different distributions that had to pick scsi-target-utils or LIO to
function at all.  The important detail here is that the
scsi-target-utils/LIO code could also then be applied to different
volume drivers.

Yeah, that's fair; it is different in that they're creating a target etc.  At least the new code is sucked up by default and we don't have that mixin iscsi class any more, meaning that drivers that don't need LIO/tgt etc. don't get it in the import.

Regardless of which way you use things here you end up sharing this interface anyway, so I guess maybe none of this topic is even relevant any more.

The Solidfire driver is doing something different here, and using the
target_driver classes as an interface upon which it defines its own
target driver.  In this case, this splits up the code within the driver
itself, but doesn't enable plugging in other target drivers to the
Solidfire driver.  So the fact that it's tied to this defined
target_driver class interface doesn't change much.

The question, I think, mostly comes down to whether you get better code,
or better deployment configurability, by a) defining a few target
classes for your driver or b) defining a few volume driver classes for
your driver.   (See coprhd or Pure for some examples.)

I'm not convinced there is any difference in the outcome, so I can't see
why we would enforce any policy around this.  The main difference is in
which cinder.conf fields you set during deployment, the rest pretty much
ends up the same in either scheme.

That's fair, I was just wondering if there was any opportunity to slim down some of the few remaining things in the base driver and the chain of inheritance that we have (driver ---> iscsi ---> san ---> foo), but to your point maybe there's not really any benefit.
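
(The chain I mean, very simplified -- the real hierarchy in cinder/volume/driver.py and the san module has more layers and mixins than this:)

    class VolumeDriver(object): ...          # base driver
    class ISCSIDriver(VolumeDriver): ...     # transport bits mixed into the base
    class SanISCSIDriver(ISCSIDriver): ...   # SAN-ish common helpers
    class FooISCSIDriver(SanISCSIDriver): ...  # vendor driver at the bottom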

Just thought it might be worth looking to see if there was some opportunity there, and honestly I was wondering if the SF driver should go back to being more like everyone else for consistency, but after reading some of your thoughts here I'm not sure this topic is even worth visiting right now.



Re: [cinder] Target classes in Cinder

John Griffith-2
In reply to this post by Jay Bryant


On Fri, Jun 2, 2017 at 3:51 PM, Jay Bryant <[hidden email]> wrote:
I had forgotten that we added this and am guessing that other cores did as well.  As a result, it likely was not enforced in driver reviews.

I need to better understand the benefit.  I don't think there is a hurry to remove this right now.  Can we put it on the agenda for Denver?
Yeah, I think it's an out of sight, out of mind thing... and maybe just having the volume/targets module alone is good enough, regardless of whether drivers want to do child inheritance or member inheritance against it.

Meh... ok, never mind.
 



Re: [cinder] Target classes in Cinder

Walter Boring
In reply to this post by John Griffith-2
I had initially looked into this for the 3PAR drivers when we were first working on the target driver code.  The problem I found was that it would take a fair amount of time to refactor the code, with marginal benefit.  Yes, the design is better, but I couldn't justify the refactoring time, effort and testing of the new driver model just to get the same functionality.  Also, we would still need 2 CIs to ensure that the FC and iSCSI target drivers for 3PAR would work correctly, so it doesn't really save CI effort much.  I guess what I'm trying to say is that, even though it's a better model, we always have to weigh the time investment against the reward, and I couldn't justify it with all the other efforts I was involved with at the time.

I kind of assume that, for the most part, most developers don't even understand why we have the target driver model, and secondly that, even if they were educated on it, they'd run into the same issue I had.
