Re: [trove] Delivering datastore logs to customers


Re: [trove] Delivering datastore logs to customers

Michael Basnight
I think this is a good idea and I support it. In today's meeting [1] there were some questions, and I encourage people to bring them up here. My only question is in regard to the "tail" of a file that we discussed in IRC. After talking it over with other trovesters, I don't think it makes sense to tail the log for most datastores. I can't imagine finding anything useful in, say, a Java application's last 100 lines (especially if a stack trace is present). But I don't want to derail, so let's try to focus on the "deliver to Swift" option first.


On Wed, Dec 18, 2013 at 5:24 AM, Denis Makogon <[hidden email]> wrote:

    Greetings, OpenStack DBaaS community.


    I'd like to start a discussion around a new feature in Trove. The feature I would like to propose covers manipulating database log files.


    Main idea. Give the user the ability to retrieve database log files for any purpose.

    Goals to achieve. Suppose we have an application (a binary application, without source code) which requires a DB connection to perform data manipulations, and a user would like to develop and debug that application; logs would also be useful for auditing. Trove itself provides access only to CRUD operations inside the database, so the user cannot access the instance directly or analyze its log files. Therefore, Trove should be able to provide a way for a user to download the database logs for analysis.


    Log manipulations are designed to let the user perform log investigations. Since Trove is a PaaS-level project, its user cannot interact with the compute instance directly, only with the database through the provided API (database operations).

I would like to propose the following API operations:

  1. Create DBLog entries.

  2. Delete DBLog entries.

  3. List DBLog entries.

Possible API, models, server, and guest configurations are described on the wiki page [1]; a rough sketch of what these operations might look like follows below.


[1] https://wiki.openstack.org/wiki/TroveDBInstanceLogOperation
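
A minimal sketch of how these three operations might look on the wire; the "dblogs" resource name, the paths, and the request body are illustrative assumptions for discussion, not taken from the blueprint:

# Hypothetical request shapes (illustrative names, not from the blueprint):
#
#   POST   /v1.0/{tenant_id}/instances/{instance_id}/dblogs        create (save) a DBLog entry
#   GET    /v1.0/{tenant_id}/instances/{instance_id}/dblogs        list DBLog entries
#   DELETE /v1.0/{tenant_id}/instances/{instance_id}/dblogs/{id}   delete a DBLog entry
#
# A create request body might carry just the log type to be saved:
create_request = {"dblog": {"log_type": "general_log"}}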





--
Michael Basnight


Re: [trove] Delivering datastore logs to customers

Vipul Sabhaya
Yep agreed, this is a great idea. 

We really only need two API calls to get this going:
- List available logs to ‘save’
- Save a log (to Swift)
(A rough sketch of these two calls follows below.)

Some additional points to consider:
- We don’t need to create a record of every log ‘saved’ in Trove. These entries, treated as a Trove resource, aren’t useful, since you don’t actually manipulate that resource.
- Deletes of logs shouldn’t be part of the Trove API; if the user wants to delete them, they can just use Swift.
- A deployer should be able to choose which logs can be ‘saved’ by their users
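
A minimal sketch of this reduced surface, assuming a hypothetical "logs" sub-resource on the instance; none of the paths or field names are from the blueprint, and the Swift location in the response is a placeholder:

# Hypothetical two-call surface (illustrative names only):
#
#   GET  /v1.0/{tenant_id}/instances/{instance_id}/logs    list logs available to 'save'
#   POST /v1.0/{tenant_id}/instances/{instance_id}/logs    save one named log to Swift
#
# A save request might name the log; the response could point at the stored object:
save_request = {"log": {"name": "slow_query_log"}}
save_response = {"log": {"name": "slow_query_log",
                         "location": "<swift-url-of-saved-object>"}}  # placeholder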



Re: [trove] Delivering datastore logs to customers

Denis Makogon
Vipul, agreed.

The Trove server side could store a mapping of available log files and their paths per datastore.
I also agree with dropping the DBLog model, since it isn't really useful in terms of future manipulations.
And the deployer would be able to define which logs are available to the user by setting an allow parameter for each log type.

Example (each flag is configured per datastore):
allow_commit_log = True
allow_bin_log = False
mapping = {}
if allow_bin_log:
    mapping['bin_log'] = bin_log_path  # placeholder for the configured log file path
if allow_commit_log:
    mapping['commit_log'] = commit_log_path
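
To make the guest-side flow concrete, here is a rough sketch of how a guest agent might push one of those files to Swift using python-swiftclient; the function name and the way credentials are obtained are assumptions for illustration, not the proposed implementation:

from swiftclient import client as swift_client  # python-swiftclient

def save_log_to_swift(conn, container, obj_name, log_path):
    # Upload one datastore log file to Swift; conn is a swiftclient Connection.
    conn.put_container(container)  # harmless if the container already exists
    with open(log_path, 'rb') as log_file:
        conn.put_object(container, obj_name, contents=log_file)

# conn would be built from the tenant's credentials, for example:
# conn = swift_client.Connection(preauthurl=storage_url, preauthtoken=auth_token)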




Re: [trove] Delivering datastore logs to customers

Daniel Morris
In reply to this post by Vipul Sabhaya
Vipul, 

I know we discussed this briefly in the Wednesday meeting, but I still have a few questions. I am not bought in to the idea that we do not need to maintain records of saved logs. I agree that we do not need to enable users to download and manipulate the logs themselves via Trove (that can be left to Swift), but at a minimum, I believe the system will still need to maintain a mapping of where the logs are stored in Swift. This is a simple addition to the list of available logs per datastore (an additional field for its Swift location; if a location exists, you know the log has been saved). If we do not do this, how does the user know where to find the logs they have saved, or whether they even exist in Swift, without searching manually? It may be that this is covered, but I don't see it represented in the BP. Is the assumption that it is some known path? I would expect to see the Swift location returned on a GET of the available log types for a specific instance (there is currently only a top-level GET for logs available per datastore type).

I am also assuming in this case, and per the BP, that the user does not have the ability to select the storage location in Swift and that this is controlled exclusively by the deployer; also that you would only allow one occurrence of the log per datastore/instance, and that writing a log more than once to the same location would overwrite/append, but this is not detailed in the BP.
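
For example, the per-instance listing being asked for here might look something like the following; the field names are purely illustrative and are not from the BP:

# Hypothetical response to GET /v1.0/{tenant_id}/instances/{instance_id}/logs:
available_logs_response = {
    "logs": [
        {"name": "general_log", "location": None},            # never saved
        {"name": "slow_query_log",
         "location": "<swift-url-of-last-save>"},             # saved at least once
    ]
}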

Thanks,
Daniel

Re: [trove] Delivering datastore logs to customers

Denis Makogon
Good day, Daniel. Thanks for the response.

Today, before your message, I updated the wiki page [1]. Now, on POST, the user would receive a DBLog response object containing the location URL of the stored log file.
As for the way files are stored, I've described in [1], in the guest-side configuration, that each file name inside the container would contain a timestamp, and I'm not going to limit the user to a specific number of files in Swift.
I hope I answered all your questions.

[1] https://wiki.openstack.org/wiki/TroveDBInstanceLogOperation
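
Going by that description, the POST response might carry something like the following; the field names and URL shape are illustrative guesses, with the exact layout defined on the wiki page:

# Hypothetical DBLog response returned from the POST (save) call:
dblog_response = {
    "dblog": {
        "instance_id": "<instance-uuid>",
        "log_type": "general_log",
        # object name carries a timestamp so repeated saves never collide
        "location": "<swift-url>/<container>/general_log-<timestamp>",
    }
}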



Re: [trove] Delivering datastore logs to customers

Vipul Sabhaya
In reply to this post by Daniel Morris



The Swift location can be returned in the response to the POST/‘save’ operation. We may consider returning a top-level immutable resource (like ‘flavors’) that, when queried, could return the base path for logs in Swift.

Logs are not meaningful to Trove, since you can’t act on them or perform other meaningful Trove operations on them. Thus I don’t believe they qualify as a resource in Trove. Multiple ‘save’ operations should not result in a replacement of the previous logs; they should just add to what may already be there in Swift.
 

The location should be decided by Trove, not the user. We’ll likely need to group them in Swift by instance-ID buckets. I don’t believe we should do appends/overwrites; newly saved logs would just add to what may already exist. If the user decides they don’t need the logs, they can perform the delete directly in Swift.
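
One possible shape for that grouping, purely as an assumption to make the discussion concrete (the real container and object layout would be up to the implementation):

from datetime import datetime

def swift_location(instance_id, log_name):
    # One container ("bucket") per instance; Trove, not the user, decides the layout.
    container = 'database_logs_%s' % instance_id
    # Timestamped object names mean new saves add objects instead of overwriting.
    timestamp = datetime.utcnow().strftime('%Y%m%dT%H%M%SZ')
    obj_name = '%s/%s.log' % (log_name, timestamp)
    return container, obj_name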

 

Re: [trove] Delivering datastore logs to customers

Daniel Morris

As long as we have a way to programmatically obtain and build the base path to the logs on a per-instance basis, that should be fine.
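
For example, assuming a per-instance container along the lines sketched earlier, a client could rebuild that base path and list what has been saved with python-swiftclient; the container naming is an assumption, not something settled in the BP:

def list_saved_logs(conn, instance_id, log_name=None):
    # conn is a swiftclient.client.Connection (see the upload sketch earlier);
    # prefix narrows the listing to one log type when log_name is given.
    container = 'database_logs_%s' % instance_id
    _headers, objects = conn.get_container(container, prefix=log_name)
    return [obj['name'] for obj in objects]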
