
File: pegasus/readme.pulloperations
Revision: 1.1.2.16, Thu Nov 21 17:59:38 2013 UTC (10 years, 5 months ago) by karl
Branch: TASK-PEP317_pullop-branch
Changes since 1.1.2.15: +17 -3 lines
PEP#: 317
TITLE: TASK-PEP317_pullop-branch Merge Out from head of tree effective 21 Nov 2013.

DESCRIPTION:

        Using the CIM/XML Pull Operations

STATUS

<<< The TODO section is being maintained during the review and checkin process
to keep track of problems, errors, notes, etc.  It must be deleted before
checkin to the head of tree. Please feel free to add notes, etc. in this
section as you review/test. >>>

TODO list:
   1. Binary operation from OOP.  Need to add counter to binary
      protocol to be able to count objects in response. Generates
      warnings in things like messageserializer and does not work with
      OOP right now.  Corrected by converting to XML. 
   2. OpenExecQuery - Code is incorrect in that it does not include the
      return from the exec query function to the aggregator yet.
   3. Code for the Pull part of OpenQueryInstancesRequest should be part of
      the common CIMOperationRequestDispatcher execCommon code.
   4. The changes to WQLCIMOperationRequestDispatcher and CQL... for handling
      pull are not completed so that we feed the responses back to the
      EnumerationContext queues.
   5. Lots of minor TODOs, diagnostics, etc.
   6. External runtime variables. Proposing that they be fixed for this release
      rather than set by configuration.  This should be discussed.  Am making
      this a separate bug.  See bug 9819 for the changes to cover this.
   7. Decision on EnumerationContext timeout (separate thread or just
      checks during other operations). Can we, in fact, really keep the
      enumeration context table and queue under control without monitoring
      with a separate thread? We must monitor for:
      a. Client operations that stop requesting (i.e. the inter-operation time
          exceeds operationTimeout). Note that if it simply exceeds the time,
          the next operation does the cleanup.  The issue is those clients that
          simply stop and do not either close or go to completion.
      b. We should protect against providers that do not ever finish delivering
          or take too long between deliveries.  This protection does not exist
          in Pegasus today.
   8. Consider moving some of the code in dispatcher from templates to common
      functions which would mean adding intermediate classes in CIMMessage but
      would reduce code size.
   9. Extension to avoid the double move of objects in CIMResponseData (one
      move into the enumerationContext queue and a second into the new
      CIMResponseData for the response).  We want to avoid the second move by
      extending the Open/Pull response messages to include a count so the
      CIMResponseData can count objects out of the queue when converting
      (avoiding the second move).  A big issue here is binary data, since the
      format needs to be extended to count it.
   10. NEXT TASKS:
      a. test the enumeration timeout thread
      b. finish and test the OpenQueryInstances
      c. Clean up TODOs
      d. Find issue when we run makepoststarttests in pullop client with
         forceProviderProcesses = true.  This causes an operation like
         cimcli pei CIM_ManagedElement to not complete (client timeout)
         sometimes.

21 November 2013
1. Mergeout from head of tree to 21 November 2013.

18 November 2013
1. Cleanup of a bunch of minor errors and completion of all of the code for
   the OpenQueryInstances except for the PullInstances in Dispatcher and
   the aggregator function.
2. OpenQueryInstances added to cimcli.

13 October 2013 CVS branch update.
1. Integrated bug 9786 into the branch.  Note that we need to test the
   generated statistics.
2. Mergeout executed to update to head of tree as of 8:00 am 13 October 2013.
3. Cleaned up several errors in OOP processing.  Note that there is at least
   one issue left when we do a pull on ManagedElement in at least one of the
   namespaces.
4. Cleaned up some of the outstanding diagnostic code.
5. Generally passes all tests except for one test of pullop where it is trying
   to pull enumerated instances of CIM_ManagedElement from a particular
   namespace.

NOTE: I did not make comments here for changes in October despite the fact
that I did 2 mergeouts, a number of fixes, and a mergein.

30 September 2013 - CVS Update
Mergeout head of tree up to 29 September 2013.

29 September 2013. CVS update.
1. Modified calls to statisticalData.cpp to a) directly call with request
   type, b) incorporate the open, pull, etc. messages.  However, since these
   are not part of the CIM class, we must do something special with them.
   See bug 9785 for full solution to this issue.
2. Corrected OOP interface to enable new flag to indicate internal operations
   and set host, etc.
3. Add code to CQLOperationsDispatcher and WQLOperationDispatcher to clean
   up CIMResponseDataCounter after filtering.
4. Modified ProviderAgent to set Host info for some pull operations.
5. Added new flag to CIMBinMsgSerializer and Deserializer.

17 September 2013 CVS update (Actually two different updates over 3 days)
1. Clean up some issues in CIMMessage.h and CIMMessage.cpp
2. Extend OpenExecQuery to WQL and CQL processors but return not complete
3. Remove memory leak in EnumerationContext and EnumerationContextTable
   handling.
4. Created template functions for much of the pull operations.
5. Reversed order of queryLanguage and query (and changed names to match
   execQuery) in client and server.  Note that these are the execQuery
   WQL and CQL filters and NOT FQL filters.
6. Some code cleanup in dispatcher
7. Today, passes all tests in pullop but issue in alltests. For some reason
   not finding CIMObjectManager instance. Also, leaves enumeration contexts
   if client terminates since cleanup thread not operating.
8. XML from OOP not correctly processed.

14 September 2013 CVS update
Merged out up to 25 August.  Cleaned up all operations and standardized code.
At this point the non pull operations code is in a set of templates but the
pull is not yet.
Fixed a significant number of problems so that it appears that the operations
except for OpenExecQuery run stably, at least with the pullop test program.
Note that there is a problem in that the Interop control provider is not
returning its singleton wbemserver object for some reason, which causes a
test failure.

Fixed for 16 June CVS Update
   1. Cleaned up the enumerationContext and Table release functions and tested
      to confirm that we do not lose memory in either normal sequences or
      sequences that close early. Cleaned up pullop and added more tests
Tagged before: PREAUG25UPDATE and after: POSTAUG25UPDATE

Fixed for 9 June CVS update
   1. Cleaned up code for OpenQueryInstances.  Note that this is incomplete.
      No support in WQL or CQL Operations

What was fixed for 5 June checkin.
   1. Extended ResponseTest MOF for both CMPI and C++ subclasses
   2. Fixed issues with pullop.
   3. Fixed a temporary issue with CIMResponseData size by putting in a mutex.
      That is not a permanent fix, but it gets around an issue, probably in
      the control of the move logic, that meant counts were off.
   4. Fixed issues in Dispatcher so that associator code works. Still messy
      code in the dispatcher.
   5. Changed name of Enumerationtable.h & cpp to EnumerationContextTable.*
   6. Changed name of ResponseStressTest module, classes, etc.

TAG: TASK_PEP317_5JUNE_2013_2

2 June 2013

Issues  - KS

 - Still way too many TODO and KS comments and KS_TEMPS.  Removing bit by bit.

 - Runtime variable connection for the config parameters not installed. That
   has been made into a separate bug (see bug 9819)

5. Issue with the threaded timer.  For some reason during tests it
eventually calls the timer thread with trash for the parm (which is a
pointer to the EnumerationTable object). Caught because we do a validity
test at the beginning of the function.

6. Still using the templates in CIMOperationRequestDispatcher to simplify
the handle... processing.  

7. I think I have a way around the double move of objects in the
EnumerationContext so that the outputter will just take a defined number
of objects directly from the gathering cache and save the second move.

8. Not yet passing all tests but getting closer now. The major test that is
causing an error today is the execution of a full enumeration with the
forceProviders = true.  This causes a client timeout sometimes.



===========================================

OVERVIEW:

The operation extensions for pull operations defined in the DMTF specification
DSP0200 V1.4 were implemented in Pegasus effective Pegasus version 2.11,
including both the client and the server.

These operations extend the individual CIM/XML operations to operation
sequences in which the server must maintain state between operations in a
sequence and the client must execute multiple operations to get the full
set of instances or instance paths.

The following new CIM/XML operations as defined in DSP0200 are included:

    -OpenEnumerateInstances
    -OpenEnumerateInstancePaths
    -OpenReferenceInstances
    -OpenReferenceInstancePaths
    -OpenAssociatorInstances
    -OpenAssociatorInstancePaths
    -OpenQueryInstances
    -PullInstancesWithPath
    -PullInstancePaths
    -PullInstances
    -CloseEnumeration
    -EnumerationCount
    -OpenExecQuery

The following  operations have not been implemented in this version of Pegasus:

    -OpenQueryInstances

The following limitations on the implementation exist:

1. The filterQueryLanguage and filterQuery parameters are processed by
   the Pegasus client but the server returns an error if there is any data in
   either parameter. This work does not include the development of the
   query language.  Note that a separate effort to extend Pegasus to use
   the DMTF FQL query language is in process.

2. The input parameter continueOnError is processed correctly by the client,
   but the Pegasus server only allows false, since the server does not
   include logic to continue processing responses after an error is
   encountered.
   This is consistent with the statement in the specification that use of
   this functionality is optional and with the fact that the DMTF agrees that
   all of the issues of continuing after errors have not been clarified.

3. The operation enumerationCount is not processed by the server today since
   a) really getting the count would cost the same as the corresponding
   enumeration, and b) the server does not include a history or estimating
   mechanism for this to date.
   NOTE: After a thorough review as part of the development of the next version
   of CMPI we have concluded that this operation is probably not worth the
   effort.  Since it is optional, Pegasus will only return the unknown status
   at this point.

Since the concept of sequences of operations linked together (open, pull, close)
is a major extension to the original CIM/XML operation concept of completely
independent operations, several new pieces of functionality are implemented
to control interoperation timeouts, counts of objects to be returned, etc.

TBD - Review this

CLIENT

The new operations follow the same pattern as the APIs for existing operations
in that:

1. All errors are handled as CIMException and Exception

2. The means of inputting parameters are the same, except that there are
   significantly more input parameters with the open operations and, for the
   first time, operations return parameters as well as objects in the
   response.  Specifically, the open and pull operations return values for
   enumerationContext, which is the identity for a pull sequence, and
   endOfSequence, which is the marker the server sends in open and pull
   responses when it has no more objects to send.

The significant differences include:

1. Processing of parameters on responses (i.e. the endOfSequence and
   enumerationContext parameters are returned for open and pull operations).

2. Numeric arguments (Uint32 and Uint64) include the option of NULL in some
   cases, so they are packaged inside the classes Uint32Arg and Uint64Arg in
   the client API.

3. The association and reference operations ONLY process instances.  They do
   not include the capability to return classes, as the reference and
   associator operations do, and therefore return CIMInstance rather than
   CIMObject.

4. Paths are returned in all cases (i.e. OpenEnumerateInstances and
   PullInstancesWithPath return paths where EnumerateInstances did not).

5. The client must maintain state between operations in a sequence (using
   the enumerationContext parameter).

TBD- Are there more differences.
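
As an illustration of the open/pull pattern and the new parameters described
above, a minimal client loop might look roughly like the sketch below. This is
a sketch only: the namespace and class names are placeholders, and the exact
parameter lists and ordering should be checked against Pegasus/Client/CIMClient.h
for the Pegasus version in use.

    // Sketch of an open/pull/close sequence using the Pegasus client API.
    // Parameter order shown here is illustrative; consult CIMClient.h for
    // the exact signatures.
    #include <Pegasus/Client/CIMClient.h>

    using namespace Pegasus;

    Array<CIMInstance> enumerateWithPull(CIMClient& client)
    {
        CIMEnumerationContext enumContext;   // identity for the sequence
        Boolean endOfSequence = false;

        // Open the sequence, asking for at most 100 instances in this response.
        Array<CIMInstance> instances = client.openEnumerateInstances(
            enumContext,
            endOfSequence,
            CIMNamespaceName("root/cimv2"),     // placeholder namespace
            CIMName("CIM_ManagedElement"),      // placeholder class
            true,                               // deepInheritance
            false,                              // includeClassOrigin
            CIMPropertyList(),                  // all properties
            String::EMPTY,                      // filterQueryLanguage (none)
            String::EMPTY,                      // filterQuery (none)
            Uint32Arg(30),                      // operationTimeout in seconds
            false,                              // continueOnError
            100);                               // maxObjectCount

        // Pull until the server reports the end of the sequence.
        while (!endOfSequence)
        {
            Array<CIMInstance> more = client.pullInstancesWithPath(
                enumContext, endOfSequence, 100);
            instances.appendArray(more);
        }

        // If the sequence were abandoned before endOfSequence, the client
        // would call client.closeEnumeration(enumContext) to release the
        // server-side enumeration context.
        return instances;
    }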


SERVER

The Pegasus server attempts to always deliver the requested number of objects
for any open or pull request (the specification allows for the server to
deliver less than the requested number of objects and specifically to return
zero objects on open).  We felt that it was worth any extra cost in processing
to provide the client with exactly what it had requested.

The Pegasus server always closes an enumeration sequence upon receipt of any
error from the providers, repository, etc. Therefore the server will reject
any request that has continueOnError = true.

Expansion to allow continue-on-error may be added in a future version.
In any case, the whole purpose of continue-on-error is really to allow
input from good providers to be mixed with providers that return errors, so
generally this would mean simply changing the logic in the return mechanism
to not shut down when an error is received from any given provider.

Generally we do not believe that the providers need to do much more in the
future to support the continueOnError other than possibly allowing the provider
to continue processing after it has received an error.

PROVIDERS

This implementation requires NO changes to the existing providers.  The
provider APIs operate just as they do with the original operations.

Because the server processing is different however, there may be some
behavior differences primarily because the client now controls the speed of
delivery of objects.

In previous versions of Pegasus, the server attempts to deliver objects as
rapidly as they can be put on the network.  In the case of HTTP chunked
responses, they are delivered in chunks of about 100 objects. The primary
delay for the providers was the processing of each segment through the server.
The server is blocked so that no other segment can proceed through the server
until that segment is processed and sent on the network.
In the case of non-chunked responses, they are completely gathered in the
server and then delivered as one non-chunked response. There were no delays
for the providers, just lots of possible memory use in the server.

The responses from providers (delivered through the deliver(...) interface)
are gathered into segments of about 100 objects, and each group of objects is
moved through the server to be delivered to the client.

However, with the inclusion of the pull operations, the segments of objects
from the providers are cached in the server response path until the
maxObjectCount for that request (open or pull) is reached, and that number of
objects is returned in a non-chunked response. Thus, if the client is slow to
issue pull requests, the providers might be delayed at some point to reduce
memory usage in the server (the delay appears as a slow response to the
deliver operation).

In other words, the time to process large sets of responses from the provider
now depends on the speed of handling the client.
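
The following is a purely conceptual sketch of this caching and throttling
behavior, written with standard C++ primitives; the class and method names are
hypothetical and do not reflect the actual Pegasus EnumerationContext
implementation.

    // Conceptual only: provider segments accumulate in a cache; a slow client
    // eventually blocks the provider-side insert, which appears to the
    // provider as a slow deliver() call.
    #include <algorithm>
    #include <condition_variable>
    #include <cstddef>
    #include <deque>
    #include <mutex>
    #include <vector>

    struct CimObject {};                 // stand-in for a CIM instance/path

    class EnumerationCache
    {
    public:
        // Provider response path: append a segment (~100 objects). If the
        // cache already holds highWater objects or more, wait until the
        // client drains it.
        void putSegment(const std::vector<CimObject>& segment, size_t highWater)
        {
            std::unique_lock<std::mutex> lock(_mutex);
            _notFull.wait(lock, [&] { return _cache.size() < highWater; });
            _cache.insert(_cache.end(), segment.begin(), segment.end());
        }

        // Open/pull response path: remove up to maxObjectCount objects and
        // return them as one non-chunked response.
        std::vector<CimObject> takeResponse(size_t maxObjectCount)
        {
            std::lock_guard<std::mutex> lock(_mutex);
            size_t n = std::min(maxObjectCount, _cache.size());
            std::vector<CimObject> out(_cache.begin(), _cache.begin() + n);
            _cache.erase(_cache.begin(), _cache.begin() + n);
            _notFull.notify_all();
            return out;
        }

    private:
        std::deque<CimObject> _cache;
        std::mutex _mutex;
        std::condition_variable _notFull;
    };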

It is important to remember in developing providers that the Pegasus server
can most efficiently process responses if they are passed from the provider
to the server individually or in small arrays of objects rather than the
provider gathering very large arrays of objects and sending them to the
server.
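
For example, a provider can hand each instance to the response handler as it
is produced rather than accumulating a large Array first. The provider class,
CIM class, and property names below are placeholders; the method signature
follows the C++ CIMInstanceProvider interface and should be checked against
CIMInstanceProvider.h for the Pegasus version in use.

    // Sketch only: SampleProvider, Sample_Class, and the Index property are
    // hypothetical; the remaining CIMInstanceProvider methods are omitted.
    void SampleProvider::enumerateInstances(
        const OperationContext& context,
        const CIMObjectPath& ref,
        const Boolean includeQualifiers,
        const Boolean includeClassOrigin,
        const CIMPropertyList& propertyList,
        InstanceResponseHandler& handler)
    {
        handler.processing();

        for (Uint32 i = 0; i < 1000; i++)
        {
            CIMInstance inst("Sample_Class");              // placeholder class
            inst.addProperty(CIMProperty("Index", CIMValue(i)));
            // Deliver each object as it is produced; the server gathers these
            // into segments and caches them until the client's open/pull
            // maxObjectCount can be satisfied.
            handler.deliver(inst);
        }

        handler.complete();
    }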

NEXT GENERATION PROVIDERS
KS_TODO

CONFIGURATION PARAMETERS

The server includes several configuration parameters to set limits on the
processing of pull operations.  All of these configuration parameters are
compile time parameters rather than runtime.

1. Maximum value of the minimum interoperation time.  This parameter defines
the maximum time allowed between the return of an open or pull response and
the receipt of the next pull or a close operation before the server may
close the enumeration.  The specification allows the server to set a
maximum interoperation time and refuse open requests whose requested
operationTimeout is greater than that time with the error
CIM_ERR_INVALID_OPERATION_TIMEOUT.

This value is set with the Pegasus environment variable
PEGASUS_PULL....

2. Maximum objects returned in a single open or pull operation.  The server
can set a maximum limit on the number of objects that can be returned in
a single open or pull operation with the maxObjectCount parameter.

3. Whether the server allows 0 as an interoperation timeout value. The value
zero is a special value for the interoperationTimeout in that it tells the
server not to time out any enumeration sequence.

With this value for interoperationTimeout, the only way to close an
enumeration sequence is to complete all of the pulls or issue the close.
If for some reason the sequence is not completed, that enumeration context
would remain open indefinitely.  Since in Pegasus any open enumeration
context uses resources (the context object and any provider responses that
have not yet been issued in a response), it would appear that most
platforms would not want to allow the existence of enumeration contexts
that cannot be closed by the server.

4. Maximum consecutive pull requests with 0 maxObjectCount.  A pull operation
with maxObjectCount set to zero could be used to keep an enumeration context
open indefinitely (this tells the server to restart the interoperationTimeout
but not send any objects in the response). Therefore the specification allows
the server to set a maximum limit on this behavior and to return the error
CIM_ERR_SERVER_LIMITS_EXCEEDED if this limit is exceeded.
Note that this is a maximum of CONSECUTIVE zero-count pulls, so issuing a pull
with a non-zero count resets this counter.

KS-TBD - Is this really logical, since we can still block by just issuing
lots of zero requests and an occasional request for one object?

Pegasus sets the value of this limit to 1000 and allows the implementer to
modify it with the PEGASUS_MAXIMUM_ZERO_OBJECTCOUNT environment variable.
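
As an illustration, a client that is temporarily unable to consume more
objects could issue a zero-count pull purely to restart the
interoperationTimeout. This fragment assumes the hypothetical client,
enumContext, and endOfSequence variables from the client sketch earlier in
this document.

    // Keepalive only: maxObjectCount = 0 restarts the server's
    // interoperationTimeout without transferring any objects. Exceeding the
    // server's consecutive zero-count limit (1000 by default, per the text
    // above) results in CIM_ERR_SERVER_LIMITS_EXCEEDED.
    Array<CIMInstance> none = client.pullInstancesWithPath(
        enumContext, endOfSequence, 0);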

5. Default operationTimeout - 

The default of this parameter is to refuse operat

In the current release of Pegasus these are all compile time parameters.


NOTES ON WORKING WITH THE TASK BRANCH

Merge out Process

   To keep our TASK branch in sync with the current head of tree we need
   to do a regular merge out.  The TaskMakefile contains the makefile
   procedures to do this efficiently.  NOTE: Following these procedures is
   important in that you are merging out new material each time you do
   the merge out.  If you were just to repeatedly merge out, you would be
   merging previously merged changes a second time, causing a real mess.

    Start with a new directory and put TaskMakefile above pegasus (needed so
    you have this file for the initial operations).

      make -f TaskMakefile branch_merge_out BNAME=PEP317-pullop  ## takes a long time

   This checks out current head, merges it into task branch and sets tags
   for the mergeout.  Note that at the end of this step this work is
   part of the TASK... branch.

   NOW check for conflicts, errors, etc. that resulted from the merge.
   Look for conflict flags, compare the results (I use linux merge as a
   good graphic compare tool) and build and test. When you are satisfied
   that the merge out is clean, you can commit the results to the TASK...
   branch
   
   To commit this work into the Task branch:

      make -f mak/TaskMakefile branch_merge_out_commit BNAME=PEP317-pullop

  or manually commit and finish as follows

    cvs commit
    make -f mak/TaskMakefile  branch_merge_out_finish BNAME=PEP317-pullop

## This last step is important since it cleans up temporary tags to prepare
   you for the next checkout
   
COMPARE TASKBRANCH WITH HEAD

    In a new pegasus workspace, do the same as above for the merge out.

    make -f TaskMakefile BNAME=PEP317-pullop

    This produces a result which is all of the head merged into the branch.
    A diff of this is all the new changes to the head of tree that you will
    include into the merge.

