Using the CIM/XML Pull Operations

STATUS

<<< The TODO section is being maintained during the review and checkin
process to keep track of problems, errors, notes, etc. It must be deleted
before checkin to the head of tree. Please feel free to add notes, etc. in
this section as you review/test. >>>

NOTES on working with the task branch

Merge-out Process

To keep our TASK branch in sync with the current head of tree we need
to do a regular merge out. The TaskMakefile contains the makefile
procedures to do this efficiently. NOTE: Following these procedures is
important because each merge out should bring in only the material that
is new since the previous merge out; if you simply re-merged the full
head each time, you would merge previously merged changes a second time,
causing a real mess.

Start with a new directory and put TaskMakefile above pegasus (needed so
you have this file for the initial operations).

make -f TaskMakefile branch_merge_out BNAME=PEP317-pullop ## takes a long time

This checks out the current head, merges it into the task branch, and sets
tags for the mergeout. Note that at the end of this step this work is
part of the TASK... branch.

NOW check for conflicts, errors, etc. that resulted from the merge.
Look for conflict flags, compare the results (I use a graphical merge
tool on Linux as a good compare tool), and build and test. When you are
satisfied that the merge out is clean, you can commit the results to the
TASK... branch.

To commit this work to the task branch:

make -f mak/TaskMakefile branch_merge_out_commit BNAME=PEP317-pullop

or manually commit and finish as follows:

cvs commit
make -f mak/TaskMakefile branch_merge_out_finish BNAME=PEP317-pullop

## This last step is important since it cleans up temporary tags to prepare
## you for the next checkout

COMPARE TASKBRANCH WITH HEAD

In a new Pegasus workspace, do the same as above for the merge out.

make -f TaskMakefile BNAME=PEP317-pullop

This produces a result which is all of the head merged into the branch.
A diff of this is all the new changes to the head of tree that you will
include into the merge.

TODO list:

1. Binary operation from OOP. Need to add a counter to the binary
   protocol to be able to count objects in a response. Generates
   warnings in things like the message serializer and does not work with
   OOP right now.
2. OpenExecQuery - Code is incorrect in that it used InstancesWithPath
   where the spec is instances with no path. Need a new function to wrap
   getInstanceElement(withoutPathElement) in XmlReader. Note that an
   alternative is to put a flag on InstancesWithPath to say no path.
3. Code for the Pull part of OpenQueryInstancesRequest a) should be part
   of the common CIMOperationRequestDispatcher execCommon code.
4. The changes to WQLCIMOperationRequestDispatcher and CQL... for
   handling pull (feeding the responses back to the EnumerationContext
   queues) are not completed.

5. Lots of minor TODOs, diagnostics, etc.

6. External runtime variables. Decide this as part of the PEP. The
   variables exist in CIMOperationRequestDispatcher but not in CIMConfig.
   The primary ones to consider are:
   a. System maxObjectCount. Setting some maximum size on what a pull
      client can request (i.e. the maximum size of the maxObjectCount on
      Open... and Pull operations).
   b. Pull interoperationTimeout (maximum time between operations). This
      is the maximum number of seconds allowed for the operationTimeout
      parameter of the Open operations.
   c. Maximum size of the responseCache before it starts backing up
      responses to the providers.
7. Decision on EnumerationContext timeout (separate thread or just
   checks during other operations). Can we, in fact, really keep the
   enumeration context table and queue under control without monitoring
   with a separate thread? We must monitor for:
   a. Client operations that stop requesting (i.e. the inter-operation
      time exceeds operationTimeout). Note that if it simply exceeds the
      time, the next operation does the cleanup. The issue is those
      clients that simply stop and do not either close or go to
      completion.
   b. We should protect against providers that do not ever finish
      delivering or take too long between deliveries. This does not
      exist in Pegasus today.
8. Clean up code in the Dispatcher. The associators code is still a real
   mess, and the pull code is in a template. The pull code is good now
   but must be duplicated. Look at creating a new CIMMessage,
   CIMPullResponseMessage, so that we can have common code. Everything
   is the same except what goes into the CIMResponseData, so it is
   logical to have completely common processing.

9. Extension to avoid the double move of objects in CIMResponseData (one
   into the enumerationContext queue and a second into a new
   CIMResponseData for the response). Want to avoid the second move by
   extending the Open/Pull response messages to include a count so the
   CIMResponseData can count objects out of the queue when converting.
   Big issue here with binary data, since we need to extend the format to
   count it.

10. Still using templates, etc. in code in the Dispatcher. This is for
    all of the open operations, where there is a lot of duplicate code,
    and the pull operations, which are 99% duplicate code (in a single
    template).

11. NEXT TASK: get the pull operations into a single function by
    creating a new CIMPullResponse message in CIMMessage.h that contains
    the pull data. Then we can use a single function to process all pull
    operations.

Fixed for 16 June CVS Update
1. Cleaned up the enumerationContext and Table release functions and tested
   to confirm that we do not lose memory in either normal sequences or
   sequences that close early. Cleaned up pullop and added more tests.

Fixed for 9 June CVS update
1. Cleaned up code for OpenQueryInstances. Note that this is incomplete.
   No support in WQL or CQL Operations.

What was fixed for 5 June checkin.
1. Extended the ResponseTest MOF for both CMPI and C++ subclasses.
2. Fixed issues with pullop.
3. Fixed a temporary issue with CIMResponseData size by putting in a mutex.
   That is not a permanent fix, but it gets around an issue, probably in
   the control of the move logic, that meant counts were off.
4. Fixed issues in the Dispatcher so that the associator code works. Still
   messy code in the dispatcher.
5. Changed name of Enumerationtable.h & .cpp to EnumerationContextTable.*
6. Changed name of the ResponseStressTest module, classes, etc.

TAG: TASK_PEP317_5JUNE_2013_2


2 June 2013

Issues - KS
1. Have not installed the binary move in CIMResponseData. Please run
   with OOP off.
2. Some problem in the processing so we are getting server crashes.
   Right now I am guessing that this is in the binaryCodec and am going to
   expand the test tools to allow testing through the localhost.

3. Still way too many TODO and KS comments and KS_TEMPS. Removing bit by
   bit.

4. Env variable connection for the config parameters not installed.

5. Issue with the threaded timer. For some reason during tests it
   eventually calls the timer thread with trash for the parm (which is a
   pointer to the EnumerationTable object). Caught because we do a
   validity test at the beginning of the function.

6. Still using the templates in CIMOperationRequestDispatcher to simplify
   the handle... processing.

7. I think I have a way around the double move of objects in the
   EnumerationContext so that the outputter will just take a defined number
   of objects directly from the gathering cache and save the second move.

8. Not yet passing all tests but getting closer now.

9. Created a tag before this commit: TASK_PEP317_1JUNE_2013.

10. Next tag will be TASK_PEP317_2_JUNE_2013 in the task branch.

===========================================

OVERVIEW:

The operation extensions for pull operations defined in the DMTF
specification DSP0200 V1.4 were implemented in Pegasus effective Pegasus
version 2.11, including Client and Server.

These operations extend the CIM/XML individual operations to operation
sequences where the server must maintain state between operations in a
sequence and the client must execute multiple operations to get the full
set of instances or instance paths.

The following new CIM/XML operations as defined in DSP0200 are included:

-OpenEnumerateInstances
-OpenEnumerateInstancePaths
-OpenReferenceInstances
-OpenReferenceInstancePaths
-OpenAssociatorInstances
-OpenAssociatorInstancePaths
-PullInstancesWithPath
-PullInstancePaths
-CloseEnumeration
-EnumerationCount
-OpenExecQuery

The following operations have not been implemented in this version of Pegasus:

-OpenQueryInstances

The following limitations on the implementation exist:

1. The filterQueryLanguage and filterQuery parameters are processed by
   the Pegasus client, but the server returns an error if there is any
   data in either parameter. This work does not include the development
   of the query language. Note that a separate effort to extend Pegasus
   to use the DMTF FQL query language is in process.

2. The input parameter continueOnError is processed correctly by the
   client, but the Pegasus server only provides for false, since the
   server does not include logic to continue processing responses after
   an error is encountered.
   This is consistent with the statement in the specification that use of
   this functionality is optional and the fact that the DMTF agrees that
   all of the issues of continuing after errors have not been clarified.

3. The operation EnumerationCount is not processed by the server today
   since a) really getting the count would cost the same as the
   corresponding enumeration, and b) the server does not include a
   history or estimating mechanism for this to date.
   NOTE: After a thorough review as part of the development of the next
   version of CMPI we have concluded that this operation is probably not
   worth the effort. Since it is optional, Pegasus will only return the
   "unknown" status at this point.

Since the concept of sequences of operations linked together (open, pull,
close) is a major extension to the original CIM/XML operation concept of
completely independent operations, several new pieces of functionality are
implemented to control interOperationTimeouts, counts of objects to be
returned, etc.

TBD - Review this

CLIENT

The new operations follow the same pattern as the APIs for existing
operations in that:

1. All errors are handled as CIMException and Exception.

2. The means of inputting parameters are the same, except that there are
   significantly more input parameters with the open operations and, for
   the first time, operations return parameters as well as objects in the
   response. Specifically, the open and pull operations return values for
   enumerationContext, which is the identity for a pull sequence, and
   endOfSequence, which is the marker the server sends in open and pull
   responses when it has no more objects to send.

The significant differences include:

1. Processing of parameters on responses (i.e. the endOfSequence and
   enumerationContext parameters are returned for open and pull
   operations).

2. Numeric arguments (Uint32 and Uint64) include the option of NULL in
   some cases, so they are packaged inside the classes Uint32Arg and
   Uint64Arg in the client API.

3. The association and reference operations ONLY process instances. They
   do not include the capability to return classes like references and
   associators do, and therefore return CIMInstance rather than
   CIMObject.

4. Paths are returned in all cases (i.e. OpenEnumerateInstances and
   PullInstancesWithPath) where they were not with EnumerateInstances.

5. The client must maintain state between operations in a sequence (using
   the enumerationContext parameter).

TBD - Are there more differences?

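The open/pull sequence described above can be sketched as a small model.
This is NOT the Pegasus C++ client API; every name below (ToyServer,
enumerate_all, etc.) is invented purely to illustrate the flow: open a
context, then pull with the returned enumerationContext until the server
reports endOfSequence.

```python
# Toy model of an open/pull enumeration sequence (illustrative only).

class ToyServer:
    """Minimal stand-in for a server that maintains enumeration contexts."""
    def __init__(self, objects):
        self.objects = list(objects)
        self.contexts = {}      # enumerationContext id -> remaining objects
        self.next_id = 0

    def open(self, class_name, max_object_count):
        # Open...: create a context and deliver the first batch.
        ctx = "ctx%d" % self.next_id
        self.next_id += 1
        self.contexts[ctx] = list(self.objects)
        return self._deliver(ctx, max_object_count)

    def pull(self, ctx, max_object_count):
        # Pull...: continue an existing sequence.
        return self._deliver(ctx, max_object_count)

    def _deliver(self, ctx, n):
        remaining = self.contexts[ctx]
        batch, self.contexts[ctx] = remaining[:n], remaining[n:]
        end_of_sequence = not self.contexts[ctx]
        if end_of_sequence:
            del self.contexts[ctx]   # sequence complete; context released
        return batch, ctx, end_of_sequence

def enumerate_all(server, class_name, chunk=100):
    """Client side: open, then pull with the returned enumerationContext
    until the server reports endOfSequence."""
    batch, ctx, done = server.open(class_name, chunk)
    results = list(batch)
    while not done:
        batch, ctx, done = server.pull(ctx, chunk)
        results.extend(batch)
    return results
```

For example, enumerating 250 objects with chunk=100 takes one open and two
pulls, and the context is released when endOfSequence is returned.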

SERVER

The Pegasus server attempts to always deliver the requested number of
objects for any open or pull request (the specification allows the server
to deliver fewer than the requested number of objects and specifically to
return zero objects on open). We felt that it was worth any extra cost in
processing to provide the client with exactly what it had requested.

The Pegasus server always closes an enumeration sequence upon receipt of
any error from the providers, repository, etc. Therefore the server will
reject any request that has continueOnError = true.

Expansion to allow continue on error may be added in a future version.
In any case, the whole purpose of continue on error is really to allow
input from good providers to be mixed with providers that return errors,
so generally this would mean simply changing the logic in the return
mechanism to not shut down when an error is received from any given
provider.

Generally we do not believe that providers need to do much more in the
future to support continueOnError, other than possibly allowing the
provider to continue processing after it has received an error.

PROVIDERS

This implementation requires NO changes to the existing providers. The
provider APIs operate just as they do with the original operations.

Because the server processing is different, however, there may be some
behavior differences, primarily because the client now controls the speed
of delivery of objects.

In previous versions of Pegasus, the server attempted to deliver objects
as rapidly as they could be put on the network. In the case of HTTP
chunked requests, they are delivered in chunks of about 100 objects. The
primary delay for the providers was the processing of each segment through
the server. The server is blocked so that no other segment can proceed
through the server until that segment is processed and sent on the
network.
In the case of non-chunked responses, they are completely gathered in the
server and then delivered as one non-chunked response. There were no
delays for the providers, just lots of possible memory use in the server.

The responses from providers (delivered through the deliver(...) interface)
are gathered into segments of about 100 objects, and this group of objects
is moved through the server to be delivered to the client.

However, with the inclusion of the pull operations, the segments of
objects from the providers are cached in the server response path until
the maxObjectCount for that request (open or pull) is reached, and that
number is returned in a non-chunked response. Thus, if the client is slow
to issue pull requests, the providers might be delayed at some point to
reduce memory usage in the server (the delay appears as a slow response to
the deliver operation).

In other words, the time to process large sets of responses from the
provider now depends on the speed of handling the client.

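The caching behavior above can be sketched as a toy model (illustrative
only, not Pegasus code; the class and method names are invented for this
sketch): provider segments accumulate in a per-context cache, and each
open or pull response drains at most maxObjectCount objects.

```python
# Toy model of the server-side response cache for one enumeration context.
from collections import deque

class ResponseCache:
    def __init__(self):
        self.cache = deque()
        self.providers_complete = False

    def deliver(self, objects):
        # Provider path: segments of roughly 100 objects arrive here.
        # (In the real server, providers may block if the cache grows
        # too large while the client is slow to pull.)
        self.cache.extend(objects)

    def get_response(self, max_object_count):
        # Client path: an open or pull drains up to maxObjectCount
        # objects. The server tries to return exactly the requested
        # count unless the providers have finished and fewer remain.
        n = min(max_object_count, len(self.cache))
        batch = [self.cache.popleft() for _ in range(n)]
        end_of_sequence = self.providers_complete and not self.cache
        return batch, end_of_sequence
```

In this model, a pull issued before the providers finish simply returns a
full batch from the cache, and endOfSequence is reported only once the
providers are done and the cache is empty.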
It is important to remember in developing providers that the Pegasus
server can most efficiently process responses if they are passed from the
provider to the server individually or in small arrays of objects, rather
than the provider gathering very large arrays of objects and sending them
to the server.

NEXT GENERATION PROVIDERS
KS_TODO

CONFIGURATION PARAMETERS

The server includes several configuration parameters to set limits on the
processing of pull operations. All of these configuration parameters are
compile-time parameters rather than runtime.

1. Maximum value of the minimum interoperation time. This parameter
   defines the maximum time allowed between the return of an open or pull
   response and the receipt of the next pull or a close operation before
   the server may close the enumeration. The specification allows the
   server to set a maximum interoperation time and refuse open requests
   whose requested operationTimeout is greater than that time
   (CIM_ERR_INVALID_OPERATION_TIMEOUT).

   This value is set with the Pegasus environment variable
   PEGASUS_PULL....

2. Maximum objects returned in a single open or pull operation. The
   server can set a maximum limit on the number of objects that can be
   returned in a single open or pull operation with the maxObjectCount
   parameter.

3. Whether the server allows 0 as an interoperation timeout value. The
   value zero is a special value for the interoperationTimeout in that it
   tells the server not to time out any enumeration sequence.

   With this value for interoperationTimeout, the only way to close an
   enumeration sequence is to complete all of the pulls or issue the
   close. If for some reason the sequence is not completed, that
   enumeration context would remain open indefinitely. Since in Pegasus
   any open enumeration context uses resources (the context object and
   any provider responses that have not yet been issued in a response),
   it would appear that most platforms would not want to allow the
   existence of enumeration contexts that cannot be closed by the server.

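The timeout behavior in items 1 and 3 can be made concrete with a toy
model (not Pegasus code; all names are invented for the sketch). A context
records its operationTimeout and last-access time, a periodic sweep
removes expired contexts, and a timeout of zero means "never expire":

```python
# Toy model of interoperationTimeout enforcement (illustrative only).
# A timeout of 0 means the context never times out, as described above.

class ContextTable:
    def __init__(self):
        self.table = {}   # context name -> (timeout_seconds, last_access)

    def touch(self, name, timeout_seconds, now):
        # Record activity: called on open and on each subsequent pull.
        self.table[name] = (timeout_seconds, now)

    def remove_expired(self, now):
        # Called by a monitor thread or piggybacked on other operations.
        expired = [name for name, (tmo, last) in self.table.items()
                   if tmo != 0 and now - last > tmo]
        for name in expired:
            del self.table[name]   # release context and cached responses
        return expired
```

Whether this sweep runs on a dedicated thread or piggybacks on other
operations is exactly the open design question noted in the TODO list.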
4. Maximum consecutive pull requests with 0 maxObjectCount. The use of
   the pull operation with maxObjectCount set to zero could be used to
   keep an enumeration context open indefinitely (this tells the server
   to restart the interoperationTimeout but not send any objects in the
   response). Therefore the specification allows the server to set
   maximum limits on this behavior and to return the error
   CIM_ERR_SERVER_LIMITS_EXCEEDED if this limit is exceeded.
   Note that this is the maximum CONSECUTIVE pulls, so issuing a pull
   with a non-zero count resets this counter.

   KS-TBD - Is this really logical, since we can still block by just
   issuing lots of zero requests and an occasional request for one
   object?

   Pegasus sets the value of this limit to 1000 and allows the
   implementer to modify it with the PEGASUS_MAXIMUM_ZERO_OBJECTCOUNT
   environment variable.

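The consecutive-zero-count rule can be sketched as follows (toy model, not
Pegasus code; the class name is invented, and the default of 1000 comes
from the text above):

```python
# Toy model of the consecutive zero-maxObjectCount pull limit
# (illustrative only; not Pegasus code).

class ZeroCountGuard:
    def __init__(self, limit=1000):
        self.limit = limit
        self.consecutive_zero_pulls = 0

    def on_pull(self, max_object_count):
        # A zero-count pull restarts the interoperation timer without
        # returning objects; too many in a row is a server-limits error.
        if max_object_count == 0:
            self.consecutive_zero_pulls += 1
            if self.consecutive_zero_pulls > self.limit:
                raise RuntimeError("CIM_ERR_SERVER_LIMITS_EXCEEDED")
        else:
            # A non-zero pull resets the counter.
            self.consecutive_zero_pulls = 0
```

This also illustrates the KS-TBD concern above: alternating many zero
pulls with a single one-object pull resets the counter each time, so the
limit bounds only uninterrupted runs of zero-count pulls.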
5. Default operationTimeout -

   The default of this parameter is to refuse operat

In the current release of Pegasus these are all compile-time parameters.
|