<html>

<head>
<meta http-equiv="Content-Type" content="text/html; charset=windows-1252">
<meta name="GENERATOR" content="Microsoft FrontPage 4.0">
<meta name="ProgId" content="FrontPage.Editor.Document">
<title>Creating a non-blocking environment for Pegasus</title>
</head>

<body>
<h1 align="center"><B>Pegasus Working Paper</B></h1>

<h1 align="center"><B>Creating a non-blocking environment for Pegasus</B></h1>

<p align="center">Results of 14 May face-to-face meeting</p>

<p>
AUTHORS: Chip Vincent (IBM), Mike Brasher (BMC), Karl Schopmeyer (The Open Group)<br>
DATE: 20 May 2001
</p>

<hr>
<h2>Overview of the Meeting</h2>

The Pegasus work group spent a fair part of Tuesday afternoon (15 May 2001) at
the Compaq meeting defining the requirements that we felt were important for
creating a non-blocking extension to Pegasus and defining the major components
of the design.

First, all parties agreed that making Pegasus non-blocking for CIM_Operations
is a very high priority activity. It should be one of the initial activities
of the Phase 2 development.

Second, we agreed that the problem is more than simply multi-threading the
processing of requests. In all probability the key blocking points will be at
the providers, depending on the design rules applied to them.

<h2>Key Issues</h2>

The key issues with providers are:

<OL>
<LI> The time to accomplish tasks will vary widely, from the very rapid
responses of normal operations to the extremely long and non-deterministic
times of some operations conducted by providers. Consider, as an extreme
case, the time to format a disk.

<LI>Much of the question will lie in the design and structure of providers,
and a key issue will be whether or not they are reentrant.

</OL>

For CIM operations we concluded that we could break the problem into several
components:
<OL>
<LI> Processing of the operation request input (reception, decoding).
<LI>Processing of dispatching to providers.
<LI>Processing of responses from providers, and particularly aggregation of
provider responses.
<LI>Processing of response generation and output to the originator.
<LI>Finally, we began to take into account the flow of indications from event
providers and their effect on the blocking model.
</OL>

<h2>Providers and Threading</h2>

We agreed that providers must be either:
<OL>
<LI> Reentrant, so that they may process multiple requests in parallel, or
<LI>Protected by a queuing mechanism in the CIMOM that meters requests to the
provider if it is not reentrant.
</OL>

We also concluded that we could not, in the long run, impose the requirement
that all providers be reentrant. We need to account for both models of
provider.

It appears that it will be important for Pegasus to know whether any given
provider is reentrant, and that this should be part of the registration of
providers.

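One way to carry the reentrancy information at registration time is a simple
flag in the provider's registration record. The sketch below is a minimal,
hypothetical illustration in Python; the names (ProviderRegistry, register,
is_reentrant) are assumptions, not actual Pegasus interfaces.

```python
# Hypothetical sketch: provider registration carrying a reentrancy flag.
# Names here are illustrative, not Pegasus APIs.

class ProviderRegistry:
    def __init__(self):
        self._providers = {}

    def register(self, name, provider, reentrant):
        # Record the provider together with its threading capability.
        self._providers[name] = (provider, reentrant)

    def is_reentrant(self, name):
        return self._providers[name][1]

    def lookup(self, name):
        return self._providers[name][0]

registry = ProviderRegistry()
registry.register("DiskProvider", object(), reentrant=False)
registry.register("ProcessProvider", object(), reentrant=True)
```

The dispatcher could then consult the flag to decide whether to call the
provider directly or to route its operations through a serializing queue.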
<h2>Aggregating Provider Responses</h2>

We have to account for aggregation of responses from multiple providers. This
is not simply an issue for 1) separate property providers for a class or 2)
the potential for multiple providers for a single class (e.g., separation by
instance keys). Aggregation is essential for any number of enumeration
operations simply because of derived providers.

We must account for aggregating provider responses as part of the completion
of Phase 1, not simply for Phase 2. Aggregation is really a functional issue,
not just a threading issue.

However, the technique we use for accessing multiple providers and for
aggregating results may depend on the threading model.

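As a rough illustration of the functional side of aggregation (independent of
any threading model), the hypothetical sketch below merges the instance lists
returned by several providers for one enumeration request; the names and the
callable-per-provider model are assumptions made for the example.

```python
# Hypothetical sketch: aggregate enumeration results from several providers.
# Each "provider" is modeled as a callable returning a list of instances.

def enumerate_instances(providers):
    # Collect the partial result from every provider responsible for the
    # class (e.g., one per derived class) and merge into one response.
    aggregated = []
    for provider in providers:
        aggregated.extend(provider())
    return aggregated

# Two providers, e.g., one for a base class and one for a derived class.
base_provider = lambda: ["InstanceA1", "InstanceA2"]
derived_provider = lambda: ["InstanceB1"]

result = enumerate_instances([base_provider, derived_provider])
```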
<h2>Threading and Indications</h2>

TBD

<h2>Threading</h2>

TBD

<h2>Components Required</h2>
The following is the list of major components that will be required to finish
the threading work.
<OL>
<LI> Provider queuing
<LI> Aggregator
<LI> Request threading
<LI> Response threading
<LI> Threads library
<LI> Non-reentrant function blocking (repositories and other provider
functions)
<LI> Queuing
</OL>

<H4>Potential Limitations</H4>
1. If we limit ourselves to only reentrant providers initially, we eliminate
the need for queuing.

<H4>A Threads Library</H4>

It becomes obvious that one of the first things we will have to do is create
a thread abstraction, so that we can separate thread implementations from
their use in the Pegasus code.

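As a language-neutral illustration of such an abstraction (Pegasus itself is
C++, and its layer would wrap the platform's native thread API rather than
Python's), a thin wrapper might look like the hypothetical sketch below; all
names are assumptions for the example.

```python
# Hypothetical sketch of a thread abstraction layer. Pegasus code would
# program against Thread and never against the platform API directly, so
# the underlying implementation (pthreads, Win32, etc.) can be swapped.

import threading

class Thread:
    def __init__(self, run, *args):
        # Delegate to the platform implementation; callers never see it.
        self._impl = threading.Thread(target=run, args=args)

    def start(self):
        self._impl.start()

    def join(self):
        self._impl.join()

results = []
t = Thread(lambda x: results.append(x * 2), 21)
t.start()
t.join()
```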
<H3>Proposal for Dispatching Requests to Providers Using Threads</H3>

This proposal addresses threading as it relates to handling requests within
the CIMOM. Threading issues relating to the HTTP client or server are not
discussed.

The current implementation of Pegasus handles requests from the client
synchronously. A request from a given client blocks the CIMOM from
processing additional requests for that client. Assuming an active client
connection, the CIMOM processes requests using the following general steps,
in order:
<OL>
<LI> Receive and decode the message.
<LI> Dispatch the request.
<LI> Process the request (repository or provider).
<LI> Encode the result and send the message.
</OL>

Given that an individual request may produce a large result, and that a
given request may be decomposed and dispatched to the repository and to
multiple providers, it is preferred that the CIMOM support multitasking to
expedite request responses and maintain client responsiveness. The CIMOM
should introduce threads to perform client requests.

Threading provides the greatest benefit during steps 2 and 3.

Threading at Step 2:
-Each request passed to the dispatcher is executed on an independent
thread. This can be thought of as the request thread, since it exists for
the lifetime of the request (NOTE: this idea holds for indication
subscriptions). Request threads allow a large number of requests to execute
simultaneously for each client.

The diagram below shows multiple requests executing simultaneously. The
horizontal axis represents time and the vertical axis represents thread
objects; the diagram depicts two or more threads operating at the same
time.
<PRE>

Request 1 --------->
Request 2 --------->
...

</PRE>

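The per-request threading above can be sketched as follows; this is a
hypothetical Python illustration of the idea (one thread per request, alive
for the request's lifetime), not Pegasus code.

```python
# Hypothetical sketch: one thread per request, living for the request's
# lifetime, so many requests can execute simultaneously.

import threading

responses = {}
lock = threading.Lock()

def handle_request(request_id):
    # Decoding, dispatching, and response generation would happen here.
    with lock:
        responses[request_id] = "done"

threads = [threading.Thread(target=handle_request, args=(i,))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```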
Threading at Step 3:
-Assuming the request requires processing by multiple entities, i.e., the
repository and one or more providers, each entity executes on an
independent thread. Each of these threads can be thought of as an operation
thread, since each exists for the lifetime of the operation for a given
entity. Operation threads allow multiple operations to execute
simultaneously for each request.

The diagram below depicts a single request performing multiple operations
against potentially different operation entities (repository or provider).


<PRE>
Request 1 --------->
    Operation A --------->
    Operation B --------->
...
Request 2 --------->
...

</PRE>

The above diagram assumes that the operation entities are reentrant. This
is necessary because a single request may result in multiple operation
threads against a single provider (e.g., a request for different class
instances managed by the same provider), or because multiple requests may
operate on the same operation entity (e.g., multiple requests for the same
class instance).


<PRE>
Request 1 --------->
    Operation A ---------> Entity I(class A)
    Operation B ---------> Entity I(class B)
...
Request 2 --------->
    Operation A ---------> Entity I(class A)
...

</PRE>


When a provider does not support reentrancy, all operation threads
resulting from any request must be serialized to prevent resource conflicts
within the provider. This could be accomplished using one operation queue
per non-reentrant provider, with a dedicated (as opposed to shared) thread
that allows each operation to complete before executing the next, in the
order the requests were received.

NOTE: Threads can be created as necessary or obtained from a pool. New
threads or threads from a dynamic pool enable the CIMOM to dispatch and
process a virtually unlimited number of requests simultaneously.
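The serialization scheme described above (one operation queue per
non-reentrant provider, drained in FIFO order by a dedicated thread) might be
sketched as follows; this is a hypothetical Python illustration, and the names
are assumptions, not Pegasus interfaces.

```python
# Hypothetical sketch: serialize operations for a non-reentrant provider.
# A dedicated worker thread drains the provider's queue in FIFO order, so
# the provider never executes two operations at once.

import queue
import threading

class SerializedProvider:
    def __init__(self, provider_func):
        self._func = provider_func
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def submit(self, operation, done):
        # Operations are queued; 'done' receives the result later.
        self._queue.put((operation, done))

    def wait(self):
        # Block until all queued operations have completed.
        self._queue.join()

    def _drain(self):
        while True:
            operation, done = self._queue.get()
            done(self._func(operation))  # one at a time, arrival order
            self._queue.task_done()

order = []
sp = SerializedProvider(lambda op: op.upper())
for op in ("format", "status"):
    sp.submit(op, order.append)
sp.wait()
```

A reentrant provider would skip the queue entirely and be invoked directly on
each operation thread.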

In order to take advantage of reentrant providers and support non-reentrant
providers, both queued and non-queued operation threading techniques are
required.

<PRE>
Reentrant provider     -> non-queued operation threads
Non-reentrant provider -> queued operation threads

</PRE>

The notion of multiple requests resulting in multiple operations executing
simultaneously (multithreaded) leads to the notion that providers should
respond asynchronously. That is, requests execute as they are invoked and
responses are made as they are generated. Regardless of the reentrancy of
the provider, it is useful for the provider to support interfaces that allow
asynchronous operations. Asynchronous operations require a technique for
generating partial (subset) responses during execution. That is, providers
require an object to aggregate (or propagate) intermediate results from
operations. The terms aggregator and sink describe objects designed to
handle partial responses. For general purposes, an object that processes
intermediate results is called a response handler, rather than an aggregator
or sink, which have associated usage implications. For CIM operations, it
is suggested that a single complete object (or partial object, depending on
the request parameters) represent the increment for reporting partial
responses. Specifically, providers pass completed objects to the response
handler as they are created. In this way, response handlers can process
partial responses according to the implementation and/or configuration.

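The response-handler idea can be sketched as a small interface that a provider
calls once per completed object; the consuming side then decides how to batch
or forward the partial results. The hypothetical Python sketch below uses
assumed names (ResponseHandler, deliver, complete), not actual Pegasus
interfaces.

```python
# Hypothetical sketch: a response handler receiving partial responses.
# The provider delivers each completed object as it is produced; the
# handler decides how intermediate results are processed.

class ResponseHandler:
    def __init__(self):
        self.objects = []
        self.done = False

    def deliver(self, obj):
        # Called once per completed object (partial response).
        self.objects.append(obj)

    def complete(self):
        # Called after the operation has produced its last object.
        self.done = True

def enum_provider(handler):
    # A provider pushing results incrementally instead of returning
    # one large result at the end of the operation.
    for i in range(3):
        handler.deliver("Instance%d" % i)
    handler.complete()

h = ResponseHandler()
enum_provider(h)
```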

NOTE: The cardinality between request threads, operation threads, and
response handlers is not specified. It can (and should) vary based on the
implementation/configuration.

The following diagram illustrates a reentrant provider passing responses
(individual completed objects) resulting from multiple operations over
time.


<PRE>
Request 1 --------->
    Operation A ---------> Entity I(class A)
    Operation A(Object 1) to responseHandler
    Operation A(Object 2) to responseHandler
    Operation B ---------> Entity I(class B)
    Operation B(Object 1) to responseHandler
    Operation A(Object 3) to responseHandler

</PRE>


In general, requests to the CIMOM result in the creation of a request
thread. The request thread determines the operation entities and creates
an operation thread corresponding to each entity. Reentrant providers
support operation threads implicitly, while non-reentrant providers require
the operations to be queued for serialization. Regardless of reentrancy
support, providers should, when practical, deliver responses asynchronously
using response handlers. Response handlers allow partial responses to be
processed as they are generated, according to the implementation and/or
configuration.

</body>

</html>
|