Notes on the Generic Block Layer Rewrite in Linux 2.5
=====================================================

Notes Written on Jan 15, 2002:
	Jens Axboe <jens.axboe@oracle.com>
	Suparna Bhattacharya <suparna@in.ibm.com>

Last Updated May 2, 2002
September 2003: Updated I/O Scheduler portions
	Nick Piggin <npiggin@kernel.dk>

Introduction:

These are some notes describing some aspects of the 2.5 block layer in the
context of the bio rewrite. The idea is to bring out some of the key
changes and a glimpse of the rationale behind those changes.

Please mail corrections & suggestions to suparna@in.ibm.com.

Credits:
---------

2.5 bio rewrite:
	Jens Axboe <jens.axboe@oracle.com>

Many aspects of the generic block layer redesign were driven by and evolved
over discussions, prior patches and the collective experience of several
people. See sections 8 and 9 for a list of some related references.

The following people helped with review comments and inputs for this
document:
	Christoph Hellwig <hch@infradead.org>
	Arjan van de Ven <arjanv@redhat.com>
	Randy Dunlap <rdunlap@xenotime.net>
	Andre Hedrick <andre@linux-ide.org>

The following people helped with fixes/contributions to the bio patches
while they were still work-in-progress:
	David S. Miller <davem@redhat.com>


Description of Contents:
------------------------

1. Scope for tuning of logic to various needs
  1.1 Tuning based on device or low level driver capabilities
	- Per-queue parameters
	- Highmem I/O support
	- I/O scheduler modularization
  1.2 Tuning based on high level requirements/capabilities
	1.2.1 I/O Barriers
	1.2.2 Request Priority/Latency
  1.3 Direct access/bypass to lower layers for diagnostics and special
      device operations
	1.3.1 Pre-built commands
2. New flexible and generic but minimalist i/o structure or descriptor
   (instead of using buffer heads at the i/o layer)
  2.1 Requirements/Goals addressed
  2.2 The bio struct in detail (multi-page io unit)
  2.3 Changes in the request structure
3. Using bios
  3.1 Setup/teardown (allocation, splitting)
  3.2 Generic bio helper routines
    3.2.1 Traversing segments and completion units in a request
    3.2.2 Setting up DMA scatterlists
    3.2.3 I/O completion
    3.2.4 Implications for drivers that do not interpret bios (don't handle
          multiple segments)
    3.2.5 Request command tagging
  3.3 I/O submission
4. The I/O scheduler
5. Scalability related changes
  5.1 Granular locking: Removal of io_request_lock
  5.2 Prepare for transition to 64 bit sector_t
6. Other Changes/Implications
  6.1 Partition re-mapping handled by the generic block layer
7. A few tips on migration of older drivers
8. A list of prior/related/impacted patches/ideas
9. Other References/Discussion Threads

---------------------------------------------------------------------------

Bio Notes
---------

Let us discuss the changes in the context of how some overall goals for the
block layer are addressed.

1. Scope for tuning the generic logic to satisfy various requirements

The block layer design supports adaptable abstractions to handle common
processing with the ability to tune the logic to an appropriate extent
depending on the nature of the device and the requirements of the caller.
One of the objectives of the rewrite was to increase the degree of tunability
and to enable higher level code to utilize underlying device/driver
capabilities to the maximum extent for better i/o performance. This is
especially important in light of ever-improving hardware capabilities
and application/middleware software designed to take advantage of these
capabilities.

1.1 Tuning based on low level device / driver capabilities

Sophisticated devices with large built-in caches, intelligent i/o scheduling
optimizations, high memory DMA support, etc. may find some of the
generic processing an overhead, while for less capable devices the
generic functionality is essential for performance or correctness reasons.
Knowledge of some of the capabilities or parameters of the device should be
used at the generic block layer to take the right decisions on
behalf of the driver.

How is this achieved?

Tuning at a per-queue level:

i. Per-queue limits/values exported to the generic layer by the driver

Various parameters that the generic i/o scheduler logic uses are set at
a per-queue level (e.g maximum request size, maximum number of segments in
a scatter-gather list, hardsect size).

Some parameters that were earlier available as global arrays indexed by
major/minor are now directly associated with the queue. Some of these may
move into the block device structure in the future. Some characteristics
have been incorporated into a queue flags field rather than separate fields
in themselves. There are blk_queue_xxx functions to set the parameters,
rather than update the fields directly.

Some new queue property settings:

blk_queue_bounce_limit(q, u64 dma_address)
	Enable I/O to highmem pages, dma_address being the
	limit. No highmem default.

blk_queue_max_sectors(q, max_sectors)
	Sets two variables that limit the size of the request.

	- The request queue's max_sectors, which is a soft size in
	units of 512 byte sectors, and could be dynamically varied
	by the core kernel.

	- The request queue's max_hw_sectors, which is a hard limit
	and reflects the maximum size request a driver can handle
	in units of 512 byte sectors.

	The default for both max_sectors and max_hw_sectors is
	255. The upper limit of max_sectors is 1024.

blk_queue_max_phys_segments(q, max_segments)
	Maximum physical segments you can handle in a request. 128
	default (driver limit). (See 3.2.2)

blk_queue_max_hw_segments(q, max_segments)
	Maximum dma segments the hardware can handle in a request. 128
	default (host adapter limit, after dma remapping).
	(See 3.2.2)

blk_queue_max_segment_size(q, max_seg_size)
	Maximum size of a clustered segment, 64kB default.

blk_queue_hardsect_size(q, hardsect_size)
	Lowest possible sector size that the hardware can operate
	on, 512 bytes default.

New queue flags:

	QUEUE_FLAG_CLUSTER (see 3.2.2)
	QUEUE_FLAG_QUEUED (see 3.2.4)
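
As a rough illustration, a driver might apply these settings during
initialization as sketched below. This is a minimal sketch using the
blk_queue_* calls listed above; the specific limit values and the notion
of an adapter that can DMA anywhere (BLK_BOUNCE_ANY) are assumptions for
the example, not recommendations:

	static void my_init_queue_limits(struct request_queue *q)
	{
		/* hypothetical adapter: can DMA to any address (no
		 * bouncing needed), accepts up to 128kB per request in
		 * at most 32 scatter-gather entries of 64kB each, and
		 * operates on 512 byte hardware sectors */
		blk_queue_bounce_limit(q, BLK_BOUNCE_ANY);
		blk_queue_max_sectors(q, 256);
		blk_queue_max_phys_segments(q, 32);
		blk_queue_max_hw_segments(q, 32);
		blk_queue_max_segment_size(q, 64 * 1024);
		blk_queue_hardsect_size(q, 512);
	}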

ii. High-mem i/o capabilities are now considered the default

The generic bounce buffer logic, present in 2.4, where the block layer would
by default copyin/out i/o requests on high-memory buffers to low-memory buffers
assuming that the driver wouldn't be able to handle it directly, has been
changed in 2.5. The bounce logic is now applied only for memory ranges
for which the device cannot handle i/o. A driver can specify this by
setting the queue bounce limit for the request queue for the device
(blk_queue_bounce_limit()). This avoids the inefficiencies of the copyin/out
where a device is capable of handling high memory i/o.

In order to enable high-memory i/o where the device is capable of supporting
it, the pci dma mapping routines and associated data structures have now been
modified to accomplish a direct page -> bus translation, without requiring
a virtual address mapping (unlike the earlier scheme of virtual address
-> bus translation). So this works uniformly for high-memory pages (which
do not have a corresponding kernel virtual address space mapping) and
low-memory pages.

Note: Please refer to Documentation/PCI/PCI-DMA-mapping.txt for a discussion
on PCI high mem DMA aspects and mapping of scatter gather lists, and support
for 64 bit PCI.

Special handling is required only for cases where i/o needs to happen on
pages at physical memory addresses beyond what the device can support. In these
cases, a bounce bio representing a buffer from the supported memory range
is used for performing the i/o with copyin/copyout as needed depending on
the type of the operation. For example, in case of a read operation, the
data read has to be copied to the original buffer on i/o completion, so a
callback routine is set up to do this, while for write, the data is copied
from the original buffer to the bounce buffer prior to issuing the
operation. Since an original buffer may be in a high memory area that's not
mapped in kernel virtual address space, a kmap operation may be required for
performing the copy, and special care may be needed in the completion path
as it may not be in irq context. Special care is also required (by way of
GFP flags) when allocating bounce buffers, to avoid certain highmem
deadlock possibilities.

It is also possible that a bounce buffer may be allocated from a high-memory
area that's not mapped in kernel virtual address space, but within the range
that the device can use directly; so the bounce page may need to be kmapped
during copy operations. [Note: This does not hold in the current
implementation, though]

There are some situations when pages from high memory may need to
be kmapped, even if bounce buffers are not necessary. For example a device
may need to abort DMA operations and revert to PIO for the transfer, in
which case a virtual mapping of the page is required. For SCSI it is also
done in some scenarios where the low level driver cannot be trusted to
handle a single sg entry correctly. The driver is expected to perform the
kmaps as needed on such occasions using the __bio_kmap_atomic and bio_kmap_irq
routines as appropriate. A driver could also use the blk_queue_bounce()
routine on its own to bounce highmem i/o to low memory for specific requests
if so desired.
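
For reference, the kmap/kunmap pairing for such a PIO fallback might look
like the minimal sketch below. It assumes the bio_kmap_irq/bio_kunmap_irq
interface mentioned above (which maps the current bvec of the bio);
my_pio_transfer is a hypothetical stand-in for the actual data transfer:

	unsigned long flags;
	char *buffer;

	buffer = bio_kmap_irq(bio, &flags);	/* map current segment */
	my_pio_transfer(dev, buffer);		/* hypothetical PIO copy */
	bio_kunmap_irq(buffer, &flags);		/* unmap, restore irq state */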

iii. The i/o scheduler algorithm itself can be replaced/set as appropriate

As in 2.4, it is possible to plug in a brand new i/o scheduler for a particular
queue or pick from (copy) existing generic schedulers and replace/override
certain portions of it. The 2.5 rewrite provides improved modularization
of the i/o scheduler. There are more pluggable callbacks, e.g for init,
add request, extract request, which makes it possible to abstract specific
i/o scheduling algorithm aspects and details outside of the generic loop.
It also makes it possible to completely hide the implementation details of
the i/o scheduler from block drivers.

I/O scheduler wrappers are to be used instead of accessing the queue directly.
See section 4, The I/O scheduler, for details.

1.2 Tuning based on high level requirements/capabilities

i. Application capabilities for raw i/o

This comes from some of the high-performance database/middleware
requirements where an application prefers to make its own i/o scheduling
decisions based on an understanding of the access patterns and i/o
characteristics.

ii. High performance filesystems or other higher level kernel code's
capabilities

Kernel components like filesystems could also take their own i/o scheduling
decisions for optimizing performance. Journalling filesystems may need
some control over i/o ordering.

What kind of support exists at the generic block layer for this?

The flags and rw fields in the bio structure can be used for some tuning
from above e.g indicating that an i/o is just a readahead request, or for
marking barrier requests (discussed next), or priority settings (currently
unused). As far as user applications are concerned they would need an
additional mechanism either via open flags or ioctls, or some other upper
level mechanism to communicate such settings to block.

1.2.1 I/O Barriers

There is a way to enforce strict ordering for i/os through barriers.
All requests before a barrier point must be serviced before the barrier
request, and any other requests arriving after the barrier will not be
serviced until after the barrier has completed. This is useful for higher
level control on write ordering, e.g flushing a log of committed updates
to disk before the corresponding updates themselves.

A flag in the bio structure, BIO_BARRIER, is used to identify a barrier i/o.
The generic i/o scheduler would make sure that it places the barrier request and
all other requests coming after it after all the previous requests in the
queue. Barriers may be implemented in different ways depending on the
driver. For more details regarding I/O barriers, please read barrier.txt
in this directory.

1.2.2 Request Priority/Latency

Todo/Under discussion:
Arjan's proposed request priority scheme allows higher levels some broad
control (high/med/low) over the priority of an i/o request vs other pending
requests in the queue. For example it allows reads for bringing in an
executable page on demand to be given a higher priority over pending write
requests which haven't aged too much on the queue. Potentially this priority
could even be exposed to applications in some manner, providing higher level
tunability. Time based aging avoids starvation of lower priority
requests. Some bits in the bi_rw flags field in the bio structure are
intended to be used for this priority information.

1.3 Direct Access to Low level Device/Driver Capabilities (Bypass mode)
    (e.g Diagnostics, Systems Management)

There are situations where high-level code needs to have direct access to
the low level device capabilities or requires the ability to issue commands
to the device bypassing some of the intermediate i/o layers.
These could, for example, be special control commands issued through ioctl
interfaces, or could be raw read/write commands that stress the drive's
capabilities for certain kinds of fitness tests. Having direct interfaces at
multiple levels without having to pass through upper layers makes
it possible to perform bottom up validation of the i/o path, layer by
layer, starting from the media.

The normal i/o submission interfaces, e.g submit_bio, could be bypassed
for specially crafted requests which such ioctl or diagnostics
interfaces would typically use, and the elevator add_request routine
can instead be used to directly insert such requests in the queue or preferably
the blk_do_rq routine can be used to place the request on the queue and
wait for completion. Alternatively, sometimes the caller might just
invoke a lower level driver specific interface with the request as a
parameter.

If the request is a means for passing on special information associated with
the command, then such information is associated with the request->special
field (rather than misuse the request->buffer field which is meant for the
request data buffer's virtual mapping).

For passing request data, the caller must build up a bio descriptor
representing the concerned memory buffer if the underlying driver interprets
bio segments or uses the block layer end*request* functions for i/o
completion. Alternatively one could directly use the request->buffer field to
specify the virtual address of the buffer, if the driver expects buffer
addresses passed in this way and ignores bio entries for the request type
involved. In the latter case, the driver would modify and manage the
request->buffer, request->sector and request->nr_sectors or
request->current_nr_sectors fields itself rather than using the block layer
end_request or end_that_request_first completion interfaces.
(See 2.3 or Documentation/block/request.txt for a brief explanation of
the request structure fields)

[TBD: end_that_request_last should be usable even in this case;
Perhaps an end_that_direct_request_first routine could be implemented to make
handling direct requests easier for such drivers; Also for drivers that
expect bios, a helper function could be provided for setting up a bio
corresponding to a data buffer]

<JENS: I don't understand the above, why is end_that_request_first() not
usable? Or _last for that matter. I must be missing something>
<SUP: What I meant here was that if the request doesn't have a bio, then
end_that_request_first doesn't modify nr_sectors or current_nr_sectors,
and hence can't be used for advancing request state settings on the
completion of partial transfers. The driver has to modify these fields
directly by hand.
This is because end_that_request_first only iterates over the bio list,
and always returns 0 if there are none associated with the request.
_last works OK in this case, and is not a problem, as I mentioned earlier
>

1.3.1 Pre-built Commands

A request can be created with a pre-built custom command to be sent directly
to the device. The cmd block in the request structure has room for filling
in the command bytes. (i.e rq->cmd is now 16 bytes in size, and meant for
command pre-building, and the type of the request is now indicated
through rq->flags instead of via rq->cmd)

The request structure flags can be set up to indicate the type of request
in such cases (REQ_PC: direct packet command passed to driver, REQ_BLOCK_PC:
packet command issued via blk_do_rq, REQ_SPECIAL: special request).

It can help to pre-build device commands for requests in advance.
Drivers can now specify a request prepare function (q->prep_rq_fn) that the
block layer would invoke to pre-build device commands for a given request,
or perform other preparatory processing for the request. This routine is
called by elv_next_request(), i.e. typically just before servicing a request.
(The prepare function would not be called for requests that have REQ_DONTPREP
enabled)
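
As a loose illustration, a prepare function might look like the sketch
below. The prep_rq_fn hook and REQ_DONTPREP come from the description
above; the command encoding (MY_CMD_READ/MY_CMD_WRITE) and the assumption
that the driver marks the request REQ_DONTPREP itself are hypothetical:

	static int my_prep_rq_fn(struct request_queue *q, struct request *rq)
	{
		/* encode a hypothetical device command into rq->cmd */
		memset(rq->cmd, 0, sizeof(rq->cmd));
		rq->cmd[0] = rq_data_dir(rq) == READ ? MY_CMD_READ : MY_CMD_WRITE;
		/* ... encode rq->sector and rq->nr_sectors here ... */

		rq->flags |= REQ_DONTPREP;	/* don't prepare this rq again */
		return BLKPREP_OK;
	}

	/* during initialization: */
	blk_queue_prep_rq(q, my_prep_rq_fn);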

Aside:
Pre-building could possibly even be done early, i.e before placing the
request on the queue, rather than construct the command on the fly in the
driver while servicing the request queue when it may affect latencies in
interrupt context or responsiveness in general. One way to add early
pre-building would be to do it whenever we fail to merge on a request.
Now REQ_NOMERGE is set in the request flags to skip this one in the future,
which means that it will not change before we feed it to the device. So
the pre-builder hook can be invoked there.

2. Flexible and generic but minimalist i/o structure/descriptor

2.1 Reason for a new structure and requirements addressed

Prior to 2.5, buffer heads were used as the unit of i/o at the generic block
layer, and the low level request structure was associated with a chain of
buffer heads for a contiguous i/o request. This led to certain inefficiencies
when it came to large i/o requests and readv/writev style operations, as it
forced such requests to be broken up into small chunks before being passed
on to the generic block layer, only to be merged by the i/o scheduler
when the underlying device was capable of handling the i/o in one shot.
Also, using the buffer head as an i/o structure for i/os that didn't originate
from the buffer cache unnecessarily added to the weight of the descriptors
which were generated for each such chunk.

The following were some of the goals and expectations considered in the
redesign of the block i/o data structure in 2.5.

i.   Should be appropriate as a descriptor for both raw and buffered i/o -
     avoid cache related fields which are irrelevant in the direct/page i/o
     path, or filesystem block size alignment restrictions which may not be
     relevant for raw i/o.
ii.  Ability to represent high-memory buffers (which do not have a virtual
     address mapping in kernel address space).
iii. Ability to represent large i/os w/o unnecessarily breaking them up (i.e
     greater than PAGE_SIZE chunks in one shot)
iv.  At the same time, ability to retain independent identity of i/os from
     different sources or i/o units requiring individual completion (e.g. for
     latency reasons)
v.   Ability to represent an i/o involving multiple physical memory segments
     (including non-page aligned page fragments, as specified via readv/writev)
     without unnecessarily breaking it up, if the underlying device is capable
     of handling it.
vi.  Preferably should be based on a memory descriptor structure that can be
     passed around different types of subsystems or layers, maybe even
     networking, without duplication or extra copies of data/descriptor fields
     themselves in the process
vii. Ability to handle the possibility of splits/merges as the structure passes
     through layered drivers (lvm, md, evms), with minimal overhead.

The solution was to define a new structure (bio) for the block layer,
instead of using the buffer head structure (bh) directly, the idea being
avoidance of some associated baggage and limitations. The bio structure
is uniformly used for all i/o at the block layer; it forms a part of the
bh structure for buffered i/o, and in the case of raw/direct i/o kiobufs are
mapped to bio structures.

2.2 The bio struct

The bio structure uses a vector representation pointing to an array of tuples
of <page, offset, len> to describe the i/o buffer, and has various other
fields describing i/o parameters and state that needs to be maintained for
performing the i/o.

Notice that this representation means that a bio has no virtual address
mapping at all (unlike buffer heads).

struct bio_vec {
       struct page     *bv_page;
       unsigned short  bv_len;
       unsigned short  bv_offset;
};

/*
 * main unit of I/O for the block layer and lower layers (ie drivers)
 */
struct bio {
       sector_t             bi_sector;
       struct bio           *bi_next;   /* request queue link */
       struct block_device  *bi_bdev;   /* target device */
       unsigned long        bi_flags;   /* status, command, etc */
       unsigned long        bi_rw;      /* low bits: r/w, high: priority */

       unsigned int    bi_vcnt;     /* how many bio_vec's */
       unsigned int    bi_idx;      /* current index into bio_vec array */

       unsigned int    bi_size;     /* total size in bytes */
       unsigned short  bi_phys_segments; /* segments after physaddr coalesce*/
       unsigned short  bi_hw_segments;   /* segments after DMA remapping */
       unsigned int    bi_max;      /* max bio_vecs we can hold
                                       used as index into pool */
       struct bio_vec  *bi_io_vec;  /* the actual vec list */
       bio_end_io_t    *bi_end_io;  /* bi_end_io (bio) */
       atomic_t        bi_cnt;      /* pin count: free when it hits zero */
       void            *bi_private;
       bio_destructor_t *bi_destructor; /* bi_destructor (bio) */
};

With this multipage bio design:

- Large i/os can be sent down in one go using a bio_vec list consisting
  of an array of <page, offset, len> fragments (similar to the way fragments
  are represented in the zero-copy network code)
- Splitting of an i/o request across multiple devices (as in the case of
  lvm or raid) is achieved by cloning the bio (where the clone points to
  the same bi_io_vec array, but with the index and size accordingly modified)
- A linked list of bios is used as before for unrelated merges (*) - this
  avoids reallocs and makes independent completions easier to handle.
- Code that traverses the req list can find all the segments of a bio
  by using rq_for_each_segment. This handles the fact that a request
  has multiple bios, each of which can have multiple segments.
- Drivers which can't process a large bio in one shot can use the bi_idx
  field to keep track of the next bio_vec entry to process.
  (e.g a 1MB bio_vec needs to be handled in max 128kB chunks for IDE)
  [TBD: Should preferably also have a bi_voffset and bi_vlen to avoid modifying
  bi_offset and len fields]

(*) unrelated merges -- a request ends up containing two or more bios that
didn't originate from the same place.

bi_end_io() i/o callback gets called on i/o completion of the entire bio.
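
As a minimal sketch of how these fields fit together, a single-page bio
might be built and submitted as below. This assumes a page, a target
block_device and sector are at hand, and that my_end_io is a completion
callback of the bio_end_io_t type above (names are hypothetical; real
code must also handle bio_alloc failure):

	struct bio *bio = bio_alloc(GFP_NOIO, 1);

	bio->bi_bdev = bdev;
	bio->bi_sector = sector;
	bio->bi_io_vec[0].bv_page = page;
	bio->bi_io_vec[0].bv_len = PAGE_SIZE;
	bio->bi_io_vec[0].bv_offset = 0;
	bio->bi_vcnt = 1;
	bio->bi_size = PAGE_SIZE;
	bio->bi_end_io = my_end_io;	/* called when the whole bio completes */
	bio->bi_private = my_cookie;	/* caller private data */

	submit_bio(READ, bio);		/* see section 3.3 */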

At a lower level, drivers build a scatter gather list from the merged bios.
The scatter gather list is in the form of an array of <page, offset, len>
entries with their corresponding dma address mappings filled in at the
appropriate time. As an optimization, contiguous physical pages can be
covered by a single entry where <page> refers to the first page and <len>
covers the range of pages (up to 16 contiguous pages could be covered this
way). There is a helper routine (blk_rq_map_sg) which drivers can use to build
the sg list.

Note: Right now the only user of bios with more than one page is ll_rw_kio,
which in turn means that only raw I/O uses it (direct i/o may not work
right now). The intent however is to enable clustering of pages etc to
become possible. The pagebuf abstraction layer from SGI also uses multi-page
bios, but that is currently not included in the stock development kernels.
The same is true of Andrew Morton's work-in-progress multipage bio writeout
and readahead patches.

2.3 Changes in the Request Structure

The request structure is the structure that gets passed down to low level
drivers. The block layer make_request function builds up a request structure,
places it on the queue and invokes the driver's request_fn. The driver makes
use of the block layer helper routine elv_next_request to pull the next request
off the queue. Control or diagnostic functions might bypass block and directly
invoke underlying driver entry points passing in a specially constructed
request structure.

Only some relevant fields (mainly those which changed or may be referred
to in some of the discussion here) are listed below, not necessarily in
the order in which they occur in the structure (see include/linux/blkdev.h).
Refer to Documentation/block/request.txt for details about all the request
structure fields and a quick reference about the layers which are
supposed to use or modify those fields.

struct request {
	struct list_head queuelist;  /* Not meant to be directly accessed by
					the driver.
					Used by q->elv_next_request_fn
					rq->queue is gone
					*/
	.
	.
	unsigned char cmd[16]; /* prebuilt command data block */
	unsigned long flags;   /* also includes earlier rq->cmd settings */
	.
	.
	sector_t sector; /* this field is now of type sector_t instead of int
			    preparation for 64 bit sectors */
	.
	.

	/* Number of scatter-gather DMA addr+len pairs after
	 * physical address coalescing is performed.
	 */
	unsigned short nr_phys_segments;

	/* Number of scatter-gather addr+len pairs after
	 * physical and DMA remapping hardware coalescing is performed.
	 * This is the number of scatter-gather entries the driver
	 * will actually have to deal with after DMA mapping is done.
	 */
	unsigned short nr_hw_segments;

	/* Various sector counts */
	unsigned long nr_sectors;  /* no. of sectors left: driver modifiable */
	unsigned long hard_nr_sectors;  /* block internal copy of above */
	unsigned int current_nr_sectors; /* no. of sectors left in the
					    current segment:driver modifiable */
	unsigned long hard_cur_sectors; /* block internal copy of the above */
	.
	.
	int tag;	/* command tag associated with request */
	void *special;  /* same as before */
	char *buffer;   /* valid only for low memory buffers up to
			   current_nr_sectors */
	.
	.
	struct bio *bio, *biotail;  /* bio list instead of bh */
	struct request_list *rl;
}

See the rq_flag_bits definitions for an explanation of the various flags
available. Some bits are used by the block layer or i/o scheduler.

The behaviour of the various sector counts is almost the same as before,
except that since we have multi-segment bios, current_nr_sectors refers
to the number of sectors in the current segment being processed, which could
be one of the many segments in the current bio (i.e i/o completion unit).
The nr_sectors value refers to the total number of sectors in the whole
request that remain to be transferred (no change). The purpose of the
hard_xxx values is for block to remember these counts every time it hands
over the request to the driver. These values are updated by block on
end_that_request_first, i.e. every time the driver completes a part of the
transfer and invokes block end*request helpers to mark this. The
driver should not modify these values. The block layer sets up the
nr_sectors and current_nr_sectors fields (based on the corresponding
hard_xxx values and the number of bytes transferred) and updates these on
every transfer that invokes end_that_request_first. It does the same for the
buffer, bio, bio->bi_idx fields too.

The buffer field is just a virtual address mapping of the current segment
of the i/o buffer in cases where the buffer resides in low-memory. For high
memory i/o, this field is not valid and must not be used by drivers.

Code that sets up its own request structures and passes them down to
a driver needs to be careful about interoperation with the block layer helper
functions which the driver uses. (Section 1.3)

3. Using bios

3.1 Setup/Teardown

There are routines for managing the allocation, reference counting, and
freeing of bios (bio_alloc, bio_get, bio_put).

This makes use of Ingo Molnar's mempool implementation, which enables
subsystems like bio to maintain their own reserve memory pools for guaranteed
deadlock-free allocations during extreme VM load. For example, the VM
subsystem makes use of the block layer to writeout dirty pages in order to be
able to free up memory space, a case which needs careful handling. The
allocation logic draws from the preallocated emergency reserve in situations
where it cannot allocate through normal means. If the pool is empty and it
can wait, then it would trigger action that would help free up memory or
replenish the pool (without deadlocking) and wait for availability in the pool.
If it is in IRQ context, and hence not in a position to do this, allocation
could fail if the pool is empty. In general mempool always first tries to
perform allocation without having to wait, even if it means digging into the
pool as long as it is not less than 50% full.

On a free, memory is released to the pool or directly freed depending on
the current availability in the pool. The mempool interface lets the
subsystem specify the routines to be used for normal alloc and free. In the
case of bio, these routines make use of the standard slab allocator.

The caller of bio_alloc is expected to take certain steps to avoid
deadlocks, e.g. avoid trying to allocate more memory from the pool while
already holding memory obtained from the pool.
[TBD: This is a potential issue, though a rare possibility
in the bounce bio allocation that happens in the current code, since
it ends up allocating a second bio from the same pool while
holding the original bio ]

Memory allocated from the pool should be released back within a limited
amount of time (in the case of bio, that would be after the i/o is completed).
This ensures that if part of the pool has been used up, some work (in this
case i/o) must already be in progress and memory would be available when it
is over. If allocating from multiple pools in the same code path, the order
or hierarchy of allocation needs to be consistent, just the way one deals
with multiple locks.

The bio_alloc routine also needs to allocate the bio_vec_list (bvec_alloc())
for a non-clone bio. There are 6 pools set up for different size biovecs,
so bio_alloc(gfp_mask, nr_iovecs) will allocate a vec_list of the
given size from these slabs.

The bi_destructor() routine takes into account the possibility of the bio
having originated from a different source (see later discussions on
n/w to block transfers and kvec_cb).

The bio_get() routine may be used to hold an extra reference on a bio prior
to i/o submission, if the bio fields are likely to be accessed after the
i/o is issued (since the bio may otherwise get freed in case i/o completion
happens in the meantime).

The bio_clone() routine may be used to duplicate a bio, where the clone
shares the bio_vec_list with the original bio (i.e. both point to the
same bio_vec_list). This would typically be used for splitting i/o requests
in lvm or md.
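
For instance, a stacking driver's make_request function might remap and
resubmit a bio roughly as sketched below. The bio_clone/generic_make_request
calls are the ones described in this document; the target device, sector
offset and the clone's completion handler (which would end the original
bio) are hypothetical:

	static int my_remap_request(struct request_queue *q, struct bio *bio)
	{
		struct bio *clone = bio_clone(bio, GFP_NOIO);

		/* the clone shares bi_io_vec with the original; only
		 * the target and starting sector are changed */
		clone->bi_bdev = my_target_bdev;	/* hypothetical */
		clone->bi_sector += my_sector_offset;	/* hypothetical */
		clone->bi_end_io = my_clone_end_io;	/* completes 'bio' */
		clone->bi_private = bio;

		generic_make_request(clone);
		return 0;
	}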

3.2 Generic bio helper Routines

3.2.1 Traversing segments and completion units in a request

The macro rq_for_each_segment() should be used for traversing the bios
in the request list (drivers should avoid directly trying to do it
themselves). Using these helpers should also make it easier to cope
with block changes in the future.

	struct req_iterator iter;
	struct bio_vec *bvec;

	rq_for_each_segment(bvec, rq, iter)
		/* bvec now points to the current segment */

I/O completion callbacks are per-bio rather than per-segment, so drivers
that traverse bio chains on completion need to keep that in mind. Drivers
which don't make a distinction between segments and completion units would
need to be reorganized to support multi-segment bios.
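
Putting the iterator to use, a PIO-style driver might walk every segment
of a request as sketched below. bvec_kmap_irq/bvec_kunmap_irq are assumed
here as the per-bvec mapping helpers (analogous to the bio_kmap_irq
routines mentioned in section 1.1), and my_pio_write is a hypothetical
hardware transfer routine:

	struct req_iterator iter;
	struct bio_vec *bvec;
	unsigned long flags;
	char *buf;

	rq_for_each_segment(bvec, rq, iter) {
		/* map the segment (works for highmem pages too),
		 * transfer it, then unmap */
		buf = bvec_kmap_irq(bvec, &flags);
		my_pio_write(dev, buf, bvec->bv_len);
		bvec_kunmap_irq(buf, &flags);
	}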

3.2.2 Setting up DMA scatterlists

The blk_rq_map_sg() helper routine would be used for setting up scatter
gather lists from a request, so a driver need not do it on its own.

	nr_segments = blk_rq_map_sg(q, rq, scatterlist);

The helper routine provides a level of abstraction which makes it easier
to modify the internals of request to scatterlist conversion down the line
without breaking drivers. The blk_rq_map_sg routine takes care of several
things like collapsing physically contiguous segments (if QUEUE_FLAG_CLUSTER
is set) and correct segment accounting to avoid exceeding the limits which
the i/o hardware can handle, based on various queue properties.

- Prevents a clustered segment from crossing a 4GB mem boundary
- Avoids building segments that would exceed the number of physical
  memory segments that the driver can handle (phys_segments) and the
  number that the underlying hardware can handle at once, accounting for
  DMA remapping (hw_segments) (i.e. IOMMU aware limits).

Routines which the low level driver can use to set up the segment limits:

blk_queue_max_hw_segments() : Sets an upper limit of the maximum number of
hw data segments in a request (i.e. the maximum number of address/length
pairs the host adapter can actually hand to the device at once)

blk_queue_max_phys_segments() : Sets an upper limit on the maximum number
of physical data segments in a request (i.e. the largest sized scatter list
a driver could handle)
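
Tying these together, a driver's request setup path might look like the
sketch below. The blk_rq_map_sg call is as documented above; the static
sg array size, the pci_map_sg usage and my_hw_queue_segment are
assumptions for the example:

	struct scatterlist sg[MY_MAX_SEGMENTS];	/* >= the queue's segment limit */
	int nseg, i, dir;

	nseg = blk_rq_map_sg(q, rq, sg);	/* build the sg list */

	/* obtain bus addresses for the segments (see the DMA-mapping
	 * documentation referred to in section 1.1) */
	dir = rq_data_dir(rq) == READ ? PCI_DMA_FROMDEVICE : PCI_DMA_TODEVICE;
	nseg = pci_map_sg(pdev, sg, nseg, dir);

	for (i = 0; i < nseg; i++)
		my_hw_queue_segment(dev, sg_dma_address(&sg[i]),
				    sg_dma_len(&sg[i]));	/* hypothetical */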

3.2.3 I/O completion

The existing generic block layer helper routines end_request,
end_that_request_first and end_that_request_last can be used for i/o
completion (and setting things up so the rest of the i/o or the next
request can be kicked off) as before. With the introduction of multi-page
bio support, end_that_request_first requires an additional argument indicating
the number of sectors completed.
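
A typical completion sequence might therefore look like the sketch below
(a minimal sketch: 'nsect' is however many sectors the driver just
transferred, and the dequeue step assumes the request is still on the
queue; tagged operation differs, see 3.2.5):

	/* in the driver's completion path, after transferring
	 * 'nsect' sectors of 'rq' successfully: */
	if (!end_that_request_first(rq, 1 /* uptodate */, nsect)) {
		/* all sectors transferred: the request is done */
		blkdev_dequeue_request(rq);
		end_that_request_last(rq);
	}
	/* otherwise more segments remain; block has already advanced
	 * rq->buffer, rq->current_nr_sectors, rq->bio etc (see 2.3) */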

3.2.4 Implications for drivers that do not interpret bios (don't handle
multiple segments)

Drivers that do not interpret bios, e.g those which do not handle multiple
segments and do not support i/o into high memory addresses (require bounce
buffers) and expect only virtually mapped buffers, can access the rq->buffer
field. As before the driver should use current_nr_sectors to determine the
size of remaining data in the current segment (that is the maximum it can
transfer in one go unless it interprets segments), and rely on the block layer
end_request, or end_that_request_first/last to take care of all accounting
and transparent mapping of the next bio segment when a segment boundary
is crossed on completion of a transfer. (The end*request* functions should
be used only if the request has come down from the block/bio path, not for
direct access requests which only specify rq->buffer without a valid rq->bio)

3.2.5 Generic request command tagging

3.2.5.1 Tag helpers

Block now offers some simple generic functionality to help support command
queueing (typically known as tagged command queueing), ie manage more than
one outstanding command on a queue at any given time.

	blk_queue_init_tags(struct request_queue *q, int depth)

	Initialize internal command tagging structures for a maximum
	depth of 'depth'.

	blk_queue_free_tags(struct request_queue *q)

	Teardown tag info associated with the queue. This will be done
	automatically by block if blk_queue_cleanup() is called on a queue
	that is using tagging.

The above are initialization and exit management; the main helpers during
normal operations are:

	blk_queue_start_tag(struct request_queue *q, struct request *rq)

	Start tagged operation for this request. A free tag number between
	0 and 'depth' is assigned to the request (rq->tag holds this number),
	and 'rq' is added to the internal tag management. If the maximum depth
	for this queue is already achieved (or if the tag wasn't started for
	some other reason), 1 is returned. Otherwise 0 is returned.

	blk_queue_end_tag(struct request_queue *q, struct request *rq)

	End tagged operation on this request. 'rq' is removed from the internal
	book keeping structures.

To minimize struct request and queue overhead, the tag helpers utilize some
of the same request members that are used for normal request queue management.
This means that a request cannot both be an active tag and be on the queue
list at the same time. blk_queue_start_tag() will remove the request, but
the driver must remember to call blk_queue_end_tag() before signalling
completion of the request to the block layer. This means ending tag
operations before calling end_that_request_last()! For an example of a user
of these helpers, see the IDE tagged command queueing support.

Certain hardware conditions may dictate a need to invalidate the block tag
queue. For instance, on IDE any tagged request error needs to clear both
the hardware and software block queue and enable the driver to sanely restart
all the outstanding requests. There's a third helper to do that:

	blk_queue_invalidate_tags(struct request_queue *q)

	Clear the internal block tag queue and re-add all the pending requests
	to the request queue. The driver will receive them again on the
	next request_fn run, just like it did the first time it encountered
	them.
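
Put together, tagged operation in a driver might follow the outline below
(a minimal sketch built from the helpers above; my_hw_issue and the
surrounding driver structure are hypothetical):

	/* in the request_fn, when starting a request: */
	if (blk_queue_start_tag(q, rq))
		return;			/* out of tags, try again later */
	/* rq->tag now holds the tag; the helper has removed rq
	 * from the queue list */
	my_hw_issue(dev, rq);		/* hand rq->tag to the hardware */

	/* in the completion path, when the hardware reports 'tag' done: */
	rq = blk_queue_tag_request(q, tag);	/* tag -> request (see below) */
	blk_queue_end_tag(q, rq);	/* must precede final completion */
	if (!end_that_request_first(rq, 1, rq->nr_sectors))
		end_that_request_last(rq);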

3.2.5.2 Tag info

Some block functions exist to query current tag status or to go from a
tag number to the associated request. These are, in no particular order:

	blk_queue_tagged(q)

	Returns 1 if the queue 'q' is using tagging, 0 if not.

	blk_queue_tag_request(q, tag)

	Returns a pointer to the request associated with tag 'tag'.

	blk_queue_tag_depth(q)

	Return current queue depth.

	blk_queue_tag_queue(q)

	Returns 1 if the queue can accept a new queued command, 0 if we are
	at the maximum depth already.

	blk_queue_rq_tagged(rq)

	Returns 1 if the request 'rq' is tagged.

3.2.5.3 Internal structure

Internally, block manages tags in the blk_queue_tag structure:

	struct blk_queue_tag {
		struct request **tag_index;	/* array of pointers to rq */
		unsigned long *tag_map;		/* bitmap of free tags */
		struct list_head busy_list;	/* fifo list of busy tags */
		int busy;			/* queue depth */
		int max_depth;			/* max queue depth */
	};

Most of the above is simple and straightforward, however busy_list may need
a bit of explaining. Normally we don't care too much about request ordering,
but in the event of any barrier requests in the tag queue we need to ensure
that requests are restarted in the order they were queued. This may happen
if the driver needs to use blk_queue_invalidate_tags().

Tagging also defines a new request flag, REQ_QUEUED. This is set whenever
a request is currently tagged. You should not use this flag directly,
blk_rq_tagged(rq) is the portable way to do so.

3.3 I/O Submission

The routine submit_bio() is used to submit a single io. Higher level i/o
routines make use of this:

(a) Buffered i/o:
The routine submit_bh() invokes submit_bio() on a bio corresponding to the
bh, allocating the bio if required. ll_rw_block() uses submit_bh() as before.

(b) Kiobuf i/o (for raw/direct i/o):
The ll_rw_kio() routine breaks up the kiobuf into page sized chunks and
maps the array to one or more multi-page bios, issuing submit_bio() to
perform the i/o on each of these.

The embedded bh array in the kiobuf structure has been removed and no
preallocation of bios is done for kiobufs. [The intent is to remove the
blocks array as well, but it's currently in there to kludge around direct i/o.]
Thus kiobuf allocation has switched back to using kmalloc rather than vmalloc.

Todo/Observation:

A single kiobuf structure is assumed to correspond to a contiguous range
of data, so brw_kiovec() invokes ll_rw_kio for each kiobuf in a kiovec.
So right now it wouldn't work for direct i/o on non-contiguous blocks.
This is to be resolved. The eventual direction is to replace kiobuf
by kvec's.

Badari Pulavarty has a patch to implement direct i/o correctly using
bio and kvec.

(c) Page i/o:
Todo/Under discussion:

Andrew Morton's multi-page bio patches attempt to issue multi-page
writeouts (and reads) from the page cache, by directly building up
large bios for submission completely bypassing the usage of buffer
heads. This work is still in progress.

Christoph Hellwig had some code that uses bios for page-io (rather than
bh). This isn't included in bio as yet. Christoph was also working on a
design for representing virtual/real extents as an entity and modifying
some of the address space ops interfaces to utilize this abstraction rather
than buffer_heads. (This is somewhat along the lines of the SGI XFS pagebuf
abstraction, but intended to be as lightweight as possible).

(d) Direct access i/o:
Direct access requests that do not contain bios would be submitted differently
as discussed earlier in section 1.3.

Aside:

Kvec i/o:

Ben LaHaise's aio code uses a slightly different structure instead
of kiobufs, called a kvec_cb. This contains an array of <page, offset, len>
tuples (very much like the networking code), together with a callback function
and data pointer. This is embedded into a brw_cb structure when passed
to brw_kvec_async().

Now it should be possible to directly map these kvecs to a bio. Just as with
cloning, in this case rather than pre-built bio_vecs, we set the bi_io_vec
array pointer to point to the veclet array in kvecs.

TBD: In order for this to work, some changes are needed in the way multi-page
bios are handled today. The values of the tuples in such a vector passed in
from higher level code should not be modified by the block layer in the course
of its request processing, since that would make it hard for the higher layer
to continue to use the vector descriptor (kvec) after i/o completes. Instead,
all such transient state should be maintained in the request structure,
and passed on in some way to the endio completion routine.

4. The I/O scheduler

The I/O scheduler, a.k.a. elevator, is implemented in two layers: the generic
dispatch queue and the specific I/O schedulers. Unless stated otherwise,
"elevator" is used to refer to both parts and "I/O scheduler" to the
specific I/O schedulers.

The block layer implements the generic dispatch queue in block/*.c.
The generic dispatch queue is responsible for properly ordering barrier
requests, requeueing, handling non-fs requests and all other subtleties.

Specific I/O schedulers are responsible for ordering normal filesystem
requests. They can also choose to delay certain requests to improve
throughput or for whatever other purpose. As the plural form indicates, there
are multiple I/O schedulers. They can be built as modules, but at least one
should be built into the kernel. Each queue can choose a different one and
can also change to another one dynamically.

A block layer call to the i/o scheduler follows the convention elv_xxx(). This
calls elevator_xxx_fn in the elevator switch (block/elevator.c). Oh, xxx
and xxx might not match exactly, but use your imagination. If an elevator
doesn't implement a function, the switch does nothing or some minimal house
keeping work.

4.1. I/O scheduler API

The functions an elevator may implement are: (* are mandatory)

elevator_merge_fn		called to query requests for merge with a bio

elevator_merge_req_fn		called when two requests get merged. The one
				which gets merged into the other one will
				never be seen by the I/O scheduler again. IOW,
				after being merged, the request is gone.

elevator_merged_fn		called when a request in the scheduler has been
				involved in a merge. It is used in the deadline
				scheduler, for example, to reposition the
				request if its sorting order has changed.

elevator_allow_merge_fn		called whenever the block layer determines
				that a bio can be merged into an existing
				request safely. The io scheduler may still
				want to stop a merge at this point if it
				results in some sort of conflict internally;
				this hook allows it to do that.

elevator_dispatch_fn*		fills the dispatch queue with ready requests.
				I/O schedulers are free to postpone requests by
				not filling the dispatch queue unless @force
				is non-zero. Once dispatched, I/O schedulers
				are not allowed to manipulate the requests -
				they belong to the generic dispatch queue.

elevator_add_req_fn*		called to add a new request into the scheduler

elevator_queue_empty_fn		returns true if the merge queue is empty.
				Drivers shouldn't use this, but rather check
				if elv_next_request is NULL (without losing the
				request if one exists!)

elevator_former_req_fn
elevator_latter_req_fn		These return the request before or after the
				one specified in disk sort order. Used by the
				block layer to find merge possibilities.

elevator_completed_req_fn	called when a request is completed.

elevator_may_queue_fn		returns true if the scheduler wants to allow the
				current context to queue a new request even if
				it is over the queue limit. This must be used
				very carefully!!

elevator_set_req_fn
elevator_put_req_fn		Must be used to allocate and free any elevator
				specific storage for a request.

elevator_activate_req_fn	Called when device driver first sees a request.
				I/O schedulers can use this callback to
				determine when actual execution of a request
				starts.
elevator_deactivate_req_fn	Called when device driver decides to delay
				a request by requeueing it.

elevator_init_fn*
elevator_exit_fn		Allocate and free any elevator specific storage
				for a queue.
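
A new scheduler plugs these hooks into the elevator switch through an
elevator_type descriptor. The rough shape below follows that convention;
the "foo" scheduler and its functions are hypothetical, and the exact
registration details have varied across kernel versions, so treat this
as a sketch rather than a template:

	static struct elevator_type iosched_foo = {
		.ops = {
			.elevator_merge_fn	 = foo_merge,
			.elevator_dispatch_fn	 = foo_dispatch,	/* mandatory */
			.elevator_add_req_fn	 = foo_add_request,	/* mandatory */
			.elevator_queue_empty_fn = foo_queue_empty,
			.elevator_init_fn	 = foo_init_queue,	/* mandatory */
			.elevator_exit_fn	 = foo_exit_queue,
		},
		.elevator_name	= "foo",
		.elevator_owner	= THIS_MODULE,
	};

	static int __init foo_iosched_init(void)
	{
		elv_register(&iosched_foo);
		return 0;
	}
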
4.2 Request flows seen by I/O schedulers

All requests seen by I/O schedulers strictly follow one of the following three
flows.

	set_req_fn ->

	i.   add_req_fn -> (merged_fn ->)* -> dispatch_fn -> activate_req_fn ->
	     (deactivate_req_fn -> activate_req_fn ->)* -> completed_req_fn
	ii.  add_req_fn -> (merged_fn ->)* -> merge_req_fn
	iii. [none]

	-> put_req_fn

4.3 I/O scheduler implementation

The generic i/o scheduler algorithm attempts to sort/merge/batch requests for
optimal disk scan and request servicing performance (based on generic
principles and device capabilities), optimized for:
i.   improved throughput
ii.  improved latency
iii. better utilization of h/w & CPU time

Characteristics:

i. Binary tree
AS and deadline i/o schedulers use red black binary trees for disk position
sorting and searching, and a fifo linked list for time-based searching. This
gives good scalability and good availability of information. Requests are
almost always dispatched in disk sort order, so a cache is kept of the next
request in sort order to prevent binary tree lookups.

This arrangement is not a generic block layer characteristic however, so
elevators may implement queues as they please.

ii. Merge hash
AS and deadline use a hash table indexed by the last sector of a request. This
enables merging code to quickly look up "back merge" candidates, even when
multiple I/O streams are being performed at once on one disk.

"Front merges", a new request being merged at the front of an existing request,
are far less common than "back merges" due to the nature of most I/O patterns.
Front merges are handled by the binary trees in AS and deadline schedulers.

iii. Plugging the queue to batch requests in anticipation of opportunities for
merge/sort optimizations

Plugging is an approach that the current i/o scheduling algorithm resorts to so
that it collects up enough requests in the queue to be able to take
advantage of the sorting/merging logic in the elevator. If the
queue is empty when a request comes in, then it plugs the request queue
(sort of like plugging the bath tub of a vessel to get fluid to build up)
till it fills up with a few more requests, before starting to service
the requests. This provides an opportunity to merge/sort the requests before
passing them down to the device. There are various conditions when the queue is
unplugged (to open up the flow again), either through a scheduled task or
on demand. For example wait_on_buffer sets the unplugging going
through sync_buffer() running blk_run_address_space(mapping). Or the caller
can do it explicitly through blk_unplug(bdev). So in the read case,
the queue gets explicitly unplugged as part of waiting for completion on that
buffer. For page driven IO, the address space ->sync_page() takes care of
doing the blk_run_address_space().

Aside:
This is kind of controversial territory, as it's not clear if plugging is
always the right thing to do. Devices typically have their own queues,
and allowing a big queue to build up in software, while letting the device be
idle for a while may not always make sense. The trick is to handle the fine
balance between when to plug and when to open up. Also now that we have
multi-page bios being queued in one shot, we may not need to wait to merge
a big request from the broken up pieces coming by.

4.4 I/O contexts

I/O contexts provide a dynamically allocated per process data area. They may
be used in I/O schedulers, and in the block layer (could be used for IO
statistics and priorities, for example). See *io_context in block/ll_rw_blk.c,
and as-iosched.c for an example of usage in an i/o scheduler.


1075 | 5. Scalability related changes |
1076 | |
1077 | 5.1 Granular Locking: io_request_lock replaced by a per-queue lock |
1078 | |
1079 | The global io_request_lock has been removed as of 2.5, to avoid |
1080 | the scalability bottleneck it was causing, and has been replaced by more |
1081 | granular locking. The request queue structure has a pointer to the |
1082 | lock to be used for that queue. As a result, locking can now be |
1083 | per-queue, with a provision for sharing a lock across queues if |
1084 | necessary (e.g the scsi layer sets the queue lock pointers to the |
1085 | corresponding adapter lock, which results in a per host locking |
1086 | granularity). The locking semantics are the same, i.e. locking is |
1087 | still imposed by the block layer, grabbing the lock before |
1088 | request_fn execution which it means that lots of older drivers |
1089 | should still be SMP safe. Drivers are free to drop the queue |
1090 | lock themselves, if required. Drivers that explicitly used the |
1091 | io_request_lock for serialization need to be modified accordingly. |
1092 | Usually it's as easy as adding a global lock: |
1093 | |
1094 | static DEFINE_SPINLOCK(my_driver_lock); |
1095 | |
1096 | and passing the address to that lock to blk_init_queue(). |
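
Putting those two steps together, a driver's init path might look like
the sketch below; my_request_fn is a hypothetical request function used
purely for illustration:

	static struct request_queue *my_queue;

	static int __init my_driver_init(void)
	{
		/* the queue will grab my_driver_lock around my_request_fn */
		my_queue = blk_init_queue(my_request_fn, &my_driver_lock);
		if (!my_queue)
			return -ENOMEM;
		return 0;
	}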

5.2 64 bit sector numbers (sector_t prepares for 64 bit support)

The sector number used in the bio structure has been changed to sector_t,
which could be defined as 64 bit in preparation for 64 bit sector support.
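
For illustration, the definition is conditional on a config option along
these lines (a sketch of how the 2.6-era headers arrange it, where
CONFIG_LBD selects large block device support on 32-bit platforms):

	#ifdef CONFIG_LBD
	typedef u64 sector_t;		/* 64-bit sector numbers */
	#else
	typedef unsigned long sector_t;
	#endif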

6. Other Changes/Implications

6.1 Partition re-mapping handled by the generic block layer

In 2.5 some of the gendisk/partition related code has been reorganized.
Now the generic block layer performs partition-remapping early and thus
provides drivers with a sector number relative to the whole device, rather
than having to take the partition number into account in order to arrive
at the true sector number. The routine blk_partition_remap() is invoked by
generic_make_request even before invoking the queue specific make_request_fn,
so the i/o scheduler also gets to operate on whole disk sector numbers. This
should typically not require changes to block drivers; a driver never needs
to invoke its own partition sector offset calculations, since all bios
sent down are offset from the beginning of the device.
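
The remapping itself amounts to shifting the bio by the partition's start
offset; a simplified sketch of the idea (not the verbatim kernel code):

	/*
	 * Simplified sketch: if the bio was submitted against a
	 * partition, shift its start sector by the partition offset
	 * and retarget it at the whole-disk block device.
	 */
	static void partition_remap_sketch(struct bio *bio)
	{
		struct block_device *bdev = bio->bi_bdev;

		if (bdev != bdev->bd_contains) {
			bio->bi_sector += bdev->bd_part->start_sect;
			bio->bi_bdev = bdev->bd_contains;
		}
	}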


7. A Few Tips on Migration of older drivers

Old-style drivers that just use CURRENT and ignore clustered requests
may not need much change. The generic layer will automatically handle
clustered requests, multi-page bios, etc. for the driver.

For a low-performance driver, or hardware that is PIO driven or just
doesn't support scatter-gather, changes should be minimal too.

The following are some points to keep in mind when converting old drivers
to bio.

Drivers should use elv_next_request to pick up requests and are no longer
supposed to handle looping directly over the request list.
(struct request->queue has been removed)
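
For example, a request function built around elv_next_request might look
like this minimal sketch; my_handle_request is a hypothetical per-request
handler:

	static void my_request_fn(struct request_queue *q)
	{
		struct request *rq;

		/* pull requests from the elevator one at a time */
		while ((rq = elv_next_request(q)) != NULL)
			my_handle_request(rq);
	}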

Now end_that_request_first takes an additional number_of_sectors argument.
It used to always handle just the first buffer_head in a request; now
it will loop and handle as many sectors (at a bio-segment granularity)
as specified.
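
A typical completion path, using the 2.5-era helpers, then looks like this
sketch, finishing the whole request once end_that_request_first reports no
sectors remain (uptodate and nr_sectors come from the driver's transfer
result):

	/* complete nr_sectors of rq; uptodate is 1 on success, 0 on error */
	if (!end_that_request_first(rq, uptodate, nr_sectors)) {
		blkdev_dequeue_request(rq);
		end_that_request_last(rq);
	}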

Now bh->b_end_io is replaced by bio->bi_end_io, but most of the time the
right thing to use is bio_endio(bio, uptodate) instead.

If the driver is dropping the io_request_lock from its request_fn strategy,
then it just needs to replace that with q->queue_lock instead.

As described in Sec 1.1, drivers can set max sector size, max segment size,
etc. per queue now. Drivers that used to define their own merge functions
to handle things like this can now just use the blk_queue_* functions at
blk_init_queue time.
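
For instance, using the 2.6-era helper names (the limits below are
illustrative values, not recommendations):

	/* declare per-queue limits instead of a custom merge function */
	blk_queue_max_sectors(q, 128);		/* max sectors per request */
	blk_queue_max_phys_segments(q, 16);	/* max sg entries after merging */
	blk_queue_max_segment_size(q, 64 * 1024); /* max bytes per segment */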

Drivers no longer have to map a {partition, sector offset} into the
correct absolute location; this is done by the block layer, so
where a driver received a request like this before:

	rq->rq_dev = mk_kdev(3, 5);	/* /dev/hda5 */
	rq->sector = 0;			/* first sector on hda5 */

it will now see

	rq->rq_dev = mk_kdev(3, 0);	/* /dev/hda */
	rq->sector = 123128;		/* offset from start of disk */

As mentioned, there is no virtual mapping of a bio. For DMA, this is
not a problem as the driver probably never will need a virtual mapping.
Instead it needs a bus mapping (dma_map_page for a single segment or
dma_map_sg for scatter-gather) to be able to ship it to the device. For
PIO drivers (or drivers that need to revert to PIO transfer once in a
while, IDE for example), where the CPU is doing the actual data
transfer, a virtual mapping is needed. If the driver supports highmem I/O
(Sec 1.1, (ii)) it needs to use __bio_kmap_atomic or bio_kmap_irq to
temporarily map a bio into the virtual address space.
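
A hedged sketch of the PIO path using bio_kmap_irq, which maps the segment
the bio currently points at and saves the irq state in flags (the transfer
itself is elided):

	unsigned long flags;
	char *buf;

	/* map the bio's current segment; may disable local irqs if the
	 * page is in highmem and needs an atomic kmap */
	buf = bio_kmap_irq(bio, &flags);

	/* ... CPU-driven transfer to/from buf ... */

	bio_kunmap_irq(buf, &flags);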


8. Prior/Related/Impacted patches

8.1. Earlier kiobuf patches (sct/axboe/chait/hch/mkp)
- orig kiobuf & raw i/o patches (now in 2.4 tree)
- direct kiobuf based i/o to devices (no intermediate bh's)
- page i/o using kiobuf
- kiobuf splitting for lvm (mkp)
- elevator support for kiobuf request merging (axboe)
8.2. Zero-copy networking (Dave Miller)
8.3. SGI XFS - pagebuf patches - use of kiobufs
8.4. Multi-page pioent patch for bio (Christoph Hellwig)
8.5. Direct i/o implementation (Andrea Arcangeli) since 2.4.10-pre11
8.6. Async i/o implementation patch (Ben LaHaise)
8.7. EVMS layering design (IBM EVMS team)
8.8. Larger page cache size patch (Ben LaHaise) and
     Large page size (Daniel Phillips)
     => larger contiguous physical memory buffers
8.9. VM reservations patch (Ben LaHaise)
8.10. Write clustering patches ? (Marcelo/Quintela/Riel ?)
8.11. Block device in page cache patch (Andrea Arcangeli) - now in 2.4.10+
8.12. Multiple block-size transfers for faster raw i/o (Shailabh Nagar,
      Badari)
8.13 Priority based i/o scheduler - prepatches (Arjan van de Ven)
8.14 IDE Taskfile i/o patch (Andre Hedrick)
8.15 Multi-page writeout and readahead patches (Andrew Morton)
8.16 Direct i/o patches for 2.5 using kvec and bio (Badari Pulavarthy)

9. Other References:

9.1 The Splice I/O Model - Larry McVoy (and subsequent discussions on lkml,
and Linus' comments - Jan 2001)
9.2 Discussions about kiobuf and bh design on lkml between sct, linus, alan
et al - Feb-March 2001 (many of the initial thoughts that led to bio were
brought up in this discussion thread)
9.3 Discussions on mempool on lkml - Dec 2001.