  				Block IO Controller
  				===================
  Overview
  ========
  cgroup subsys "blkio" implements the block IO controller. Various kinds of
  IO control policies (such as proportional bandwidth and maximum bandwidth)
  are needed both at leaf nodes and at intermediate nodes in a storage
  hierarchy. The plan is to use the same cgroup-based management interface for
  the blkio controller and to switch IO policies in the background based on
  user options.
  
  Currently two IO control policies are implemented. The first is a
  proportional-weight, time-based division of disk policy. It is implemented
  in CFQ, so this policy takes effect only on leaf nodes and only when CFQ is
  being used. The second is a throttling policy, which can be used to specify
  upper IO rate limits on devices. This policy is implemented in the generic
  block layer and can be used on leaf nodes as well as on higher-level logical
  devices like device mapper.
  
  HOWTO
  =====
  Proportional Weight division of bandwidth
  -----------------------------------------
  You can do a very simple test by running two dd threads in two different
  cgroups. Here is what you can do.
  
  - Enable Block IO controller
  	CONFIG_BLK_CGROUP=y
  
  - Enable group scheduling in CFQ
  	CONFIG_CFQ_GROUP_IOSCHED=y
  
  - Compile and boot the kernel, then mount the IO controller (blkio); see
    cgroups.txt, "Why are cgroups needed?".
  
  	mount -t tmpfs cgroup_root /sys/fs/cgroup
  	mkdir /sys/fs/cgroup/blkio
  	mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
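
  - Optionally, verify that the blkio controller is registered and mounted.
    This is a quick sanity check (output varies by system):

  	grep blkio /proc/cgroups
  	grep blkio /proc/mounts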
  
  - Create two cgroups
  	mkdir -p /sys/fs/cgroup/blkio/test1/ /sys/fs/cgroup/blkio/test2
  
  - Set weights of group test1 and test2
  	echo 1000 > /sys/fs/cgroup/blkio/test1/blkio.weight
  	echo 500 > /sys/fs/cgroup/blkio/test2/blkio.weight
  
  - Create two files of the same size (say 512MB each) on the same disk
    (file1, file2) and launch two dd threads in different cgroups to read
    those files.
  
  	sync
  	echo 3 > /proc/sys/vm/drop_caches
  
  	dd if=/mnt/sdb/zerofile1 of=/dev/null &
  	echo $! > /sys/fs/cgroup/blkio/test1/tasks
  	cat /sys/fs/cgroup/blkio/test1/tasks
  
  	dd if=/mnt/sdb/zerofile2 of=/dev/null &
  	echo $! > /sys/fs/cgroup/blkio/test2/tasks
  	cat /sys/fs/cgroup/blkio/test2/tasks
  
  - At the macro level, the first dd should finish first. To get more precise
    data, keep looking (with the help of a script, such as the sketch below)
    at the blkio.time and blkio.sectors files of both the test1 and test2
    groups. This will tell you how much disk time (in milliseconds) each group
    got and how many sectors each group dispatched to the disk. We provide
    fairness in terms of disk time, so ideally blkio.time of the cgroups
    should be in proportion to the weight.
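
    A loop like the following polls both files once per second (a sketch, not
    part of the kernel; the paths follow the example above):

  	while true; do
  		cat /sys/fs/cgroup/blkio/test1/blkio.time \
  		    /sys/fs/cgroup/blkio/test2/blkio.time
  		sleep 1
  	done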
  
  Throttling/Upper Limit policy
  -----------------------------
  - Enable Block IO controller
  	CONFIG_BLK_CGROUP=y
  
  - Enable throttling in block layer
  	CONFIG_BLK_DEV_THROTTLING=y
  
  - Mount blkio controller (see cgroups.txt, Why are cgroups needed?)
          mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
  
  - Specify a bandwidth rate on a particular device for the root group. The
    format for the policy is "<major>:<minor>  <bytes_per_second>".
  
          echo "8:16  1048576" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device
  
    The above will put a limit of 1MB/second on reads happening for the root
    group on the device having major/minor number 8:16.
  
  - Run dd to read a file and see if rate is throttled to 1MB/s or not.
  
  		# dd if=/mnt/common/zerofile of=/dev/null bs=4K count=1024 iflag=direct
          1024+0 records in
          1024+0 records out
          4194304 bytes (4.2 MB) copied, 4.0001 s, 1.0 MB/s
  
   Limits for writes can be set using the blkio.throttle.write_bps_device
   file, as in the example below.
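
   For example, to cap writes to the same device at 2MB/second (a sketch; the
   8:16 device number carries over from the read example above):

          echo "8:16  2097152" > /sys/fs/cgroup/blkio/blkio.throttle.write_bps_device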
  
  Hierarchical Cgroups
  ====================
  
  Both CFQ and throttling implement hierarchy support; however,
  throttling's hierarchy support is enabled if and only if "sane_behavior"
  is enabled from the cgroup side, which currently is a development option
  and is not publicly available.
  
  Suppose somebody created a hierarchy as follows.
  
  			root
  			/  \
  		     test1 test2
  			|
  		     test3
  
  CFQ by default and throttling with "sane_behavior" will handle the
  hierarchy correctly.  For details on CFQ hierarchy support, refer to
  Documentation/block/cfq-iosched.txt.  For throttling, all limits apply
  to the whole subtree while all statistics are local to the IOs
  directly generated by tasks in that cgroup.
  
  Throttling without "sane_behavior" enabled from the cgroup side will in
  practice treat all groups as if they were at the same level, as if the
  hierarchy looked like the following.
  
  				pivot
  			     /  /   \  \
  			root  test1 test2  test3
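
  On kernels of this generation, "sane_behavior" is requested at mount time
  via a development-only option. The following is an illustration only; the
  exact option name and its availability may differ by kernel version:

  	mount -t cgroup -o blkio,__DEVEL__sane_behavior none /sys/fs/cgroup/blkio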
  
  Various user visible config options
  ===================================
  CONFIG_BLK_CGROUP
  	- Block IO controller.
  
  CONFIG_DEBUG_BLK_CGROUP
  	- Debug help. Right now some additional stats files show up in the
  	  cgroup if this option is enabled.
  
  CONFIG_CFQ_GROUP_IOSCHED
  	- Enables group scheduling in CFQ. Currently only 1 level of group
  	  creation is allowed.
  
  CONFIG_BLK_DEV_THROTTLING
  	- Enable block device throttling support in block layer.
  
  Details of cgroup files
  =======================
  Proportional weight policy files
  --------------------------------
  - blkio.weight
  	- Specifies per cgroup weight. This is the default weight of the group
  	  on all the devices, unless overridden by a per-device rule.
  	  (See blkio.weight_device).
  	  Currently the allowed range of weights is from 10 to 1000.
  
  - blkio.weight_device
  	- One can specify per cgroup per device rules using this interface.
  	  These rules override the default value of group weight as specified
  	  by blkio.weight.
  
  	  Following is the format.
  
  	  # echo dev_maj:dev_minor weight > blkio.weight_device
  	  Configure weight=300 on /dev/sdb (8:16) in this cgroup
  	  # echo 8:16 300 > blkio.weight_device
  	  # cat blkio.weight_device
  	  dev     weight
  	  8:16    300
  
  	  Configure weight=500 on /dev/sda (8:0) in this cgroup
  	  # echo 8:0 500 > blkio.weight_device
  	  # cat blkio.weight_device
  	  dev     weight
  	  8:0     500
  	  8:16    300
  
  	  Remove specific weight for /dev/sda in this cgroup
  	  # echo 8:0 0 > blkio.weight_device
  	  # cat blkio.weight_device
  	  dev     weight
  	  8:16    300
  
  - blkio.leaf_weight[_device]
  	- Equivalents of blkio.weight[_device] for the purpose of
            deciding how much weight tasks in the given cgroup have while
            competing with the cgroup's child cgroups. For details,
            please refer to Documentation/block/cfq-iosched.txt.
  
  - blkio.time
  	- disk time allocated to cgroup per device in milliseconds. First
  	  two fields specify the major and minor number of the device and
  	  third field specifies the disk time allocated to group in
  	  milliseconds.
  
  - blkio.sectors
  	- number of sectors transferred to/from disk by the group. First
  	  two fields specify the major and minor number of the device and
  	  third field specifies the number of sectors transferred by the
  	  group to/from the device.
  
  - blkio.io_service_bytes
  	- Number of bytes transferred to/from the disk by the group. These
  	  are further divided by the type of operation - read or write, sync
  	  or async. First two fields specify the major and minor number of the
  	  device, third field specifies the operation type and the fourth field
  	  specifies the number of bytes.
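
  	  An illustrative read of this file (the numbers are made up; the 8:16
  	  device and the test1 cgroup follow the earlier examples):

  	  # cat /sys/fs/cgroup/blkio/test1/blkio.io_service_bytes
  	  8:16 Read 4194304
  	  8:16 Write 0
  	  8:16 Sync 4194304
  	  8:16 Async 0
  	  8:16 Total 4194304
  	  Total 4194304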
  
  - blkio.io_serviced
  	- Number of IOs completed to/from the disk by the group. These
  	  are further divided by the type of operation - read or write, sync
  	  or async. First two fields specify the major and minor number of the
  	  device, third field specifies the operation type and the fourth field
  	  specifies the number of IOs.
  
  - blkio.io_service_time
  	- Total amount of time between request dispatch and request completion
  	  for the IOs done by this cgroup. This is in nanoseconds to make it
  	  meaningful for flash devices too. For devices with queue depth of 1,
  	  this time represents the actual service time. When queue_depth > 1,
  	  that is no longer true as requests may be served out of order. This
  	  may cause the service time for a given IO to include the service time
  	  of multiple IOs when served out of order which may result in total
  	  io_service_time > actual time elapsed. This time is further divided by
  	  the type of operation - read or write, sync or async. First two fields
  	  specify the major and minor number of the device, third field
  	  specifies the operation type and the fourth field specifies the
  	  io_service_time in ns.
  
  - blkio.io_wait_time
  	- Total amount of time the IOs for this cgroup spent waiting in the
  	  scheduler queues for service. This can be greater than the total time
  	  elapsed since it is cumulative io_wait_time for all IOs. It is not a
  	  measure of total time the cgroup spent waiting but rather a measure of
  	  the wait_time for its individual IOs. For devices with queue_depth > 1
  	  this metric does not include the time spent waiting for service once
  	  the IO is dispatched to the device but till it actually gets serviced
  	  (there might be a time lag here due to re-ordering of requests by the
  	  device). This is in nanoseconds to make it meaningful for flash
  	  devices too. This time is further divided by the type of operation -
  	  read or write, sync or async. First two fields specify the major and
  	  minor number of the device, third field specifies the operation type
  	  and the fourth field specifies the io_wait_time in ns.
  
  - blkio.io_merged
  	- Total number of bios/requests merged into requests belonging to this
  	  cgroup. This is further divided by the type of operation - read or
  	  write, sync or async.
  
  - blkio.io_queued
  	- Total number of requests queued up at any given instant for this
  	  cgroup. This is further divided by the type of operation - read or
  	  write, sync or async.
  
  - blkio.avg_queue_size
  	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
  	  The average queue size for this cgroup over the entire time of this
  	  cgroup's existence. Queue size samples are taken each time one of the
  	  queues of this cgroup gets a timeslice.
  
  - blkio.group_wait_time
  	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
  	  This is the amount of time the cgroup had to wait since it became busy
  	  (i.e., went from 0 to 1 request queued) to get a timeslice for one of
  	  its queues. This is different from the io_wait_time which is the
  	  cumulative total of the amount of time spent by each IO in that cgroup
  	  waiting in the scheduler queue. This is in nanoseconds. If this is
  	  read when the cgroup is in a waiting (for timeslice) state, the stat
  	  will only report the group_wait_time accumulated till the last time it
  	  got a timeslice and will not include the current delta.
  
  - blkio.empty_time
  	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
  	  This is the amount of time a cgroup spends without any pending
  	  requests when not being served, i.e., it does not include any time
  	  spent idling for one of the queues of the cgroup. This is in
  	  nanoseconds. If this is read when the cgroup is in an empty state,
  	  the stat will only report the empty_time accumulated till the last
  	  time it had a pending request and will not include the current delta.
  
  - blkio.idle_time
  	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
  	  This is the amount of time spent by the IO scheduler idling for a
  	  given cgroup in anticipation of a better request than the existing ones
  	  from other queues/cgroups. This is in nanoseconds. If this is read
  	  when the cgroup is in an idling state, the stat will only report the
  	  idle_time accumulated till the last idle period and will not include
  	  the current delta.
  
  - blkio.dequeue
  	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y. This
  	  gives the statistics about how many times a group was dequeued
  	  from service tree of the device. First two fields specify the major
  	  and minor number of the device and third field specifies the number
  	  of times a group was dequeued from a particular device.
  
  - blkio.*_recursive
  	- Recursive version of various stats. These files show the
            same information as their non-recursive counterparts but
            include stats from all the descendant cgroups.
  
  Throttling/Upper limit policy files
  -----------------------------------
  - blkio.throttle.read_bps_device
  	- Specifies upper limit on READ rate from the device. IO rate is
  	  specified in bytes per second. Rules are per device. Following is
  	  the format.
  
    echo "<major>:<minor>  <rate_bytes_per_second>" > /cgrp/blkio.throttle.read_bps_device
  
  - blkio.throttle.write_bps_device
  	- Specifies upper limit on WRITE rate to the device. IO rate is
  	  specified in bytes per second. Rules are per device. Following is
  	  the format.
  
    echo "<major>:<minor>  <rate_bytes_per_second>" > /cgrp/blkio.throttle.write_bps_device
  
  - blkio.throttle.read_iops_device
  	- Specifies upper limit on READ rate from the device. IO rate is
  	  specified in IOs per second. Rules are per device. Following is
  	  the format.
  
    echo "<major>:<minor>  <rate_io_per_second>" > /cgrp/blkio.throttle.read_iops_device
  
  - blkio.throttle.write_iops_device
  	- Specifies upper limit on WRITE rate to the device. IO rate is
  	  specified in IOs per second. Rules are per device. Following is
  	  the format.
  
    echo "<major>:<minor>  <rate_io_per_second>" > /cgrp/blkio.throttle.write_iops_device
  
  Note: If both BW and IOPS rules are specified for a device, then IO is
        subjected to both the constraints.
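
        For example, both of the rules below can be installed at once; reads
        from the device are then limited to 1MB/second and 100 IOs/second at
        the same time (the 8:16 device number is an assumption):

        echo "8:16  1048576" > /cgrp/blkio.throttle.read_bps_device
        echo "8:16  100" > /cgrp/blkio.throttle.read_iops_device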
  
  - blkio.throttle.io_serviced
  	- Number of IOs (bio) completed to/from the disk by the group (as
  	  seen by throttling policy). These are further divided by the type
  	  of operation - read or write, sync or async. First two fields specify
  	  the major and minor number of the device, third field specifies the
  	  operation type and the fourth field specifies the number of IOs.
  
  	  blkio.io_serviced does accounting as seen by CFQ and counts are in
  	  number of requests (struct request). On the other hand,
  	  blkio.throttle.io_serviced counts the number of IOs in terms of the
  	  number of bios as seen by the throttling policy. These bios can later
  	  be merged by the elevator, so the total number of requests completed
  	  can be smaller.
  
  - blkio.throttle.io_service_bytes
  	- Number of bytes transferred to/from the disk by the group. These
  	  are further divided by the type of operation - read or write, sync
  	  or async. First two fields specify the major and minor number of the
  	  device, third field specifies the operation type and the fourth field
  	  specifies the number of bytes.
  
  	  These numbers should be roughly the same as blkio.io_service_bytes
  	  as updated by CFQ. The difference between the two is that
  	  blkio.io_service_bytes will not be updated if CFQ is not operating
  	  on the request queue.
  
  Common files among various policies
  -----------------------------------
  - blkio.reset_stats
  	- Writing an int to this file will result in resetting all the stats
  	  for that cgroup.
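
  	  For example (the test1 cgroup follows the earlier examples):

  	  # echo 1 > /sys/fs/cgroup/blkio/test1/blkio.reset_stats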
  
  CFQ sysfs tunable
  =================
  /sys/block/<disk>/queue/iosched/slice_idle
  ------------------------------------------
  On faster hardware CFQ can be slow, especially with sequential workloads.
  This happens because CFQ idles on a single queue, and a single queue might
  not drive deep enough request queue depths to keep the storage busy. In such
  scenarios one can try setting slice_idle=0; that switches CFQ to IOPS
  (IO operations per second) mode on NCQ-supporting hardware.
  
  That means CFQ will not idle between cfq queues of a cfq group and hence
  will be able to drive higher queue depths and achieve better throughput.
  That also means that cfq provides fairness among groups in terms of IOPS
  and not in terms of disk time.
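
  For example, to switch CFQ to IOPS mode on a given disk (the "sdb" name is
  an assumption; substitute your device):

  	echo 0 > /sys/block/sdb/queue/iosched/slice_idle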
  
  /sys/block/<disk>/queue/iosched/group_idle
  ------------------------------------------
  If one disables idling on individual cfq queues and cfq service trees by
  setting slice_idle=0, group_idle kicks in. That means CFQ will still idle
  on the group in an attempt to provide fairness among groups.
  
  By default group_idle is the same as slice_idle and does not do anything if
  slice_idle is enabled.
  
  You can experience an overall throughput drop if you have created multiple
  groups and put applications in those groups which are not driving enough
  IO to keep the disk busy. In that case set group_idle=0, and CFQ will not
  idle on individual groups and throughput should improve.
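
  For example, to also disable idling on groups (again assuming /dev/sdb):

  	echo 0 > /sys/block/sdb/queue/iosched/group_idle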
  
  What works
  ==========
  - Currently only sync IO queues are supported. All the buffered writes are
    still system wide and not per group. Hence we will not see service
    differentiation for buffered writes between groups.