By allowing async IO to consume only 3/4ths of the tag depth, we
always have slots free to serve sync IO. This is important to avoid
having writes fill the entire tag queue, thus starving reads.
Original patch and idea from Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
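As a rough illustration of that rule, here is a minimal, self-contained
sketch in plain C. It is not the actual blk-tag.c code; the tag_map
structure and start_tag() helper are made up for this example. Async
requests only search the first 3/4 of the tag space, so the top quarter
always stays available for sync IO.

	#include <stdbool.h>

	struct tag_map {
		bool used[64];		/* one slot per tag, toy fixed size */
		unsigned int max_depth;	/* number of tags in use, <= 64 */
	};

	/* Returns an allocated tag, or -1 if this request has to wait. */
	static int start_tag(struct tag_map *map, bool is_sync)
	{
		unsigned int depth = map->max_depth;
		unsigned int tag;

		/* async IO may only consume 3/4ths of the tag depth */
		if (!is_sync && depth > 1)
			depth = depth * 3 / 4;

		for (tag = 0; tag < depth; tag++) {
			if (!map->used[tag]) {
				map->used[tag] = true;
				return (int)tag;
			}
		}
		return -1;	/* no free tag within the allowed window */
	}

With 32 tags, for instance, an async write that finds tags 0-23 busy has
to wait even though tags 24-31 are free, while a sync read can still
claim one of those immediately.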
It's not used for anything. On top of that, it's racy and can thus
trigger a faulty BUG_ON() in __blk_free_tags() on queue exit.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
For most initialization purposes, calling blk_queue_init_tags() without
the queue lock held is OK. Only when it is called to resize an existing
map must the lock be held. Ditto for tag cleanup; the maps are reference
counted.
So switch the general queue flag setting to the unlocked variant, but
retain the locked variant for resizing.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
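A toy sketch of that rule, with hypothetical user-space names (a pthread
mutex stands in for q->queue_lock, and the functions are only loosely
modeled on the block layer's tag code): first-time initialization may set
the flag unlocked because nobody else can see the queue yet, while a
resize of a live map must run under the lock.

	#include <pthread.h>

	#define QUEUE_FLAG_QUEUED	(1UL << 0)

	struct queue {
		pthread_mutex_t lock;	/* stands in for q->queue_lock */
		unsigned long flags;
		unsigned int tag_depth;
	};

	/* Init path: the queue is not yet visible to anyone else, so the
	 * flag can be set without taking the lock. */
	static void queue_init_tags(struct queue *q, unsigned int depth)
	{
		q->tag_depth = depth;
		q->flags |= QUEUE_FLAG_QUEUED;
	}

	/* Resize path: the map is live and the IO path may be using it,
	 * so the caller must hold q->lock around this call. */
	static void queue_resize_tags(struct queue *q, unsigned int new_depth)
	{
		q->tag_depth = new_depth;
	}

A caller resizing a live map would wrap the call in the lock:

	pthread_mutex_lock(&q->lock);
	queue_resize_tags(q, 128);
	pthread_mutex_unlock(&q->lock);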
We can save some atomic ops in the IO path if we clearly define
the rules for how the queue flags may be modified.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
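The saving comes from the rule that queue flags are only changed either
under the queue lock or while the queue is not reachable by anyone else,
which lets plain bitops replace atomic ones. The sketch below makes the
same point in self-contained user-space C with hypothetical helpers and
C11 atomics; the kernel code itself uses set_bit()/__set_bit()-style
bitops rather than stdatomic.

	#include <stdatomic.h>

	/* Before: every flag change pays for an atomic read-modify-write,
	 * even when the caller already holds the queue lock. */
	static void queue_flag_set_atomic(unsigned long flag,
					  _Atomic unsigned long *flags)
	{
		atomic_fetch_or(flags, flag);
	}

	/* After: flags are only changed under the queue lock, or while the
	 * queue is not visible to anyone else, so a plain OR is enough. */
	static void queue_flag_set(unsigned long flag, unsigned long *flags)
	{
		*flags |= flag;
	}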
Every file should include the headers containing the externs for its
global functions (in this case for __blk_queue_free_tags()).
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
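A tiny illustration of the point, with made-up file and function names
rather than the actual block-layer files: when the .c file that defines a
global function includes the header that declares it, any mismatch
between prototype and definition becomes a compile error instead of a
silent link-time hazard.

	/* widget.h */
	#ifndef WIDGET_H
	#define WIDGET_H
	void __free_widget_tags(int count);
	#endif

	/* widget.c */
	#include "widget.h"	/* the extern for our own global function */

	void __free_widget_tags(int count)
	{
		/* a definition whose signature drifted from the header
		 * would now be a "conflicting types" compile error */
		(void)count;
	}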