Recently, a bug having to do with the alignment of transaction response
data was fixed. However, some apps such as libdc1394 relied on the
presence of that bug in order to function correctly. In order to stay
compatible with old versions of those apps, this patch preserves the bug
in cases where it is harmless to normal operation (such as the single
quadlet read), where the bug amounts to a simple duplication of data.
This guarantees maximum compatibility for users who run an old
application with the fixed kernel.
Signed-off-by: David Moore <dcm@acm.org>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Use only statically allocated data for PHY config packet transmission.
With the previous incarnation, some data wouldn't be freed if the packet
transmit callback was never called.
A theoretical drawback now is that, in PCs with more than one card,
card A may complete() for a waiter on card B. But this is highly
unlikely, and its impact is not serious. Bus manager B may reset bus B
before the PHY config packet went out, but the next PHY config on B should
be fine. In any case, with a timeout of 100ms, this situation is close to
impossible.
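In outline, the approach looks like the following sketch (illustrative
names and signatures, not the literal fw-transaction.c change): one
statically allocated packet plus one completion means nothing is left to
free if the transmit callback never runs, and the sender waits at most
about 100ms.

    static DECLARE_COMPLETION(phy_config_done);
    static struct fw_packet phy_config_packet;  /* static, never kfree()d */

    static void phy_config_callback(struct fw_packet *packet,
                                    struct fw_card *card, int status)
    {
            complete(&phy_config_done);
    }

    static void send_phy_config_sketch(struct fw_card *card)
    {
            /* header/payload setup of the static packet omitted */
            phy_config_packet.callback = phy_config_callback;
            card->driver->send_request(card, &phy_config_packet);

            /* bound the wait; a lost callback costs at most ~100 ms */
            wait_for_completion_timeout(&phy_config_done, HZ / 10);
    }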
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Isochronous reception in dualbuffer mode is reportedly broken with
TI TSB43AB22A on x86-64. Descriptor addresses above 2G have been
determined as the trigger:
https://bugzilla.redhat.com/show_bug.cgi?id=435550
Two fixes are possible:
- pci_set_consistent_dma_mask(pdev, DMA_31BIT_MASK);
at least when IR descriptors are allocated, or
- simply don't use dualbuffer.
This fix implements the latter workaround.
But we keep using dualbuffer on x86-32, which won't give us highmem (and
thus physical addresses outside the 31-bit range) in coherent DMA memory
allocations. Right now we could for example also whitelist PPC32, but
DMA mapping implementation details are expected to change there.
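Roughly, the chosen workaround comes down to a check like this (an
illustrative sketch, not the literal fw-ohci.c hunk):

    /* Use dual-buffer IR only where coherent DMA allocations are
     * guaranteed to stay below the 31-bit boundary that the
     * TSB43AB22A mishandles. */
    static bool use_dualbuffer(void)
    {
    #ifdef CONFIG_X86_32
            return true;    /* no highmem in coherent allocations here */
    #else
            return false;   /* e.g. x86-64: descriptors may land above 2G */
    #endif
    }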
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Jarod Wilson <jwilson@redhat.com>
On some platforms there are 4 padding bytes at the end of struct
fw_cdev_event_response, and the member __u32 data then points at these
padding bytes. When complete_transaction() in fw-cdev.c queues the
response and the data, it queues them like this:
|response(excluding padding bytes)|4 padding bytes|4 padding bytes|data
i.e. it queues 4 extra bytes: it uses "&response + sizeof(response)"
while other places in the kernel and the userspace library use
"&response + offsetof(typeof(response), data)". As a result the last 4
bytes of the data are lost. This patch fixes this without changing the
struct definition.
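For illustration, the mismatch can be reproduced with a userspace sketch
of the structure (field layout as described above; sizes shown are for an
LP64 platform such as x86-64):

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    /* userspace mock-up of the event header described above */
    struct response_sketch {
            uint64_t closure;
            uint32_t type;
            uint32_t rcode;
            uint32_t length;
            uint32_t data[0];       /* payload follows the header */
    };

    int main(void)
    {
            /* On x86-64: sizeof = 24 (4 trailing padding bytes) while
             * offsetof(..., data) = 20.  Queueing "header + sizeof"
             * therefore inserts 4 stray bytes where userspace expects
             * the payload to start at "header + offsetof". */
            printf("sizeof   = %zu\n", sizeof(struct response_sketch));
            printf("offsetof = %zu\n",
                   offsetof(struct response_sketch, data));
            return 0;
    }

On i386, for example, __u64 has 4-byte alignment, so there is no trailing
padding and the two expressions agree, which is why the problem only shows
up on some platforms.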
Signed-off-by: JiSheng Zhang <jszhang3@mail.ustc.edu.cn>
This fixes responses to outbound block read requests on 64bit architectures.
Tested on i686, x86-64, and x86-64 with i686 userland, using firecontrol and
gscanbus.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
After card->done and card->work are completed, any remaining pending
request would be a bug. We cannot safely complete a transaction at
that point anymore.
IOW, card users must not drop their last fw_card reference (usually an
indirect reference through an fw_device reference) before their last
outbound transaction through that card has finished.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
- better name for a function argument
- removal of a local variable which became unnecessary after
"fully initialize fw_transaction before marking it pending"
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
In theory, card->flush_timer could already access a transaction between
fw_send_request()'s spin_unlock_irqrestore and the rest of what happens
in fw_send_request(). This would happen if the process which sends the
request is preempted and put to sleep right after spin_unlock_irqrestore
for longer than 100ms.
Therefore we fill in everything in struct fw_transaction that the
flush_timer might look at before we lift the lock.
To do: Ensure that the timer does not pick up the transaction before
the time of the AT request event plus split transaction timeout.
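Schematically the ordering becomes (a sketch; member and helper names are
indicative, not necessarily the exact ones used in fw-transaction.c):

    static void queue_transaction_sketch(struct fw_card *card,
                                         struct fw_transaction *t,
                                         int node_id, int tlabel)
    {
            unsigned long flags;

            spin_lock_irqsave(&card->lock, flags);

            /* everything the flush_timer may look at must be valid ... */
            t->node_id = node_id;
            t->tlabel = tlabel;
            t->timestamp = jiffies; /* what the timer compares against */

            /* ... before the transaction becomes visible on the list */
            list_add_tail(&t->link, &card->transaction_list);

            spin_unlock_irqrestore(&card->lock, flags);
    }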
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Reported by Jay Fenlason: A bus reset tasklet may call
fw_flush_transactions and touch transactions (call their callback which
will free them) while the context which submitted the transaction is
still inserting it into the transmission queue.
A simple solution to this problem is to _not_ "flush" the transactions
on a bus reset (i.e. not to complete them as 'cancelled' right away).
They will now simply time out (completed as 'cancelled' by the
split-timeout timer).
Jay Fenlason thought of this fix too but I was quicker to type it out.
:-)
Background:
Contexts which access an instance of struct fw_transaction are:
1. the submitter, until it inserted the packet which is embedded in the
transaction into the AT req DMA,
2. the AsReqTrContext tasklet when the request packet was acked by the
responder node or transmission to the responder failed,
3. the AsRspRcvContext tasklet when it found a request which matched
an incoming response,
4. the card->flush_timer when it picks up timed-out transactions to
cancel them,
5. the bus reset tasklet when it cancels transactions (this access is
eliminated by this patch),
6. a process which shuts down an fw_card (unregisters it from fw-core
when the controller is unbound from fw-ohci) --- although in this
case there shouldn't really be any transactions anymore because we
wait until all card users have finished their business with the card.
All of these contexts run concurrently (except for the 6th, presumably).
The 1st is safe against the 2nd and 3rd because of the way a request
packet is carefully submitted to the hardware. A race between the 2nd and
3rd was fixed a while ago (bug 9617). The 4th is almost safe
against 1st, 2nd, 3rd; there are issues with it if huge scheduling
latencies occur, to be fixed separately. The 5th looks safe against
2nd, 3rd, and 4th but is unsafe against 1st. Maybe this could be fixed
with an explicit state variable in struct fw_transaction. But this
would require fw_transaction to be rewritten as an only dynamically
allocatable, reference-counted object --- not a good solution if we can
instead simply kill this 5th accessing context (replace it by the 4th).
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Contrary to a comment in the source, request->ack of a broadcast write
request can be ACK_PENDING. Hence the existing check is insufficient.
Debug dmesg before:
AR spd 0 tl 00, ffc0 -> ffff, ack_pending , QW req, fffff0000234 = ffffffff
AT spd 0 tl 00, ffff -> ffc0, ack_complete, W resp
And the requesting node (linux1394) reports an unsolicited response.
Debug dmesg after:
AR spd 0 tl 00, ffc0 -> ffff, ack_pending , QW req, fffff0000234 = ffffffff
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
This is a functionally equivalent replacement of the current reference
counting of struct fw_card instances; it merely converts the code to common
idioms as suggested by Kristian Høgsberg:
- struct kref replaces atomic_t as the counter.
- wait_for_completion is used to wait for all card users to complete.
BTW, it may make sense to count card->flush_timer and card->work as
card users too.
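The resulting idiom looks roughly like this (a sketch; helper names are
illustrative, while card->kref and card->done stand for the counter and
the completion mentioned above):

    static void release_card(struct kref *kref)
    {
            struct fw_card *card = container_of(kref, struct fw_card, kref);

            complete(&card->done);
    }

    static void card_put_sketch(struct fw_card *card)
    {
            kref_put(&card->kref, release_card);    /* per-user put */
    }

    static void card_shutdown_sketch(struct fw_card *card)
    {
            /* drop the initial reference, then wait until the last
             * user's kref_put() calls release_card() -> complete() */
            card_put_sketch(card);
            wait_for_completion(&card->done);
    }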
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Currently, core files do not contain the mmapped memory of the video1394
or dv1394 devices, which contain the actual video input, making it
impossible to analyse the cause of abnormal program termination for
image analysis or (de)compression software. Fix that.
Signed-off-by: Philippe De Muyter <phdm@macqel.be>
Also affects users of the rawiso ioctl API of raw1394.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Actually in this case wrap the function for now.
Signed-off-by: Alan Cox <alan@redhat.com>
Added raw1394_compat_ioctl hunk.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
This prepares video1394 for removal of the BKL (big kernel lock):
It allows video1394_open() to be called while video1394_init_module()
is still in progress.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
# cat devices.list
c 1:3 r
# echo 'c 1:3 w' > sub/devices.allow
# cat sub/devices.list
c 1:3 w
As illustrated, the parent group has no write permission to /dev/null, so
its child should not be allowed to add this write permission.
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Acked-by: Serge Hallyn <serue@us.ibm.com>
Cc: Serge Hallyn <serue@us.ibm.com>
Cc: Paul Menage <menage@google.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
# echo "b $((0x7fffffff)):$((0x80000000)) rwm" > devices.allow
# cat devices.list
b 2147483647:-2147483648 rwm
Though a major/minor number of 0x80000000 is meaningless, we should not
cast it to a negative value.
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Acked-by: Serge Hallyn <serue@us.ibm.com>
Cc: Serge Hallyn <serue@us.ibm.com>
Cc: Paul Menage <menage@google.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commit f18f982ab ("sched: CPU hotplug events must not destroy scheduler
domains created by the cpusets") introduced a hotplug-related problem as
described below:
Upon CPU_DOWN_PREPARE,
update_sched_domains() -> detach_destroy_domains(&cpu_online_map)
does the following:
/*
* Force a reinitialization of the sched domains hierarchy. The domains
* and groups cannot be updated in place without racing with the balancing
* code, so we temporarily attach all running cpus to the NULL domain
* which will prevent rebalancing while the sched domains are recalculated.
*/
The sched-domains should be rebuilt when a CPU_DOWN operation has
completed, effectively either upon CPU_DEAD{_FROZEN} (upon success) or
CPU_DOWN_FAILED{_FROZEN} (upon failure -- restore things to their
initial state). That's what update_sched_domains() also does, but only
for the !CPUSETS case.
With f18f982ab, sched-domains' reinitialization is delegated to
CPUSETS code:
cpuset_handle_cpuhp() -> common_cpu_mem_hotplug_unplug() ->
rebuild_sched_domains()
Being called for CPU_UP_PREPARE (and if its callback is called after
update_sched_domains()), it just negates all the work done by
update_sched_domains() -- i.e. a soon-to-be-offline cpu is included in
the sched-domains and that makes it visible for the load-balancer
while the CPU_DOWN ops. is in progress.
__migrate_live_tasks() moves the tasks off a 'dead' cpu (it's already
"offline" when this function is called).
try_to_wake_up() is called for one of these tasks from another CPU ->
the load-balancer (wake_idle()) picks up a "dead" CPU and places the
task on it. Then e.g. BUG_ON(rq->nr_running) detects this a bit later
-> oops.
Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
Tested-by: Vegard Nossum <vegard.nossum@gmail.com>
Cc: Paul Menage <menage@google.com>
Cc: Max Krasnyansky <maxk@qualcomm.com>
Cc: Paul Jackson <pj@sgi.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: miaox@cn.fujitsu.com
Cc: rostedt@goodmis.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-rc-fixes-2.6:
[SCSI] bsg: fix oops on remove
[SCSI] fusion: default MSI to disabled for SPI and FC controllers
[SCSI] ipr: Fix HDIO_GET_IDENTITY oops for SATA devices
[SCSI] mptspi: fix oops in mptspi_dv_renegotiate_work()
[SCSI] erase invalid data returned by device
The current definition of wksidarr works fine on little endian arches
(since cpu_to_le32 is a no-op there), but on big-endian arches, it fails
to compile with this error:
error: braced-group within expression allowed only inside a function
The problem is that this static declaration has cpu_to_le32 embedded
within it, which on big-endian expands into a braced-group construct that
is only valid inside a function. We need to use
__constant_cpu_to_le32() instead.
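For illustration, a reduced example (not the actual wksidarr table) of why
only the constant form works in a static initializer on big-endian:

    /* Would fail on big-endian: cpu_to_le32() may expand to a
     * ({ ... }) statement expression, which is only allowed inside
     * a function:
     *      static const __le32 bad[] = { cpu_to_le32(18) };
     */

    /* Builds everywhere: fully evaluated at compile time. */
    static const __le32 good[] = { __constant_cpu_to_le32(18) };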
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Cc: Steven French <sfrench@us.ibm.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Try this:
mount a share with unix extensions
create a file on it
umount the share
You'll get the following message in the ring buffer:
VFS: Busy inodes after unmount of cifs. Self-destruct in 5 seconds. Have a
nice day...
...the problem is that cifs_get_inode_info_unix is creating and hashing
a new inode even when it's going to return an error anyway. The first
lookup when creating a file returns an error so we end up leaking this
inode before we do the actual create. This appears to be a regression
caused by commit 0e4bbde94f.
The following patch seems to fix it for me, and fixes a minor
formatting nit as well.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Acked-by: Steven French <sfrench@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Fix FRV irqs_disabled() to return an int, not an unsigned long to avoid
this warning:
kernel/sched.c: In function '__might_sleep':
kernel/sched.c:8198: warning: format '%d' expects type 'int', but argument 3 has type 'long unsigned int'
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Philippe Elie <phil.el@wanadoo.fr>
Cc: John Levon <levon@movementarian.org>
Cc: Maynard Johnson <maynardj@us.ibm.com>
Cc: Richard Purdie <rpurdie@openedhand.com>
Cc: Daniel Hansel <daniel.hansel@linux.vnet.ibm.com>
Cc: Jason Yeh <jason.yeh@amd.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cortland Setlow pointed out a bug in ov7670.c where the result from
ov7670_read() was just being checked for !0, rather than <0. This made me
realize that ov7670_read's semantics were rather confusing; it both fills
in 'value' with the result, and returns it. This goes against general
kernel convention; so rather than fixing callers, let's fix the function.
This makes ov7670_read return <0 in the case of an error, and 0 upon
success. Thus, code like:
res = ov7670_read(...);
if (res)
goto error;
..will work properly.
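The new read helper then amounts to something like this (a sketch of the
convention; the real ov7670_read() may differ in detail):

    static int ov7670_read(struct i2c_client *c, unsigned char reg,
                           unsigned char *value)
    {
            int ret = i2c_smbus_read_byte_data(c, reg);

            if (ret < 0)
                    return ret;     /* error: propagate the negative code */
            *value = ret;           /* success: hand back the register value */
            return 0;
    }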
Signed-off-by: Cortland Setlow <csetlow@tower-research.com>
Signed-off-by: Andres Salomon <dilinger@debian.org>
Acked-by: Jonathan Corbet <corbet@lwn.net>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
I had 8250.nr_uarts=16 in the boot line of a test kernel and I had a weird
mysterious crash in sysfs. After taking an in-depth look I realized that
CONFIG_SERIAL_8250_NR_UARTS was set to 4 and I was walking off the end of
the serial8250_ports array.
Ouch!!!
Don't let this happen to someone else.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Acked-by: Alan Cox <alan@redhat.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This patch is a bugfix for how defio handles multiple processes manipulating
the same framebuffer.
Thanks to Bernard Blackham for identifying this bug.
It occurs when two applications mmap the same framebuffer and concurrently
write to the same page. Normally, this doesn't occur since only a single
process mmaps the framebuffer. The symptom of the bug is that the mapping
applications will hang. The cause is that defio incorrectly tries to add the
same page twice to the pagelist. The solution I have is to walk the pagelist
and check for a duplicate before adding. Since I needed to walk the pagelist,
I now also keep the pagelist in sorted order.
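The duplicate check and sorted insert can be sketched as follows (an
illustrative helper, not the exact fb_defio.c hunk):

    /* Keep the pagelist sorted by page->index and skip pages that
     * are already on it (e.g. queued by another mapping process). */
    static void defio_track_page(struct list_head *pagelist,
                                 struct page *page)
    {
            struct page *cur;

            list_for_each_entry(cur, pagelist, lru) {
                    if (cur->index == page->index)
                            return;         /* duplicate, nothing to do */
                    if (cur->index > page->index)
                            break;          /* insert before 'cur' */
            }
            /* if the loop ran off the end, &cur->lru is the list head,
             * so this appends at the tail; otherwise the page goes in
             * just before 'cur', keeping the list sorted */
            list_add_tail(&page->lru, &cur->lru);
    }

The linear walk is cheap here because deferred-IO page lists stay short.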
Signed-off-by: Jaya Kumar <jayakumar.lkml@gmail.com>
Cc: Bernard Blackham <bernard@largestprime.net>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Coverity CID: 1356 RESOURCE_LEAK
I found a very old patch for this that was Acked but did not get applied
https://lists.linux-foundation.org/pipermail/kernel-janitors/2006-September/016362.html
There looks to be a small leak in isdn_writebuf_stub() in isdn_common.c, when
copy_from_user() returns an un-copied data length (length != 0). The below
patch should be a minimally invasive fix.
Signed-off-by: Darren Jenkins <darrenrjenkins@gmail.com>
Acked-by: Karsten Keil <kkeil@suse.de>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Coverity CID: 2172 RESOURCE_LEAK
When pool_allocate() tries to enlarge a packet, if it can not allocate enough
memory, it returns NULL without first freeing the old packet.
This patch just frees the packet first.
Signed-off-by: Darren Jenkins <darrenrjenkins@gmail.com>
Acked-by: Jiri Kosina <jkosina@suse.cz>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If you do a modremove of any sas driver, you run into an oops on
shutdown when the host is removed (coming from the host bsg device).
The root cause seems to be that there's a use after free of the
bsg_class_device: In bsg_kref_release_function, this is used (to do a
put_device(bcg->parent) after bcg->release has been called. In sas (and
possibly many other things) bcd->release frees the queue which contains
the bsg_class_device, so we get a put_device on unreferenced memory.
Fix this by taking a copy of the pointer to the parent before releasing
bsg.
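The fix then amounts to something like this sketch (the real bsg.c
function may differ in detail):

    static void bsg_kref_release_function(struct kref *kref)
    {
            struct bsg_class_device *bcd =
                    container_of(kref, struct bsg_class_device, ref);
            struct device *parent = bcd->parent;    /* copy first */

            if (bcd->release)
                    bcd->release(bcd->parent);      /* may free *bcd */

            put_device(parent);     /* uses the copy, not freed memory */
    }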
Acked-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
There's a fault on the FC controllers that makes them not respond
correctly to MSI. The SPI controllers are fine, but are likely to be
onboard on older motherboards which don't handle MSI correctly, so
default both these cases to disabled. Enable by setting the module
parameter mpt_msi_enable=1.
For the SAS case, enable MSI by default, but it can be disabled by
setting the module parameter mpt_msi_enable=0.
Cc: "Prakash, Sathya" <sathya.prakash@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Fix size of LDT entries. On x86-64, ldt_desc is a double-sized descriptor.
Signed-off-by: Michael Karcher <kernel@mkarcher.dialup.fu-berlin.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
This patch corrects the handling of write operations to the IPMI watchdog
to work as intended by returning the number of characters actually
processed. Without this patch, an "echo V >/dev/watchdog" enables the
watchdog if IPMI is providing the watchdog function.
Signed-off-by: Mark Rustad <MRustad@gmail.com>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Currently, ipr does not support HDIO_GET_IDENTITY to SATA devices.
An oops occurs if userspace attempts to send the command. Since hald
issues the command, ensure we fail the ioctl in ipr. This is a
temporary solution to the oops. Once the ipr libata EH conversion
is upstream, ipr will fully support HDIO_GET_IDENTITY.
Tested-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
* 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/libata-dev:
libata-acpi: don't call sleeping function from invalid context
Added Targa Visionary 1000 IDE adapter to pata_sis.c
libata-acpi: filter out DIPM enable
When we release the iclog, we do an atomic_dec_and_lock to determine if
we are the last reference and need to trigger update of log headers and
writeout. However, in xlog_state_get_iclog_space() we also need to
check if we have the last reference count there. If we do, we release
the log buffer, otherwise we decrement the reference count.
But the compare and decrement in xlog_state_get_iclog_space() is not
atomic, so both places can see a reference count of 2 and neither will
release the iclog. That leads to a filesystem hang.
Close the race by replacing the atomic_read() and atomic_dec() pair with
atomic_add_unless() to ensure that they are executed atomically.
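The resulting pattern is roughly (a sketch; field and helper names follow
the description above):

    /* atomic_add_unless(v, -1, 1) decrements v unless it equals 1 and
     * returns non-zero only if it actually decremented.  Hence: */
    if (!atomic_add_unless(&iclog->ic_refcnt, -1, 1)) {
            /* the count was 1: we held the last reference */
            xlog_state_release_iclog(log, iclog);
    }
    /* otherwise our reference was dropped atomically and another
     * holder remains responsible for releasing the iclog */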
Signed-off-by: Dave Chinner <david@fromorbit.com>
Reviewed-by: Tim Shimmin <tes@sgi.com>
Tested-by: Eric Sandeen <sandeen@sandeen.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The problem was introduced by commit
664d080c41.
acpi_evaluate_integer() is a sleeping function and must not be called
while a spinlock is held (here taken with spin_lock_irqsave).
https://bugzilla.redhat.com/show_bug.cgi?id=451399
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Some BIOSen enable DIPM via _GTF, which causes command timeouts under
certain configurations. This didn't occur on 2.6.25 because 2.6.25
defaulted to SRST: _GTF wasn't executed during boot probe, so the ahci
host reset disabled DIPM, and since _GTF wasn't executed after SRST
either, DIPM stayed disabled. On 2.6.26, hardreset is used during probe
and _GTF is executed after the probe, enabling DIPM and thus causing the
failures.
This patch could theoretically disable DIPM on machines which used to
have it enabled on 2.6.25 but AFAIK ahci is currently the only driver
which uses SATA ACPI hierarchy (_SDD) and as the host reset would have
always disabled DIPM, this shouldn't happen.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
The IRQ rate reported back by the RTC is incorrect when HPET is enabled.
Newer hardware that uses HPET to emulate the legacy RTC device gets this value
wrong: after setting the rate, the code returns before updating the variable
used to report the IRQ rate back to users of the device -- so the rate that
was set and the rate that is reported get out of sync.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Brownell <david-b@pacbell.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>