Due to hardware errata, TSO must be disabled if the PCI Express clock
request (CLKREQ) is enabled on the 5906: the chip may hang when
transmitting TSO frames while CLKREQ is active.
Update version to 3.69.
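As a hedged sketch (the helper and flag names below are hypothetical,
not tg3's real identifiers), the workaround amounts to:

if (chip_is_5906(tp) && pcie_clkreq_enabled(tp))
        tp->tso_capable = 0;    /* never advertise TSO to the stack */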
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix up tcp_mem initial settings to take into account the size of the
hash entries (different on SMP and non-SMP systems).
Signed-off-by: John Heffner <jheffner@psc.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
nexthdr is NEXTHDR_FRAGMENT; the nexthdr value from the fragment header
itself is hp->nexthdr.
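For illustration, a hedged sketch of pulling the next-header value out
of the fragment header itself (the surrounding parsing loop is omitted):

struct frag_hdr _frag, *hp;

hp = skb_header_pointer(skb, ptr, sizeof(_frag), &_frag);
if (hp == NULL)
        return 0;
if (nexthdr == NEXTHDR_FRAGMENT)
        nexthdr = hp->nexthdr;  /* value carried in the fragment header */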
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Getsockopt numbers 66 and 67 on IPv6 sockets are doubly used, by the
IPv6 Advanced API and by ip6tables. This moves the ip6tables numbers to
68 and 69.
This also kills XT_SO_* because {ip,ip6,arp}_tables no longer share
many common numbers.
Old userland tools keep behaving as before, because old kernels always
call the IPv6 Advanced API functions for those numbers.
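A sketch of the renumbering described above (only the numbers come from
this description; treat the macro names as illustrative):

/* ip6tables getsockopt numbers, moved off the IPv6 Advanced API range */
#define IP6T_SO_GET_REVISION_MATCH      68      /* was 66 */
#define IP6T_SO_GET_REVISION_TARGET     69      /* was 67 */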
Signed-off-by: Yasuyuki Kozakai <yasuyuki.kozakai@toshiba.co.jp>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Based on patch by James D. Nurmi:
I've got some code very dependent on nfnetlink_queue, and turned up a
large number of warnings coming from skb_trim. While it's quite possibly
my code, having not seen it on older kernels made me a bit suspicious.
Anyhow, based on some googling I turned up this thread:
http://lkml.org/lkml/2006/8/13/56
And believe the issue to be related, so attached is a small patch to
the kernel -- not sure if this is completely correct, but for anyone
else hitting the WARN_ON(1) in skbuff.h, it might be helpful..
Signed-off-by: James D. Nurmi <jdnurmi@gmail.com>
Ported to ip6_queue and nfnetlink_queue and added return value
checks.
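The shape of the fix, as a hedged sketch (variable names loosely follow
the nfnetlink_queue mangle path):

if (diff < 0) {
        if (pskb_trim(e->skb, data_len))
                return -ENOMEM; /* check the result, don't skb_trim() blindly */
}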
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
NFULA_SEQ_GLOBAL should be in network byte order.
Spotted by Al Viro.
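A hedged sketch of the conversion (the attribute macro follows the
nfnetlink_log code of that era; treat the details as illustrative):

tmp_uint = htonl(atomic_inc_return(&global_seq));
NFA_PUT(inst->skb, NFULA_SEQ_GLOBAL, sizeof(tmp_uint), &tmp_uint);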
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Newer 5906 bootcode needs about 7ms to finish resetting, so the
firmware poll loop was extended to a maximum of 20ms.
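A hedged sketch of the longer poll (the iteration count and delay are
illustrative; together they only need to cover roughly 20ms):

for (i = 0; i < 200; i++) {
        tg3_read_mem(tp, NIC_SRAM_FIRMWARE_MBOX, &val);
        if (val == ~NIC_SRAM_FIRMWARE_MBOX_MAGIC1)
                break;
        udelay(100);            /* 200 * 100us = 20ms maximum */
}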
Signed-off-by: Gary Zambrano <zambrano@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Acked-by: Jeff Garzik <jeff@garzik.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
The windfarm code, in its current incarnation, uses request_module() to
load the various submodules it needs for a given platform so that only
the main platform control module needs to be modprobed. However, it was
missing various bits. This fixes it. In the future, we'll use some
hotplug mechanisms to try to get all of this auto-loaded on the
platforms where it matters, but that isn't ready yet.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
All the infrastructure is already in place for this, so we only need
to allocate a syscall number and hook it up.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This adds the /sys/devices/system/cpu/*/topology/thread_siblings
files on powerpc. These files are already available on other
architectures.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
When called to do a transfer that has a start offset within the cache
line which is uneven between source and destination and a length which
terminates the source of the copy exactly on a cache line, one extra
line gets copied into a temporary buffer. This is normally not an issue
since the buffer is a kernel buffer and only the requested information
gets copied into the user buffer.
The problem arises when the source ends at the very last physical page
of memory. That last cache line does not exist and results in the SHUB
chip raising an MCA.
Signed-off-by: Robin Holt <holt@sgi.com>
Signed-off-by: Dean Nelson <dcn@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Komuro reports that ISA interrupts do not work after a disable_irq(),
causing some PCMCIA drivers to not work, with messages like
eth0: Asix AX88190: io 0x300, irq 3, hw_addr xx:xx:xx:xx:xx:xx
eth0: found link beat
eth0: autonegotiation complete: 100baseT-FD selected
eth0: interrupt(s) dropped!
eth0: interrupt(s) dropped!
eth0: interrupt(s) dropped!
...
Linus Torvalds <torvalds@osdl.org> said:
"Now, edge-triggered interrupts are a _lot_ harder to mask, because the
Intel APIC is an unbelievable piece of sh*t, and has the edge-detect logic
_before_ the mask logic, so if a edge happens _while_ the device is
masked, you'll never ever see the edge ever again (unmasking will not
cause a new edge, so you simply lost the interrupt).
So when you "mask" an edge-triggered IRQ, you can't really mask it at all,
because if you did that, you'd lose it forever if the IRQ comes in while
you masked it. Instead, we're supposed to leave it active, and set a flag,
and IF the IRQ comes in, we just remember it, and mask it at that point
instead, and then on unmasking, we have to replay it by sending a
self-IPI."
This trivial patch solves the problem.
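A conceptual sketch of the mask-and-replay scheme the quote describes
(every type and name here is invented for illustration; this is not the
actual patch):

struct edge_irq {
        int disabled;                           /* soft "mask" flag     */
        int pending;                            /* edge seen while off  */
        void (*handler)(struct edge_irq *irq);
        void (*retrigger)(struct edge_irq *irq);/* e.g. send a self-IPI */
};

static void edge_irq_disable(struct edge_irq *irq)
{
        irq->disabled = 1;      /* do NOT mask in hardware */
}

static void edge_irq_arrives(struct edge_irq *irq)
{
        if (irq->disabled) {
                irq->pending = 1;       /* remember the edge */
                return;
        }
        irq->handler(irq);
}

static void edge_irq_enable(struct edge_irq *irq)
{
        irq->disabled = 0;
        if (irq->pending) {
                irq->pending = 0;
                irq->retrigger(irq);    /* replay the saved edge */
        }
}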
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Cc: Ingo Molnar <mingo@redhat.com>
Acked-by: Komuro <komurojun-mbn@nifty.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Disable MSI support in the HD-audio driver by default, since there are
too many broken devices.
The module option is changed from disable_msi to enable_msi as well.
To turn MSI support on, pass enable_msi=1 instead.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
port is dereferenced even if it is NULL. Dereference it _after_ the
check if (!port)... Thanks to Eric <ef87@yahoo.com> for reporting this.
This fixes
http://bugzilla.kernel.org/show_bug.cgi?id=7527
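The shape of the fix, as a hedged sketch (the structure and field names
are illustrative):

struct isi_port *port = tty->driver_data;
struct isi_board *card;

if (!port)
        return;
card = port->card;      /* dereference only after the NULL check */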
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* 'for-linus' of git://one.firstfloor.org/home/andi/git/linux-2.6:
[PATCH] x86-64: Fix race in exit_idle
[PATCH] x86-64: Fix vgetcpu when CONFIG_HOTPLUG_CPU is disabled
[PATCH] x86: Add acpi_user_timer_override option for Asus boards
[PATCH] x86-64: setup saved_max_pfn correctly (kdump)
[PATCH] x86-64: Handle reserve_bootmem_generic beyond end_pfn
[PATCH] x86-64: shorten the x86_64 boot setup GDT to what the comment says
[PATCH] x86-64: Fix PTRACE_[SG]ET_THREAD_AREA regression with ia32 emulation.
[PATCH] x86-64: Fix partial page check to ensure unusable memory is not being marked usable.
Revert "[PATCH] MMCONFIG and new Intel motherboards"
This reverts commit 0130b0b32e.
Sergey Vlasov points out (and Vadim Lobanov concurs) that the bug it was
supposed to fix must be some unrelated memory corruption, and the "fix"
actually causes more problems:
"However, the new code does not look safe in all cases. If some other
task has opened more files while dup_fd() released oldf->file_lock, the
new code will update open_files to the new larger value. But newf was
allocated with the old smaller value of open_files, therefore subsequent
accesses to newf may try to write into unallocated memory."
so revert it.
Cc: Sharyathi Nagesh <sharyath@in.ibm.com>
Cc: Sergey Vlasov <vsu@altlinux.ru>
Cc: Vadim Lobanov <vlobanov@speakeasy.net>
Cc: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Commit cb07c9a186 causes the wrong return
value. is_hugepage_only_range() is a boolean, so we should return
-EINVAL rather than 1.
Also - we can use "mm" instead of looking up "current->mm" again.
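A hedged sketch of the corrected check:

if (is_hugepage_only_range(mm, addr, len))
        return -EINVAL;         /* not the helper's boolean result */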
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
When building a monolithic kernel, the load order of drivers does not
work for SAS libata users, resulting in a kernel oops.
Convert libata to use subsys_initcall instead of module_init, which
ensures that libata gets loaded before any LLDD.
This is the same thing that scsi core does to solve the problem. The
load order problem was observed on ipr SAS adapters and should exist for
other SAS users as well.
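The change itself is small; as a sketch of what the description implies
(not the exact diff):

subsys_initcall(ata_init);      /* was: module_init(ata_init); */
module_exit(ata_exit);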
Signed-off-by: Brian King <brking@us.ibm.com>
Acked-by: Jeff Garzik <jgarzik@pobox.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Unlike mmap(), the codepath for brk() creates a vma without first checking
that it doesn't touch a region exclusively reserved for hugepages. On
powerpc, this can allow it to create a normal page vma in a hugepage
region, causing oopses and other badness.
Add a test to prevent this. With this patch, brk() will simply fail if it
attempts to move the break into a hugepage reserved region.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Cc: Adam Litke <agl@us.ibm.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
(David:)
If hugetlbfs_file_mmap() returns a failure to do_mmap_pgoff() - for example,
because the given file offset is not hugepage aligned - then do_mmap_pgoff
will go to the unmap_and_free_vma backout path.
But at this stage the vma hasn't been marked as hugepage, and the backout path
will call unmap_region() on it. That will eventually call down to the
non-hugepage version of unmap_page_range(). On ppc64, at least, that will
cause serious problems if there are any existing hugepage pagetable entries in
the vicinity - for example if there are any other hugepage mappings under the
same PUD. unmap_page_range() will trigger a bad_pud() on the hugepage pud
entries. I suspect this will also cause bad problems on ia64, though I don't
have a machine to test it on.
(Hugh:)
prepare_hugepage_range() should check file offset alignment when it checks
virtual address and length, to stop MAP_FIXED with a bad huge offset from
unmapping before it fails further down. PowerPC should apply the same
prepare_hugepage_range alignment checks as ia64 and all the others do.
Then none of the alignment checks in hugetlbfs_file_mmap are required (nor
is the check for too small a mapping); but even so, move up setting of
VM_HUGETLB and add a comment to warn of what David Gibson discovered - if
hugetlbfs_file_mmap fails before setting it, do_mmap_pgoff's unmap_region
when unwinding from error will go the non-huge way, which may cause bad
behaviour on architectures (powerpc and ia64) which segregate their huge
mappings into a separate region of the address space.
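A hedged sketch of the generic helper with the offset check added
alongside the address and length checks (powerpc layers its own region
checks on top of this):

static inline int prepare_hugepage_range(unsigned long addr,
                                         unsigned long len, pgoff_t pgoff)
{
        if (pgoff & (~HPAGE_MASK >> PAGE_SHIFT))
                return -EINVAL;
        if (len & ~HPAGE_MASK)
                return -EINVAL;
        if (addr & ~HPAGE_MASK)
                return -EINVAL;
        return 0;
}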
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Acked-by: Adam Litke <agl@us.ibm.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Looks like I still take care of the USB gadget/peripheral framework.
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Acked-by: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Fix binary/logical operator typo which leads to unreachable code. Noticed
while looking at other issues; I don't have the relevant hardware to test
this.
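Not the driver's actual code, but a standalone illustration of this
class of typo (the binary operator silently changes the test and leaves
a branch unreachable):

#include <stdio.h>

#define FLAG 0x04

int main(void)
{
        unsigned int status = FLAG;

        if (!status & FLAG)             /* parses as (!status) & FLAG */
                printf("never reached while status is nonzero\n");

        if (!(status & FLAG))           /* what was intended */
                printf("FLAG clear\n");
        else
                printf("FLAG set\n");

        return 0;
}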
Signed-off-by: Nathan Lynch <ntl@pobox.com>
Cc: "Antonino A. Daplas" <adaplas@pol.net>
Acked-by: James Simmons <jsimmons@infradead.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Resolve the panic on failed mount of an autofs filesystem originally
reported by Mao Bibo.
It addresses two issues that happen after the mount fails: first, a
NULL pointer dereference of a field (pipe) in the autofs superblock
info structure, and second, the lack of super block cleanup by the
autofs and autofs4 modules.
Signed-off-by: Ian Kent <raven@themaw.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Stray bracket in debug code.
Signed-off-by: Nicolas Kaiser <nikai@nikai.net>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Fix http://bugzilla.kernel.org/show_bug.cgi?id=7264
We need to target this quirk a little more tightly, using the T20 DMI string.
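A hedged sketch of scoping a quirk by DMI (the match string and table
name are illustrative):

static struct dmi_system_id __initdata t20_dmi_table[] = {
        {
                .ident = "IBM ThinkPad T20",
                .matches = {
                        DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T20"),
                },
        },
        { }
};

/* ...apply the quirk only when dmi_check_system(t20_dmi_table) is true */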
Cc: Pavel Kysilka <goldenfish@bsys.cz>
Acked-by: Kristen Carlson Accardi <kristen.c.accardi@intel.com>
Cc: Greg Kroah-Hartman <gregkh@suse.de>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Daniel Ritz <daniel.ritz@gmx.ch>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Fix interrupt routing for VIA 586 bridges. pirq can be 5, which needs
to be mapped to INTD, but currently the access functions can handle
only pirq 1-4. This is similar to the other VIA chipsets, where pirq 4
and 5 are both mapped to INTD. Fixes bugzilla #7490
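A hedged sketch of the table extension (the nibble offsets follow the
existing via586 routing code; treat the exact values as illustrative):

/* pirq 5 maps to the same nibble (INTD) as pirq 4 */
static const unsigned int pirqmap[5] = { 3, 2, 5, 1, 1 };

static int pirq_via586_get(struct pci_dev *router, struct pci_dev *dev, int pirq)
{
        return read_config_nybble(router, 0x55, pirqmap[pirq - 1]);
}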
Cc: Daniel Paschka <monkey20181@gmx.net>
Cc: Adrian Bunk <bunk@susta.de>
Signed-off-by: Daniel Ritz <daniel.ritz@gmx.ch>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
When we get a mismatch between handlers on the same IRQ, all we get is "IRQ
handler type mismatch for IRQ n". Let's print the name of the
presently-registered handler with which we got the mismatch.
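A hedged sketch of the extra diagnostic in the mismatch path (struct
irqaction carries the registered handler's name):

printk(KERN_ERR "IRQ handler type mismatch for IRQ %d\n", irq);
if (old)
        printk(KERN_ERR "current handler: %s\n", old->name);
dump_stack();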
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
When another interrupt happens in exit_idle, the exit idle notifier
could be called an incorrect number of times.
Add a test_and_clear_bit_pda() and use it to handle the bit atomically
against interrupts to avoid this.
Pointed out by Stephane Eranian
Signed-off-by: Andi Kleen <ak@suse.de>
The vgetcpu per-CPU initialization previously relied on CPU hotplug
events for all CPUs to initialize the per-CPU state. That worked only
on kernels with CONFIG_HOTPLUG_CPU enabled; on the others, some CPUs
didn't get their state initialized properly and vgetcpu wouldn't work.
Change the initialization sequence to instead run in a normal
initcall (which runs after the normal CPU bootup) and initialize
all running CPUs there. Later hotplug CPUs are still handled
with a hotplug notifier.
This actually simplifies the code somewhat.
Signed-off-by: Andi Kleen <ak@suse.de>
Timer overrides are normally disabled on Nvidia boards because
they are commonly wrong, except on new ones with HPET support.
Unfortunately there are quite a few Asus boards around that
don't have HPET, but need a timer override.
We don't know yet how to handle this transparently,
but at least add a command line option to force the timer override
and let them boot.
Cc: len.brown@intel.com
Signed-off-by: Andi Kleen <ak@suse.de>
x86_64: setup saved_max_pfn correctly
2.6.19-rc4 has broken CONFIG_CRASH_DUMP support on x86_64. It is impossible
to read out the kernel contents from /proc/vmcore because saved_max_pfn is set
to zero instead of the max_pfn value before the user map is setup.
This happens because saved_max_pfn is initialized at parse_early_param()
time, and at that time no active regions have been registered.
saved_max_pfn is set up from e820_end_of_ram(), more precisely
find_max_pfn_with_active_regions(), which returns 0 because no regions
exist.
This patch fixes this by registering the active regions before, and
removing them after, the call to e820_end_of_ram().
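A hedged sketch of the fix (the function names follow the x86-64 e820
code of that era; treat the placement as illustrative):

e820_register_active_regions(0, 0, -1UL);
saved_max_pfn = e820_end_of_ram();
remove_all_active_ranges();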
Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
Signed-off-by: Andi Kleen <ak@suse.de>
This can happen on kexec kernels with some configurations, in
particular on Unisys ES7000 systems.
Analysis by Amul Shah.
Cc: Amul Shah <amul.shah@unisys.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Stephen Tweedie, Herbert Xu, and I have been struggling with a very
nasty bug in Xen. But it also pointed out a small bug in the x86_64
kernel boot setup.
The GDT limit being setup by the initial bzImage code when entering into
protected mode is way too big. The comment by the code states that the
size of the GDT is 2048, but the actual size being set up is much bigger
(32768). This happens simply because of one extra '0'.
Instead of setting up a 0x800 size, 0x8000 is set up. On bare metal this
is fine because the CPU won't load any segments unless they are
explicitly used. But unfortunately, this breaks Xen on vmx FV, since it
(for now) blindly loads all the segments into the VMCS if they are less
than the gdt limit. Since the real mode segments are around 0x3000, we are
getting junk into the VMCS and that later causes an exception.
Stephen Tweedie has written up a patch to fix the Xen side and will be
submitting that to those folks. But that doesn't excuse the GDT limit
being an order of magnitude too big.
AK: changed to compute true gdt size in assembler, fixed comment
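A hedged C illustration of the self-sizing idiom the AK note refers to
(the real change lives in the boot setup assembler; the names here are
invented):

struct gdt_ptr {
        unsigned short limit;
        unsigned long  base;
} __attribute__((packed));

static unsigned long long boot_gdt[256];        /* 256 * 8 = 0x800 bytes */

static struct gdt_ptr gdt_descr = {
        .limit = sizeof(boot_gdt) - 1,          /* 0x7ff, not 0x7fff     */
        .base  = 0,                             /* filled in at run time */
};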
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Andi Kleen <ak@suse.de>
ptrace(PTRACE_[SG]ET_THREAD_AREA) calls from ia32 code
should be passed on to the x86_64 implementation.
The default case in sys32_ptrace used to call sys_ptrace(), but now
returns EINVAL. This patch fixes a regression caused by that change.
Signed-off-by: Mike McCormack <mike@codeweavers.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Fix the partial page check in e820_register_active_regions to ensure
that partial pages are not marked as active in the memory pool.
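A hedged sketch of the rounding involved (helper names loosely follow
the e820/active-range code; the real patch folds this into
e820_register_active_regions):

static void register_whole_pages(int nid, unsigned long addr, unsigned long size)
{
        unsigned long start_pfn = PFN_UP(addr);          /* round start up */
        unsigned long end_pfn   = PFN_DOWN(addr + size); /* round end down */

        if (start_pfn < end_pfn)
                add_active_range(nid, start_pfn, end_pfn);
}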
Signed-off-by: Aaron Durbin <adurbin@google.com>
Signed-off-by: Andi Kleen <ak@suse.de>
OP_MAX_COUNTER is never referenced and is a remnant of an earlier
oprofile implementation. Remove it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
A curious thing happens, however, when ata_qc_new_init fails to get
an ata_queued_cmd:
First, ata_qc_new_init handles the failure like this:
cmd->result = (DID_OK << 16) | (QUEUE_FULL << 1);
done(cmd);
Then, we return to ata_scsi_translate and do this:
err_mem:
cmd->result = (DID_ERROR << 16);
done(cmd);
It appears that we first set a status code indicating that we're OK
but the device queue is full, and finish the command; then we blow
away that status code, replace it with an error flag, and finish the
command a second time! That does not seem to be desirable behavior,
since we merely want the I/O to wait until a command slot frees up,
not send errors up the block layer.
In the err_mem case, we should simply exit out of ata_scsi_translate
instead.
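In other words, the backout label would end up looking roughly like
this (a hedged sketch, not the exact diff):

err_mem:
        /* ata_qc_new_init() already completed the command with
         * QUEUE_FULL; don't overwrite that with DID_ERROR here. */
        return 0;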
Signed-off-by: Darrick J. Wong <djwong@us.ibm.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
This helps for PATA, but SATA bridged devices lie and always set all
the bits, so they will need the error handling fixes from Tejun.
Signed-off-by: Alan Cox <alan@redhat.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/drzeus/mmc:
MMC: Do not set unsupported bits in OCR response
MMC: Poll card status after rescanning cards
* 'for-linus' of master.kernel.org:/pub/scm/linux/kernel/git/roland/infiniband:
IB/mad: Fix race between cancel and receive completion
RDMA/amso1100: Fix && typo
RDMA/amso1100: Fix uninitialized pseudo_netdev accessed in c2_register_device
IB/ehca: Activate scaling code by default
IB/ehca: Use named constant for max mtu
IB/ehca: Assure 4K alignment for firmware control blocks
We should only set ->errors to CHECK_CONDITION and so on for requests
that use this field in the SCSI manner.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Contrary to what the name misleads you to believe, SG_DXFER_TO_FROM_DEV
is really just a normal read seen from the device side.
This patch fixes http://lkml.org/lkml/2006/10/13/100
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
When ib_cancel_mad() is called, it puts the canceled send on a list
and schedules a "flushed" callback from process context. However,
this leaves a window where a receive completion could be processed
before the send is fully flushed.
This is fine, except that ib_find_send_mad() will find the MAD and
return it to the receive processing, which results in the sender
getting both a successful receive and a "flushed" send completion for
the same request. Understandably, this confuses the sender, which is
expecting only one of these two callbacks, and leads to grief such as
a use-after-free in IPoIB.
Fix this by changing ib_find_send_mad() to return a send struct only
if the status is still successful (and not "flushed"). The search of
the send_list already had this check, so this patch just adds the same
check to the search of the wait_list.
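A hedged sketch of the added condition (the field and list names are
illustrative, not the exact ib_mad structures):

list_for_each_entry(wr, &agent->wait_list, agent_list) {
        if (wr->tid == tid && wr->status == IB_WC_SUCCESS)
                return wr;      /* skip sends already marked "flushed" */
}
return NULL;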
Signed-off-by: Roland Dreier <rolandd@cisco.com>