
List:       linux-ia64
Subject:    Re: [PATCHv2] [IA64] Fix futex_atomic_cmpxchg_inatomic()
From:       Émeric Maschino <emeric.maschino@gmail.com>
Date:       2012-04-17 20:14:41
Message-ID: CAA9xbM4OZJLd6Zmzr06ew2p_MYU2aM1yCv0jOTu+h1AcSV1PVg@mail.gmail.com
[Download RAW message or body]

This patch also works for me.

Thanks,

     Emeric


On 17 April 2012 at 01:28, Luck, Tony <tony.luck@intel.com> wrote:
> Michel Lespinasse cleaned up the futex calling conventions in
> commit 37a9d912b24f96a0591773e6e6c3642991ae5a70
>    futex: Sanitize cmpxchg_futex_value_locked API
>
> But the ia64 implementation was subtly broken. Gcc does not know
> that register "r8" will be updated by the fault handler if the
> cmpxchg instruction takes an exception. So it feels safe in letting
> the initialization of r8 slide to after the cmpxchg. Result: we
> always return 0 whether the user address faulted or not.
>
> Fix by moving the initialization of r8 into the __asm__ code so
> gcc won't move it.
>
> Reported-by: <emeric.maschino@gmail.com>
> Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=42757
> Tested-by: <emeric.maschino@gmail.com>
> Acked-by: Michel Lespinasse <walken@google.com>
> Cc: stable@vger.kernel.org # v2.6.39+
> Signed-off-by: Tony Luck <tony.luck@intel.com>
> ---
> Make Linus happy by letting gcc know we touched r8 (which sounds like a
> good idea ... since gcc has already shown that it gets in a snit if it
> doesn't know what is going on inside the __asm__)
>
> diff --git a/arch/ia64/include/asm/futex.h b/arch/ia64/include/asm/futex.h
> index 8428525..21ab376 100644
> --- a/arch/ia64/include/asm/futex.h
> +++ b/arch/ia64/include/asm/futex.h
> @@ -107,15 +107,16 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
>                return -EFAULT;
>
>        {
> -               register unsigned long r8 __asm ("r8") = 0;
> +               register unsigned long r8 __asm ("r8");
>                unsigned long prev;
>                __asm__ __volatile__(
>                        "       mf;;                                    \n"
> -                       "       mov ar.ccv=%3;;                         \n"
> -                       "[1:]   cmpxchg4.acq %0=[%1],%2,ar.ccv          \n"
> +                       "       mov %0=r0                               \n"
> +                       "       mov ar.ccv=%4;;                         \n"
> +                       "[1:]   cmpxchg4.acq %1=[%2],%3,ar.ccv          \n"
>                        "       .xdata4 \"__ex_table\", 1b-., 2f-.      \n"
>                        "[2:]"
> -                       : "=r" (prev)
> +                       : "=r" (r8), "=r" (prev)
>                        : "r" (uaddr), "r" (newval),
>                          "rO" ((long) (unsigned) oldval)
>                        : "memory");
--
To unsubscribe from this list: send the line "unsubscribe linux-ia64" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html