[kernel-hardening] [PATCH v5 0/3] Implement fast refcount overflow protection
Kees Cook
2017-05-30 21:39:49 UTC
A new patch has been added at the start of this series to make the default
refcount_t implementation just use an unchecked atomic_t implementation,
since many kernel subsystems want to be able to opt out of the full
validation, which carries a small performance overhead. When
CONFIG_REFCOUNT_FULL is enabled, the full validation is used.

The other two patches provide overflow protection on x86 without incurring
a performance penalty. The changelog for patch 3 is reproduced here for
details:

This protection is a modified version of the x86 PAX_REFCOUNT defense
from PaX/grsecurity. This speeds up the refcount_t API by duplicating
the existing atomic_t implementation, adding a single instruction to
detect whether the refcount has wrapped past INT_MAX (or below 0),
resulting in a negative value; the handler then restores the refcount_t
to INT_MAX or saturates it to INT_MIN / 2. With this overflow protection,
the use-after-free following a refcount_t wrap is blocked from happening,
avoiding the vulnerability entirely.

While this defense perfectly protects only the overflow case, as that
can be detected and stopped before the reference is freed and left to be
abused by an attacker, it also notices some of the "inc from 0" and "below
0" cases. However, these only indicate that a use-after-free has already
happened. Such notifications are likely avoidable by an attacker that has
already exploited a use-after-free vulnerability, but it's better to have
them than allow such conditions to remain universally silent.

On overflow detection (actually "negative value" detection), the refcount
value is reset to INT_MAX, the offending process is killed, and a report
and stack trace are generated. This allows the system to attempt to
keep operating. In the case of a below-zero decrement or other negative
value results, the refcount is saturated to INT_MIN / 2 to keep it from
reaching zero again. (For the INT_MAX reset, another option would be to
choose (INT_MAX - N) with some small N to provide some headroom for
legitimate users of the reference counter.)

On the matter of races, since the entire range beyond INT_MAX but before 0
is negative, every inc will trap, leaving no overflow-only race condition.
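
For example: in 32-bit two's complement, every value reachable by
incrementing past INT_MAX has the sign bit set, so the "js" check fires
on each increment in that range, not just the first. A minimal userspace
sketch of the wrap (assuming a 32-bit int; illustration only, not part
of the patch):

#include <limits.h>
#include <stdio.h>

int main(void)
{
        unsigned int refs = INT_MAX;    /* counter about to overflow */

        refs += 1;      /* well-defined unsigned wrap to 0x80000000 */
        /* read back as signed, that is INT_MIN: sign-flag territory */
        printf("as signed: %d\n", (int)refs);  /* prints -2147483648 */
        return 0;
}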

As for performance, this implementation adds a single "js" instruction to
the regular execution flow of a copy of the regular atomic_t operations.
Since this is a forward jump, it is by default the non-predicted path,
which will be reinforced by dynamic branch prediction. The result is
this protection having no measurable change in performance over standard
atomic_t operations. The error path, located in .text.unlikely, saves
the refcount location and then uses UD0 to fire a refcount exception
handler, which resets the refcount, reports the error, marks the process
to be killed, and returns to regular execution. This keeps the changes to
.text size minimal, avoiding return jumps and open-coded calls to the
error reporting routine.

Assembly comparison:

atomic_inc
.text:
ffffffff81546149: f0 ff 45 f4 lock incl -0xc(%rbp)

refcount_inc
.text:
ffffffff81546149: f0 ff 45 f4 lock incl -0xc(%rbp)
ffffffff8154614d: 0f 88 80 d5 17 00 js ffffffff816c36d3
...
.text.unlikely:
ffffffff816c36d3: 48 8d 4d f4 lea -0xc(%rbp),%rcx
ffffffff816c36d7: 0f ff (bad)

Thanks to PaX Team for various suggestions for improvement.


-Kees

v5:
- add unchecked atomic_t implementation when !CONFIG_REFCOUNT_FULL
- use "leal" again, as in v3 for more flexible reset handling
- provide better underflow detection, with saturation

v4:
- switch to js from jns to gain static branch prediction benefits
- use .text.unlikely for js target, effectively making handler __cold
- use UD0 with refcount exception handler instead of int 0x81
- Kconfig defaults on when arch has support

v3:
- drop named text sections until we need to distinguish sizes/directions
- reset value immediately instead of passing back to handler
- drop needless export; josh

v2:
- fix instruction pointer decrement bug; thejh
- switch to js; pax-team
- improve commit log
- extract rmwcc macro helpers for better readability
- implemented checks in inc_not_zero interface
- adjusted reset values
Kees Cook
2017-05-30 21:39:50 UTC
Many subsystems will not use refcount_t unless there is a way to build the
kernel so that there is no regression in speed compared to atomic_t. This
adds CONFIG_REFCOUNT_FULL to enable the full refcount_t implementation
which has the validation but is slightly slower. When not enabled,
refcount_t uses the basic unchecked atomic_t routines, which results in
no code changes compared to just using atomic_t directly.
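
For context (not part of this patch), callers look the same either way;
struct foo and its helpers here are hypothetical:

struct foo {
        refcount_t refs;
        /* ... */
};

static struct foo *foo_get(struct foo *f)
{
        refcount_inc(&f->refs);         /* checked or plain, per config */
        return f;
}

static void foo_put(struct foo *f)
{
        if (refcount_dec_and_test(&f->refs))
                kfree(f);               /* last reference dropped */
}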

Signed-off-by: Kees Cook <***@chromium.org>
---
arch/Kconfig | 9 +++++++++
include/linux/refcount.h | 44 ++++++++++++++++++++++++++++++++++++++++++++
lib/refcount.c | 3 +++
3 files changed, 56 insertions(+)

diff --git a/arch/Kconfig b/arch/Kconfig
index 6c00e5b00f8b..fba3bf186728 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -867,4 +867,13 @@ config STRICT_MODULE_RWX
config ARCH_WANT_RELAX_ORDER
bool

+config REFCOUNT_FULL
+ bool "Perform full reference count validation at the expense of speed"
+ help
+ Enabling this switches the refcounting infrastructure from a fast
+ unchecked atomic_t implementation to a fully state checked
+ implementation, which can be slower but provides protections
+ against various use-after-free conditions that can be used in
+ security flaw exploits.
+
source "kernel/gcov/Kconfig"
diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index b34aa649d204..68ecb431dbab 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -41,6 +41,7 @@ static inline unsigned int refcount_read(const refcount_t *r)
return atomic_read(&r->refs);
}

+#ifdef CONFIG_REFCOUNT_FULL
extern __must_check bool refcount_add_not_zero(unsigned int i, refcount_t *r);
extern void refcount_add(unsigned int i, refcount_t *r);

@@ -52,6 +53,49 @@ extern void refcount_sub(unsigned int i, refcount_t *r);

extern __must_check bool refcount_dec_and_test(refcount_t *r);
extern void refcount_dec(refcount_t *r);
+#else
+static inline __must_check bool refcount_add_not_zero(unsigned int i,
+ refcount_t *r)
+{
+ return atomic_add_return(i, &r->refs) != 0;
+}
+
+static inline void refcount_add(unsigned int i, refcount_t *r)
+{
+ atomic_add(i, &r->refs);
+}
+
+static inline __must_check bool refcount_inc_not_zero(refcount_t *r)
+{
+ return atomic_add_unless(&r->refs, 1, 0);
+}
+
+static inline void refcount_inc(refcount_t *r)
+{
+ atomic_inc(&r->refs);
+}
+
+static inline __must_check bool refcount_sub_and_test(unsigned int i,
+ refcount_t *r)
+{
+ return atomic_sub_return(i, &r->refs) == 0;
+}
+
+static inline void refcount_sub(unsigned int i, refcount_t *r)
+{
+ atomic_sub(i, &r->refs);
+}
+
+static inline __must_check bool refcount_dec_and_test(refcount_t *r)
+{
+ return atomic_dec_return(&r->refs) == 0;
+}
+
+static inline void refcount_dec(refcount_t *r)
+{
+ atomic_dec(&r->refs);
+}
+#endif /* CONFIG_REFCOUNT_FULL */

extern __must_check bool refcount_dec_if_one(refcount_t *r);
extern __must_check bool refcount_dec_not_one(refcount_t *r);
diff --git a/lib/refcount.c b/lib/refcount.c
index 9f906783987e..5d0582a9480c 100644
--- a/lib/refcount.c
+++ b/lib/refcount.c
@@ -37,6 +37,8 @@
#include <linux/refcount.h>
#include <linux/bug.h>

+#ifdef CONFIG_REFCOUNT_FULL
+
/**
* refcount_add_not_zero - add a value to a refcount unless it is 0
* @i: the value to add to the refcount
@@ -225,6 +227,7 @@ void refcount_dec(refcount_t *r)
WARN_ONCE(refcount_dec_and_test(r), "refcount_t: decrement hit 0; leaking memory.\n");
}
EXPORT_SYMBOL(refcount_dec);
+#endif /* CONFIG_REFCOUNT_FULL */

/**
* refcount_dec_if_one - decrement a refcount if it is 1
--
2.7.4
Reshetova, Elena
2017-05-31 10:45:09 UTC
Post by Kees Cook
Many subsystems will not use refcount_t unless there is a way to build the
kernel so that there is no regression in speed compared to atomic_t. This
adds CONFIG_REFCOUNT_FULL to enable the full refcount_t implementation
which has the validation but is slightly slower. When not enabled,
refcount_t uses the basic unchecked atomic_t routines, which results in
no code changes compared to just using atomic_t directly.
---
arch/Kconfig | 9 +++++++++
include/linux/refcount.h | 44 ++++++++++++++++++++++++++++++++++++++++++++
lib/refcount.c | 3 +++
3 files changed, 56 insertions(+)
diff --git a/arch/Kconfig b/arch/Kconfig
index 6c00e5b00f8b..fba3bf186728 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -867,4 +867,13 @@ config STRICT_MODULE_RWX
config ARCH_WANT_RELAX_ORDER
bool
+config REFCOUNT_FULL
+ bool "Perform full reference count validation at the expense of speed"
+ help
+ Enabling this switches the refcounting infrastructure from a fast
+ unchecked atomic_t implementation to a fully state checked
+ implementation, which can be slower but provides protections
+ against various use-after-free conditions that can be used in
+ security flaw exploits.
+
source "kernel/gcov/Kconfig"
diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index b34aa649d204..68ecb431dbab 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -41,6 +41,7 @@ static inline unsigned int refcount_read(const refcount_t *r)
return atomic_read(&r->refs);
}
+#ifdef CONFIG_REFCOUNT_FULL
extern __must_check bool refcount_add_not_zero(unsigned int i, refcount_t *r);
extern void refcount_add(unsigned int i, refcount_t *r);
@@ -52,6 +53,49 @@ extern void refcount_sub(unsigned int i, refcount_t *r);
extern __must_check bool refcount_dec_and_test(refcount_t *r);
extern void refcount_dec(refcount_t *r);
+#else
+static inline __must_check bool refcount_add_not_zero(unsigned int i,
+ refcount_t *r)
+{
+ return atomic_add_return(i, &r->refs) != 0;
+}
Maybe atomic_add_unless(&r->refs, i, 0) in order to be consistent with the below inc_not_zero implementation?
Post by Kees Cook
+
+static inline void refcount_add(unsigned int i, refcount_t *r)
+{
+ atomic_add(i, &r->refs);
+}
+
+static inline __must_check bool refcount_inc_not_zero(refcount_t *r)
+{
+ return atomic_add_unless(&r->refs, 1, 0);
+}
+
+static inline void refcount_inc(refcount_t *r)
+{
+ atomic_inc(&r->refs);
+}
+
+static inline __must_check bool refcount_sub_and_test(unsigned int i,
+ refcount_t *r)
+{
+ return atomic_sub_return(i, &r->refs) == 0;
+}
Any reason for not using atomic_sub_and_test() here?
Post by Kees Cook
+
+static inline void refcount_sub(unsigned int i, refcount_t *r)
+{
+ atomic_sub(i, &r->refs);
+}
+
+static inline __must_check bool refcount_dec_and_test(refcount_t *r)
+{
+ return atomic_dec_return(&r->refs) == 0;
+}
Same here: atomic_dec_and_test()?

Best Regards,
Elena.
Post by Kees Cook
+
+static inline void refcount_dec(refcount_t *r)
+{
+ atomic_dec(&r->refs);
+}
+#endif /* CONFIG_REFCOUNT_FULL */
extern __must_check bool refcount_dec_if_one(refcount_t *r);
extern __must_check bool refcount_dec_not_one(refcount_t *r);
diff --git a/lib/refcount.c b/lib/refcount.c
index 9f906783987e..5d0582a9480c 100644
--- a/lib/refcount.c
+++ b/lib/refcount.c
@@ -37,6 +37,8 @@
#include <linux/refcount.h>
#include <linux/bug.h>
+#ifdef CONFIG_REFCOUNT_FULL
+
/**
* refcount_add_not_zero - add a value to a refcount unless it is 0
@@ -225,6 +227,7 @@ void refcount_dec(refcount_t *r)
WARN_ONCE(refcount_dec_and_test(r), "refcount_t: decrement hit 0; leaking memory.\n");
}
EXPORT_SYMBOL(refcount_dec);
+#endif /* CONFIG_REFCOUNT_FULL */
/**
* refcount_dec_if_one - decrement a refcount if it is 1
--
2.7.4
Peter Zijlstra
2017-05-31 11:09:18 UTC
Post by Reshetova, Elena
Post by Kees Cook
+static inline __must_check bool refcount_add_not_zero(unsigned int i,
+ refcount_t *r)
+{
+ return atomic_add_return(i, &r->refs) != 0;
+}
Maybe atomic_add_unless(&r->refs, i, 0) in order to be consistent with the below inc_not_zero implementation?
Yes, atomic_add_return() is strictly incorrect here since the add is
unconditional.
Post by Reshetova, Elena
Post by Kees Cook
+static inline __must_check bool refcount_sub_and_test(unsigned int i,
+ refcount_t *r)
+{
+ return atomic_sub_return(i, &r->refs) == 0;
+}
Any reason for not using atomic_sub_and_test() here?
Post by Kees Cook
+static inline __must_check bool refcount_dec_and_test(refcount_t *r)
+{
+ return atomic_dec_return(&r->refs) == 0;
+}
Same here: atomic_dec_and_test()?
Both those are better because they return condition codes generated from
the operand itself.
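
Combining both comments, the !CONFIG_REFCOUNT_FULL fallbacks would
become, roughly (a sketch of the corrections, not the reposted patch):

static inline __must_check bool refcount_add_not_zero(unsigned int i,
                                                      refcount_t *r)
{
        return atomic_add_unless(&r->refs, i, 0);
}

static inline __must_check bool refcount_sub_and_test(unsigned int i,
                                                      refcount_t *r)
{
        return atomic_sub_and_test(i, &r->refs);
}

static inline __must_check bool refcount_dec_and_test(refcount_t *r)
{
        return atomic_dec_and_test(&r->refs);
}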
Kees Cook
2017-06-01 14:43:37 UTC
Post by Peter Zijlstra
Post by Reshetova, Elena
Post by Kees Cook
+static inline __must_check bool refcount_add_not_zero(unsigned int i,
+ refcount_t *r)
+{
+ return atomic_add_return(i, &r->refs) != 0;
+}
Maybe atomic_add_unless(&r->refs, i, 0) in order to be consistent with the below inc_not_zero implementation?
Yes, atomic_add_return() is strictly incorrect here since the add is
unconditional.
Post by Reshetova, Elena
Post by Kees Cook
+static inline __must_check bool refcount_sub_and_test(unsigned int i,
+ refcount_t *r)
+{
+ return atomic_sub_return(i, &r->refs) == 0;
+}
Any reason for not using atomic_sub_and_test() here?
Post by Kees Cook
+static inline __must_check bool refcount_dec_and_test(refcount_t *r)
+{
+ return atomic_dec_return(&r->refs) == 0;
+}
Same here: atomic_dec_and_test()?
Both those are better because they return condition codes generated from
the operand itself.
Ah yes, thanks to both of you for the corrections. I'll send a new version...

-Kees
--
Kees Cook
Pixel Security
Kees Cook
2017-05-30 21:39:51 UTC
The coming x86 refcount protection needs to be able to add trailing
instructions to the GEN_*_RMWcc() operations. This extracts the
difference between the goto/non-goto cases so the helper macros
can be defined outside the #ifdef cases. Additionally adds argument
naming to the resulting asm for referencing from suffixed
instructions, and adds clobbers for "cc" and "cx" so that suffixes
can use _ASM_CX and retain any set flags.
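
For reference, patch 3 in this series uses the new suffixed variants
with REFCOUNT_CHECK as the trailing instruction, e.g.:

static __always_inline __must_check
bool refcount_sub_and_test(unsigned int i, refcount_t *r)
{
        GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl", REFCOUNT_CHECK,
                                  r->refs.counter, "er", i, "%0", e);
}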

Signed-off-by: Kees Cook <***@chromium.org>
---
arch/x86/include/asm/rmwcc.h | 37 ++++++++++++++++++++++++-------------
1 file changed, 24 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/rmwcc.h b/arch/x86/include/asm/rmwcc.h
index 661dd305694a..045f99211a99 100644
--- a/arch/x86/include/asm/rmwcc.h
+++ b/arch/x86/include/asm/rmwcc.h
@@ -1,45 +1,56 @@
#ifndef _ASM_X86_RMWcc
#define _ASM_X86_RMWcc

+#define __CLOBBERS_MEM "memory"
+#define __CLOBBERS_MEM_CC_CX "memory", "cc", "cx"
+
#if !defined(__GCC_ASM_FLAG_OUTPUTS__) && defined(CC_HAVE_ASM_GOTO)

/* Use asm goto */

-#define __GEN_RMWcc(fullop, var, cc, ...) \
+#define __GEN_RMWcc(fullop, var, cc, clobbers, ...) \
do { \
asm_volatile_goto (fullop "; j" #cc " %l[cc_label]" \
- : : "m" (var), ## __VA_ARGS__ \
- : "memory" : cc_label); \
+ : : [counter] "m" (var), ## __VA_ARGS__ \
+ : clobbers : cc_label); \
return 0; \
cc_label: \
return 1; \
} while (0)

-#define GEN_UNARY_RMWcc(op, var, arg0, cc) \
- __GEN_RMWcc(op " " arg0, var, cc)
+#define __BINARY_RMWcc_ARG " %1, "

-#define GEN_BINARY_RMWcc(op, var, vcon, val, arg0, cc) \
- __GEN_RMWcc(op " %1, " arg0, var, cc, vcon (val))

#else /* defined(__GCC_ASM_FLAG_OUTPUTS__) || !defined(CC_HAVE_ASM_GOTO) */

/* Use flags output or a set instruction */

-#define __GEN_RMWcc(fullop, var, cc, ...) \
+#define __GEN_RMWcc(fullop, var, cc, clobbers, ...) \
do { \
bool c; \
asm volatile (fullop ";" CC_SET(cc) \
- : "+m" (var), CC_OUT(cc) (c) \
- : __VA_ARGS__ : "memory"); \
+ : [counter] "+m" (var), CC_OUT(cc) (c) \
+ : __VA_ARGS__ : clobbers); \
return c; \
} while (0)

+#define __BINARY_RMWcc_ARG " %2, "
+
+#endif /* defined(__GCC_ASM_FLAG_OUTPUTS__) || !defined(CC_HAVE_ASM_GOTO) */
+
#define GEN_UNARY_RMWcc(op, var, arg0, cc) \
- __GEN_RMWcc(op " " arg0, var, cc)
+ __GEN_RMWcc(op " " arg0, var, cc, __CLOBBERS_MEM)
+
+#define GEN_UNARY_SUFFIXED_RMWcc(op, suffix, var, arg0, cc) \
+ __GEN_RMWcc(op " " arg0 "\n\t" suffix, var, cc, \
+ __CLOBBERS_MEM_CC_CX)

#define GEN_BINARY_RMWcc(op, var, vcon, val, arg0, cc) \
- __GEN_RMWcc(op " %2, " arg0, var, cc, vcon (val))
+ __GEN_RMWcc(op __BINARY_RMWcc_ARG arg0, var, cc, \
+ __CLOBBERS_MEM, vcon (val))

-#endif /* defined(__GCC_ASM_FLAG_OUTPUTS__) || !defined(CC_HAVE_ASM_GOTO) */
+#define GEN_BINARY_SUFFIXED_RMWcc(op, suffix, var, vcon, val, arg0, cc) \
+ __GEN_RMWcc(op __BINARY_RMWcc_ARG arg0 "\n\t" suffix, var, cc, \
+ __CLOBBERS_MEM_CC_CX, vcon (val))

#endif /* _ASM_X86_RMWcc */
--
2.7.4
Peter Zijlstra
2017-05-31 11:13:09 UTC
Post by Kees Cook
The coming x86 refcount protection needs to be able to add trailing
instructions to the GEN_*_RMWcc() operations. This extracts the
difference between the goto/non-goto cases so the helper macros
can be defined outside the #ifdef cases. Additionally adds argument
naming to the resulting asm for referencing from suffixed
instructions, and adds clobbers for "cc" and "cx" so that suffixes
can use _ASM_CX and retain any set flags.
Another option is to simply require __GCC_ASM_FLAG_OUTPUTS__ for the fast
refcount stuff. That would result in simpler and more readable code.
Kees Cook
2017-05-31 13:17:16 UTC
Post by Peter Zijlstra
Post by Kees Cook
The coming x86 refcount protection needs to be able to add trailing
instructions to the GEN_*_RMWcc() operations. This extracts the
difference between the goto/non-goto cases so the helper macros
can be defined outside the #ifdef cases. Additionally adds argument
naming to the resulting asm for referencing from suffixed
instructions, and adds clobbers for "cc" and "cx" so that suffixes
can use _ASM_CX and retain any set flags.
Another option is to simply require __GCC_ASM_FLAG_OUTPUTS__ for the fast
refcount stuff. That would result in simpler and more readable code.
What versions of GCC support that?

-Kees
--
Kees Cook
Pixel Security
Peter Zijlstra
2017-05-31 14:03:18 UTC
Post by Kees Cook
Post by Peter Zijlstra
Post by Kees Cook
The coming x86 refcount protection needs to be able to add trailing
instructions to the GEN_*_RMWcc() operations. This extracts the
difference between the goto/non-goto cases so the helper macros
can be defined outside the #ifdef cases. Additionally adds argument
naming to the resulting asm for referencing from suffixed
instructions, and adds clobbers for "cc" and "cx" so that suffixes
can use _ASM_CX and retain any set flags.
Another option is to simply require __GCC_ASM_FLAG_OUTPUTS__ for the fast
refcount stuff. That would result in simpler and more readable code.
What versions of GCC support that?
IIRC 6+
Kees Cook
2017-05-31 16:09:16 UTC
Post by Peter Zijlstra
Post by Kees Cook
Post by Peter Zijlstra
Post by Kees Cook
The coming x86 refcount protection needs to be able to add trailing
instructions to the GEN_*_RMWcc() operations. This extracts the
difference between the goto/non-goto cases so the helper macros
can be defined outside the #ifdef cases. Additionally adds argument
naming to the resulting asm for referencing from suffixed
instructions, and adds clobbers for "cc" and "cx" so that suffixes
can use _ASM_CX and retain any set flags.
Another option is to simply require __GCC_ASM_FLAG_OUTPUTS__ for the fast
refcount stuff. That would result in simpler and more readable code.
What versions of GCC support that?
IIRC 6+
Given how many folks are still using 4.9 (and lower, see the thread
with Arnd[1]), I'd like to just keep this as I have it. It's not much
less readable, IMO (It was already pretty complex). I cleaned it up a
little before making it more ugly, so I think on sum, it's only a
little more weird. I think that's better than making this
compiler-specific or copy/pasting.

-Kees

[1] https://lkml.org/lkml/2017/4/25/66
--
Kees Cook
Pixel Security
Kees Cook
2017-05-30 21:39:52 UTC
This protection is a modified version of the x86 PAX_REFCOUNT defense
from PaX/grsecurity. This speeds up the refcount_t API by duplicating
the existing atomic_t implementation, adding a single instruction to
detect whether the refcount has wrapped past INT_MAX (or below 0),
resulting in a negative value; the handler then restores the refcount_t
to INT_MAX or saturates it to INT_MIN / 2. With this overflow protection,
the use-after-free following a refcount_t wrap is blocked from happening,
avoiding the vulnerability entirely.

While this defense perfectly protects only the overflow case, as that
can be detected and stopped before the reference is freed and left to be
abused by an attacker, it also notices some of the "inc from 0" and "below
0" cases. However, these only indicate that a use-after-free has already
happened. Such notifications are likely avoidable by an attacker that has
already exploited a use-after-free vulnerability, but it's better to have
them than allow such conditions to remain universally silent.

On overflow detection (actually "negative value" detection), the refcount
value is reset to INT_MAX, the offending process is killed, and a report
and stack trace are generated. This allows the system to attempt to
keep operating. In the case of a below-zero decrement or other negative
value results, the refcount is saturated to INT_MIN / 2 to keep it from
reaching zero again. (For the INT_MAX reset, another option would be to
choose (INT_MAX - N) with some small N to provide some headroom for
legitimate users of the reference counter.)

On the matter of races, since the entire range beyond INT_MAX but before 0
is negative, every inc will trap, leaving no overflow-only race condition.

As for performance, this implementation adds a single "js" instruction to
the regular execution flow of a copy of the regular atomic_t operations.
Since this is a forward jump, it is by default the non-predicted path,
which will be reinforced by dynamic branch prediction. The result is
this protection having no measurable change in performance over standard
atomic_t operations. The error path, located in .text.unlikely, saves
the refcount location and then uses UD0 to fire a refcount exception
handler, which resets the refcount, reports the error, marks the process
to be killed, and returns to regular execution. This keeps the changes to
.text size minimal, avoiding return jumps and open-coded calls to the
error reporting routine.

Assembly comparison:

atomic_inc
.text:
ffffffff81546149: f0 ff 45 f4 lock incl -0xc(%rbp)

refcount_inc
.text:
ffffffff81546149: f0 ff 45 f4 lock incl -0xc(%rbp)
ffffffff8154614d: 0f 88 80 d5 17 00 js ffffffff816c36d3
...
.text.unlikely:
ffffffff816c36d3: 48 8d 4d f4 lea -0xc(%rbp),%rcx
ffffffff816c36d7: 0f ff (bad)

Thanks to PaX Team for various suggestions for improvement.

Signed-off-by: Kees Cook <***@chromium.org>
Reviewed-by: Josh Poimboeuf <***@redhat.com>
---
arch/Kconfig | 9 +++++
arch/x86/Kconfig | 1 +
arch/x86/include/asm/asm.h | 6 +++
arch/x86/include/asm/refcount.h | 87 +++++++++++++++++++++++++++++++++++++++++
arch/x86/mm/extable.c | 40 +++++++++++++++++++
include/linux/kernel.h | 6 +++
include/linux/refcount.h | 4 ++
kernel/panic.c | 22 +++++++++++
8 files changed, 175 insertions(+)
create mode 100644 arch/x86/include/asm/refcount.h

diff --git a/arch/Kconfig b/arch/Kconfig
index fba3bf186728..e9445ac0e899 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -867,6 +867,15 @@ config STRICT_MODULE_RWX
config ARCH_WANT_RELAX_ORDER
bool

+config ARCH_HAS_REFCOUNT
+ bool
+ help
+ An architecture selects this when it has implemented refcount_t
using primitives that provide a faster runtime at the expense
+ of some full refcount state checks. The refcount overflow condition,
+ however, must be retained. Catching overflows is the primary
+ security concern for protecting against bugs in reference counts.
+
config REFCOUNT_FULL
bool "Perform full reference count validation at the expense of speed"
help
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index cd18994a9555..65525f76b27c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -54,6 +54,7 @@ config X86
select ARCH_HAS_KCOV if X86_64
select ARCH_HAS_MMIO_FLUSH
select ARCH_HAS_PMEM_API if X86_64
+ select ARCH_HAS_REFCOUNT
select ARCH_HAS_SET_MEMORY
select ARCH_HAS_SG_CHAIN
select ARCH_HAS_STRICT_KERNEL_RWX
diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index 7a9df3beb89b..676ee5807d86 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -74,6 +74,9 @@
# define _ASM_EXTABLE_EX(from, to) \
_ASM_EXTABLE_HANDLE(from, to, ex_handler_ext)

+# define _ASM_EXTABLE_REFCOUNT(from, to) \
+ _ASM_EXTABLE_HANDLE(from, to, ex_handler_refcount)
+
# define _ASM_NOKPROBE(entry) \
.pushsection "_kprobe_blacklist","aw" ; \
_ASM_ALIGN ; \
@@ -123,6 +126,9 @@
# define _ASM_EXTABLE_EX(from, to) \
_ASM_EXTABLE_HANDLE(from, to, ex_handler_ext)

+# define _ASM_EXTABLE_REFCOUNT(from, to) \
+ _ASM_EXTABLE_HANDLE(from, to, ex_handler_refcount)
+
/* For C file, we already have NOKPROBE_SYMBOL macro */
#endif

diff --git a/arch/x86/include/asm/refcount.h b/arch/x86/include/asm/refcount.h
new file mode 100644
index 000000000000..aaf9bb3abd71
--- /dev/null
+++ b/arch/x86/include/asm/refcount.h
@@ -0,0 +1,87 @@
+#ifndef __ASM_X86_REFCOUNT_H
+#define __ASM_X86_REFCOUNT_H
+/*
+ * x86-specific implementation of refcount_t. Ported from PAX_REFCOUNT
+ * from PaX/grsecurity.
+ */
+#include <linux/refcount.h>
+
+#define _REFCOUNT_EXCEPTION \
+ ".pushsection .text.unlikely\n" \
+ "111:\tlea %[counter], %%" _ASM_CX "\n" \
+ "112:\t" ASM_UD0 "\n" \
+ ".popsection\n" \
+ "113:\n" \
+ _ASM_EXTABLE_REFCOUNT(112b, 113b)
+
+#define REFCOUNT_CHECK \
+ "js 111f\n\t" \
+ _REFCOUNT_EXCEPTION
+
+#define REFCOUNT_ERROR \
+ "jmp 111f\n\t" \
+ _REFCOUNT_EXCEPTION
+
+static __always_inline void refcount_add(unsigned int i, refcount_t *r)
+{
+ asm volatile(LOCK_PREFIX "addl %1,%0\n\t"
+ REFCOUNT_CHECK
+ : [counter] "+m" (r->refs.counter)
+ : "ir" (i)
+ : "cc", "cx");
+}
+
+static __always_inline void refcount_inc(refcount_t *r)
+{
+ asm volatile(LOCK_PREFIX "incl %0\n\t"
+ REFCOUNT_CHECK
+ : [counter] "+m" (r->refs.counter)
+ : : "cc", "cx");
+}
+
+static __always_inline void refcount_dec(refcount_t *r)
+{
+ asm volatile(LOCK_PREFIX "decl %0\n\t"
+ REFCOUNT_CHECK
+ : [counter] "+m" (r->refs.counter)
+ : : "cc", "cx");
+}
+
+static __always_inline __must_check
+bool refcount_sub_and_test(unsigned int i, refcount_t *r)
+{
+ GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl", REFCOUNT_CHECK,
+ r->refs.counter, "er", i, "%0", e);
+}
+
+static __always_inline __must_check bool refcount_dec_and_test(refcount_t *r)
+{
+ GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl", REFCOUNT_CHECK,
+ r->refs.counter, "%0", e);
+}
+
+static __always_inline __must_check bool refcount_inc_not_zero(refcount_t *r)
+{
+ int c;
+
+ c = atomic_read(&(r->refs));
+ do {
+ if (unlikely(c <= 0 || c == INT_MAX))
+ break;
+ } while (!atomic_try_cmpxchg(&(r->refs), &c, c + 1));
+
+ /* Did we try to increment from an undesirable state? */
+ if (unlikely(c < 0 || c == INT_MAX)) {
+ /*
+ * Since the overflow flag will have been reset, this will
+ * always saturate.
+ */
+ asm volatile(REFCOUNT_ERROR
+ : : [counter] "m" (r->refs.counter)
+ : "cc", "cx");
+ }
+
+ return c != 0;
+}
+
+#endif
diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
index 35ea061010a1..33324dfe8799 100644
--- a/arch/x86/mm/extable.c
+++ b/arch/x86/mm/extable.c
@@ -36,6 +36,46 @@ bool ex_handler_fault(const struct exception_table_entry *fixup,
}
EXPORT_SYMBOL_GPL(ex_handler_fault);

+/*
+ * Handler for UD0 exception following a negative-value test (via "js"
+ * instruction) against the result of a refcount inc/dec/add/sub.
+ */
+bool ex_handler_refcount(const struct exception_table_entry *fixup,
+ struct pt_regs *regs, int trapnr)
+{
+ int reset;
+
+ /*
+ * If we crossed from INT_MAX to INT_MIN, the OF flag (result
+ * wrapped around) and the SF flag (result is negative) will be
+ * set. In this case, reset to INT_MAX in an attempt to leave the
+ * refcount usable. Otherwise, we've landed here due to producing
+ * a negative result from either decrementing zero or operating on
+ * a negative value. In this case things are badly broken, so we
+ * saturate to INT_MIN / 2.
+ */
+ if (regs->flags & (X86_EFLAGS_OF | X86_EFLAGS_SF))
+ reset = INT_MAX;
+ else
+ reset = INT_MIN / 2;
+ *(int *)regs->cx = reset;
+
+ /*
+ * Strictly speaking, this reports the fixup destination, not
+ * the fault location, and not the actually overflowing
+ * instruction, which is the instruction before the "js", but
+ * since that instruction could be a variety of lengths, just
+ * report the location after the overflow, which should be close
+ * enough for finding the overflow, as it's at least back in
+ * the function, having returned from .text.unlikely.
+ */
+ regs->ip = ex_fixup_addr(fixup);
+ refcount_error_report(regs);
+
+ return true;
+}
+EXPORT_SYMBOL_GPL(ex_handler_refcount);
+
bool ex_handler_ext(const struct exception_table_entry *fixup,
struct pt_regs *regs, int trapnr)
{
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index 13bc08aba704..b9a842750e08 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -276,6 +276,12 @@ extern int oops_may_print(void);
void do_exit(long error_code) __noreturn;
void complete_and_exit(struct completion *, long) __noreturn;

+#ifdef CONFIG_ARCH_HAS_REFCOUNT
+void refcount_error_report(struct pt_regs *regs);
+#else
+static inline void refcount_error_report(struct pt_regs *regs) { }
+#endif
+
/* Internal, do not use. */
int __must_check _kstrtoul(const char *s, unsigned int base, unsigned long *res);
int __must_check _kstrtol(const char *s, unsigned int base, long *res);
diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index 68ecb431dbab..8750fbfe8476 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -54,6 +54,9 @@ extern void refcount_sub(unsigned int i, refcount_t *r);
extern __must_check bool refcount_dec_and_test(refcount_t *r);
extern void refcount_dec(refcount_t *r);
#else
+# ifdef CONFIG_ARCH_HAS_REFCOUNT
+# include <asm/refcount.h>
+# else
static inline __must_check bool refcount_add_not_zero(unsigned int i,
refcount_t *r)
{
@@ -95,6 +98,7 @@ static inline void refcount_dec(refcount_t *r)
{
atomic_dec(&r->refs);
}
+# endif /* !CONFIG_ARCH_HAS_REFCOUNT */
#endif /* CONFIG_REFCOUNT_FULL */

extern __must_check bool refcount_dec_if_one(refcount_t *r);
diff --git a/kernel/panic.c b/kernel/panic.c
index a58932b41700..fb8576ce1638 100644
--- a/kernel/panic.c
+++ b/kernel/panic.c
@@ -26,6 +26,7 @@
#include <linux/nmi.h>
#include <linux/console.h>
#include <linux/bug.h>
+#include <linux/ratelimit.h>

#define PANIC_TIMER_STEP 100
#define PANIC_BLINK_SPD 18
@@ -601,6 +602,27 @@ EXPORT_SYMBOL(__stack_chk_fail);

#endif

+#ifdef CONFIG_ARCH_HAS_REFCOUNT
+static DEFINE_RATELIMIT_STATE(refcount_ratelimit, 15 * HZ, 3);
+
+void refcount_error_report(struct pt_regs *regs)
+{
+ /* Always make sure triggering process will be terminated. */
+ do_send_sig_info(SIGKILL, SEND_SIG_FORCED, current, true);
+
+ if (!__ratelimit(&refcount_ratelimit))
+ return;
+
+ pr_emerg("refcount overflow detected in: %s:%d, uid/euid: %u/%u\n",
+ current->comm, task_pid_nr(current),
+ from_kuid_munged(&init_user_ns, current_uid()),
+ from_kuid_munged(&init_user_ns, current_euid()));
+ print_symbol(KERN_EMERG "refcount error occurred at: %s\n",
+ instruction_pointer(regs));
+ show_regs(regs);
+}
+#endif
+
core_param(panic, panic_timeout, int, 0644);
core_param(pause_on_oops, pause_on_oops, int, 0644);
core_param(panic_on_warn, panic_on_warn, int, 0644);
--
2.7.4
Li Kun
2017-06-29 04:13:17 UTC
Hi Kees,
Post by Kees Cook
This protection is a modified version of the x86 PAX_REFCOUNT defense
from PaX/grsecurity. This speeds up the refcount_t API by duplicating
the existing atomic_t implementation, adding a single instruction to
detect whether the refcount has wrapped past INT_MAX (or below 0),
resulting in a negative value; the handler then restores the refcount_t
to INT_MAX or saturates it to INT_MIN / 2. With this overflow protection,
the use-after-free following a refcount_t wrap is blocked from happening,
avoiding the vulnerability entirely.
While this defense perfectly protects only the overflow case, as that
can be detected and stopped before the reference is freed and left to be
abused by an attacker, it also notices some of the "inc from 0" and "below
0" cases. However, these only indicate that a use-after-free has already
happened. Such notifications are likely avoidable by an attacker that has
already exploited a use-after-free vulnerability, but it's better to have
them than allow such conditions to remain universally silent.
On overflow detection (actually "negative value" detection), the refcount
value is reset to INT_MAX, the offending process is killed, and a report
and stack trace are generated. This allows the system to attempt to
keep operating. In the case of a below-zero decrement or other negative
value results, the refcount is saturated to INT_MIN / 2 to keep it from
reaching zero again. (For the INT_MAX reset, another option would be to
choose (INT_MAX - N) with some small N to provide some headroom for
legitimate users of the reference counter.)
On the matter of races, since the entire range beyond INT_MAX but before 0
is negative, every inc will trap, leaving no overflow-only race condition.
As for performance, this implementation adds a single "js" instruction to
the regular execution flow of a copy of the regular atomic_t operations.
Since this is a forward jump, it is by default the non-predicted path,
which will be reinforced by dynamic branch prediction. The result is
this protection having no measurable change in performance over standard
atomic_t operations. The error path, located in .text.unlikely, saves
the refcount location and then uses UD0 to fire a refcount exception
handler, which resets the refcount, reports the error, marks the process
to be killed, and returns to regular execution. This keeps the changes to
.text size minimal, avoiding return jumps and open-coded calls to the
error reporting routine.
atomic_inc
ffffffff81546149: f0 ff 45 f4 lock incl -0xc(%rbp)
refcount_inc
ffffffff81546149: f0 ff 45 f4 lock incl -0xc(%rbp)
ffffffff8154614d: 0f 88 80 d5 17 00 js ffffffff816c36d3
...
ffffffff816c36d3: 48 8d 4d f4 lea -0xc(%rbp),%rcx
ffffffff816c36d7: 0f ff (bad)
Thanks to PaX Team for various suggestions for improvement.
---
arch/Kconfig | 9 +++++
arch/x86/Kconfig | 1 +
arch/x86/include/asm/asm.h | 6 +++
arch/x86/include/asm/refcount.h | 87 +++++++++++++++++++++++++++++++++++++++++
arch/x86/mm/extable.c | 40 +++++++++++++++++++
include/linux/kernel.h | 6 +++
include/linux/refcount.h | 4 ++
kernel/panic.c | 22 +++++++++++
8 files changed, 175 insertions(+)
create mode 100644 arch/x86/include/asm/refcount.h
diff --git a/arch/Kconfig b/arch/Kconfig
index fba3bf186728..e9445ac0e899 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -867,6 +867,15 @@ config STRICT_MODULE_RWX
config ARCH_WANT_RELAX_ORDER
bool
+config ARCH_HAS_REFCOUNT
+ bool
+ help
+ An architecture selects this when it has implemented refcount_t
+ using primitives that provide a faster runtime at the expense
+ of some full refcount state checks. The refcount overflow condition,
+ however, must be retained. Catching overflows is the primary
+ security concern for protecting against bugs in reference counts.
+
config REFCOUNT_FULL
bool "Perform full reference count validation at the expense of speed"
help
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index cd18994a9555..65525f76b27c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -54,6 +54,7 @@ config X86
select ARCH_HAS_KCOV if X86_64
select ARCH_HAS_MMIO_FLUSH
select ARCH_HAS_PMEM_API if X86_64
+ select ARCH_HAS_REFCOUNT
select ARCH_HAS_SET_MEMORY
select ARCH_HAS_SG_CHAIN
select ARCH_HAS_STRICT_KERNEL_RWX
diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index 7a9df3beb89b..676ee5807d86 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -74,6 +74,9 @@
# define _ASM_EXTABLE_EX(from, to) \
_ASM_EXTABLE_HANDLE(from, to, ex_handler_ext)
+# define _ASM_EXTABLE_REFCOUNT(from, to) \
+ _ASM_EXTABLE_HANDLE(from, to, ex_handler_refcount)
+
# define _ASM_NOKPROBE(entry) \
.pushsection "_kprobe_blacklist","aw" ; \
_ASM_ALIGN ; \
@@ -123,6 +126,9 @@
# define _ASM_EXTABLE_EX(from, to) \
_ASM_EXTABLE_HANDLE(from, to, ex_handler_ext)
+# define _ASM_EXTABLE_REFCOUNT(from, to) \
+ _ASM_EXTABLE_HANDLE(from, to, ex_handler_refcount)
+
/* For C file, we already have NOKPROBE_SYMBOL macro */
#endif
diff --git a/arch/x86/include/asm/refcount.h b/arch/x86/include/asm/refcount.h
new file mode 100644
index 000000000000..aaf9bb3abd71
--- /dev/null
+++ b/arch/x86/include/asm/refcount.h
@@ -0,0 +1,87 @@
+#ifndef __ASM_X86_REFCOUNT_H
+#define __ASM_X86_REFCOUNT_H
+/*
+ * x86-specific implementation of refcount_t. Ported from PAX_REFCOUNT
+ * from PaX/grsecurity.
+ */
+#include <linux/refcount.h>
+
+#define _REFCOUNT_EXCEPTION \
+ ".pushsection .text.unlikely\n" \
+ "111:\tlea %[counter], %%" _ASM_CX "\n" \
+ "112:\t" ASM_UD0 "\n" \
+ ".popsection\n" \
+ "113:\n" \
+ _ASM_EXTABLE_REFCOUNT(112b, 113b)
+
+#define REFCOUNT_CHECK \
+ "js 111f\n\t" \
+ _REFCOUNT_EXCEPTION
+
+#define REFCOUNT_ERROR \
+ "jmp 111f\n\t" \
+ _REFCOUNT_EXCEPTION
+
+static __always_inline void refcount_add(unsigned int i, refcount_t *r)
+{
+ asm volatile(LOCK_PREFIX "addl %1,%0\n\t"
+ REFCOUNT_CHECK
+ : [counter] "+m" (r->refs.counter)
+ : "ir" (i)
+ : "cc", "cx");
+}
+
+static __always_inline void refcount_inc(refcount_t *r)
+{
+ asm volatile(LOCK_PREFIX "incl %0\n\t"
+ REFCOUNT_CHECK
+ : [counter] "+m" (r->refs.counter)
+ : : "cc", "cx");
+}
+
+static __always_inline void refcount_dec(refcount_t *r)
+{
+ asm volatile(LOCK_PREFIX "decl %0\n\t"
+ REFCOUNT_CHECK
+ : [counter] "+m" (r->refs.counter)
+ : : "cc", "cx");
+}
+
+static __always_inline __must_check
+bool refcount_sub_and_test(unsigned int i, refcount_t *r)
+{
+ GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl", REFCOUNT_CHECK,
+ r->refs.counter, "er", i, "%0", e);
+}
+
+static __always_inline __must_check bool refcount_dec_and_test(refcount_t *r)
+{
+ GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl", REFCOUNT_CHECK,
+ r->refs.counter, "%0", e);
+}
+
+static __always_inline __must_check bool refcount_inc_not_zero(refcount_t *r)
+{
+ int c;
+
+ c = atomic_read(&(r->refs));
+ do {
+ if (unlikely(c <= 0 || c == INT_MAX))
+ break;
+ } while (!atomic_try_cmpxchg(&(r->refs), &c, c + 1));
+
+ /* Did we try to increment from an undesirable state? */
+ if (unlikely(c < 0 || c == INT_MAX)) {
+ /*
+ * Since the overflow flag will have been reset, this will
+ * always saturate.
+ */
+ asm volatile(REFCOUNT_ERROR
+ : : [counter] "m" (r->refs.counter)
+ : "cc", "cx");
+ }
+
+ return c != 0;
+}
+
+#endif
diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
index 35ea061010a1..33324dfe8799 100644
--- a/arch/x86/mm/extable.c
+++ b/arch/x86/mm/extable.c
@@ -36,6 +36,46 @@ bool ex_handler_fault(const struct exception_table_entry *fixup,
}
EXPORT_SYMBOL_GPL(ex_handler_fault);
+/*
+ * Handler for UD0 exception following a negative-value test (via "js"
+ * instruction) against the result of a refcount inc/dec/add/sub.
+ */
+bool ex_handler_refcount(const struct exception_table_entry *fixup,
+ struct pt_regs *regs, int trapnr)
+{
+ int reset;
+
+ /*
+ * If we crossed from INT_MAX to INT_MIN, the OF flag (result
+ * wrapped around) and the SF flag (result is negative) will be
+ * set. In this case, reset to INT_MAX in an attempt to leave the
+ * refcount usable. Otherwise, we've landed here due to producing
+ * a negative result from either decrementing zero or operating on
+ * a negative value. In this case things are badly broken, so we
+ * saturate to INT_MIN / 2.
+ */
+ if (regs->flags & (X86_EFLAGS_OF | X86_EFLAGS_SF))
+ reset = INT_MAX;
Should it be like this, to indicate that the refcount has wrapped from
INT_MAX to INT_MIN?

if ((regs->flags & (X86_EFLAGS_OF | X86_EFLAGS_SF))
== (X86_EFLAGS_OF | X86_EFLAGS_SF))
reset = INT_MAX;
Post by Kees Cook
+ else
+ reset = INT_MIN / 2;
+ *(int *)regs->cx = reset;
+
+ /*
+ * Strictly speaking, this reports the fixup destination, not
+ * the fault location, and not the actually overflowing
+ * instruction, which is the instruction before the "js", but
+ * since that instruction could be a variety of lengths, just
+ * report the location after the overflow, which should be close
+ * enough for finding the overflow, as it's at least back in
+ * the function, having returned from .text.unlikely.
+ */
+ regs->ip = ex_fixup_addr(fixup);
+ refcount_error_report(regs);
+
+ return true;
+}
+EXPORT_SYMBOL_GPL(ex_handler_refcount);
+
bool ex_handler_ext(const struct exception_table_entry *fixup,
struct pt_regs *regs, int trapnr)
{
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index 13bc08aba704..b9a842750e08 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -276,6 +276,12 @@ extern int oops_may_print(void);
void do_exit(long error_code) __noreturn;
void complete_and_exit(struct completion *, long) __noreturn;
+#ifdef CONFIG_ARCH_HAS_REFCOUNT
+void refcount_error_report(struct pt_regs *regs);
+#else
+static inline void refcount_error_report(struct pt_regs *regs) { }
+#endif
+
/* Internal, do not use. */
int __must_check _kstrtoul(const char *s, unsigned int base, unsigned long *res);
int __must_check _kstrtol(const char *s, unsigned int base, long *res);
diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index 68ecb431dbab..8750fbfe8476 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -54,6 +54,9 @@ extern void refcount_sub(unsigned int i, refcount_t *r);
extern __must_check bool refcount_dec_and_test(refcount_t *r);
extern void refcount_dec(refcount_t *r);
#else
+# ifdef CONFIG_ARCH_HAS_REFCOUNT
+# include <asm/refcount.h>
+# else
static inline __must_check bool refcount_add_not_zero(unsigned int i,
refcount_t *r)
{
@@ -95,6 +98,7 @@ static inline void refcount_dec(refcount_t *r)
{
atomic_dec(&r->refs);
}
+# endif /* !CONFIG_ARCH_HAS_REFCOUNT */
#endif /* CONFIG_REFCOUNT_FULL */
extern __must_check bool refcount_dec_if_one(refcount_t *r);
diff --git a/kernel/panic.c b/kernel/panic.c
index a58932b41700..fb8576ce1638 100644
--- a/kernel/panic.c
+++ b/kernel/panic.c
@@ -26,6 +26,7 @@
#include <linux/nmi.h>
#include <linux/console.h>
#include <linux/bug.h>
+#include <linux/ratelimit.h>
#define PANIC_TIMER_STEP 100
#define PANIC_BLINK_SPD 18
@@ -601,6 +602,27 @@ EXPORT_SYMBOL(__stack_chk_fail);
#endif
+#ifdef CONFIG_ARCH_HAS_REFCOUNT
+static DEFINE_RATELIMIT_STATE(refcount_ratelimit, 15 * HZ, 3);
+
+void refcount_error_report(struct pt_regs *regs)
+{
+ /* Always make sure triggering process will be terminated. */
+ do_send_sig_info(SIGKILL, SEND_SIG_FORCED, current, true);
+
+ if (!__ratelimit(&refcount_ratelimit))
+ return;
+
+ pr_emerg("refcount overflow detected in: %s:%d, uid/euid: %u/%u\n",
+ current->comm, task_pid_nr(current),
+ from_kuid_munged(&init_user_ns, current_uid()),
+ from_kuid_munged(&init_user_ns, current_euid()));
+ print_symbol(KERN_EMERG "refcount error occurred at: %s\n",
+ instruction_pointer(regs));
+ show_regs(regs);
+}
+#endif
+
core_param(panic, panic_timeout, int, 0644);
core_param(pause_on_oops, pause_on_oops, int, 0644);
core_param(panic_on_warn, panic_on_warn, int, 0644);
--
Best Regards
Li Kun
Kees Cook
2017-06-29 22:05:44 UTC
Post by Li Kun
Post by Kees Cook
+bool ex_handler_refcount(const struct exception_table_entry *fixup,
+ struct pt_regs *regs, int trapnr)
+{
+ int reset;
+
+ /*
+ * If we crossed from INT_MAX to INT_MIN, the OF flag (result
+ * wrapped around) and the SF flag (result is negative) will be
+ * set. In this case, reset to INT_MAX in an attempt to leave the
+ * refcount usable. Otherwise, we've landed here due to producing
+ * a negative result from either decrementing zero or operating on
+ * a negative value. In this case things are badly broken, so we
+ * saturate to INT_MIN / 2.
+ */
+ if (regs->flags & (X86_EFLAGS_OF | X86_EFLAGS_SF))
+ reset = INT_MAX;
Should it be like this, to indicate that the refcount has wrapped from
INT_MAX to INT_MIN?
if ((regs->flags & (X86_EFLAGS_OF | X86_EFLAGS_SF))
== (X86_EFLAGS_OF | X86_EFLAGS_SF))
reset = INT_MAX;
Ah yes, thanks for the catch. Yeah, that test is expecting both
condition flags to be set.

I'm still on the fence about the best way to deal with the bad states.
I've been pondering just strictly using a saturation value (INT_MIN /
2), which should offer the same system state protection (except for
the inherent resource leak), but that means there isn't really a good
way to kill an offending process (since after saturation ALL processes
will look like violators). It can be argued that killing the process
doesn't actually provide any benefit since the system is still safe,
though.
Post by Li Kun
Post by Kees Cook
+ else
+ reset = INT_MIN / 2;
+ *(int *)regs->cx = reset;
Thanks for looking at this!

-Kees
--
Kees Cook
Pixel Security
Li Kun
2017-06-30 02:42:05 UTC
Post by Kees Cook
Post by Li Kun
Post by Kees Cook
+bool ex_handler_refcount(const struct exception_table_entry *fixup,
+ struct pt_regs *regs, int trapnr)
+{
+ int reset;
+
+ /*
+ * If we crossed from INT_MAX to INT_MIN, the OF flag (result
+ * wrapped around) and the SF flag (result is negative) will be
+ * set. In this case, reset to INT_MAX in an attempt to leave the
+ * refcount usable. Otherwise, we've landed here due to producing
+ * a negative result from either decrementing zero or operating on
+ * a negative value. In this case things are badly broken, so we
+ * saturate to INT_MIN / 2.
+ */
+ if (regs->flags & (X86_EFLAGS_OF | X86_EFLAGS_SF))
+ reset = INT_MAX;
Should it be like this, to indicate that the refcount has wrapped from
INT_MAX to INT_MIN?
if ((regs->flags & (X86_EFLAGS_OF | X86_EFLAGS_SF))
== (X86_EFLAGS_OF | X86_EFLAGS_SF))
reset = INT_MAX;
Ah yes, thanks for the catch. Yeah, that test is expecting both
condition flags to be set.
I'm still on the fence about the best way to deal with the bad states.
I've been pondering just strictly using a saturation value (INT_MIN /
2), which should offer the same system state protection (except for
the inherent resource leak), but that means there isn't really a good
way to kill an offending process (since after saturation ALL processes
will look like violators). It can be argued that killing the process
doesn't actually provide any benefit since the system is still safe,
though.
An immature idea: can we set the count to INT_MAX/2 when we detect and
kill the offending process, and wait to see if there will be another
offender touching the fence? Er, not very accurate, but better than
killing all the processes doing refcount_add, I think.
Post by Kees Cook
Post by Li Kun
Post by Kees Cook
+ else
+ reset = INT_MIN / 2;
+ *(int *)regs->cx = reset;
Thanks for looking at this!
-Kees
--
Best Regards
Li Kun
Kees Cook
2017-06-30 03:58:10 UTC
Post by Kees Cook
Post by Li Kun
Post by Kees Cook
+bool ex_handler_refcount(const struct exception_table_entry *fixup,
+ struct pt_regs *regs, int trapnr)
+{
+ int reset;
+
+ /*
+ * If we crossed from INT_MAX to INT_MIN, the OF flag (result
+ * wrapped around) and the SF flag (result is negative) will be
+ * set. In this case, reset to INT_MAX in an attempt to leave the
+ * refcount usable. Otherwise, we've landed here due to producing
+ * a negative result from either decrementing zero or operating on
+ * a negative value. In this case things are badly broken, so we
+ * saturate to INT_MIN / 2.
+ */
+ if (regs->flags & (X86_EFLAGS_OF | X86_EFLAGS_SF))
+ reset = INT_MAX;
Should it be like this to indicate that the refcount is wapped from
INT_MAX to INT_MIN ?
if (regs->flags & (X86_EFLAGS_OF | X86_EFLAGS_SF)
== (X86_EFLAGS_OF | X86_EFLAGS_SF))
reset = INT_MAX;
Ah yes, thanks for the catch. Yeah, that test is expecting both
condition flags to be set.
I'm still on the fence about the best way to deal with the bad states.
I've been pondering just strictly using a saturation value (INT_MIN /
2), which should offer the same system state protection (except for
the inherent resource leak), but that means there isn't really a good
way to kill an offending process (since after saturation ALL processes
will look like violators). It can be argued that killing the process
doesn't actually provide any benefit since the system is still safe,
though.
An immature idea: can we set the count to INT_MAX/2 when we detect and
kill the offending process, and wait to see if there will be another
offender touching the fence? Er, not very accurate, but better than
killing all the processes doing refcount_add, I think.
Post by Kees Cook
Post by Li Kun
Post by Kees Cook
+ else
+ reset = INT_MIN / 2;
+ *(int *)regs->cx = reset;
I suppose we could kill a process if it did the wrap from INT_MAX to
INT_MIN, and then ignore (though maintain saturation of) the rest.
i.e. if X86_EFLAGS_OF, kill and saturate. If X86_EFLAGS_SF, saturate.
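
In handler terms, that policy would look roughly like this (a sketch,
not a posted patch; kill_offender is a hypothetical flag standing in
for the do_send_sig_info() path):

        if (regs->flags & X86_EFLAGS_OF) {
                /* true INT_MAX -> INT_MIN crossing: saturate and kill */
                *(int *)regs->cx = INT_MIN / 2;
                kill_offender = true;
        } else if (regs->flags & X86_EFLAGS_SF) {
                /* already-saturated or underflow: just re-saturate */
                *(int *)regs->cx = INT_MIN / 2;
        }
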
I'm still curious about catching refcount_dec() (not
refcount_dec_and_test()) hitting zero.

-Kees
--
Kees Cook
Pixel Security
Davidlohr Bueso
2017-05-31 12:27:32 UTC
Post by Kees Cook
A new patch has been added at the start of this series to make the default
refcount_t implementation just use an unchecked atomic_t implementation,
since many kernel subsystems want to be able to opt out of the full
validation, which carries a small performance overhead. When
CONFIG_REFCOUNT_FULL is enabled, the full validation is used.
The other two patches provide overflow protection on x86 without incurring
a performance penalty. The changelog for patch 3 is reproduced here for
To be sure I'm getting this right, after this all archs with the exception
of x86 will use the regular atomic_t ("insecure") flavor, right?

Thanks,
Davidlohr
Kees Cook
2017-05-31 13:20:25 UTC
Post by Davidlohr Bueso
Post by Kees Cook
A new patch has been added at the start of this series to make the default
refcount_t implementation just use an unchecked atomic_t implementation,
since many kernel subsystems want to be able to opt out of the full
validation, which carries a small performance overhead. When
CONFIG_REFCOUNT_FULL is enabled, the full validation is used.
The other two patches provide overflow protection on x86 without incurring
a performance penalty. The changelog for patch 3 is reproduced here for
To be sure I'm getting this right, after this all archs with the exception
of x86 will use the regular atomic_t ("insecure") flavor, right?
If a build does not select CONFIG_REFCOUNT_FULL and lacks
CONFIG_ARCH_HAS_REFCOUNT, refcount_t will be the same as atomic_t
(i.e. no change from the historical behavior where all the ref
counters in the kernel used atomic_t).
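
i.e., after this series include/linux/refcount.h picks among three
implementations (condensed from the patches above):

#ifdef CONFIG_REFCOUNT_FULL
        /* extern prototypes; full checking lives in lib/refcount.c */
#else
# ifdef CONFIG_ARCH_HAS_REFCOUNT
#  include <asm/refcount.h>     /* fast arch-specific checks (x86 here) */
# else
        /* plain inline atomic_t wrappers: no checks, no overhead */
# endif
#endif /* CONFIG_REFCOUNT_FULL */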

-Kees
--
Kees Cook
Pixel Security