Linus Torvalds writes: (Summary)
For example, "max_t/min_t" really don't care at all, since they - by
definition - just take the single specified type.
So I'm wondering if we should just drop the types from __max/__min
(and everything they call) entirely, and instead do

#define __check_type(x,y) ((void)((typeof(x)*)1==(typeof(y)*)1))

#define min(x,y) (__check_type(x,y),__min(x,y))
#define max(x,y) (__check_type(x,y),__max(x,y))
#define min_t(t,x,y) __min((t)(x),(t)(y))
#define max_t(t,x,y) __max((t)(x),(t)(y))
and then __min/__max and friends are much simpler (and can just assume that the type is already fine, and the casting has been done).
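
As a rough illustration of how the pieces fit together, here is a
minimal standalone sketch that compiles with gcc. The __min/__max
bodies below are simplified placeholders for this example only - the
kernel's real versions use statement expressions to avoid evaluating
their arguments twice.

/* __check_type: comparing (typeof(x)*)1 with (typeof(y)*)1 makes the
 * compiler warn when x and y have different types, since comparing
 * distinct pointer types without a cast is a constraint violation. */
#include <stdio.h>

#define __check_type(x,y) ((void)((typeof(x)*)1==(typeof(y)*)1))

#define __min(x,y) ((x) < (y) ? (x) : (y))   /* placeholder body */
#define __max(x,y) ((x) > (y) ? (x) : (y))   /* placeholder body */

#define min(x,y) (__check_type(x,y),__min(x,y))
#define max(x,y) (__check_type(x,y),__max(x,y))

#define min_t(t,x,y) __min((t)(x),(t)(y))
#define max_t(t,x,y) __max((t)(x),(t)(y))

int main(void)
{
	int a = 3;
	unsigned long b = 5;

	/* Fine: both arguments are plain int. */
	printf("%d\n", min(a, 7));

	/* min(a, b) would draw a "comparison of distinct pointer
	 * types" warning from __check_type.  min_t casts both sides
	 * to the single specified type, so no check is needed. */
	printf("%lu\n", min_t(unsigned long, a, b));
	return 0;
}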