author	Koen Kooi <koen@dominion.thruhere.net>	2011-03-17 21:41:22 +0100
committer	Koen Kooi <koen@dominion.thruhere.net>	2011-03-17 21:41:22 +0100
commit	c58cc7d3796dcee6e93885c835ed04cb566abeb2 (patch)
tree	3eea4d4ef6a4ef79e0f4e025d7012c1a5cc38835 /meta-oe/recipes-devtools/gcc/gcc-4.5/linaro/gcc-4.5-linaro-r99379.patch
parent	eec6ab97f712e06eb52c9f7c99e19ffab3ce9d74 (diff)
move layer into meta-oe in preparation for future splits
As per TSC decision
Signed-off-by: Koen Kooi <koen@dominion.thruhere.net>
Diffstat (limited to 'meta-oe/recipes-devtools/gcc/gcc-4.5/linaro/gcc-4.5-linaro-r99379.patch')
-rw-r--r--	meta-oe/recipes-devtools/gcc/gcc-4.5/linaro/gcc-4.5-linaro-r99379.patch	2011
1 file changed, 2011 insertions(+), 0 deletions(-)
diff --git a/meta-oe/recipes-devtools/gcc/gcc-4.5/linaro/gcc-4.5-linaro-r99379.patch b/meta-oe/recipes-devtools/gcc/gcc-4.5/linaro/gcc-4.5-linaro-r99379.patch
new file mode 100644
index 0000000000..e1e89bf8af
--- /dev/null
+++ b/meta-oe/recipes-devtools/gcc/gcc-4.5/linaro/gcc-4.5-linaro-r99379.patch
@@ -0,0 +1,2011 @@
+2010-08-29  Chung-Lin Tang  <cltang@codesourcery.com>
+
+	Backport from mainline:
+
+	2010-04-16  Bernd Schmidt  <bernds@codesourcery.com>
+
+	PR target/41514
+	gcc/
+	* config/arm/arm.md (cbranchsi4_insn): Renamed from "*cbranchsi4_insn".
+	If the previous insn is a cbranchsi4_insn with the same arguments,
+	omit the compare instruction.
+
+	gcc/testsuite/
+	* gcc.target/arm/thumb-comparisons.c: New test.
+
+	gcc/
+	* config/arm/arm.md (addsi3_cbranch): If destination is a high
+	register, inputs must be low registers and we need a low register
+	scratch.  Handle alternative 2 like alternative 3.
+
+	PR target/40603
+	gcc/
+	* config/arm/arm.md (cbranchqi4): New pattern.
+	* config/arm/predicates.md (const0_operand,
+	cbranchqi4_comparison_operator): New predicates.
+
+	gcc/testsuite/
+	* gcc.target/arm/thumb-cbranchqi.c: New test.
+
+	2010-04-27  Bernd Schmidt  <bernds@codesourcery.com>
+
+	PR target/40657
+	gcc/
+	* config/arm/arm.c (thumb1_extra_regs_pushed): New function.
+	(thumb1_expand_prologue, thumb1_output_function_prologue): Call it
+	here to determine which regs to push and how much stack to reserve.
+
+	gcc/testsuite/
+	* gcc.target/arm/thumb-stackframe.c: New test.
+
+	2010-07-02  Bernd Schmidt  <bernds@codesourcery.com>
+
+	PR target/42835
+	gcc/
+	* config/arm/arm-modes.def (CC_NOTB): New mode.
+	* config/arm/arm.c (get_arm_condition_code): Handle it.
+	* config/arm/thumb2.md (thumb2_compare_scc): Delete pattern.
+	* config/arm/arm.md (subsi3_compare0_c): New pattern.
+	(compare_scc): Now a define_and_split.  Add a number of extra
+	splitters before it.
+
+	gcc/testsuite/
+	* gcc.target/arm/pr42835.c: New test.
+
+	PR target/42172
+	gcc/
+	* config/arm/arm.c (thumb1_rtx_costs): Improve support for SIGN_EXTEND
+	and ZERO_EXTEND.
+	(arm_rtx_costs_1): Likewise.
+	(arm_size_rtx_costs): Use arm_rtx_costs_1 for these codes.
+	* config/arm/arm.md (is_arch6): New attribute.
+	(zero_extendhisi2, zero_extendqisi2, extendhisi2,
+	extendqisi2): Tighten the code somewhat, avoiding invalid
+	RTL to occur in the expander patterns.
+	(thumb1_zero_extendhisi2): Merge with thumb1_zero_extendhisi2_v6.
+	(thumb1_zero_extendhisi2_v6): Delete.
+	(thumb1_extendhisi2): Merge with thumb1_extendhisi2_v6.
+	(thumb1_extendhisi2_v6): Delete.
+	(thumb1_extendqisi2): Merge with thumb1_extendhisi2_v6.
+	(thumb1_extendqisi2_v6): Delete.
+	(zero_extendhisi2 for register input splitter): New.
+	(zero_extendqisi2 for register input splitter): New.
+	(thumb1_extendhisi2 for register input splitter): New.
+	(extendhisi2 for register input splitter): New.
+	(extendqisi2 for register input splitter): New.
+	(TARGET_THUMB1 extendqisi2 for memory input splitter): New.
+	(arm_zero_extendhisi2): Allow nonimmediate_operand for operand 1,
+	and add support for a register alternative requiring a split.
+	(thumb1_zero_extendqisi2): Likewise.
+	(arm_zero_extendqisi2): Likewise.
+	(arm_extendhisi2): Likewise.
+	(arm_extendqisi2): Likewise.
+
+	gcc/testsuite/
+	* gcc.target/arm/pr42172-1.c: New test.
+
+	2010-07-05  Bernd Schmidt  <bernds@codesourcery.com>
+
+	* config/arm/arm.c (get_arm_condition_code): Remove CC_NOTBmode case.
+	* arm-modes.def (CC_NOTB): Don't define.
+	* config/arm/arm.md (arm_adddi3): Generate canonical RTL.
+	(adddi_sesidi_di, adddi_zesidi_di): Likewise.
+	(LTUGEU): New code_iterator.
+	(cnb, optab): New corresponding code_attrs.
+	(addsi3_carryin_<optab>): Renamed from addsi3_carryin.  Change pattern
+	to canonical form.  Operands 1 and 2 are commutative.  Parametrize
+	using LTUGEU.
+	(addsi3_carryin_shift_<optab>): Likewise.
+	(addsi3_carryin_alt2_<optab>): Renamed from addsi3_carryin_alt2.
+	Operands 1 and 2 are commutative.  Parametrize using LTUGEU.
+	(addsi3_carryin_alt1, addsi3_carryin_alt3): Remove.
+	(subsi3_compare): Renamed from subsi3_compare0_c.  Change CC_NOTB to
+	CC.
+	(arm_subsi3_insn): Allow constants for operand 0.
+	(compare_scc peephole for eq case): New.
+	(compare_scc splitters): Change CC_NOTB to CC.
+
+	2010-07-09  Bernd Schmidt  <bernds@codesourcery.com>
+
+	PR target/40657
+	gcc/
+	* config/arm/arm.c (thumb1_extra_regs_pushed): New arg FOR_PROLOGUE.
+	All callers changed.
+	Handle the case when we're called for the epilogue.
+	(thumb_unexpanded_epilogue): Use it.
+	(thumb1_expand_epilogue): Likewise.
+
+	gcc/testsuite/
+	* gcc.target/arm/pr40657-1.c: New test.
+	* gcc.target/arm/pr40657-2.c: New test.
+	* gcc.c-torture/execute/pr40657.c: New test.
+
+	gcc/
+	* config/arm/arm.md (addsi3_cbranch): Switch alternatives 0 and 1.
+
+	* config/arm/arm.md (Thumb-1 ldrsb peephole): New.
+
+	* config/arm/arm.md (cbranchqi4): Fix array size.
+	(addsi3_cbranch): Also handle alternative 2 like alternative 3 when
+	calculating length.
+
+	2010-08-27  Paul Brook  <paul@codesourcery.com>
+
+	gcc/
+
+=== modified file 'gcc/config/arm/arm-modes.def'
+--- old/gcc/config/arm/arm-modes.def	2010-07-29 16:58:56 +0000
++++ new/gcc/config/arm/arm-modes.def	2010-08-31 10:00:27 +0000
+@@ -34,6 +34,8 @@
+    CCFPmode should be used with floating equalities.
+    CC_NOOVmode should be used with SImode integer equalities.
+    CC_Zmode should be used if only the Z flag is set correctly
++   CC_Cmode should be used if only the C flag is set correctly, after an
++   addition.
+    CC_Nmode should be used if only the N (sign) flag is set correctly
+    CC_CZmode should be used if only the C and Z flags are correct
+    (used for DImode unsigned comparisons).
+ +=== modified file 'gcc/config/arm/arm.c' +--- old/gcc/config/arm/arm.c 2010-08-25 16:22:17 +0000 ++++ new/gcc/config/arm/arm.c 2010-08-31 10:00:27 +0000 +@@ -6443,6 +6443,7 @@ + thumb1_rtx_costs (rtx x, enum rtx_code code, enum rtx_code outer) + { + enum machine_mode mode = GET_MODE (x); ++ int total; + + switch (code) + { +@@ -6545,24 +6546,20 @@ + return 14; + return 2; + ++ case SIGN_EXTEND: + case ZERO_EXTEND: +- /* XXX still guessing. */ +- switch (GET_MODE (XEXP (x, 0))) +- { +- case QImode: +- return (1 + (mode == DImode ? 4 : 0) +- + (GET_CODE (XEXP (x, 0)) == MEM ? 10 : 0)); +- +- case HImode: +- return (4 + (mode == DImode ? 4 : 0) +- + (GET_CODE (XEXP (x, 0)) == MEM ? 10 : 0)); +- +- case SImode: +- return (1 + (GET_CODE (XEXP (x, 0)) == MEM ? 10 : 0)); +- +- default: +- return 99; +- } ++ total = mode == DImode ? COSTS_N_INSNS (1) : 0; ++ total += thumb1_rtx_costs (XEXP (x, 0), GET_CODE (XEXP (x, 0)), code); ++ ++ if (mode == SImode) ++ return total; ++ ++ if (arm_arch6) ++ return total + COSTS_N_INSNS (1); ++ ++ /* Assume a two-shift sequence. Increase the cost slightly so ++ we prefer actual shifts over an extend operation. 
*/ ++ return total + 1 + COSTS_N_INSNS (2); + + default: + return 99; +@@ -7046,44 +7043,39 @@ + return false; + + case SIGN_EXTEND: +- if (GET_MODE_CLASS (mode) == MODE_INT) +- { +- *total = 0; +- if (mode == DImode) +- *total += COSTS_N_INSNS (1); +- +- if (GET_MODE (XEXP (x, 0)) != SImode) +- { +- if (arm_arch6) +- { +- if (GET_CODE (XEXP (x, 0)) != MEM) +- *total += COSTS_N_INSNS (1); +- } +- else if (!arm_arch4 || GET_CODE (XEXP (x, 0)) != MEM) +- *total += COSTS_N_INSNS (2); +- } +- +- return false; +- } +- +- /* Fall through */ + case ZERO_EXTEND: + *total = 0; + if (GET_MODE_CLASS (mode) == MODE_INT) + { ++ rtx op = XEXP (x, 0); ++ enum machine_mode opmode = GET_MODE (op); ++ + if (mode == DImode) + *total += COSTS_N_INSNS (1); + +- if (GET_MODE (XEXP (x, 0)) != SImode) ++ if (opmode != SImode) + { +- if (arm_arch6) ++ if (MEM_P (op)) + { +- if (GET_CODE (XEXP (x, 0)) != MEM) +- *total += COSTS_N_INSNS (1); ++ /* If !arm_arch4, we use one of the extendhisi2_mem ++ or movhi_bytes patterns for HImode. For a QImode ++ sign extension, we first zero-extend from memory ++ and then perform a shift sequence. */ ++ if (!arm_arch4 && (opmode != QImode || code == SIGN_EXTEND)) ++ *total += COSTS_N_INSNS (2); + } +- else if (!arm_arch4 || GET_CODE (XEXP (x, 0)) != MEM) +- *total += COSTS_N_INSNS (GET_MODE (XEXP (x, 0)) == QImode ? +- 1 : 2); ++ else if (arm_arch6) ++ *total += COSTS_N_INSNS (1); ++ ++ /* We don't have the necessary insn, so we need to perform some ++ other operation. */ ++ else if (TARGET_ARM && code == ZERO_EXTEND && mode == QImode) ++ /* An and with constant 255. */ ++ *total += COSTS_N_INSNS (1); ++ else ++ /* A shift sequence. Increase costs slightly to avoid ++ combining two shifts into an extend operation. 
*/ ++ *total += COSTS_N_INSNS (2) + 1; + } + + return false; +@@ -7333,41 +7325,8 @@ + return false; + + case SIGN_EXTEND: +- *total = 0; +- if (GET_MODE_SIZE (GET_MODE (XEXP (x, 0))) < 4) +- { +- if (!(arm_arch4 && MEM_P (XEXP (x, 0)))) +- *total += COSTS_N_INSNS (arm_arch6 ? 1 : 2); +- } +- if (mode == DImode) +- *total += COSTS_N_INSNS (1); +- return false; +- + case ZERO_EXTEND: +- *total = 0; +- if (!(arm_arch4 && MEM_P (XEXP (x, 0)))) +- { +- switch (GET_MODE (XEXP (x, 0))) +- { +- case QImode: +- *total += COSTS_N_INSNS (1); +- break; +- +- case HImode: +- *total += COSTS_N_INSNS (arm_arch6 ? 1 : 2); +- +- case SImode: +- break; +- +- default: +- *total += COSTS_N_INSNS (2); +- } +- } +- +- if (mode == DImode) +- *total += COSTS_N_INSNS (1); +- +- return false; ++ return arm_rtx_costs_1 (x, outer_code, total, 0); + + case CONST_INT: + if (const_ok_for_arm (INTVAL (x))) +@@ -16898,11 +16857,11 @@ + + case CC_Cmode: + switch (comp_code) +- { +- case LTU: return ARM_CS; +- case GEU: return ARM_CC; +- default: gcc_unreachable (); +- } ++ { ++ case LTU: return ARM_CS; ++ case GEU: return ARM_CC; ++ default: gcc_unreachable (); ++ } + + case CC_CZmode: + switch (comp_code) +@@ -20127,6 +20086,81 @@ + #endif + } + ++/* Given the stack offsets and register mask in OFFSETS, decide how ++ many additional registers to push instead of subtracting a constant ++ from SP. For epilogues the principle is the same except we use pop. ++ FOR_PROLOGUE indicates which we're generating. */ ++static int ++thumb1_extra_regs_pushed (arm_stack_offsets *offsets, bool for_prologue) ++{ ++ HOST_WIDE_INT amount; ++ unsigned long live_regs_mask = offsets->saved_regs_mask; ++ /* Extract a mask of the ones we can give to the Thumb's push/pop ++ instruction. */ ++ unsigned long l_mask = live_regs_mask & (for_prologue ? 0x40ff : 0xff); ++ /* Then count how many other high registers will need to be pushed. 
*/ ++ unsigned long high_regs_pushed = bit_count (live_regs_mask & 0x0f00); ++ int n_free, reg_base; ++ ++ if (!for_prologue && frame_pointer_needed) ++ amount = offsets->locals_base - offsets->saved_regs; ++ else ++ amount = offsets->outgoing_args - offsets->saved_regs; ++ ++ /* If the stack frame size is 512 exactly, we can save one load ++ instruction, which should make this a win even when optimizing ++ for speed. */ ++ if (!optimize_size && amount != 512) ++ return 0; ++ ++ /* Can't do this if there are high registers to push. */ ++ if (high_regs_pushed != 0) ++ return 0; ++ ++ /* Shouldn't do it in the prologue if no registers would normally ++ be pushed at all. In the epilogue, also allow it if we'll have ++ a pop insn for the PC. */ ++ if (l_mask == 0 ++ && (for_prologue ++ || TARGET_BACKTRACE ++ || (live_regs_mask & 1 << LR_REGNUM) == 0 ++ || TARGET_INTERWORK ++ || crtl->args.pretend_args_size != 0)) ++ return 0; ++ ++ /* Don't do this if thumb_expand_prologue wants to emit instructions ++ between the push and the stack frame allocation. */ ++ if (for_prologue ++ && ((flag_pic && arm_pic_register != INVALID_REGNUM) ++ || (!frame_pointer_needed && CALLER_INTERWORKING_SLOT_SIZE > 0))) ++ return 0; ++ ++ reg_base = 0; ++ n_free = 0; ++ if (!for_prologue) ++ { ++ reg_base = arm_size_return_regs () / UNITS_PER_WORD; ++ live_regs_mask >>= reg_base; ++ } ++ ++ while (reg_base + n_free < 8 && !(live_regs_mask & 1) ++ && (for_prologue || call_used_regs[reg_base + n_free])) ++ { ++ live_regs_mask >>= 1; ++ n_free++; ++ } ++ ++ if (n_free == 0) ++ return 0; ++ gcc_assert (amount / 4 * 4 == amount); ++ ++ if (amount >= 512 && (amount - n_free * 4) < 512) ++ return (amount - 508) / 4; ++ if (amount <= n_free * 4) ++ return amount / 4; ++ return 0; ++} ++ + /* The bits which aren't usefully expanded as rtl. 
*/ + const char * + thumb_unexpanded_epilogue (void) +@@ -20135,6 +20169,7 @@ + int regno; + unsigned long live_regs_mask = 0; + int high_regs_pushed = 0; ++ int extra_pop; + int had_to_push_lr; + int size; + +@@ -20154,6 +20189,13 @@ + the register is used to hold a return value. */ + size = arm_size_return_regs (); + ++ extra_pop = thumb1_extra_regs_pushed (offsets, false); ++ if (extra_pop > 0) ++ { ++ unsigned long extra_mask = (1 << extra_pop) - 1; ++ live_regs_mask |= extra_mask << (size / UNITS_PER_WORD); ++ } ++ + /* The prolog may have pushed some high registers to use as + work registers. e.g. the testsuite file: + gcc/testsuite/gcc/gcc.c-torture/execute/complex-2.c +@@ -20237,7 +20279,9 @@ + live_regs_mask); + + /* We have either just popped the return address into the +- PC or it is was kept in LR for the entire function. */ ++ PC or it is was kept in LR for the entire function. ++ Note that thumb_pushpop has already called thumb_exit if the ++ PC was in the list. */ + if (!had_to_push_lr) + thumb_exit (asm_out_file, LR_REGNUM); + } +@@ -20419,6 +20463,7 @@ + stack_pointer_rtx); + + amount = offsets->outgoing_args - offsets->saved_regs; ++ amount -= 4 * thumb1_extra_regs_pushed (offsets, true); + if (amount) + { + if (amount < 512) +@@ -20503,6 +20548,7 @@ + emit_insn (gen_movsi (stack_pointer_rtx, hard_frame_pointer_rtx)); + amount = offsets->locals_base - offsets->saved_regs; + } ++ amount -= 4 * thumb1_extra_regs_pushed (offsets, false); + + gcc_assert (amount >= 0); + if (amount) +@@ -20723,7 +20769,11 @@ + register. 
*/ + else if ((l_mask & 0xff) != 0 + || (high_regs_pushed == 0 && l_mask)) +- thumb_pushpop (f, l_mask, 1, &cfa_offset, l_mask); ++ { ++ unsigned long mask = l_mask; ++ mask |= (1 << thumb1_extra_regs_pushed (offsets, true)) - 1; ++ thumb_pushpop (f, mask, 1, &cfa_offset, mask); ++ } + + if (high_regs_pushed) + { + +=== modified file 'gcc/config/arm/arm.md' +--- old/gcc/config/arm/arm.md 2010-08-25 16:22:17 +0000 ++++ new/gcc/config/arm/arm.md 2010-08-31 10:00:27 +0000 +@@ -150,6 +150,9 @@ + ; patterns that share the same RTL in both ARM and Thumb code. + (define_attr "is_thumb" "no,yes" (const (symbol_ref "thumb_code"))) + ++; IS_ARCH6 is set to 'yes' when we are generating code form ARMv6. ++(define_attr "is_arch6" "no,yes" (const (symbol_ref "arm_arch6"))) ++ + ;; Operand number of an input operand that is shifted. Zero if the + ;; given instruction does not shift one of its input operands. + (define_attr "shift" "" (const_int 0)) +@@ -515,8 +518,8 @@ + (compare:CC_C (plus:SI (match_dup 1) (match_dup 2)) + (match_dup 1))) + (set (match_dup 0) (plus:SI (match_dup 1) (match_dup 2)))]) +- (set (match_dup 3) (plus:SI (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0)) +- (plus:SI (match_dup 4) (match_dup 5))))] ++ (set (match_dup 3) (plus:SI (plus:SI (match_dup 4) (match_dup 5)) ++ (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0))))] + " + { + operands[3] = gen_highpart (SImode, operands[0]); +@@ -543,10 +546,10 @@ + (compare:CC_C (plus:SI (match_dup 1) (match_dup 2)) + (match_dup 1))) + (set (match_dup 0) (plus:SI (match_dup 1) (match_dup 2)))]) +- (set (match_dup 3) (plus:SI (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0)) +- (plus:SI (ashiftrt:SI (match_dup 2) ++ (set (match_dup 3) (plus:SI (plus:SI (ashiftrt:SI (match_dup 2) + (const_int 31)) +- (match_dup 4))))] ++ (match_dup 4)) ++ (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0))))] + " + { + operands[3] = gen_highpart (SImode, operands[0]); +@@ -572,8 +575,8 @@ + (compare:CC_C (plus:SI (match_dup 1) (match_dup 2)) + (match_dup 1))) 
+ (set (match_dup 0) (plus:SI (match_dup 1) (match_dup 2)))]) +- (set (match_dup 3) (plus:SI (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0)) +- (plus:SI (match_dup 4) (const_int 0))))] ++ (set (match_dup 3) (plus:SI (plus:SI (match_dup 4) (const_int 0)) ++ (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0))))] + " + { + operands[3] = gen_highpart (SImode, operands[0]); +@@ -861,24 +864,38 @@ + [(set_attr "conds" "set")] + ) + +-(define_insn "*addsi3_carryin" +- [(set (match_operand:SI 0 "s_register_operand" "=r") +- (plus:SI (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0)) +- (plus:SI (match_operand:SI 1 "s_register_operand" "r") +- (match_operand:SI 2 "arm_rhs_operand" "rI"))))] +- "TARGET_32BIT" +- "adc%?\\t%0, %1, %2" +- [(set_attr "conds" "use")] +-) +- +-(define_insn "*addsi3_carryin_shift" +- [(set (match_operand:SI 0 "s_register_operand" "=r") +- (plus:SI (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0)) +- (plus:SI +- (match_operator:SI 2 "shift_operator" +- [(match_operand:SI 3 "s_register_operand" "r") +- (match_operand:SI 4 "reg_or_int_operand" "rM")]) +- (match_operand:SI 1 "s_register_operand" "r"))))] ++(define_code_iterator LTUGEU [ltu geu]) ++(define_code_attr cnb [(ltu "CC_C") (geu "CC")]) ++(define_code_attr optab [(ltu "ltu") (geu "geu")]) ++ ++(define_insn "*addsi3_carryin_<optab>" ++ [(set (match_operand:SI 0 "s_register_operand" "=r") ++ (plus:SI (plus:SI (match_operand:SI 1 "s_register_operand" "%r") ++ (match_operand:SI 2 "arm_rhs_operand" "rI")) ++ (LTUGEU:SI (reg:<cnb> CC_REGNUM) (const_int 0))))] ++ "TARGET_32BIT" ++ "adc%?\\t%0, %1, %2" ++ [(set_attr "conds" "use")] ++) ++ ++(define_insn "*addsi3_carryin_alt2_<optab>" ++ [(set (match_operand:SI 0 "s_register_operand" "=r") ++ (plus:SI (plus:SI (LTUGEU:SI (reg:<cnb> CC_REGNUM) (const_int 0)) ++ (match_operand:SI 1 "s_register_operand" "%r")) ++ (match_operand:SI 2 "arm_rhs_operand" "rI")))] ++ "TARGET_32BIT" ++ "adc%?\\t%0, %1, %2" ++ [(set_attr "conds" "use")] ++) ++ ++(define_insn 
"*addsi3_carryin_shift_<optab>" ++ [(set (match_operand:SI 0 "s_register_operand" "=r") ++ (plus:SI (plus:SI ++ (match_operator:SI 2 "shift_operator" ++ [(match_operand:SI 3 "s_register_operand" "r") ++ (match_operand:SI 4 "reg_or_int_operand" "rM")]) ++ (match_operand:SI 1 "s_register_operand" "r")) ++ (LTUGEU:SI (reg:<cnb> CC_REGNUM) (const_int 0))))] + "TARGET_32BIT" + "adc%?\\t%0, %1, %3%S2" + [(set_attr "conds" "use") +@@ -887,36 +904,6 @@ + (const_string "alu_shift_reg")))] + ) + +-(define_insn "*addsi3_carryin_alt1" +- [(set (match_operand:SI 0 "s_register_operand" "=r") +- (plus:SI (plus:SI (match_operand:SI 1 "s_register_operand" "r") +- (match_operand:SI 2 "arm_rhs_operand" "rI")) +- (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0))))] +- "TARGET_32BIT" +- "adc%?\\t%0, %1, %2" +- [(set_attr "conds" "use")] +-) +- +-(define_insn "*addsi3_carryin_alt2" +- [(set (match_operand:SI 0 "s_register_operand" "=r") +- (plus:SI (plus:SI (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0)) +- (match_operand:SI 1 "s_register_operand" "r")) +- (match_operand:SI 2 "arm_rhs_operand" "rI")))] +- "TARGET_32BIT" +- "adc%?\\t%0, %1, %2" +- [(set_attr "conds" "use")] +-) +- +-(define_insn "*addsi3_carryin_alt3" +- [(set (match_operand:SI 0 "s_register_operand" "=r") +- (plus:SI (plus:SI (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0)) +- (match_operand:SI 2 "arm_rhs_operand" "rI")) +- (match_operand:SI 1 "s_register_operand" "r")))] +- "TARGET_32BIT" +- "adc%?\\t%0, %1, %2" +- [(set_attr "conds" "use")] +-) +- + (define_expand "incscc" + [(set (match_operand:SI 0 "s_register_operand" "=r,r") + (plus:SI (match_operator:SI 2 "arm_comparison_operator" +@@ -1116,24 +1103,27 @@ + + ; ??? 
Check Thumb-2 split length + (define_insn_and_split "*arm_subsi3_insn" +- [(set (match_operand:SI 0 "s_register_operand" "=r,rk,r") +- (minus:SI (match_operand:SI 1 "reg_or_int_operand" "rI,!k,?n") +- (match_operand:SI 2 "s_register_operand" "r, r, r")))] ++ [(set (match_operand:SI 0 "s_register_operand" "=r,r,rk,r,r") ++ (minus:SI (match_operand:SI 1 "reg_or_int_operand" "rI,r,!k,?n,r") ++ (match_operand:SI 2 "reg_or_int_operand" "r,rI, r, r,?n")))] + "TARGET_32BIT" + "@ + rsb%?\\t%0, %2, %1 + sub%?\\t%0, %1, %2 ++ sub%?\\t%0, %1, %2 ++ # + #" +- "TARGET_32BIT +- && GET_CODE (operands[1]) == CONST_INT +- && !const_ok_for_arm (INTVAL (operands[1]))" ++ "&& ((GET_CODE (operands[1]) == CONST_INT ++ && !const_ok_for_arm (INTVAL (operands[1]))) ++ || (GET_CODE (operands[2]) == CONST_INT ++ && !const_ok_for_arm (INTVAL (operands[2]))))" + [(clobber (const_int 0))] + " + arm_split_constant (MINUS, SImode, curr_insn, + INTVAL (operands[1]), operands[0], operands[2], 0); + DONE; + " +- [(set_attr "length" "4,4,16") ++ [(set_attr "length" "4,4,4,16,16") + (set_attr "predicable" "yes")] + ) + +@@ -1165,6 +1155,19 @@ + [(set_attr "conds" "set")] + ) + ++(define_insn "*subsi3_compare" ++ [(set (reg:CC CC_REGNUM) ++ (compare:CC (match_operand:SI 1 "arm_rhs_operand" "r,I") ++ (match_operand:SI 2 "arm_rhs_operand" "rI,r"))) ++ (set (match_operand:SI 0 "s_register_operand" "=r,r") ++ (minus:SI (match_dup 1) (match_dup 2)))] ++ "TARGET_32BIT" ++ "@ ++ sub%.\\t%0, %1, %2 ++ rsb%.\\t%0, %2, %1" ++ [(set_attr "conds" "set")] ++) ++ + (define_expand "decscc" + [(set (match_operand:SI 0 "s_register_operand" "=r,r") + (minus:SI (match_operand:SI 1 "s_register_operand" "0,?r") +@@ -4050,93 +4053,46 @@ + ) + + (define_expand "zero_extendhisi2" +- [(set (match_dup 2) +- (ashift:SI (match_operand:HI 1 "nonimmediate_operand" "") +- (const_int 16))) +- (set (match_operand:SI 0 "s_register_operand" "") +- (lshiftrt:SI (match_dup 2) (const_int 16)))] ++ [(set (match_operand:SI 0 
"s_register_operand" "") ++ (zero_extend:SI (match_operand:HI 1 "nonimmediate_operand" "")))] + "TARGET_EITHER" +- " +- { +- if ((TARGET_THUMB1 || arm_arch4) && GET_CODE (operands[1]) == MEM) +- { +- emit_insn (gen_rtx_SET (VOIDmode, operands[0], +- gen_rtx_ZERO_EXTEND (SImode, operands[1]))); +- DONE; +- } +- +- if (TARGET_ARM && GET_CODE (operands[1]) == MEM) +- { +- emit_insn (gen_movhi_bytes (operands[0], operands[1])); +- DONE; +- } +- +- if (!s_register_operand (operands[1], HImode)) +- operands[1] = copy_to_mode_reg (HImode, operands[1]); +- +- if (arm_arch6) +- { +- emit_insn (gen_rtx_SET (VOIDmode, operands[0], +- gen_rtx_ZERO_EXTEND (SImode, operands[1]))); +- DONE; +- } +- +- operands[1] = gen_lowpart (SImode, operands[1]); +- operands[2] = gen_reg_rtx (SImode); +- }" +-) ++{ ++ if (TARGET_ARM && !arm_arch4 && MEM_P (operands[1])) ++ { ++ emit_insn (gen_movhi_bytes (operands[0], operands[1])); ++ DONE; ++ } ++ if (!arm_arch6 && !MEM_P (operands[1])) ++ { ++ rtx t = gen_lowpart (SImode, operands[1]); ++ rtx tmp = gen_reg_rtx (SImode); ++ emit_insn (gen_ashlsi3 (tmp, t, GEN_INT (16))); ++ emit_insn (gen_lshrsi3 (operands[0], tmp, GEN_INT (16))); ++ DONE; ++ } ++}) ++ ++(define_split ++ [(set (match_operand:SI 0 "register_operand" "") ++ (zero_extend:SI (match_operand:HI 1 "register_operand" "l,m")))] ++ "!TARGET_THUMB2 && !arm_arch6" ++ [(set (match_dup 0) (ashift:SI (match_dup 2) (const_int 16))) ++ (set (match_dup 0) (lshiftrt:SI (match_dup 0) (const_int 16)))] ++{ ++ operands[2] = gen_lowpart (SImode, operands[1]); ++}) + + (define_insn "*thumb1_zero_extendhisi2" +- [(set (match_operand:SI 0 "register_operand" "=l") +- (zero_extend:SI (match_operand:HI 1 "memory_operand" "m")))] +- "TARGET_THUMB1 && !arm_arch6" +- "* +- rtx mem = XEXP (operands[1], 0); +- +- if (GET_CODE (mem) == CONST) +- mem = XEXP (mem, 0); +- +- if (GET_CODE (mem) == LABEL_REF) +- return \"ldr\\t%0, %1\"; +- +- if (GET_CODE (mem) == PLUS) +- { +- rtx a = XEXP (mem, 0); +- rtx b = 
XEXP (mem, 1); +- +- /* This can happen due to bugs in reload. */ +- if (GET_CODE (a) == REG && REGNO (a) == SP_REGNUM) +- { +- rtx ops[2]; +- ops[0] = operands[0]; +- ops[1] = a; +- +- output_asm_insn (\"mov %0, %1\", ops); +- +- XEXP (mem, 0) = operands[0]; +- } +- +- else if ( GET_CODE (a) == LABEL_REF +- && GET_CODE (b) == CONST_INT) +- return \"ldr\\t%0, %1\"; +- } +- +- return \"ldrh\\t%0, %1\"; +- " +- [(set_attr "length" "4") +- (set_attr "type" "load_byte") +- (set_attr "pool_range" "60")] +-) +- +-(define_insn "*thumb1_zero_extendhisi2_v6" + [(set (match_operand:SI 0 "register_operand" "=l,l") + (zero_extend:SI (match_operand:HI 1 "nonimmediate_operand" "l,m")))] +- "TARGET_THUMB1 && arm_arch6" ++ "TARGET_THUMB1" + "* + rtx mem; + +- if (which_alternative == 0) ++ if (which_alternative == 0 && arm_arch6) + return \"uxth\\t%0, %1\"; ++ if (which_alternative == 0) ++ return \"#\"; + + mem = XEXP (operands[1], 0); + +@@ -4170,20 +4126,25 @@ + + return \"ldrh\\t%0, %1\"; + " +- [(set_attr "length" "2,4") ++ [(set_attr_alternative "length" ++ [(if_then_else (eq_attr "is_arch6" "yes") ++ (const_int 2) (const_int 4)) ++ (const_int 4)]) + (set_attr "type" "alu_shift,load_byte") + (set_attr "pool_range" "*,60")] + ) + + (define_insn "*arm_zero_extendhisi2" +- [(set (match_operand:SI 0 "s_register_operand" "=r") +- (zero_extend:SI (match_operand:HI 1 "memory_operand" "m")))] ++ [(set (match_operand:SI 0 "s_register_operand" "=r,r") ++ (zero_extend:SI (match_operand:HI 1 "nonimmediate_operand" "r,m")))] + "TARGET_ARM && arm_arch4 && !arm_arch6" +- "ldr%(h%)\\t%0, %1" +- [(set_attr "type" "load_byte") ++ "@ ++ # ++ ldr%(h%)\\t%0, %1" ++ [(set_attr "type" "alu_shift,load_byte") + (set_attr "predicable" "yes") +- (set_attr "pool_range" "256") +- (set_attr "neg_pool_range" "244")] ++ (set_attr "pool_range" "*,256") ++ (set_attr "neg_pool_range" "*,244")] + ) + + (define_insn "*arm_zero_extendhisi2_v6" +@@ -4213,50 +4174,49 @@ + [(set (match_operand:SI 0 
"s_register_operand" "") + (zero_extend:SI (match_operand:QI 1 "nonimmediate_operand" "")))] + "TARGET_EITHER" +- " +- if (!arm_arch6 && GET_CODE (operands[1]) != MEM) +- { +- if (TARGET_ARM) +- { +- emit_insn (gen_andsi3 (operands[0], +- gen_lowpart (SImode, operands[1]), +- GEN_INT (255))); +- } +- else /* TARGET_THUMB */ +- { +- rtx temp = gen_reg_rtx (SImode); +- rtx ops[3]; +- +- operands[1] = copy_to_mode_reg (QImode, operands[1]); +- operands[1] = gen_lowpart (SImode, operands[1]); +- +- ops[0] = temp; +- ops[1] = operands[1]; +- ops[2] = GEN_INT (24); +- +- emit_insn (gen_rtx_SET (VOIDmode, ops[0], +- gen_rtx_ASHIFT (SImode, ops[1], ops[2]))); +- +- ops[0] = operands[0]; +- ops[1] = temp; +- ops[2] = GEN_INT (24); +- +- emit_insn (gen_rtx_SET (VOIDmode, ops[0], +- gen_rtx_LSHIFTRT (SImode, ops[1], ops[2]))); +- } +- DONE; +- } +- " +-) ++{ ++ if (TARGET_ARM && !arm_arch6 && GET_CODE (operands[1]) != MEM) ++ { ++ emit_insn (gen_andsi3 (operands[0], ++ gen_lowpart (SImode, operands[1]), ++ GEN_INT (255))); ++ DONE; ++ } ++ if (!arm_arch6 && !MEM_P (operands[1])) ++ { ++ rtx t = gen_lowpart (SImode, operands[1]); ++ rtx tmp = gen_reg_rtx (SImode); ++ emit_insn (gen_ashlsi3 (tmp, t, GEN_INT (24))); ++ emit_insn (gen_lshrsi3 (operands[0], tmp, GEN_INT (24))); ++ DONE; ++ } ++}) ++ ++(define_split ++ [(set (match_operand:SI 0 "register_operand" "") ++ (zero_extend:SI (match_operand:QI 1 "register_operand" "")))] ++ "!arm_arch6" ++ [(set (match_dup 0) (ashift:SI (match_dup 2) (const_int 24))) ++ (set (match_dup 0) (lshiftrt:SI (match_dup 0) (const_int 24)))] ++{ ++ operands[2] = simplify_gen_subreg (SImode, operands[1], QImode, 0); ++ if (TARGET_ARM) ++ { ++ emit_insn (gen_andsi3 (operands[0], operands[2], GEN_INT (255))); ++ DONE; ++ } ++}) + + (define_insn "*thumb1_zero_extendqisi2" +- [(set (match_operand:SI 0 "register_operand" "=l") +- (zero_extend:SI (match_operand:QI 1 "memory_operand" "m")))] ++ [(set (match_operand:SI 0 "register_operand" "=l,l") ++ 
(zero_extend:SI (match_operand:QI 1 "nonimmediate_operand" "l,m")))] + "TARGET_THUMB1 && !arm_arch6" +- "ldrb\\t%0, %1" +- [(set_attr "length" "2") +- (set_attr "type" "load_byte") +- (set_attr "pool_range" "32")] ++ "@ ++ # ++ ldrb\\t%0, %1" ++ [(set_attr "length" "4,2") ++ (set_attr "type" "alu_shift,load_byte") ++ (set_attr "pool_range" "*,32")] + ) + + (define_insn "*thumb1_zero_extendqisi2_v6" +@@ -4272,14 +4232,17 @@ + ) + + (define_insn "*arm_zero_extendqisi2" +- [(set (match_operand:SI 0 "s_register_operand" "=r") +- (zero_extend:SI (match_operand:QI 1 "memory_operand" "m")))] ++ [(set (match_operand:SI 0 "s_register_operand" "=r,r") ++ (zero_extend:SI (match_operand:QI 1 "nonimmediate_operand" "r,m")))] + "TARGET_ARM && !arm_arch6" +- "ldr%(b%)\\t%0, %1\\t%@ zero_extendqisi2" +- [(set_attr "type" "load_byte") ++ "@ ++ # ++ ldr%(b%)\\t%0, %1\\t%@ zero_extendqisi2" ++ [(set_attr "length" "8,4") ++ (set_attr "type" "alu_shift,load_byte") + (set_attr "predicable" "yes") +- (set_attr "pool_range" "4096") +- (set_attr "neg_pool_range" "4084")] ++ (set_attr "pool_range" "*,4096") ++ (set_attr "neg_pool_range" "*,4084")] + ) + + (define_insn "*arm_zero_extendqisi2_v6" +@@ -4358,108 +4321,42 @@ + ) + + (define_expand "extendhisi2" +- [(set (match_dup 2) +- (ashift:SI (match_operand:HI 1 "nonimmediate_operand" "") +- (const_int 16))) +- (set (match_operand:SI 0 "s_register_operand" "") +- (ashiftrt:SI (match_dup 2) +- (const_int 16)))] ++ [(set (match_operand:SI 0 "s_register_operand" "") ++ (sign_extend:SI (match_operand:HI 1 "nonimmediate_operand" "")))] + "TARGET_EITHER" +- " +- { +- if (GET_CODE (operands[1]) == MEM) +- { +- if (TARGET_THUMB1) +- { +- emit_insn (gen_thumb1_extendhisi2 (operands[0], operands[1])); +- DONE; +- } +- else if (arm_arch4) +- { +- emit_insn (gen_rtx_SET (VOIDmode, operands[0], +- gen_rtx_SIGN_EXTEND (SImode, operands[1]))); +- DONE; +- } +- } +- +- if (TARGET_ARM && GET_CODE (operands[1]) == MEM) +- { +- emit_insn (gen_extendhisi2_mem 
(operands[0], operands[1])); +- DONE; +- } +- +- if (!s_register_operand (operands[1], HImode)) +- operands[1] = copy_to_mode_reg (HImode, operands[1]); +- +- if (arm_arch6) +- { +- if (TARGET_THUMB1) +- emit_insn (gen_thumb1_extendhisi2 (operands[0], operands[1])); +- else +- emit_insn (gen_rtx_SET (VOIDmode, operands[0], +- gen_rtx_SIGN_EXTEND (SImode, operands[1]))); +- +- DONE; +- } +- +- operands[1] = gen_lowpart (SImode, operands[1]); +- operands[2] = gen_reg_rtx (SImode); +- }" +-) +- +-(define_insn "thumb1_extendhisi2" +- [(set (match_operand:SI 0 "register_operand" "=l") +- (sign_extend:SI (match_operand:HI 1 "memory_operand" "m"))) +- (clobber (match_scratch:SI 2 "=&l"))] +- "TARGET_THUMB1 && !arm_arch6" +- "* +- { +- rtx ops[4]; +- rtx mem = XEXP (operands[1], 0); +- +- /* This code used to try to use 'V', and fix the address only if it was +- offsettable, but this fails for e.g. REG+48 because 48 is outside the +- range of QImode offsets, and offsettable_address_p does a QImode +- address check. 
*/ +- +- if (GET_CODE (mem) == CONST) +- mem = XEXP (mem, 0); +- +- if (GET_CODE (mem) == LABEL_REF) +- return \"ldr\\t%0, %1\"; +- +- if (GET_CODE (mem) == PLUS) +- { +- rtx a = XEXP (mem, 0); +- rtx b = XEXP (mem, 1); +- +- if (GET_CODE (a) == LABEL_REF +- && GET_CODE (b) == CONST_INT) +- return \"ldr\\t%0, %1\"; +- +- if (GET_CODE (b) == REG) +- return \"ldrsh\\t%0, %1\"; +- +- ops[1] = a; +- ops[2] = b; +- } +- else +- { +- ops[1] = mem; +- ops[2] = const0_rtx; +- } +- +- gcc_assert (GET_CODE (ops[1]) == REG); +- +- ops[0] = operands[0]; +- ops[3] = operands[2]; +- output_asm_insn (\"mov\\t%3, %2\;ldrsh\\t%0, [%1, %3]\", ops); +- return \"\"; +- }" +- [(set_attr "length" "4") +- (set_attr "type" "load_byte") +- (set_attr "pool_range" "1020")] +-) ++{ ++ if (TARGET_THUMB1) ++ { ++ emit_insn (gen_thumb1_extendhisi2 (operands[0], operands[1])); ++ DONE; ++ } ++ if (MEM_P (operands[1]) && TARGET_ARM && !arm_arch4) ++ { ++ emit_insn (gen_extendhisi2_mem (operands[0], operands[1])); ++ DONE; ++ } ++ ++ if (!arm_arch6 && !MEM_P (operands[1])) ++ { ++ rtx t = gen_lowpart (SImode, operands[1]); ++ rtx tmp = gen_reg_rtx (SImode); ++ emit_insn (gen_ashlsi3 (tmp, t, GEN_INT (16))); ++ emit_insn (gen_ashrsi3 (operands[0], tmp, GEN_INT (16))); ++ DONE; ++ } ++}) ++ ++(define_split ++ [(parallel ++ [(set (match_operand:SI 0 "register_operand" "") ++ (sign_extend:SI (match_operand:HI 1 "register_operand" ""))) ++ (clobber (match_scratch:SI 2 ""))])] ++ "!arm_arch6" ++ [(set (match_dup 0) (ashift:SI (match_dup 2) (const_int 16))) ++ (set (match_dup 0) (ashiftrt:SI (match_dup 0) (const_int 16)))] ++{ ++ operands[2] = simplify_gen_subreg (SImode, operands[1], HImode, 0); ++}) + + ;; We used to have an early-clobber on the scratch register here. + ;; However, there's a bug somewhere in reload which means that this +@@ -4468,16 +4365,18 @@ + ;; we try to verify the operands. 
Fortunately, we don't really need + ;; the early-clobber: we can always use operand 0 if operand 2 + ;; overlaps the address. +-(define_insn "*thumb1_extendhisi2_insn_v6" ++(define_insn "thumb1_extendhisi2" + [(set (match_operand:SI 0 "register_operand" "=l,l") + (sign_extend:SI (match_operand:HI 1 "nonimmediate_operand" "l,m"))) + (clobber (match_scratch:SI 2 "=X,l"))] +- "TARGET_THUMB1 && arm_arch6" ++ "TARGET_THUMB1" + "* + { + rtx ops[4]; + rtx mem; + ++ if (which_alternative == 0 && !arm_arch6) ++ return \"#\"; + if (which_alternative == 0) + return \"sxth\\t%0, %1\"; + +@@ -4525,7 +4424,10 @@ + output_asm_insn (\"mov\\t%3, %2\;ldrsh\\t%0, [%1, %3]\", ops); + return \"\"; + }" +- [(set_attr "length" "2,4") ++ [(set_attr_alternative "length" ++ [(if_then_else (eq_attr "is_arch6" "yes") ++ (const_int 2) (const_int 4)) ++ (const_int 4)]) + (set_attr "type" "alu_shift,load_byte") + (set_attr "pool_range" "*,1020")] + ) +@@ -4566,15 +4468,28 @@ + }" + ) + ++(define_split ++ [(set (match_operand:SI 0 "register_operand" "") ++ (sign_extend:SI (match_operand:HI 1 "register_operand" "")))] ++ "!arm_arch6" ++ [(set (match_dup 0) (ashift:SI (match_dup 2) (const_int 16))) ++ (set (match_dup 0) (ashiftrt:SI (match_dup 0) (const_int 16)))] ++{ ++ operands[2] = simplify_gen_subreg (SImode, operands[1], HImode, 0); ++}) ++ + (define_insn "*arm_extendhisi2" +- [(set (match_operand:SI 0 "s_register_operand" "=r") +- (sign_extend:SI (match_operand:HI 1 "memory_operand" "m")))] ++ [(set (match_operand:SI 0 "s_register_operand" "=r,r") ++ (sign_extend:SI (match_operand:HI 1 "nonimmediate_operand" "r,m")))] + "TARGET_ARM && arm_arch4 && !arm_arch6" +- "ldr%(sh%)\\t%0, %1" +- [(set_attr "type" "load_byte") ++ "@ ++ # ++ ldr%(sh%)\\t%0, %1" ++ [(set_attr "length" "8,4") ++ (set_attr "type" "alu_shift,load_byte") + (set_attr "predicable" "yes") +- (set_attr "pool_range" "256") +- (set_attr "neg_pool_range" "244")] ++ (set_attr "pool_range" "*,256") ++ (set_attr "neg_pool_range" 
"*,244")] + ) + + ;; ??? Check Thumb-2 pool range +@@ -4636,46 +4551,45 @@ + ) + + (define_expand "extendqisi2" +- [(set (match_dup 2) +- (ashift:SI (match_operand:QI 1 "arm_reg_or_extendqisi_mem_op" "") +- (const_int 24))) +- (set (match_operand:SI 0 "s_register_operand" "") +- (ashiftrt:SI (match_dup 2) +- (const_int 24)))] ++ [(set (match_operand:SI 0 "s_register_operand" "") ++ (sign_extend:SI (match_operand:QI 1 "arm_reg_or_extendqisi_mem_op" "")))] + "TARGET_EITHER" +- " +- { +- if ((TARGET_THUMB || arm_arch4) && GET_CODE (operands[1]) == MEM) +- { +- emit_insn (gen_rtx_SET (VOIDmode, operands[0], +- gen_rtx_SIGN_EXTEND (SImode, operands[1]))); +- DONE; +- } +- +- if (!s_register_operand (operands[1], QImode)) +- operands[1] = copy_to_mode_reg (QImode, operands[1]); +- +- if (arm_arch6) +- { +- emit_insn (gen_rtx_SET (VOIDmode, operands[0], +- gen_rtx_SIGN_EXTEND (SImode, operands[1]))); +- DONE; +- } +- +- operands[1] = gen_lowpart (SImode, operands[1]); +- operands[2] = gen_reg_rtx (SImode); +- }" +-) ++{ ++ if (!arm_arch4 && MEM_P (operands[1])) ++ operands[1] = copy_to_mode_reg (QImode, operands[1]); ++ ++ if (!arm_arch6 && !MEM_P (operands[1])) ++ { ++ rtx t = gen_lowpart (SImode, operands[1]); ++ rtx tmp = gen_reg_rtx (SImode); ++ emit_insn (gen_ashlsi3 (tmp, t, GEN_INT (24))); ++ emit_insn (gen_ashrsi3 (operands[0], tmp, GEN_INT (24))); ++ DONE; ++ } ++}) ++ ++(define_split ++ [(set (match_operand:SI 0 "register_operand" "") ++ (sign_extend:SI (match_operand:QI 1 "register_operand" "")))] ++ "!arm_arch6" ++ [(set (match_dup 0) (ashift:SI (match_dup 2) (const_int 24))) ++ (set (match_dup 0) (ashiftrt:SI (match_dup 0) (const_int 24)))] ++{ ++ operands[2] = simplify_gen_subreg (SImode, operands[1], QImode, 0); ++}) + + (define_insn "*arm_extendqisi" +- [(set (match_operand:SI 0 "s_register_operand" "=r") +- (sign_extend:SI (match_operand:QI 1 "arm_extendqisi_mem_op" "Uq")))] ++ [(set (match_operand:SI 0 "s_register_operand" "=r,r") ++ (sign_extend:SI 
(match_operand:QI 1 "arm_reg_or_extendqisi_mem_op" "r,Uq")))] + "TARGET_ARM && arm_arch4 && !arm_arch6" +- "ldr%(sb%)\\t%0, %1" +- [(set_attr "type" "load_byte") ++ "@ ++ # ++ ldr%(sb%)\\t%0, %1" ++ [(set_attr "length" "8,4") ++ (set_attr "type" "alu_shift,load_byte") + (set_attr "predicable" "yes") +- (set_attr "pool_range" "256") +- (set_attr "neg_pool_range" "244")] ++ (set_attr "pool_range" "*,256") ++ (set_attr "neg_pool_range" "*,244")] + ) + + (define_insn "*arm_extendqisi_v6" +@@ -4703,162 +4617,103 @@ + (set_attr "predicable" "yes")] + ) + +-(define_insn "*thumb1_extendqisi2" +- [(set (match_operand:SI 0 "register_operand" "=l,l") +- (sign_extend:SI (match_operand:QI 1 "memory_operand" "V,m")))] +- "TARGET_THUMB1 && !arm_arch6" +- "* +- { +- rtx ops[3]; +- rtx mem = XEXP (operands[1], 0); +- +- if (GET_CODE (mem) == CONST) +- mem = XEXP (mem, 0); +- +- if (GET_CODE (mem) == LABEL_REF) +- return \"ldr\\t%0, %1\"; +- +- if (GET_CODE (mem) == PLUS +- && GET_CODE (XEXP (mem, 0)) == LABEL_REF) +- return \"ldr\\t%0, %1\"; +- +- if (which_alternative == 0) +- return \"ldrsb\\t%0, %1\"; +- +- ops[0] = operands[0]; +- +- if (GET_CODE (mem) == PLUS) +- { +- rtx a = XEXP (mem, 0); +- rtx b = XEXP (mem, 1); +- +- ops[1] = a; +- ops[2] = b; +- +- if (GET_CODE (a) == REG) +- { +- if (GET_CODE (b) == REG) +- output_asm_insn (\"ldrsb\\t%0, [%1, %2]\", ops); +- else if (REGNO (a) == REGNO (ops[0])) +- { +- output_asm_insn (\"ldrb\\t%0, [%1, %2]\", ops); +- output_asm_insn (\"lsl\\t%0, %0, #24\", ops); +- output_asm_insn (\"asr\\t%0, %0, #24\", ops); +- } +- else +- output_asm_insn (\"mov\\t%0, %2\;ldrsb\\t%0, [%1, %0]\", ops); +- } +- else +- { +- gcc_assert (GET_CODE (b) == REG); +- if (REGNO (b) == REGNO (ops[0])) +- { +- output_asm_insn (\"ldrb\\t%0, [%2, %1]\", ops); +- output_asm_insn (\"lsl\\t%0, %0, #24\", ops); +- output_asm_insn (\"asr\\t%0, %0, #24\", ops); +- } +- else +- output_asm_insn (\"mov\\t%0, %2\;ldrsb\\t%0, [%1, %0]\", ops); +- } +- } +- else if 
(GET_CODE (mem) == REG && REGNO (ops[0]) == REGNO (mem)) +- { +- output_asm_insn (\"ldrb\\t%0, [%0, #0]\", ops); +- output_asm_insn (\"lsl\\t%0, %0, #24\", ops); +- output_asm_insn (\"asr\\t%0, %0, #24\", ops); +- } +- else +- { +- ops[1] = mem; +- ops[2] = const0_rtx; +- +- output_asm_insn (\"mov\\t%0, %2\;ldrsb\\t%0, [%1, %0]\", ops); +- } +- return \"\"; +- }" +- [(set_attr "length" "2,6") +- (set_attr "type" "load_byte,load_byte") +- (set_attr "pool_range" "32,32")] +-) +- +-(define_insn "*thumb1_extendqisi2_v6" ++(define_split ++ [(set (match_operand:SI 0 "register_operand" "") ++ (sign_extend:SI (match_operand:QI 1 "memory_operand" "")))] ++ "TARGET_THUMB1 && reload_completed" ++ [(set (match_dup 0) (match_dup 2)) ++ (set (match_dup 0) (sign_extend:SI (match_dup 3)))] ++{ ++ rtx addr = XEXP (operands[1], 0); ++ ++ if (GET_CODE (addr) == CONST) ++ addr = XEXP (addr, 0); ++ ++ if (GET_CODE (addr) == PLUS ++ && REG_P (XEXP (addr, 0)) && REG_P (XEXP (addr, 1))) ++ /* No split necessary. 
*/ ++ FAIL; ++ ++ if (GET_CODE (addr) == PLUS ++ && !REG_P (XEXP (addr, 0)) && !REG_P (XEXP (addr, 1))) ++ FAIL; ++ ++ if (reg_overlap_mentioned_p (operands[0], addr)) ++ { ++ rtx t = gen_lowpart (QImode, operands[0]); ++ emit_move_insn (t, operands[1]); ++ emit_insn (gen_thumb1_extendqisi2 (operands[0], t)); ++ DONE; ++ } ++ ++ if (REG_P (addr)) ++ { ++ addr = gen_rtx_PLUS (Pmode, addr, operands[0]); ++ operands[2] = const0_rtx; ++ } ++ else if (GET_CODE (addr) != PLUS) ++ FAIL; ++ else if (REG_P (XEXP (addr, 0))) ++ { ++ operands[2] = XEXP (addr, 1); ++ addr = gen_rtx_PLUS (Pmode, XEXP (addr, 0), operands[0]); ++ } ++ else ++ { ++ operands[2] = XEXP (addr, 0); ++ addr = gen_rtx_PLUS (Pmode, XEXP (addr, 1), operands[0]); ++ } ++ ++ operands[3] = change_address (operands[1], QImode, addr); ++}) ++ ++(define_peephole2 ++ [(set (match_operand:SI 0 "register_operand" "") ++ (plus:SI (match_dup 0) (match_operand 1 "const_int_operand"))) ++ (set (match_operand:SI 2 "register_operand" "") (const_int 0)) ++ (set (match_operand:SI 3 "register_operand" "") ++ (sign_extend:SI (match_operand:QI 4 "memory_operand" "")))] ++ "TARGET_THUMB1 ++ && GET_CODE (XEXP (operands[4], 0)) == PLUS ++ && rtx_equal_p (operands[0], XEXP (XEXP (operands[4], 0), 0)) ++ && rtx_equal_p (operands[2], XEXP (XEXP (operands[4], 0), 1)) ++ && (peep2_reg_dead_p (3, operands[0]) ++ || rtx_equal_p (operands[0], operands[3])) ++ && (peep2_reg_dead_p (3, operands[2]) ++ || rtx_equal_p (operands[2], operands[3]))" ++ [(set (match_dup 2) (match_dup 1)) ++ (set (match_dup 3) (sign_extend:SI (match_dup 4)))] ++{ ++ rtx addr = gen_rtx_PLUS (Pmode, operands[0], operands[2]); ++ operands[4] = change_address (operands[4], QImode, addr); ++}) ++ ++(define_insn "thumb1_extendqisi2" + [(set (match_operand:SI 0 "register_operand" "=l,l,l") + (sign_extend:SI (match_operand:QI 1 "nonimmediate_operand" "l,V,m")))] +- "TARGET_THUMB1 && arm_arch6" +- "* +- { +- rtx ops[3]; +- rtx mem; +- +- if (which_alternative == 0) +- 
return \"sxtb\\t%0, %1\"; +- +- mem = XEXP (operands[1], 0); +- +- if (GET_CODE (mem) == CONST) +- mem = XEXP (mem, 0); +- +- if (GET_CODE (mem) == LABEL_REF) +- return \"ldr\\t%0, %1\"; +- +- if (GET_CODE (mem) == PLUS +- && GET_CODE (XEXP (mem, 0)) == LABEL_REF) +- return \"ldr\\t%0, %1\"; +- +- if (which_alternative == 0) +- return \"ldrsb\\t%0, %1\"; +- +- ops[0] = operands[0]; +- +- if (GET_CODE (mem) == PLUS) +- { +- rtx a = XEXP (mem, 0); +- rtx b = XEXP (mem, 1); +- +- ops[1] = a; +- ops[2] = b; +- +- if (GET_CODE (a) == REG) +- { +- if (GET_CODE (b) == REG) +- output_asm_insn (\"ldrsb\\t%0, [%1, %2]\", ops); +- else if (REGNO (a) == REGNO (ops[0])) +- { +- output_asm_insn (\"ldrb\\t%0, [%1, %2]\", ops); +- output_asm_insn (\"sxtb\\t%0, %0\", ops); +- } +- else +- output_asm_insn (\"mov\\t%0, %2\;ldrsb\\t%0, [%1, %0]\", ops); +- } +- else +- { +- gcc_assert (GET_CODE (b) == REG); +- if (REGNO (b) == REGNO (ops[0])) +- { +- output_asm_insn (\"ldrb\\t%0, [%2, %1]\", ops); +- output_asm_insn (\"sxtb\\t%0, %0\", ops); +- } +- else +- output_asm_insn (\"mov\\t%0, %2\;ldrsb\\t%0, [%1, %0]\", ops); +- } +- } +- else if (GET_CODE (mem) == REG && REGNO (ops[0]) == REGNO (mem)) +- { +- output_asm_insn (\"ldrb\\t%0, [%0, #0]\", ops); +- output_asm_insn (\"sxtb\\t%0, %0\", ops); +- } +- else +- { +- ops[1] = mem; +- ops[2] = const0_rtx; +- +- output_asm_insn (\"mov\\t%0, %2\;ldrsb\\t%0, [%1, %0]\", ops); +- } +- return \"\"; +- }" +- [(set_attr "length" "2,2,4") +- (set_attr "type" "alu_shift,load_byte,load_byte") +- (set_attr "pool_range" "*,32,32")] ++ "TARGET_THUMB1" ++{ ++ rtx addr; ++ ++ if (which_alternative == 0 && arm_arch6) ++ return "sxtb\\t%0, %1"; ++ if (which_alternative == 0) ++ return "#"; ++ ++ addr = XEXP (operands[1], 0); ++ if (GET_CODE (addr) == PLUS ++ && REG_P (XEXP (addr, 0)) && REG_P (XEXP (addr, 1))) ++ return "ldrsb\\t%0, %1"; ++ ++ return "#"; ++} ++ [(set_attr_alternative "length" ++ [(if_then_else (eq_attr "is_arch6" "yes") ++ (const_int 2) 
(const_int 4)) ++ (const_int 2) ++ (if_then_else (eq_attr "is_arch6" "yes") ++ (const_int 4) (const_int 6))]) ++ (set_attr "type" "alu_shift,load_byte,load_byte")] + ) + + (define_expand "extendsfdf2" +@@ -6784,6 +6639,30 @@ + operands[2] = force_reg (SImode, operands[2]); + ") + ++;; A pattern to recognize a special situation and optimize for it. ++;; On the thumb, zero-extension from memory is preferable to sign-extension ++;; due to the available addressing modes. Hence, convert a signed comparison ++;; with zero into an unsigned comparison with 127 if possible. ++(define_expand "cbranchqi4" ++ [(set (pc) (if_then_else ++ (match_operator 0 "lt_ge_comparison_operator" ++ [(match_operand:QI 1 "memory_operand" "") ++ (match_operand:QI 2 "const0_operand" "")]) ++ (label_ref (match_operand 3 "" "")) ++ (pc)))] ++ "TARGET_THUMB1" ++{ ++ rtx xops[4]; ++ xops[1] = gen_reg_rtx (SImode); ++ emit_insn (gen_zero_extendqisi2 (xops[1], operands[1])); ++ xops[2] = GEN_INT (127); ++ xops[0] = gen_rtx_fmt_ee (GET_CODE (operands[0]) == GE ?
LEU : GTU, ++ VOIDmode, xops[1], xops[2]); ++ xops[3] = operands[3]; ++ emit_insn (gen_cbranchsi4 (xops[0], xops[1], xops[2], xops[3])); ++ DONE; ++}) ++ + (define_expand "cbranchsf4" + [(set (pc) (if_then_else + (match_operator 0 "arm_comparison_operator" +@@ -6849,7 +6728,7 @@ + }" + ) + +-(define_insn "*cbranchsi4_insn" ++(define_insn "cbranchsi4_insn" + [(set (pc) (if_then_else + (match_operator 0 "arm_comparison_operator" + [(match_operand:SI 1 "s_register_operand" "l,*h") +@@ -6858,7 +6737,20 @@ + (pc)))] + "TARGET_THUMB1" + "* +- output_asm_insn (\"cmp\\t%1, %2\", operands); ++ rtx t = prev_nonnote_insn (insn); ++ if (t != NULL_RTX ++ && INSN_P (t) ++ && INSN_CODE (t) == CODE_FOR_cbranchsi4_insn) ++ { ++ t = XEXP (SET_SRC (PATTERN (t)), 0); ++ if (!rtx_equal_p (XEXP (t, 0), operands[1]) ++ || !rtx_equal_p (XEXP (t, 1), operands[2])) ++ t = NULL_RTX; ++ } ++ else ++ t = NULL_RTX; ++ if (t == NULL_RTX) ++ output_asm_insn (\"cmp\\t%1, %2\", operands); + + switch (get_attr_length (insn)) + { +@@ -7674,15 +7566,15 @@ + (if_then_else + (match_operator 4 "arm_comparison_operator" + [(plus:SI +- (match_operand:SI 2 "s_register_operand" "%l,0,*0,1,1,1") +- (match_operand:SI 3 "reg_or_int_operand" "lL,IJ,*r,lIJ,lIJ,lIJ")) ++ (match_operand:SI 2 "s_register_operand" "%0,l,*l,1,1,1") ++ (match_operand:SI 3 "reg_or_int_operand" "IJ,lL,*l,lIJ,lIJ,lIJ")) + (const_int 0)]) + (label_ref (match_operand 5 "" "")) + (pc))) + (set + (match_operand:SI 0 "thumb_cbrch_target_operand" "=l,l,*!h,*?h,*?m,*?m") + (plus:SI (match_dup 2) (match_dup 3))) +- (clobber (match_scratch:SI 1 "=X,X,X,l,&l,&l"))] ++ (clobber (match_scratch:SI 1 "=X,X,l,l,&l,&l"))] + "TARGET_THUMB1 + && (GET_CODE (operands[4]) == EQ + || GET_CODE (operands[4]) == NE +@@ -7692,8 +7584,7 @@ + { + rtx cond[3]; + +- +- cond[0] = (which_alternative < 3) ? operands[0] : operands[1]; ++ cond[0] = (which_alternative < 2) ? 
operands[0] : operands[1]; + cond[1] = operands[2]; + cond[2] = operands[3]; + +@@ -7702,13 +7593,13 @@ + else + output_asm_insn (\"add\\t%0, %1, %2\", cond); + +- if (which_alternative >= 3 ++ if (which_alternative >= 2 + && which_alternative < 4) + output_asm_insn (\"mov\\t%0, %1\", operands); + else if (which_alternative >= 4) + output_asm_insn (\"str\\t%1, %0\", operands); + +- switch (get_attr_length (insn) - ((which_alternative >= 3) ? 2 : 0)) ++ switch (get_attr_length (insn) - ((which_alternative >= 2) ? 2 : 0)) + { + case 4: + return \"b%d4\\t%l5\"; +@@ -7722,7 +7613,7 @@ + [(set (attr "far_jump") + (if_then_else + (ior (and (lt (symbol_ref ("which_alternative")) +- (const_int 3)) ++ (const_int 2)) + (eq_attr "length" "8")) + (eq_attr "length" "10")) + (const_string "yes") +@@ -7730,7 +7621,7 @@ + (set (attr "length") + (if_then_else + (lt (symbol_ref ("which_alternative")) +- (const_int 3)) ++ (const_int 2)) + (if_then_else + (and (ge (minus (match_dup 5) (pc)) (const_int -250)) + (le (minus (match_dup 5) (pc)) (const_int 256))) +@@ -9483,41 +9374,117 @@ + (set_attr "length" "4,8")] + ) + +-(define_insn "*compare_scc" ++; A series of splitters for the compare_scc pattern below. Note that ++; order is important. 
++(define_split ++ [(set (match_operand:SI 0 "s_register_operand" "") ++ (lt:SI (match_operand:SI 1 "s_register_operand" "") ++ (const_int 0))) ++ (clobber (reg:CC CC_REGNUM))] ++ "TARGET_32BIT && reload_completed" ++ [(set (match_dup 0) (lshiftrt:SI (match_dup 1) (const_int 31)))]) ++ ++(define_split ++ [(set (match_operand:SI 0 "s_register_operand" "") ++ (ge:SI (match_operand:SI 1 "s_register_operand" "") ++ (const_int 0))) ++ (clobber (reg:CC CC_REGNUM))] ++ "TARGET_32BIT && reload_completed" ++ [(set (match_dup 0) (not:SI (match_dup 1))) ++ (set (match_dup 0) (lshiftrt:SI (match_dup 0) (const_int 31)))]) ++ ++(define_split ++ [(set (match_operand:SI 0 "s_register_operand" "") ++ (eq:SI (match_operand:SI 1 "s_register_operand" "") ++ (const_int 0))) ++ (clobber (reg:CC CC_REGNUM))] ++ "TARGET_32BIT && reload_completed" ++ [(parallel ++ [(set (reg:CC CC_REGNUM) ++ (compare:CC (const_int 1) (match_dup 1))) ++ (set (match_dup 0) ++ (minus:SI (const_int 1) (match_dup 1)))]) ++ (cond_exec (ltu:CC (reg:CC CC_REGNUM) (const_int 0)) ++ (set (match_dup 0) (const_int 0)))]) ++ ++(define_split ++ [(set (match_operand:SI 0 "s_register_operand" "") ++ (ne:SI (match_operand:SI 1 "s_register_operand" "") ++ (match_operand:SI 2 "const_int_operand" ""))) ++ (clobber (reg:CC CC_REGNUM))] ++ "TARGET_32BIT && reload_completed" ++ [(parallel ++ [(set (reg:CC CC_REGNUM) ++ (compare:CC (match_dup 1) (match_dup 2))) ++ (set (match_dup 0) (plus:SI (match_dup 1) (match_dup 3)))]) ++ (cond_exec (ne:CC (reg:CC CC_REGNUM) (const_int 0)) ++ (set (match_dup 0) (const_int 1)))] ++{ ++ operands[3] = GEN_INT (-INTVAL (operands[2])); ++}) ++ ++(define_split ++ [(set (match_operand:SI 0 "s_register_operand" "") ++ (ne:SI (match_operand:SI 1 "s_register_operand" "") ++ (match_operand:SI 2 "arm_add_operand" ""))) ++ (clobber (reg:CC CC_REGNUM))] ++ "TARGET_32BIT && reload_completed" ++ [(parallel ++ [(set (reg:CC_NOOV CC_REGNUM) ++ (compare:CC_NOOV (minus:SI (match_dup 1) (match_dup 2)) ++ 
(const_int 0))) ++ (set (match_dup 0) (minus:SI (match_dup 1) (match_dup 2)))]) ++ (cond_exec (ne:CC_NOOV (reg:CC_NOOV CC_REGNUM) (const_int 0)) ++ (set (match_dup 0) (const_int 1)))]) ++ ++(define_insn_and_split "*compare_scc" + [(set (match_operand:SI 0 "s_register_operand" "=r,r") + (match_operator:SI 1 "arm_comparison_operator" + [(match_operand:SI 2 "s_register_operand" "r,r") + (match_operand:SI 3 "arm_add_operand" "rI,L")])) + (clobber (reg:CC CC_REGNUM))] +- "TARGET_ARM" +- "* +- if (operands[3] == const0_rtx) +- { +- if (GET_CODE (operands[1]) == LT) +- return \"mov\\t%0, %2, lsr #31\"; +- +- if (GET_CODE (operands[1]) == GE) +- return \"mvn\\t%0, %2\;mov\\t%0, %0, lsr #31\"; +- +- if (GET_CODE (operands[1]) == EQ) +- return \"rsbs\\t%0, %2, #1\;movcc\\t%0, #0\"; +- } +- +- if (GET_CODE (operands[1]) == NE) +- { +- if (which_alternative == 1) +- return \"adds\\t%0, %2, #%n3\;movne\\t%0, #1\"; +- return \"subs\\t%0, %2, %3\;movne\\t%0, #1\"; +- } +- if (which_alternative == 1) +- output_asm_insn (\"cmn\\t%2, #%n3\", operands); +- else +- output_asm_insn (\"cmp\\t%2, %3\", operands); +- return \"mov%D1\\t%0, #0\;mov%d1\\t%0, #1\"; +- " +- [(set_attr "conds" "clob") +- (set_attr "length" "12")] +-) ++ "TARGET_32BIT" ++ "#" ++ "&& reload_completed" ++ [(set (reg:CC CC_REGNUM) (compare:CC (match_dup 2) (match_dup 3))) ++ (cond_exec (match_dup 4) (set (match_dup 0) (const_int 0))) ++ (cond_exec (match_dup 5) (set (match_dup 0) (const_int 1)))] ++{ ++ rtx tmp1; ++ enum machine_mode mode = SELECT_CC_MODE (GET_CODE (operands[1]), ++ operands[2], operands[3]); ++ enum rtx_code rc = GET_CODE (operands[1]); ++ ++ tmp1 = gen_rtx_REG (mode, CC_REGNUM); ++ ++ operands[5] = gen_rtx_fmt_ee (rc, VOIDmode, tmp1, const0_rtx); ++ if (mode == CCFPmode || mode == CCFPEmode) ++ rc = reverse_condition_maybe_unordered (rc); ++ else ++ rc = reverse_condition (rc); ++ operands[4] = gen_rtx_fmt_ee (rc, VOIDmode, tmp1, const0_rtx); ++}) ++ ++;; Attempt to improve the sequence generated 
by the compare_scc splitters ++;; not to use conditional execution. ++(define_peephole2 ++ [(set (reg:CC CC_REGNUM) ++ (compare:CC (match_operand:SI 1 "register_operand" "") ++ (match_operand:SI 2 "arm_rhs_operand" ""))) ++ (cond_exec (ne (reg:CC CC_REGNUM) (const_int 0)) ++ (set (match_operand:SI 0 "register_operand" "") (const_int 0))) ++ (cond_exec (eq (reg:CC CC_REGNUM) (const_int 0)) ++ (set (match_dup 0) (const_int 1))) ++ (match_scratch:SI 3 "r")] ++ "TARGET_32BIT" ++ [(set (match_dup 3) (minus:SI (match_dup 1) (match_dup 2))) ++ (parallel ++ [(set (reg:CC CC_REGNUM) ++ (compare:CC (const_int 0) (match_dup 3))) ++ (set (match_dup 0) (minus:SI (const_int 0) (match_dup 3)))]) ++ (set (match_dup 0) ++ (plus:SI (plus:SI (match_dup 0) (match_dup 3)) ++ (geu:SI (reg:CC CC_REGNUM) (const_int 0))))]) + + (define_insn "*cond_move" + [(set (match_operand:SI 0 "s_register_operand" "=r,r,r") + +=== modified file 'gcc/config/arm/predicates.md' +--- old/gcc/config/arm/predicates.md 2010-08-31 09:40:16 +0000 ++++ new/gcc/config/arm/predicates.md 2010-08-31 10:00:27 +0000 +@@ -115,6 +115,10 @@ + (and (match_code "const_int") + (match_test "const_ok_for_arm (~INTVAL (op))"))) + ++(define_predicate "const0_operand" ++ (and (match_code "const_int") ++ (match_test "INTVAL (op) == 0"))) ++ + ;; Something valid on the RHS of an ARM data-processing instruction + (define_predicate "arm_rhs_operand" + (ior (match_operand 0 "s_register_operand") +@@ -233,6 +237,9 @@ + && (TARGET_FPA || TARGET_VFP)") + (match_code "unordered,ordered,unlt,unle,unge,ungt")))) + ++(define_special_predicate "lt_ge_comparison_operator" ++ (match_code "lt,ge")) ++ + (define_special_predicate "minmax_operator" + (and (match_code "smin,smax,umin,umax") + (match_test "mode == GET_MODE (op)"))) + +=== modified file 'gcc/config/arm/thumb2.md' +--- old/gcc/config/arm/thumb2.md 2010-08-31 09:40:16 +0000 ++++ new/gcc/config/arm/thumb2.md 2010-08-31 10:00:27 +0000 +@@ -599,42 +599,6 @@ + (set_attr "length" "6,10")] 
+ ) + +-(define_insn "*thumb2_compare_scc" +- [(set (match_operand:SI 0 "s_register_operand" "=r,r") +- (match_operator:SI 1 "arm_comparison_operator" +- [(match_operand:SI 2 "s_register_operand" "r,r") +- (match_operand:SI 3 "arm_add_operand" "rI,L")])) +- (clobber (reg:CC CC_REGNUM))] +- "TARGET_THUMB2" +- "* +- if (operands[3] == const0_rtx) +- { +- if (GET_CODE (operands[1]) == LT) +- return \"lsr\\t%0, %2, #31\"; +- +- if (GET_CODE (operands[1]) == GE) +- return \"mvn\\t%0, %2\;lsr\\t%0, %0, #31\"; +- +- if (GET_CODE (operands[1]) == EQ) +- return \"rsbs\\t%0, %2, #1\;it\\tcc\;movcc\\t%0, #0\"; +- } +- +- if (GET_CODE (operands[1]) == NE) +- { +- if (which_alternative == 1) +- return \"adds\\t%0, %2, #%n3\;it\\tne\;movne\\t%0, #1\"; +- return \"subs\\t%0, %2, %3\;it\\tne\;movne\\t%0, #1\"; +- } +- if (which_alternative == 1) +- output_asm_insn (\"cmn\\t%2, #%n3\", operands); +- else +- output_asm_insn (\"cmp\\t%2, %3\", operands); +- return \"ite\\t%D1\;mov%D1\\t%0, #0\;mov%d1\\t%0, #1\"; +- " +- [(set_attr "conds" "clob") +- (set_attr "length" "14")] +-) +- + (define_insn "*thumb2_cond_move" + [(set (match_operand:SI 0 "s_register_operand" "=r,r,r") + (if_then_else:SI (match_operator 3 "equality_operator" + +=== added file 'gcc/testsuite/gcc.c-torture/execute/pr40657.c' +--- old/gcc/testsuite/gcc.c-torture/execute/pr40657.c 1970-01-01 00:00:00 +0000 ++++ new/gcc/testsuite/gcc.c-torture/execute/pr40657.c 2010-08-31 10:00:27 +0000 +@@ -0,0 +1,23 @@ ++/* Verify that the Thumb-1 epilogue size optimization does not clobber the ++ return value.
*/ ++ ++long long v = 0x123456789abc; ++ ++__attribute__((noinline)) void bar (int *x) ++{ ++ asm volatile ("" : "=m" (x) ::); ++} ++ ++__attribute__((noinline)) long long foo() ++{ ++ int x; ++ bar(&x); ++ return v; ++} ++ ++int main () ++{ ++ if (foo () != v) ++ abort (); ++ exit (0); ++} + +=== added file 'gcc/testsuite/gcc.target/arm/pr40657-1.c' +--- old/gcc/testsuite/gcc.target/arm/pr40657-1.c 1970-01-01 00:00:00 +0000 ++++ new/gcc/testsuite/gcc.target/arm/pr40657-1.c 2010-08-31 10:00:27 +0000 +@@ -0,0 +1,13 @@ ++/* { dg-options "-Os -march=armv5te -mthumb" } */ ++/* { dg-require-effective-target arm_thumb1_ok } */ ++/* { dg-final { scan-assembler "pop.*r1.*pc" } } */ ++/* { dg-final { scan-assembler-not "sub\[\\t \]*sp,\[\\t \]*sp" } } */ ++/* { dg-final { scan-assembler-not "add\[\\t \]*sp,\[\\t \]*sp" } } */ ++ ++extern void bar(int*); ++int foo() ++{ ++ int x; ++ bar(&x); ++ return x; ++} + +=== added file 'gcc/testsuite/gcc.target/arm/pr40657-2.c' +--- old/gcc/testsuite/gcc.target/arm/pr40657-2.c 1970-01-01 00:00:00 +0000 ++++ new/gcc/testsuite/gcc.target/arm/pr40657-2.c 2010-08-31 10:00:27 +0000 +@@ -0,0 +1,20 @@ ++/* { dg-options "-Os -march=armv4t -mthumb" } */ ++/* { dg-require-effective-target arm_thumb1_ok } */ ++/* { dg-final { scan-assembler-not "sub\[\\t \]*sp,\[\\t \]*sp" } } */ ++/* { dg-final { scan-assembler-not "add\[\\t \]*sp,\[\\t \]*sp" } } */ ++ ++/* Here, we test that if there's a pop of r[4567] in the epilogue, ++ add sp,sp,#12 is removed and replaced by three additional pops ++ of lower-numbered regs. 
*/ ++ ++extern void bar(int*); ++ ++int t1, t2, t3, t4, t5; ++int foo() ++{ ++ int i,j,k,x = 0; ++ for (i = 0; i < t1; i++) ++ for (j = 0; j < t2; j++) ++ bar(&x); ++ return x; ++} + +=== added file 'gcc/testsuite/gcc.target/arm/pr42172-1.c' +--- old/gcc/testsuite/gcc.target/arm/pr42172-1.c 1970-01-01 00:00:00 +0000 ++++ new/gcc/testsuite/gcc.target/arm/pr42172-1.c 2010-08-31 10:00:27 +0000 +@@ -0,0 +1,19 @@ ++/* { dg-options "-O2" } */ ++ ++struct A { ++ unsigned int f1 : 3; ++ unsigned int f2 : 3; ++ unsigned int f3 : 1; ++ unsigned int f4 : 1; ++ ++}; ++ ++void init_A (struct A *this) ++{ ++ this->f1 = 0; ++ this->f2 = 1; ++ this->f3 = 0; ++ this->f4 = 0; ++} ++ ++/* { dg-final { scan-assembler-times "ldr" 1 } } */ + +=== added file 'gcc/testsuite/gcc.target/arm/pr42835.c' +--- old/gcc/testsuite/gcc.target/arm/pr42835.c 1970-01-01 00:00:00 +0000 ++++ new/gcc/testsuite/gcc.target/arm/pr42835.c 2010-08-31 10:00:27 +0000 +@@ -0,0 +1,12 @@ ++/* { dg-do compile } */ ++/* { dg-options "-mthumb -Os" } */ ++/* { dg-require-effective-target arm_thumb2_ok } */ ++ ++int foo(int *p, int i) ++{ ++ return( (i < 0 && *p == 1) ++ || (i > 0 && *p == 2) ); ++} ++ ++/* { dg-final { scan-assembler-times "movne\[\\t \]*r.,\[\\t \]*#" 1 } } */ ++/* { dg-final { scan-assembler-times "moveq\[\\t \]*r.,\[\\t \]*#" 1 } } */ + +=== added file 'gcc/testsuite/gcc.target/arm/thumb-cbranchqi.c' +--- old/gcc/testsuite/gcc.target/arm/thumb-cbranchqi.c 1970-01-01 00:00:00 +0000 ++++ new/gcc/testsuite/gcc.target/arm/thumb-cbranchqi.c 2010-08-31 10:00:27 +0000 +@@ -0,0 +1,15 @@ ++/* { dg-do compile } */ ++/* { dg-options "-mthumb -Os" } */ ++/* { dg-require-effective-target arm_thumb1_ok } */ ++ ++int ldrb(unsigned char* p) ++{ ++ if (p[8] <= 0x7F) ++ return 2; ++ else ++ return 5; ++} ++ ++ ++/* { dg-final { scan-assembler "127" } } */ ++/* { dg-final { scan-assembler "bhi" } } */ + +=== added file 'gcc/testsuite/gcc.target/arm/thumb-comparisons.c' +--- 
old/gcc/testsuite/gcc.target/arm/thumb-comparisons.c 1970-01-01 00:00:00 +0000 ++++ new/gcc/testsuite/gcc.target/arm/thumb-comparisons.c 2010-08-31 10:00:27 +0000 +@@ -0,0 +1,18 @@ ++/* { dg-do compile } */ ++/* { dg-options "-mthumb -Os" } */ ++/* { dg-require-effective-target arm_thumb1_ok } */ ++ ++int foo(char ch) ++{ ++ switch (ch) { ++ case '-': ++ case '?': ++ case '/': ++ case 99: ++ return 1; ++ default: ++ return 0; ++ } ++} ++ ++/* { dg-final { scan-assembler-times "cmp\[\\t \]*r.,\[\\t \]*#63" 1 } } */ + +=== added file 'gcc/testsuite/gcc.target/arm/thumb-stackframe.c' +--- old/gcc/testsuite/gcc.target/arm/thumb-stackframe.c 1970-01-01 00:00:00 +0000 ++++ new/gcc/testsuite/gcc.target/arm/thumb-stackframe.c 2010-08-31 10:00:27 +0000 +@@ -0,0 +1,13 @@ ++/* { dg-do compile } */ ++/* { dg-options "-mthumb -Os" } */ ++/* { dg-require-effective-target arm_thumb1_ok } */ ++ ++extern void bar(int*); ++int foo() ++{ ++ int x; ++ bar(&x); ++ return x; ++} ++ ++/* { dg-final { scan-assembler-not "sub\[\\t \]*sp,\[\\t \]*sp," } } */ +