[
  {
    "path": ".gitignore",
    "content": "h264enc_*\nqemu-prof\n*.gcda\n*.gcno\n*.gcov"
  },
  {
    "path": ".travis.yml",
    "content": "language: c\naddons:\n  apt:\n    packages:\n      - build-essential\n      - libc6-dev-i386\n      - linux-libc-dev:i386\n      - gcc-arm-none-eabi\n      - gcc-arm-linux-gnueabihf\n      - libnewlib-arm-none-eabi\n      - clang\n      - gcc-5-multilib\n      - gcc-aarch64-linux-gnu\n      - gcc-powerpc-linux-gnu\n      - gcc-5-arm-linux-gnueabihf\n      - gcc-5-aarch64-linux-gnu\n      - gcc-5-powerpc-linux-gnu\n      - libc6-armhf-cross\n      - libc6-arm64-cross\n      - libc6-powerpc-cross\n      - libc6-dev-armhf-cross\n      - libc6-dev-arm64-cross\n      - libc6-dev-powerpc-cross\n      - qemu\n\nos:\n    - linux\n\ncompiler:\n    - gcc\n\nscript:\n    - scripts/build_x86.sh\n    - scripts/build_arm.sh\n    - scripts/test.sh\n"
  },
  {
    "path": "LICENSE",
    "content": "CC0 1.0 Universal\n\nStatement of Purpose\n\nThe laws of most jurisdictions throughout the world automatically confer\nexclusive Copyright and Related Rights (defined below) upon the creator and\nsubsequent owner(s) (each and all, an \"owner\") of an original work of\nauthorship and/or a database (each, a \"Work\").\n\nCertain owners wish to permanently relinquish those rights to a Work for the\npurpose of contributing to a commons of creative, cultural and scientific\nworks (\"Commons\") that the public can reliably and without fear of later\nclaims of infringement build upon, modify, incorporate in other works, reuse\nand redistribute as freely as possible in any form whatsoever and for any\npurposes, including without limitation commercial purposes. These owners may\ncontribute to the Commons to promote the ideal of a free culture and the\nfurther production of creative, cultural and scientific works, or to gain\nreputation or greater distribution for their Work in part through the use and\nefforts of others.\n\nFor these and/or other purposes and motivations, and without any expectation\nof additional consideration or compensation, the person associating CC0 with a\nWork (the \"Affirmer\"), to the extent that he or she is an owner of Copyright\nand Related Rights in the Work, voluntarily elects to apply CC0 to the Work\nand publicly distribute the Work under its terms, with knowledge of his or her\nCopyright and Related Rights in the Work and the meaning and intended legal\neffect of CC0 on those rights.\n\n1. Copyright and Related Rights. A Work made available under CC0 may be\nprotected by copyright and related or neighboring rights (\"Copyright and\nRelated Rights\"). Copyright and Related Rights include, but are not limited\nto, the following:\n\n  i. the right to reproduce, adapt, distribute, perform, display, communicate,\n  and translate a Work;\n\n  ii. moral rights retained by the original author(s) and/or performer(s);\n\n  iii. 
publicity and privacy rights pertaining to a person's image or likeness\n  depicted in a Work;\n\n  iv. rights protecting against unfair competition in regards to a Work,\n  subject to the limitations in paragraph 4(a), below;\n\n  v. rights protecting the extraction, dissemination, use and reuse of data in\n  a Work;\n\n  vi. database rights (such as those arising under Directive 96/9/EC of the\n  European Parliament and of the Council of 11 March 1996 on the legal\n  protection of databases, and under any national implementation thereof,\n  including any amended or successor version of such directive); and\n\n  vii. other similar, equivalent or corresponding rights throughout the world\n  based on applicable law or treaty, and any national implementations thereof.\n\n2. Waiver. To the greatest extent permitted by, but not in contravention of,\napplicable law, Affirmer hereby overtly, fully, permanently, irrevocably and\nunconditionally waives, abandons, and surrenders all of Affirmer's Copyright\nand Related Rights and associated claims and causes of action, whether now\nknown or unknown (including existing as well as future claims and causes of\naction), in the Work (i) in all territories worldwide, (ii) for the maximum\nduration provided by applicable law or treaty (including future time\nextensions), (iii) in any current or future medium and for any number of\ncopies, and (iv) for any purpose whatsoever, including without limitation\ncommercial, advertising or promotional purposes (the \"Waiver\"). Affirmer makes\nthe Waiver for the benefit of each member of the public at large and to the\ndetriment of Affirmer's heirs and successors, fully intending that such Waiver\nshall not be subject to revocation, rescission, cancellation, termination, or\nany other legal or equitable action to disrupt the quiet enjoyment of the Work\nby the public as contemplated by Affirmer's express Statement of Purpose.\n\n3. Public License Fallback. 
Should any part of the Waiver for any reason be\njudged legally invalid or ineffective under applicable law, then the Waiver\nshall be preserved to the maximum extent permitted taking into account\nAffirmer's express Statement of Purpose. In addition, to the extent the Waiver\nis so judged Affirmer hereby grants to each affected person a royalty-free,\nnon transferable, non sublicensable, non exclusive, irrevocable and\nunconditional license to exercise Affirmer's Copyright and Related Rights in\nthe Work (i) in all territories worldwide, (ii) for the maximum duration\nprovided by applicable law or treaty (including future time extensions), (iii)\nin any current or future medium and for any number of copies, and (iv) for any\npurpose whatsoever, including without limitation commercial, advertising or\npromotional purposes (the \"License\"). The License shall be deemed effective as\nof the date CC0 was applied by Affirmer to the Work. Should any part of the\nLicense for any reason be judged legally invalid or ineffective under\napplicable law, such partial invalidity or ineffectiveness shall not\ninvalidate the remainder of the License, and in such case Affirmer hereby\naffirms that he or she will not (i) exercise any of his or her remaining\nCopyright and Related Rights in the Work or (ii) assert any associated claims\nand causes of action with respect to the Work, in either case contrary to\nAffirmer's express Statement of Purpose.\n\n4. Limitations and Disclaimers.\n\n  a. No trademark or patent rights held by Affirmer are waived, abandoned,\n  surrendered, licensed or otherwise affected by this document.\n\n  b. 
Affirmer offers the Work as-is and makes no representations or warranties\n  of any kind concerning the Work, express, implied, statutory or otherwise,\n  including without limitation warranties of title, merchantability, fitness\n  for a particular purpose, non infringement, or the absence of latent or\n  other defects, accuracy, or the present or absence of errors, whether or not\n  discoverable, all to the greatest extent permissible under applicable law.\n\n  c. Affirmer disclaims responsibility for clearing rights of other persons\n  that may apply to the Work or any use thereof, including without limitation\n  any person's Copyright and Related Rights in the Work. Further, Affirmer\n  disclaims responsibility for obtaining any necessary consents, permissions\n  or other rights required for any use of the Work.\n\n  d. Affirmer understands and acknowledges that Creative Commons is not a\n  party to this document and has no duty or obligation with respect to this\n  CC0 or use of the Work.\n\nFor more information, please see\n<http://creativecommons.org/publicdomain/zero/1.0/>\n\n"
  },
  {
    "path": "README.md",
    "content": "minih264\n==========\n\n[![Build Status](https://travis-ci.org/lieff/minih264.svg)](https://travis-ci.org/lieff/minih264)\n\nSmall but reasonably fast single-header H264/SVC encoder library with SSE/NEON optimizations.\nA decoder may be added in the future.\n\nDisclaimer: this code is highly experimental.\n\n## Comparison with [x264](https://www.videolan.org/developers/x264.html)\n\nRough comparison with x264 on an i7-6700K:\n\n`x264 -I 30 --profile baseline --preset veryfast --tune zerolatency -b 0 -r 1 --qp 33 --ipratio 1.0 --qcomp 1.0 -o x264.264 --fps 30 vectors/foreman.cif --input-res 352x288 --slices 1 --threads 1`\n\nvs\n\n`./h264enc_x64 vectors/foreman.cif`\n\n| x264         | minih264 |\n| ------------ | -------- |\n| source: ~4.6mb | 409kb |\n| binary: 1.2mb | 100kb |\n| time: 0.282s | 0.503s |\n| out size: 320kb | 391kb  |\n\nPSNR:\n```\nx264:     PSNR y:32.774824 u:38.874450 v:39.926132 average:34.084281 min:31.842667 max:36.630286\nminih264: PSNR y:33.321686 u:38.858879 v:39.955914 average:34.574459 min:32.389171 max:37.174073\n```\n\nFirst intra frame screenshot (left-to-right: original 152064, minih264 5067, x264 5297 bytes):\n\n![Intra screenshot](images/intra.png?raw=true)\n\nYou can compare the results in motion with the ffplay/mpv players using vectors/out_ref.264 and vectors/x264.264.\n\n## Usage\n\nTBD\n\n## SVC\n\nMinih264 supports both spatial and temporal layers. Spatial layers are encoded almost like two independent AVC streams, except for intra-frame prediction.\nThe following diagram shows the minih264 SVC scheme for two spatial layers:\n\n![SVC diagram](images/svc.png?raw=true)\n\nThis is because spatial prediction for P frames is almost useless in practice, but for intra frames it reduces the full-resolution frame size by ~20%.\nNote that the decoder needs both the base-layer I frame _and_ the full-resolution SVC I frame to decode the whole sequence of subsequent P frames at full resolution.\n\n## Limitations\n\nThe following major features are not supported compared to x264 (baseline):\n\n * Trellis quantization.\n * Prediction mode selection using the Sum of Absolute Transform Differences (SATD).\n * 4x4 motion compensation.\n\n## Interesting links\n\n * https://www.videolan.org/developers/x264.html\n * https://www.openh264.org/\n * https://github.com/cisco/openh264\n * http://iphome.hhi.de/suehring/tml/\n * https://github.com/oneam/h264bsd\n * https://github.com/fhunleth/hollowcore-h264\n * https://github.com/digetx/h264_decoder\n * https://github.com/lspbeyond/p264decoder\n * https://github.com/jcasal-homer/HomerHEVC\n * https://github.com/ultravideo/kvazaar\n * https://github.com/neocoretechs/h264j\n * https://github.com/jcodec/jcodec\n"
  },
  {
    "path": "asm/minih264e_asm.h",
    "content": "#define H264E_API(type, name, args) type name args; \\\ntype name##_sse2 args;  \\\ntype name##_arm11 args; \\\ntype name##_neon args;\n// h264e_qpel\nH264E_API(void, h264e_qpel_interpolate_chroma, (const uint8_t *src,int src_stride, uint8_t *h264e_restrict dst,point_t wh, point_t dxdy))\nH264E_API(void, h264e_qpel_interpolate_luma, (const uint8_t *src,int src_stride, uint8_t *h264e_restrict dst,point_t wh, point_t dxdy))\nH264E_API(void, h264e_qpel_average_wh_align, (const uint8_t *p0, const uint8_t *p1, uint8_t *h264e_restrict d, point_t wh))\n// h264e_deblock\nH264E_API(void, h264e_deblock_chroma, (uint8_t *pSrcDst, int32_t srcdstStep, const deblock_params_t *par))\nH264E_API(void, h264e_deblock_luma, (uint8_t *pSrcDst, int32_t srcdstStep, const deblock_params_t *par))\n// h264e_intra\nH264E_API(void, h264e_intra_predict_chroma,  (pix_t *predict, const pix_t *left, const pix_t *top, int mode))\nH264E_API(void, h264e_intra_predict_16x16, (pix_t *predict, const pix_t *left, const pix_t *top, int mode))\nH264E_API(int,  h264e_intra_choose_4x4, (const pix_t *blockin, pix_t *blockpred, int avail, const pix_t *edge, int mpred, int penalty))\n// h264e_cavlc\nH264E_API(void,     h264e_bs_put_bits, (bs_t *bs, unsigned n, unsigned val))\nH264E_API(void,     h264e_bs_flush, (bs_t *bs))\nH264E_API(unsigned, h264e_bs_get_pos_bits, (const bs_t *bs))\nH264E_API(unsigned, h264e_bs_byte_align, (bs_t *bs))\nH264E_API(void,     h264e_bs_put_golomb, (bs_t *bs, unsigned val))\nH264E_API(void,     h264e_bs_put_sgolomb, (bs_t *bs, int val))\nH264E_API(void,     h264e_bs_init_bits, (bs_t *bs, void *data))\nH264E_API(void,     h264e_vlc_encode, (bs_t *bs, int16_t *quant, int maxNumCoeff, uint8_t *nz_ctx))\n// h264e_sad\nH264E_API(int,  h264e_sad_mb_unlaign_8x8, (const pix_t *a, int a_stride, const pix_t *b, int sad[4]))\nH264E_API(int,  h264e_sad_mb_unlaign_wh, (const pix_t *a, int a_stride, const pix_t *b, point_t wh))\nH264E_API(void, h264e_copy_8x8, (pix_t *d, int 
d_stride, const pix_t *s))\nH264E_API(void, h264e_copy_16x16, (pix_t *d, int d_stride, const pix_t *s, int s_stride))\nH264E_API(void, h264e_copy_borders, (unsigned char *pic, int w, int h, int guard))\n// h264e_transform\nH264E_API(void, h264e_transform_add, (pix_t *out, int out_stride, const pix_t *pred, quant_t *q, int side, int32_t mask))\nH264E_API(int,  h264e_transform_sub_quant_dequant, (const pix_t *inp, const pix_t *pred, int inp_stride, int mode, quant_t *q, const uint16_t *qdat))\nH264E_API(void, h264e_quant_luma_dc, (quant_t *q, int16_t *deq, const uint16_t *qdat))\nH264E_API(int,  h264e_quant_chroma_dc, (quant_t *q, int16_t *deq, const uint16_t *qdat))\n// h264e_denoise\nH264E_API(void, h264e_denoise_run, (unsigned char *frm, unsigned char *frmprev, int w, int h, int stride_frm, int stride_frmprev))\n#undef H264E_API\n"
  },
  {
    "path": "asm/neon/h264e_cavlc_arm11.s",
    "content": "        .arm\r\n        .text\r\n        .align 2\r\n        .type  h264e_bs_put_sgolomb_arm11, %function\r\nh264e_bs_put_sgolomb_arm11:\r\n        MVN             r2,     #0\r\n        ADD             r1,     r2,     r1,     lsl #1\r\n        EOR             r1,     r1,     r1,     asr #31\r\n        .size  h264e_bs_put_sgolomb_arm11, .-h264e_bs_put_sgolomb_arm11\r\n\r\n        .type  h264e_bs_put_golomb_arm11, %function\r\nh264e_bs_put_golomb_arm11:\r\n        ADD             r2,     r1,     #1\r\n        CLZ             r1,     r2\r\n        MOV             r3,     #63\r\n        SUB             r1,     r3,     r1,     lsl #1\r\n        .size  h264e_bs_put_golomb_arm11, .-h264e_bs_put_golomb_arm11\r\n\r\n        .type  h264e_bs_put_bits_arm11, %function\r\nh264e_bs_put_bits_arm11:\r\n        LDMIA           r0,     {r3,    r12}\r\n        SUBS            r3,     r3,     r1\r\n        BMI             local_cavlc_1_0\r\n        ORR             r12,    r12,    r2,     lsl r3\r\n        STMIA           r0,     {r3,    r12}\r\n        BX              lr\r\nlocal_cavlc_1_0:\r\n        RSB             r1,     r3,     #0\r\n        ORR             r12,    r12,    r2,     lsr r1\r\n        LDR             r1,     [r0,    #8]\r\n        REV             r12,    r12\r\n        ADD             r3,     r3,     #32\r\n        STR             r12,    [r1],   #4\r\n        MOV             r12,    r2,     lsl r3\r\n        STMIA           r0,     {r3,    r12}\r\n        STR             r1,     [r0,    #8]\r\n        BX              lr\r\n        .size  h264e_bs_put_bits_arm11, .-h264e_bs_put_bits_arm11\r\n\r\n        .type  h264e_bs_flush_arm11, %function\r\nh264e_bs_flush_arm11:\r\n        LDMIB           r0,     {r0,    r1}\r\n        REV             r0,     r0\r\n        STR             r0,     [r1]\r\n        BX              lr\r\n        .size  h264e_bs_flush_arm11, .-h264e_bs_flush_arm11\r\n\r\n        .type  h264e_bs_get_pos_bits_arm11, 
%function\r\nh264e_bs_get_pos_bits_arm11:\r\n        LDMIA           r0,     {r0-r3}\r\n        SUB             r2,     r2,     r3\r\n        RSB             r0,     r0,     #0x20\r\n        ADD             r0,     r0,     r2,     lsl #3\r\n        BX              lr\r\n        .size  h264e_bs_get_pos_bits_arm11, .-h264e_bs_get_pos_bits_arm11\r\n\r\n        .type  h264e_bs_byte_align_arm11, %function\r\nh264e_bs_byte_align_arm11:\r\n        PUSH            {r0,    lr}\r\n        BL              h264e_bs_get_pos_bits_arm11\r\n        RSB             r1,     r0,     #0\r\n        AND             r1,     r1,     #7\r\n        ADD             r3,     r0,     r1\r\n        MOV             r2,     #0\r\n        LDR             r0,     [sp]\r\n        STR             r3,     [sp]\r\n        BL              h264e_bs_put_bits_arm11\r\n        POP             {r0,    pc}\r\n        .size  h264e_bs_byte_align_arm11, .-h264e_bs_byte_align_arm11\r\n\r\n        .type  h264e_bs_init_bits_arm11, %function\r\nh264e_bs_init_bits_arm11:\r\n        MOV             r12,    r1\r\n        MOV             r3,     r1\r\n        MOV             r2,     #0\r\n        MOV             r1,     #32\r\n        STMIA           r0,     {r1-r3, r12}\r\n        BX              lr\r\n        .size  h264e_bs_init_bits_arm11, .-h264e_bs_init_bits_arm11\r\n\r\n        .type  h264e_vlc_encode_arm11, %function\r\nh264e_vlc_encode_arm11:\r\n        PUSH            {r4-r11,        lr}\r\n        CMP             r2,     #4\r\n        MOVNE           r4,     #0x10\r\n        MOVEQ           r4,     #4\r\n        LDMIA           r0,     {r10-r12}\r\n        SUB             sp,     sp,     #0x10\r\n        MOV             r8,     #0\r\n        ADD             r4,     r1,     r4,     lsl #1\r\n        MOV             r9,     r8\r\n        MOV             r5,     sp\r\n        MOV             r1,     r4\r\n        MOV             lr,     r2\r\nlocal_cavlc_1_1:\r\n        LDRSH           r7,     [r4,    #-2]!\r\n   
     MOVS            r7,     r7,     lsl #1\r\n        STRNEH          r7,     [r1,    #-2]!\r\n        STRNEB          lr,     [r5],   #1\r\n        SUBS            lr,     lr,     #1\r\n        BNE             local_cavlc_1_1\r\n        ADD             r4,     r4,     r2,     lsl #1\r\n        SUB             r5,     r4,     r1\r\n        MOVS            r5,     r5,     asr #1\r\n        BEQ             no_nz1\r\n        CMP             r5,     #3\r\n        MOVLE           r6,     r5\r\n        MOVGT           r6,     #3\r\n        SUB             r1,     r4,     #2\r\nlocal_cavlc_1_2:\r\n        LDRSH           r4,     [r1,    #0]\r\n        ADD             r7,     r4,     #2\r\n        CMP             r7,     #4\r\n        BHI             no_nz1\r\n        MOV             r7,     r9,     lsl #1\r\n        SUBS            r6,     r6,     #1\r\n        ORR             r9,     r7,     r4,     lsr #31\r\n        SUB             r1,     r1,     #2\r\n        ADD             r8,     r8,     #1\r\n        BNE             local_cavlc_1_2\r\nno_nz1:\r\n        LDRB            r4,     [r3,    #-1]\r\n        LDRB            r7,     [r3,    #1]\r\n        STRB            r5,     [r3,    #0]\r\n        SUB             r6,     r5,     r8\r\n        ADD             r3,     r4,     r7\r\n        CMP             r3,     #0x22\r\n        ADDLE           r3,     r3,     #1\r\n        LDR             r4,     =h264e_g_coeff_token\r\n        MOVLE           r3,     r3,     asr #1\r\n        AND             r3,     r3,     #0x1f\r\n        MOV             r7,     #6\r\n        LDRB            r3,     [r4,    r3]\r\n        ADD             lr,     r3,     r8\r\n        ADD             lr,     lr,     r6,     lsl #2\r\n        CMP             r3,     #0xe6\r\n        LDRB            r4,     [r4,    lr]\r\n        ANDNE           r3,     r4,     #0xf\r\n        ADDNE           r7,     r3,     #1\r\n        MOVNE           r4,     r4,     lsr #4\r\n        SUBS            r10,    r10,  
  r7\r\n        BLMI            bs_flush_sub\r\n        ORR             r11,    r11,    r4,     lsl r10\r\n        CMP             r5,     #0\r\n        BEQ             l1.1272\r\n        CMP             r8,     #0\r\n        BEQ             l1.864\r\n        SUBS            r10,    r10,    r8\r\n        MOV             r4,     r9\r\n        BLMI            bs_flush_sub\r\n        ORR             r11,    r11,    r4,     lsl r10\r\nl1.864:\r\n        CMP             r6,     #0\r\n        BEQ             l1.1120\r\n        LDRSH           r7,     [r1,    #0]\r\n        SUB             lr,     r1,     #2\r\n        MVN             r4,     #2\r\n        SUBS            r1,     r7,     #2\r\n        SUBMI           r1,     r4,     r1\r\n        CMP             r1,     #6\r\n        MOV             r9,     #1\r\n        MOVGE           r9,     #2\r\n        CMP             r8,     #3\r\n        BGE             l1.952\r\n        CMP             r5,     #0xa\r\n        SUB             r1,     r1,     #2\r\n        BLE             l1.952\r\n        MOV             r7,     r1,     asr #1\r\n        CMP             r7,     #0xf\r\n        MOVGE           r7,     #0xf\r\n        MOV             r8,     #1\r\n        MOVGE           r8,     #0xc\r\n        SUB             r1,     r1,     r7,     lsl #1\r\n        RSB             r7,     #2\r\n        B               loop_enter\r\nl1.952:\r\n        CMP             r1,     #0xe\r\n        MOVLT           r7,     r1\r\n        MOVLT           r1,     #0\r\n        MOVLT           r8,     r1\r\n        RSBLT           r7,     #2\r\n        BLT             loop_enter\r\n        CMP             r1,     #0x1e\r\n        MOVGE           r9,     #1\r\n        BGE             escape\r\n        MOV             r7,     #0xe\r\n        MOV             r8,     #4\r\n        SUB             r1,     r1,     #0xe\r\n        RSB             r7,     #2\r\n        B               loop_enter\r\nlocal_cavlc_1_3:\r\n        SUBS            r1,     
r1,     #2\r\n        SUBMI           r1,     r4,     r1\r\n        MOV             r7,     r1,     asr r9\r\n        CMP             r7,     #0xf\r\n        MOV             r8,     r9\r\nescape:\r\n        MOVGE           r7,     #0xf\r\n        MOVGE           r8,     #0xc\r\n        SUB             r1,     r1,     r7,     lsl r9\r\n        RSBS            r7,     #2\r\n        CMPLT           r9,     #6\r\n        ADDLT           r9,     r9,     #1\r\nloop_enter:\r\n        MOV             r3,     #1\r\n        ORR             r1,     r1,     r3,     lsl r8\r\n        RSB             r7,     r7,     #3\r\n        ADD             r7,     r7,     r8\r\n        SUBS            r10,    r10,    r7\r\n        BMI             bs_flush_1\r\nbs_flush_1_return:\r\n        ORR             r11,    r11,    r1,     lsl r10\r\n        SUBS            r6,     r6,     #1\r\n        LDRNESH         r1,     [lr],   #-2\r\n        BNE             local_cavlc_1_3\r\nl1.1120:\r\n        CMP             r5,     r2\r\n        BGE             l1.1272\r\n        LDRB            r8,     [sp,    #0]\r\n        CMP             r2,     #4\r\n        ADD             r6,     sp,     #1\r\n        SUB             r1,     r8,     r5\r\n        SUB             r9,     r5,     #1\r\n        LDRNE           r7,     =h264e_g_total_zeros\r\n        LDREQ           r7,     =h264e_g_total_zeros_cr_2x2\r\n        ADD             r5,     r5,     r6\r\n        MVN             r2,     #0\r\n        MOV             lr,     #0x10\r\n        ADD             r2,     r2,     r1,     lsl #1\r\n        STRB            lr,     [r5,    #-1]\r\nl1.1176:\r\n        LDRB            r5,     [r7,    r9]\r\n        ADD             r7,     r7,     r1\r\n        LDRB            r5,     [r5,    r7]\r\n        AND             r7,     r5,     #0xf\r\n        SUBS            r10,    r10,    r7\r\n        MOV             r4,     r5,     lsr #4\r\n        BLMI            bs_flush_sub\r\n        ORR             r11,    r11,    
r4,     lsl r10\r\n        SUBS            r2,     r2,     r1\r\n        BMI             l1.1272\r\n        LDRB            r1,     [r6],   #1\r\n        MOV             r5,     r8\r\n        MOV             r8,     r1\r\n        SUB             r1,     r5,     r1\r\n        SUBS            r1,     r1,     #1\r\n        LDRPL           r7,     =h264e_g_run_before\r\n        MOVPL           r9,     r2\r\n        BPL             l1.1176\r\nl1.1272:\r\n        STMIA           r0,     {r10,   r11,    r12}\r\n        ADD             sp,     sp,     #0x10\r\n        POP             {r4-r11,        pc}\r\nbs_flush_sub:\r\n        RSB             r7,     r10,    #0\r\n        ADD             r10,    r10,    #0x20\r\n        ORR             r11,    r11,    r4,     asr r7\r\n        REV             r11,    r11\r\n        STR             r11,    [r12],  #4\r\n        MOV             r11,    #0\r\n        BX              lr\r\nbs_flush_1:\r\n        RSB             r7,     r10,    #0\r\n        ADD             r10,    r10,    #0x20\r\n        ORR             r11,    r11,    r1,     asr r7\r\n        REV             r11,    r11\r\n        STR             r11,    [r12],  #4\r\n        MOV             r11,    #0\r\n        B               bs_flush_1_return\r\n        .size  h264e_vlc_encode_arm11, .-h264e_vlc_encode_arm11\r\n\r\n        .global         h264e_bs_put_bits_arm11\r\n        .global         h264e_bs_flush_arm11\r\n        .global         h264e_bs_get_pos_bits_arm11\r\n        .global         h264e_bs_byte_align_arm11\r\n        .global         h264e_bs_put_golomb_arm11\r\n        .global         h264e_bs_put_sgolomb_arm11\r\n        .global         h264e_bs_init_bits_arm11\r\n        .global         h264e_vlc_encode_arm11\r\n"
  },
  {
    "path": "asm/neon/h264e_deblock_neon.s",
    "content": "        .arm\n        .text\n        .align 2\n\n        .type  deblock_luma_h_s4, %function\ndeblock_luma_h_s4:\n        VPUSH           {q4-q7}\n        SUB             r0,     r0,     r1,     lsl #2\n        VLD1.8          {q8},   [r0],   r1\n        VLD1.8          {q9},   [r0],   r1\n        VLD1.8          {q10},  [r0],   r1\n        VLD1.8          {q11},  [r0],   r1\n        VLD1.8          {q12},  [r0],   r1\n        VLD1.8          {q13},  [r0],   r1\n        VLD1.8          {q14},  [r0],   r1\n        VLD1.8          {q15},  [r0],   r1\n        VDUP.8          q3,     r2\n        VABD.U8         q0,     q11,    q12\n        VCLT.U8         q2,     q0,     q3\n        VDUP.8          q3,     r3\n        VABD.U8         q1,     q11,    q10\n        VCLT.U8         q1,     q1,     q3\n        VAND            q2,     q2,     q1\n        VABD.U8         q1,     q12,    q13\n        VCLT.U8         q1,     q1,     q3\n        VAND            q2,     q2,     q1\n        MOV             r12,    r2,     lsr #2\n        ADD             r12,    r12,    #2\n        VDUP.8          q4,     r12\n        VCLT.U8         q1,     q0,     q4\n        VAND            q1,     q1,     q2\n        VABD.U8         q0,     q9,     q11\n        VCLT.U8         q0,     q0,     q3\n        VAND            q0,     q0,     q1\n        VABD.U8         q7,     q14,    q12\n        VCLT.U8         q3,     q7,     q3\n        VAND            q3,     q3,     q1\n        VHADD.U8                q4,     q9,     q10\n        VHADD.U8                q5,     q11,    q12\n        VRHADD.U8               q6,     q9,     q10\n        VRHADD.U8               q7,     q11,    q12\n        VSUB.I8         q6,     q6,     q4\n        VSUB.I8         q7,     q7,     q5\n        VHADD.U8                q6,     q6,     q7\n        VRHADD.U8               q7,     q4,     q8\n        VHADD.U8                q4,     q4,     q8\n        VSUB.I8         q7,     q7,     q4\n        VADD.I8    
     q6,     q6,     q7\n        VRHADD.U8               q7,     q5,     q9\n        VHADD.U8                q5,     q5,     q9\n        VSUB.I8         q7,     q7,     q5\n        VHADD.U8                q6,     q6,     q7\n        VRHADD.U8               q7,     q4,     q5\n        VHADD.U8                q4,     q4,     q5\n        VSUB.I8         q7,     q7,     q4\n        VRHADD.U8               q6,     q6,     q7\n        VADD.I8         q4,     q4,     q6\n        VMOV            q6,     q9\n        VBIT            q6,     q4,     q0\n        VPUSH           {q6}\n        VHADD.U8                q4,     q14,    q13\n        VHADD.U8                q5,     q12,    q11\n        VRHADD.U8               q6,     q14,    q13\n        VRHADD.U8               q7,     q12,    q11\n        VSUB.I8         q6,     q6,     q4\n        VSUB.I8         q7,     q7,     q5\n        VHADD.U8                q6,     q6,     q7\n        VRHADD.U8               q7,     q4,     q15\n        VHADD.U8                q4,     q4,     q15\n        VSUB.I8         q7,     q7,     q4\n        VADD.I8         q6,     q6,     q7\n        VRHADD.U8               q7,     q5,     q14\n        VHADD.U8                q5,     q5,     q14\n        VSUB.I8         q7,     q7,     q5\n        VHADD.U8                q6,     q6,     q7\n        VRHADD.U8               q7,     q4,     q5\n        VHADD.U8                q4,     q4,     q5\n        VSUB.I8         q7,     q7,     q4\n        VRHADD.U8               q6,     q6,     q7\n        VADD.I8         q4,     q4,     q6\n        VMOV            q6,     q14\n        VBIT            q6,     q4,     q3\n        VPUSH           {q6}\n        VHADD.U8                q1,     q9,     q13\n        VRHADD.U8               q4,     q1,     q10\n        VRHADD.U8               q5,     q11,    q12\n        VHADD.U8                q6,     q1,     q10\n        VHADD.U8                q7,     q11,    q12\n        VHADD.U8                q4,     q4,     q5\n 
       VRHADD.U8               q6,     q6,     q7\n        VRHADD.U8               q1,     q4,     q6\n        VRHADD.U8               q4,     q9,     q10\n        VRHADD.U8               q5,     q11,    q12\n        VHADD.U8                q6,     q9,     q10\n        VHADD.U8                q7,     q11,    q12\n        VHADD.U8                q4,     q4,     q5\n        VRHADD.U8               q6,     q6,     q7\n        VRHADD.U8               q4,     q4,     q6\n        VHADD.U8                q5,     q11,    q13\n        VRHADD.U8               q5,     q5,     q10\n        VBIF            q1,     q5,     q0\n        VBSL            q0,     q4,     q10\n        VHADD.U8                q7,     q14,    q10\n        VRHADD.U8               q4,     q7,     q13\n        VRHADD.U8               q5,     q11,    q12\n        VHADD.U8                q6,     q7,     q13\n        VHADD.U8                q7,     q11,    q12\n        VHADD.U8                q4,     q4,     q5\n        VRHADD.U8               q6,     q6,     q7\n        VRHADD.U8               q4,     q4,     q6\n        VRHADD.U8               q6,     q14,    q13\n        VRHADD.U8               q5,     q11,    q12\n        VHADD.U8                q5,     q6,     q5\n        VHADD.U8                q6,     q14,    q13\n        VHADD.U8                q7,     q11,    q12\n        VRHADD.U8               q6,     q6,     q7\n        VRHADD.U8               q5,     q5,     q6\n        VHADD.U8                q6,     q12,    q10\n        VRHADD.U8               q6,     q6,     q13\n        VBIF            q4,     q6,     q3\n        VBSL            q3,     q5,     q13\n        VPOP            {q14}\n        VPOP            {q9}\n        VBIT            q10,    q0,     q2\n        VBIT            q11,    q1,     q2\n        VBIT            q12,    q4,     q2\n        VBIT            q13,    q3,     q2\n        SUB             r0,     r0,     r1,     lsl #3\n        VST1.8          {q8},   [r0],   r1\n        
VST1.8          {q9},   [r0],   r1\n        VST1.8          {q10},  [r0],   r1\n        VST1.8          {q11},  [r0],   r1\n        VST1.8          {q12},  [r0],   r1\n        VST1.8          {q13},  [r0],   r1\n        VST1.8          {q14},  [r0],   r1\n        VST1.8          {q15},  [r0],   r1\n        VPOP            {q4-q7}\n        BX              lr\n        .size  deblock_luma_h_s4, .-deblock_luma_h_s4\n\n        .type  deblock_luma_v_s4, %function\ndeblock_luma_v_s4:\n        VPUSH           {q4-q7}\n        SUB             r0,     r0,     #4\n        VLD1.8          {d16},  [r0],   r1\n        VLD1.8          {d18},  [r0],   r1\n        VLD1.8          {d20},  [r0],   r1\n        VLD1.8          {d22},  [r0],   r1\n        VLD1.8          {d24},  [r0],   r1\n        VLD1.8          {d26},  [r0],   r1\n        VLD1.8          {d28},  [r0],   r1\n        VLD1.8          {d30},  [r0],   r1\n        VLD1.8          {d17},  [r0],   r1\n        VLD1.8          {d19},  [r0],   r1\n        VLD1.8          {d21},  [r0],   r1\n        VLD1.8          {d23},  [r0],   r1\n        VLD1.8          {d25},  [r0],   r1\n        VLD1.8          {d27},  [r0],   r1\n        VLD1.8          {d29},  [r0],   r1\n        VLD1.8          {d31},  [r0],   r1\n        VTRN.32         q8,     q12\n        VTRN.32         q9,     q13\n        VTRN.32         q10,    q14\n        VTRN.32         q11,    q15\n        VTRN.16         q8,     q10\n        VTRN.16         q9,     q11\n        VTRN.16         q12,    q14\n        VTRN.16         q13,    q15\n        VTRN.8          q8,     q9\n        VTRN.8          q10,    q11\n        VTRN.8          q12,    q13\n        VTRN.8          q14,    q15\n        VDUP.8          q3,     r2\n        VABD.U8         q0,     q11,    q12\n        VCLT.U8         q2,     q0,     q3\n        VDUP.8          q3,     r3\n        VABD.U8         q1,     q11,    q10\n        VCLT.U8         q1,     q1,     q3\n        VAND            q2,     q2,     
q1\n        VABD.U8         q1,     q12,    q13\n        VCLT.U8         q1,     q1,     q3\n        VAND            q2,     q2,     q1\n        MOV             r12,    r2,     lsr #2\n        ADD             r12,    r12,    #2\n        VDUP.8          q4,     r12\n        VCLT.U8         q1,     q0,     q4\n        VAND            q1,     q1,     q2\n        VABD.U8         q0,     q9,     q11\n        VCLT.U8         q0,     q0,     q3\n        VAND            q0,     q0,     q1\n        VABD.U8         q7,     q14,    q12\n        VCLT.U8         q3,     q7,     q3\n        VAND            q3,     q3,     q1\n        VHADD.U8                q4,     q9,     q10\n        VHADD.U8                q5,     q11,    q12\n        VRHADD.U8               q6,     q9,     q10\n        VRHADD.U8               q7,     q11,    q12\n        VSUB.I8         q6,     q6,     q4\n        VSUB.I8         q7,     q7,     q5\n        VHADD.U8                q6,     q6,     q7\n        VRHADD.U8               q7,     q4,     q8\n        VHADD.U8                q4,     q4,     q8\n        VSUB.I8         q7,     q7,     q4\n        VADD.I8         q6,     q6,     q7\n        VRHADD.U8               q7,     q5,     q9\n        VHADD.U8                q5,     q5,     q9\n        VSUB.I8         q7,     q7,     q5\n        VHADD.U8                q6,     q6,     q7\n        VRHADD.U8               q7,     q4,     q5\n        VHADD.U8                q4,     q4,     q5\n        VSUB.I8         q7,     q7,     q4\n        VRHADD.U8               q6,     q6,     q7\n        VADD.I8         q4,     q4,     q6\n        VMOV            q6,     q9\n        VBIT            q6,     q4,     q0\n        VPUSH           {q6}\n        VHADD.U8                q4,     q14,    q13\n        VHADD.U8                q5,     q12,    q11\n        VRHADD.U8               q6,     q14,    q13\n        VRHADD.U8               q7,     q12,    q11\n        VSUB.I8         q6,     q6,     q4\n        VSUB.I8         
q7,     q7,     q5\n        VHADD.U8                q6,     q6,     q7\n        VRHADD.U8               q7,     q4,     q15\n        VHADD.U8                q4,     q4,     q15\n        VSUB.I8         q7,     q7,     q4\n        VADD.I8         q6,     q6,     q7\n        VRHADD.U8               q7,     q5,     q14\n        VHADD.U8                q5,     q5,     q14\n        VSUB.I8         q7,     q7,     q5\n        VHADD.U8                q6,     q6,     q7\n        VRHADD.U8               q7,     q4,     q5\n        VHADD.U8                q4,     q4,     q5\n        VSUB.I8         q7,     q7,     q4\n        VRHADD.U8               q6,     q6,     q7\n        VADD.I8         q4,     q4,     q6\n        VMOV            q6,     q14\n        VBIT            q6,     q4,     q3\n        VPUSH           {q6}\n        VHADD.U8                q1,     q9,     q13\n        VRHADD.U8               q4,     q1,     q10\n        VRHADD.U8               q5,     q11,    q12\n        VHADD.U8                q6,     q1,     q10\n        VHADD.U8                q7,     q11,    q12\n        VHADD.U8                q4,     q4,     q5\n        VRHADD.U8               q6,     q6,     q7\n        VRHADD.U8               q1,     q4,     q6\n        VRHADD.U8               q4,     q9,     q10\n        VRHADD.U8               q5,     q11,    q12\n        VHADD.U8                q6,     q9,     q10\n        VHADD.U8                q7,     q11,    q12\n        VHADD.U8                q4,     q4,     q5\n        VRHADD.U8               q6,     q6,     q7\n        VRHADD.U8               q4,     q4,     q6\n        VHADD.U8                q5,     q11,    q13\n        VRHADD.U8               q5,     q5,     q10\n        VBIF            q1,     q5,     q0\n        VBSL            q0,     q4,     q10\n        VHADD.U8                q7,     q14,    q10\n        VRHADD.U8               q4,     q7,     q13\n        VRHADD.U8               q5,     q11,    q12\n        VHADD.U8                
q6,     q7,     q13\n        VHADD.U8                q7,     q11,    q12\n        VHADD.U8                q4,     q4,     q5\n        VRHADD.U8               q6,     q6,     q7\n        VRHADD.U8               q4,     q4,     q6\n        VRHADD.U8               q6,     q14,    q13\n        VRHADD.U8               q5,     q11,    q12\n        VHADD.U8                q5,     q6,     q5\n        VHADD.U8                q6,     q14,    q13\n        VHADD.U8                q7,     q11,    q12\n        VRHADD.U8               q6,     q6,     q7\n        VRHADD.U8               q5,     q5,     q6\n        VHADD.U8                q6,     q12,    q10\n        VRHADD.U8               q6,     q6,     q13\n        VBIF            q4,     q6,     q3\n        VBSL            q3,     q5,     q13\n        VPOP            {q14}\n        VPOP            {q9}\n        VBIT            q10,    q0,     q2\n        VBIT            q11,    q1,     q2\n        VBIT            q12,    q4,     q2\n        VBIT            q13,    q3,     q2\n        VTRN.8          q8,     q9\n        VTRN.8          q10,    q11\n        VTRN.8          q12,    q13\n        VTRN.8          q14,    q15\n        VTRN.16         q8,     q10\n        VTRN.16         q9,     q11\n        VTRN.16         q12,    q14\n        VTRN.16         q13,    q15\n        VTRN.32         q8,     q12\n        VTRN.32         q9,     q13\n        VTRN.32         q10,    q14\n        VTRN.32         q11,    q15\n        SUB             r0,     r0,     r1,     lsl #4\n        VST1.8          {d16},  [r0],   r1\n        VST1.8          {d18},  [r0],   r1\n        VST1.8          {d20},  [r0],   r1\n        VST1.8          {d22},  [r0],   r1\n        VST1.8          {d24},  [r0],   r1\n        VST1.8          {d26},  [r0],   r1\n        VST1.8          {d28},  [r0],   r1\n        VST1.8          {d30},  [r0],   r1\n        VST1.8          {d17},  [r0],   r1\n        VST1.8          {d19},  [r0],   r1\n        VST1.8          {d21}, 
 [r0],   r1\n        VST1.8          {d23},  [r0],   r1\n        VST1.8          {d25},  [r0],   r1\n        VST1.8          {d27},  [r0],   r1\n        VST1.8          {d29},  [r0],   r1\n        VST1.8          {d31},  [r0],   r1\n        VPOP            {q4-q7}\n        BX              lr\n        .size  deblock_luma_v_s4, .-deblock_luma_v_s4\n\n        .type  deblock_luma_v, %function\ndeblock_luma_v:\n        VPUSH           {q4-q7}\n        SUB             r0,     r0,     #4\n        VLD1.8          {d16},  [r0],   r1\n        VLD1.8          {d18},  [r0],   r1\n        VLD1.8          {d20},  [r0],   r1\n        VLD1.8          {d22},  [r0],   r1\n        VLD1.8          {d24},  [r0],   r1\n        VLD1.8          {d26},  [r0],   r1\n        VLD1.8          {d28},  [r0],   r1\n        VLD1.8          {d30},  [r0],   r1\n        VLD1.8          {d17},  [r0],   r1\n        VLD1.8          {d19},  [r0],   r1\n        VLD1.8          {d21},  [r0],   r1\n        VLD1.8          {d23},  [r0],   r1\n        VLD1.8          {d25},  [r0],   r1\n        VLD1.8          {d27},  [r0],   r1\n        VLD1.8          {d29},  [r0],   r1\n        VLD1.8          {d31},  [r0],   r1\n        VTRN.32         q8,     q12\n        VTRN.32         q9,     q13\n        VTRN.32         q10,    q14\n        VTRN.32         q11,    q15\n        VTRN.16         q8,     q10\n        VTRN.16         q9,     q11\n        VTRN.16         q12,    q14\n        VTRN.16         q13,    q15\n        VTRN.8          q8,     q9\n        VTRN.8          q10,    q11\n        VTRN.8          q12,    q13\n        VTRN.8          q14,    q15\n        ADR             r12,    g_unzip2\n        VDUP.8          q3,     r2\n        VABD.U8         q1,     q11,    q12\n        VLD1.8          {q4},   [r12]\n        VCLT.U8         q2,     q1,     q3\n        VDUP.8          q3,     r3\n        LDR             r12,    [sp,    #4+16*4]\n        VABD.U8         q1,     q11,    q10\n        VABD.U8         q5,  
   q12,    q13\n        VMAX.U8         q1,     q1,     q5\n        LDR             r12,    [r12]\n        VCLT.U8         q1,     q1,     q3\n        VAND            q2,     q2,     q1\n        VMOV.32         d2[0],  r12\n        VTBL.8          d3,     {d2},   d9\n        VTBL.8          d2,     {d2},   d8\n        VCGT.S8         q1,     q1,     #0\n        VAND            q2,     q2,     q1\n        VMOV.I8         q6,     #1\n        LDR             r12,    [sp,    #0+16*4]\n        VHSUB.U8                q7,     q10,    q13\n        VSHR.S8         q7,     q7,     #1\n        VEOR            q0,     q12,    q11\n        VAND            q6,     q6,     q0\n        VHSUB.U8                q0,     q12,    q11\n        LDR             r12,    [r12]\n        VRHADD.S8               q7,     q7,     q6\n        VQADD.S8                q7,     q0,     q7\n        VAND            q7,     q7,     q2\n        VMOV.32         d2[0],  r12\n        VTBL.8          d3,     {d2},   d9\n        VTBL.8          d2,     {d2},   d8\n        VAND            q1,     q1,     q2\n        VABD.U8         q0,     q9,     q11\n        VCLT.U8         q0,     q0,     q3\n        VAND            q4,     q0,     q2\n        VABD.U8         q0,     q14,    q12\n        VCLT.U8         q0,     q0,     q3\n        VAND            q3,     q0,     q2\n        VRHADD.U8               q0,     q11,    q12\n        VHADD.U8                q0,     q0,     q9\n        VAND            q5,     q1,     q4\n        VQADD.U8                q6,     q10,    q5\n        VMIN.U8         q0,     q0,     q6\n        VQSUB.U8                q6,     q10,    q5\n        VMAX.U8         q10,    q0,     q6\n        VRHADD.U8               q0,     q11,    q12\n        VHADD.U8                q0,     q0,     q14\n        VAND            q5,     q1,     q3\n        VQADD.U8                q6,     q13,    q5\n        VMIN.U8         q0,     q0,     q6\n        VQSUB.U8                q6,     q13,    q5\n        
VMAX.U8         q13,    q0,     q6\n        VSUB.I8         q1,     q1,     q3\n        VSUB.I8         q1,     q1,     q4\n        VAND            q1,     q1,     q2\n        VEOR            q6,     q6,     q6\n        VMAX.S8         q5,     q6,     q7\n        VSUB.S8         q7,     q6,     q7\n        VMAX.S8         q6,     q6,     q7\n        VMIN.U8         q5,     q1,     q5\n        VMIN.U8         q6,     q1,     q6\n        VQADD.U8                q11,    q11,    q5\n        VQSUB.U8                q11,    q11,    q6\n        VQSUB.U8                q12,    q12,    q5\n        VQADD.U8                q12,    q12,    q6\n        VTRN.8          q8,     q9\n        VTRN.8          q10,    q11\n        VTRN.8          q12,    q13\n        VTRN.8          q14,    q15\n        VTRN.16         q8,     q10\n        VTRN.16         q9,     q11\n        VTRN.16         q12,    q14\n        VTRN.16         q13,    q15\n        VTRN.32         q8,     q12\n        VTRN.32         q9,     q13\n        VTRN.32         q10,    q14\n        VTRN.32         q11,    q15\n        SUB             r0,     r0,     r1,     lsl #4\n        VST1.8          {d16},  [r0],   r1\n        VST1.8          {d18},  [r0],   r1\n        VST1.8          {d20},  [r0],   r1\n        VST1.8          {d22},  [r0],   r1\n        VST1.8          {d24},  [r0],   r1\n        VST1.8          {d26},  [r0],   r1\n        VST1.8          {d28},  [r0],   r1\n        VST1.8          {d30},  [r0],   r1\n        VST1.8          {d17},  [r0],   r1\n        VST1.8          {d19},  [r0],   r1\n        VST1.8          {d21},  [r0],   r1\n        VST1.8          {d23},  [r0],   r1\n        VST1.8          {d25},  [r0],   r1\n        VST1.8          {d27},  [r0],   r1\n        VST1.8          {d29},  [r0],   r1\n        VST1.8          {d31},  [r0],   r1\n        VPOP            {q4-q7}\n        BX              lr\ng_unzip2:\n        .quad           0x0101010100000000\n        .quad           
0x0303030302020202\n        .size  deblock_luma_v, .-deblock_luma_v\n\n        .type  deblock_luma_h, %function\ndeblock_luma_h:\n        VPUSH           {q4-q7}\n        SUB             r0,     r0,     r1\n        SUB             r0,     r0,     r1,     lsl #1\n        VLD1.8          {q9 },  [r0],   r1\n        VLD1.8          {q10},  [r0],   r1\n        VLD1.8          {q11},  [r0],   r1\n        VLD1.8          {q12},  [r0],   r1\n        VLD1.8          {q13},  [r0],   r1\n        VLD1.8          {q14},  [r0]\n        ADR             r12,    g_unzip2\n        VDUP.8          q3,     r2\n        VABD.U8         q1,     q11,    q12\n        VLD1.8          {q4},   [r12]\n        VCLT.U8         q2,     q1,     q3\n        VDUP.8          q3,     r3\n        LDR             r12,    [sp,    #4+16*4]\n        VABD.U8         q1,     q11,    q10\n        VABD.U8         q5,     q12,    q13\n        VMAX.U8         q1,     q1,     q5\n        LDR             r12,    [r12]\n        VCLT.U8         q1,     q1,     q3\n        VAND            q2,     q2,     q1\n        VMOV.32         d2[0],  r12\n        VTBL.8          d3,     {d2},   d9\n        VTBL.8          d2,     {d2},   d8\n        VCGT.S8         q1,     q1,     #0\n        VAND            q2,     q2,     q1\n        VMOV.I8         q6,     #1\n        LDR             r12,    [sp,    #0+16*4]\n        VHSUB.U8                q7,     q10,    q13\n        VSHR.S8         q7,     q7,     #1\n        VEOR            q0,     q12,    q11\n        VAND            q6,     q6,     q0\n        VHSUB.U8                q0,     q12,    q11\n        LDR             r12,    [r12]\n        VRHADD.S8               q7,     q7,     q6\n        VQADD.S8                q7,     q0,     q7\n        VAND            q7,     q7,     q2\n        VMOV.32         d2[0],  r12\n        VTBL.8          d3,     {d2},   d9\n        VTBL.8          d2,     {d2},   d8\n        VAND            q1,     q1,     q2\n        VABD.U8         q0,     
q9,     q11\n        VCLT.U8         q0,     q0,     q3\n        VAND            q4,     q0,     q2\n        VABD.U8         q0,     q14,    q12\n        VCLT.U8         q0,     q0,     q3\n        VAND            q3,     q0,     q2\n        VRHADD.U8               q0,     q11,    q12\n        VHADD.U8                q0,     q0,     q9\n        VAND            q5,     q1,     q4\n        VQADD.U8                q6,     q10,    q5\n        VMIN.U8         q0,     q0,     q6\n        VQSUB.U8                q6,     q10,    q5\n        VMAX.U8         q10,    q0,     q6\n        VRHADD.U8               q0,     q11,    q12\n        VHADD.U8                q0,     q0,     q14\n        VAND            q5,     q1,     q3\n        VQADD.U8                q6,     q13,    q5\n        VMIN.U8         q0,     q0,     q6\n        VQSUB.U8                q6,     q13,    q5\n        VMAX.U8         q13,    q0,     q6\n        VSUB.I8         q1,     q1,     q3\n        VSUB.I8         q1,     q1,     q4\n        VAND            q1,     q1,     q2\n        VEOR            q6,     q6,     q6\n        VMAX.S8         q5,     q6,     q7\n        VSUB.S8         q7,     q6,     q7\n        VMAX.S8         q6,     q6,     q7\n        VMIN.U8         q5,     q1,     q5\n        VMIN.U8         q6,     q1,     q6\n        VQADD.U8                q11,    q11,    q5\n        VQSUB.U8                q11,    q11,    q6\n        VQSUB.U8                q12,    q12,    q5\n        VQADD.U8                q12,    q12,    q6\n        SUB             r0,     r0,     r1,     lsl #2\n        VST1.8          {q10},  [r0],   r1\n        VST1.8          {q11},  [r0],   r1\n        VST1.8          {q12},  [r0],   r1\n        VST1.8          {q13},  [r0],   r1\n        VPOP            {q4-q7}\n        BX              lr\n        .size  deblock_luma_h, .-deblock_luma_h\n\n        .type  deblock_chroma_v, %function\ndeblock_chroma_v:\n        VPUSH           {q4-q7}\n        SUB             r0,     r0,    
 #2\n        VLD1.8          {d16},  [r0],   r1\n        VLD1.8          {d18},  [r0],   r1\n        VLD1.8          {d20},  [r0],   r1\n        VLD1.8          {d22},  [r0],   r1\n        VLD1.8          {d17},  [r0],   r1\n        VLD1.8          {d19},  [r0],   r1\n        VLD1.8          {d21},  [r0],   r1\n        VLD1.8          {d23},  [r0],   r1\n        VTRN.32         d16,    d17\n        VTRN.32         d18,    d19\n        VTRN.32         d20,    d21\n        VTRN.32         d22,    d23\n        VTRN.16         q8,     q10\n        VTRN.16         q9,     q11\n        VTRN.8          q8,     q9\n        VTRN.8          q10,    q11\n        LDR             r12,    [sp,    #4+16*4]\n        VDUP.8          q3,     r2\n        VABD.U8         q1,     q10,    q9\n        VCLT.U8         q2,     q1,     q3\n        VDUP.8          q3,     r3\n        VABD.U8         q1,     q8,     q9\n        VABD.U8         q4,     q10,    q11\n        VMAX.U8         q4,     q1,     q4\n        VLD1.8          {d2 },  [r12]\n        VCLT.U8         q4,     q4,     q3\n        VAND            q2,     q2,     q4\n        LDR             r12,    [sp,    #0+16*4]\n        VMOV            d0,     d2\n        VZIP.8          q1,     q0\n        VLD1.8          {d0 },  [r12]\n        VCGT.S8         q3,     q1,     #0\n        VSHR.U8         q1,     q1,     #2\n        VCGT.S8         q1,     q1,     #0\n        VAND            q2,     q2,     q3\n        VMOV            d8,     d0\n        VMOV.I8         q6,     #1\n        VZIP.8          q0,     q4\n        VADD.I8         q0,     q0,     q6\n        VAND            q0,     q0,     q2\n        VHSUB.U8                q7,     q8,     q11\n        VSHR.S8         q7,     q7,     #1\n        VEOR            q4,     q10,    q9\n        VAND            q6,     q6,     q4\n        VHSUB.U8                q4,     q10,    q9\n        VRHADD.S8               q7,     q7,     q6\n        VQADD.S8                q7,     q4,     q7\n    
    VEOR            q4,     q4,     q4\n        VMAX.S8         q5,     q4,     q7\n        VSUB.S8         q7,     q4,     q7\n        VMAX.S8         q4,     q4,     q7\n        VMIN.U8         q5,     q0,     q5\n        VMIN.U8         q4,     q0,     q4\n        VQADD.U8                q0,     q9,     q5\n        VQSUB.U8                q0,     q0,     q4\n        VQSUB.U8                q3,     q10,    q5\n        VQADD.U8                q3,     q3,     q4\n        VHADD.U8                q6,     q9,     q11\n        VRHADD.U8               q6,     q6,     q8\n        VHADD.U8                q7,     q8,     q10\n        VRHADD.U8               q7,     q7,     q11\n        VBIT            q0,     q6,     q1\n        VBIT            q3,     q7,     q1\n        VBIT            q9,     q0,     q2\n        VBIT            q10,    q3,     q2\n        VTRN.8          q8,     q9\n        VTRN.8          q10,    q11\n        VTRN.16         q8,     q10\n        VTRN.16         q9,     q11\n        VTRN.32         d16,    d17\n        VTRN.32         d18,    d19\n        VTRN.32         d20,    d21\n        VTRN.32         d22,    d23\n        SUB             r0,     r0,     r1,     lsl #3\n        VMOV.32         r12,    d16[0]\n        STR             r12,    [r0],   r1\n        VMOV.32         r12,    d18[0]\n        STR             r12,    [r0],   r1\n        VMOV.32         r12,    d20[0]\n        STR             r12,    [r0],   r1\n        VMOV.32         r12,    d22[0]\n        STR             r12,    [r0],   r1\n        VMOV.32         r12,    d17[0]\n        STR             r12,    [r0],   r1\n        VMOV.32         r12,    d19[0]\n        STR             r12,    [r0],   r1\n        VMOV.32         r12,    d21[0]\n        STR             r12,    [r0],   r1\n        VMOV.32         r12,    d23[0]\n        STR             r12,    [r0],   r1\n        VPOP            {q4-q7}\n        BX              lr\n        .size  deblock_chroma_v, .-deblock_chroma_v\n\n      
  .type  deblock_chroma_h, %function\ndeblock_chroma_h:\n        VPUSH           {q4-q7}\n        SUB             r0,     r0,     r1,     lsl #1\n        VLD1.8          {q8 },  [r0],   r1\n        VLD1.8          {q9 },  [r0],   r1\n        VLD1.8          {q10},  [r0],   r1\n        VLD1.8          {q11},  [r0]\n        LDR             r12,    [sp,    #4+16*4]\n        VDUP.8          q3,     r2\n        VABD.U8         q1,     q10,    q9\n        VCLT.U8         q2,     q1,     q3\n        VDUP.8          q3,     r3\n        VABD.U8         q1,     q8,     q9\n        VABD.U8         q4,     q10,    q11\n        VMAX.U8         q4,     q1,     q4\n        VLD1.8          {d2 },  [r12]\n        VCLT.U8         q4,     q4,     q3\n        VAND            q2,     q2,     q4\n        LDR             r12,    [sp,    #0+16*4]\n        VMOV            d0,     d2\n        VZIP.8          q1,     q0\n        VLD1.8          {d0 },  [r12]\n        VCGT.S8         q3,     q1,     #0\n        VSHR.U8         q1,     q1,     #2\n        VCGT.S8         q1,     q1,     #0\n        VAND            q2,     q2,     q3\n        VMOV            d8,     d0\n        VMOV.I8         q6,     #1\n        VZIP.8          q0,     q4\n        VADD.I8         q0,     q0,     q6\n        VAND            q0,     q0,     q2\n        VHSUB.U8                q7,     q8,     q11\n        VSHR.S8         q7,     q7,     #1\n        VEOR            q4,     q10,    q9\n        VAND            q6,     q6,     q4\n        VHSUB.U8                q4,     q10,    q9\n        VRHADD.S8               q7,     q7,     q6\n        VQADD.S8                q7,     q4,     q7\n        VEOR            q4,     q4,     q4\n        VMAX.S8         q5,     q4,     q7\n        VSUB.S8         q7,     q4,     q7\n        VMAX.S8         q4,     q4,     q7\n        VMIN.U8         q5,     q0,     q5\n        VMIN.U8         q4,     q0,     q4\n        VQADD.U8                q0,     q9,     q5\n        VQSUB.U8        
        q0,     q0,     q4\n        VQSUB.U8                q3,     q10,    q5\n        VQADD.U8                q3,     q3,     q4\n        VHADD.U8                q6,     q9,     q11\n        VRHADD.U8               q6,     q6,     q8\n        VHADD.U8                q7,     q8,     q10\n        VRHADD.U8               q7,     q7,     q11\n        VBIT            q0,     q6,     q1\n        VBIT            q3,     q7,     q1\n        VBIT            q9,     q0,     q2\n        VBIT            q10,    q3,     q2\n        SUB             r0,     r0,     r1,     lsl #1\n        VST1.8          {d18 }, [r0],   r1\n        VST1.8          {d20},  [r0],   r1\n        VPOP            {q4-q7}\n        BX              lr\n        .size  deblock_chroma_h, .-deblock_chroma_h\n\n        .type  h264e_deblock_chroma_neon, %function\nh264e_deblock_chroma_neon:\n        PUSH            {r2-r10,        lr}\n        MOV             r8,     r0\n        LDRB            r0,     [r2,    #0x40]\n        MOV             r9,     r1\n        LDRB            r1,     [r2,    #0x44]\n        ADD             r5,     r2,     #0x40\n        ADD             r6,     r2,     #0x44\n        ADD             r10,    r2,     #0x20\n        MOV             r7,     r2\n        MOV             r4,     #0\nl1.2056:\n        LDR             r2,     [r7,    r4]\n        CMP             r2,     #0\n        CMPNE           r0,     #0\n        BEQ             l1.2108\n        ADD             r3,     r7,     r4\n        ADD             r2,     r10,    r4\n        ADD             r12,    r8,     r4,     asr #1\n        STRD            r2,     r3,     [sp,    #0]\n        MOV             r3,     r1\n        MOV             r2,     r0\n        MOV             r1,     r9\n        MOV             r0,     r12\n        BL              deblock_chroma_v\nl1.2108:\n        LDRB            r0,     [r5,    #1]\n        ADD             r4,     r4,     #8\n        LDRB            r1,     [r6,    #1]\n        CMP             
r4,     #0x10\n        BLT             l1.2056\n        LDRB            r0,     [r5,    #2]\n        LDRB            r1,     [r6,    #2]\n        ADD             r10,    r10,    #0x10\n        ADD             r7,     r7,     #0x10\n        MOV             r4,     #0\nl1.2148:\n        LDR             r2,     [r7,    r4]\n        CMP             r2,     #0\n        CMPNE           r0,     #0\n        BEQ             l1.2196\n        ADD             r3,     r7,     r4\n        ADD             r2,     r10,    r4\n        STRD            r2,     r3,     [sp,    #0]\n        MOV             r3,     r1\n        MOV             r2,     r0\n        MOV             r1,     r9\n        MOV             r0,     r8\n        BL              deblock_chroma_h\nl1.2196:\n        LDRB            r0,     [r5,    #3]\n        ADD             r4,     r4,     #8\n        LDRB            r1,     [r6,    #3]\n        CMP             r4,     #0x10\n        ADD             r8,     r8,     r9,     lsl #2\n        BLT             l1.2148\n        POP             {r2-r10,        pc}\n        .size  h264e_deblock_chroma_neon, .-h264e_deblock_chroma_neon\n\n        .type  h264e_deblock_luma_neon, %function\nh264e_deblock_luma_neon:\n        PUSH            {r2-r10,        lr}\n        MOV             r7,     r0\n        LDRB            r0,     [r2,    #0x40]\n        MOV             r9,     r1\n        LDRB            r1,     [r2,    #0x44]\n        ADD             r5,     r2,     #0x40\n        ADD             r6,     r2,     #0x44\n        ADD             r10,    r2,     #0x20\n        MOV             r8,     r2\n        MOV             r4,     #0\nl1.2264:\n        LDR             r2,     [r8,    r4]\n        AND             r3,     r2,     #0xff\n        CMP             r3,     #4\n        BEQ             l1.2456\n        CMP             r2,     #0\n        CMPNE           r0,     #0\n        BEQ             l1.2328\n        ADD             r3,     r8,     r4\n        ADD             r2,     
r10,    r4\n        ADD             r12,    r7,     r4\n        STRD            r2,     r3,     [sp,    #0]\n        MOV             r3,     r1\n        MOV             r2,     r0\n        MOV             r1,     r9\n        MOV             r0,     r12\n        BL              deblock_luma_v\nl1.2328:\n        LDRB            r0,     [r5,    #1]\n        ADD             r4,     r4,     #4\n        LDRB            r1,     [r6,    #1]\n        CMP             r4,     #0x10\n        BLT             l1.2264\n        LDRB            r0,     [r5,    #2]\n        LDRB            r1,     [r6,    #2]\n        ADD             r10,    r10,    #0x10\n        ADD             r8,     r8,     #0x10\n        MOV             r4,     #0\nl1.2368:\n        LDR             r2,     [r8,    r4]\n        AND             r3,     r2,     #0xff\n        CMP             r3,     #4\n        BEQ             l1.2484\n        CMP             r2,     #0\n        CMPNE           r0,     #0\n        BEQ             l1.2428\n        ADD             r3,     r8,     r4\n        ADD             r2,     r10,    r4\n        STRD            r2,     r3,     [sp,    #0]\n        MOV             r3,     r1\n        MOV             r2,     r0\n        MOV             r1,     r9\n        MOV             r0,     r7\n        BL              deblock_luma_h\nl1.2428:\n        LDRB            r0,     [r5,    #3]\n        ADD             r4,     r4,     #4\n        LDRB            r1,     [r6,    #3]\n        CMP             r4,     #0x10\n        ADD             r7,     r7,     r9,     lsl #2\n        BLT             l1.2368\n        POP             {r2-r10,        pc}\nl1.2456:\n        ADD             r12,    r7,     r4\n        MOV             r3,     r1\n        MOV             r2,     r0\n        MOV             r1,     r9\n        MOV             r0,     r12\n        BL              deblock_luma_v_s4\n        B               l1.2328\nl1.2484:\n        MOV             r3,     r1\n        MOV             r2,    
 r0\n        MOV             r1,     r9\n        MOV             r0,     r7\n        BL              deblock_luma_h_s4\n        B               l1.2428\n        .size  h264e_deblock_luma_neon, .-h264e_deblock_luma_neon\n\n        .global         deblock_luma_h_s4\n        .global         h264e_deblock_chroma_neon\n        .global         h264e_deblock_luma_neon\n"
  },
  {
    "path": "asm/neon/h264e_denoise_neon.s",
    "content": "        .arm\n        .text\n        .align 2\n\n__rt_memcpy_w:\n        subs            r2,     r2,     #0x10-4\nlocal_denoise_1_3:\n        ldmcsia         r1!,    {r3,    r12}\n        stmcsia         r0!,    {r3,    r12}\n        ldmcsia         r1!,    {r3,    r12}\n        stmcsia         r0!,    {r3,    r12}\n        subcss          r2,     r2,     #0x10\n        bcs             local_denoise_1_3\n        movs            r12,    r2,     lsl #29\n        ldmcsia         r1!,    {r3,    r12}\n        stmcsia         r0!,    {r3,    r12}\n        ldrmi           r3,     [r1],   #4\n        strmi           r3,     [r0],   #4\n        moveq           pc,     lr\n        sub             r1,     r1,     #3\n_memcpy_lastbytes_skip3:\n        add             r1,     r1,     #1\n_memcpy_lastbytes_skip2:\n        add             r1,     r1,     #1\n_memcpy_lastbytes_skip1:\n        add             r1,     r1,     #1\n\n_memcpy_lastbytes:\n        movs            r2,     r2,     lsl #31\n        ldrmib          r2,     [r1],   #1\n        ldrcsb          r3,     [r1],   #1\n        ldrcsb          r12,    [r1],   #1\n        strmib          r2,     [r0],   #1\n        strcsb          r3,     [r0],   #1\n        strcsb          r12,    [r0],   #1\n        bx              lr\nmy_memcpy:\n        cmp             r2,     #3\n        bls             _memcpy_lastbytes\n        rsb             r12,    r0,     #0\n        movs            r12,    r12,    lsl #31\n        ldrcsb          r3,     [r1],   #1\n        ldrcsb          r12,    [r1],   #1\n        strcsb          r3,     [r0],   #1\n        strcsb          r12,    [r0],   #1\n        ldrmib          r3,     [r1],   #1\n        subcs           r2,     r2,     #2\n        submi           r2,     r2,     #1\n        strmib          r3,     [r0],   #1\n_memcpy_dest_aligned:\n        subs            r2,     r2,     #4\n        bcc             _memcpy_lastbytes\n        adr             r12,    __rt_memcpy_w\n 
       and             r3,     r1,     #3\n        sub             pc,     r12,    r3,     lsl #5\n\n        .global h264e_denoise_run_neon\n        .type  h264e_denoise_run_neon, %function\nh264e_denoise_run_neon:\n        CMP             r2,     #2\n        CMPGT           r3,     #2\n        BXLE            lr\n        PUSH            {r0-r11,        lr}\n        SUB             sp,     sp,     #0xc\n        SUB             r1,     r2,     #2\n        SUB             r0,     r3,     #2\n        STR             r0,     [sp,    #0+4+4]\n        LDR             r4,     [sp,    #0+4+4+4+4+4+4+4+4*9+4]\n        LDR             r5,     [sp,    #0+4+4+4+4+4+4+4+4*9]\n        STR             r1,     [sp,    #0+4+4+4+4+4]\nlocal_denoise_2_0:\n        LDR             r0,     [sp,    #0+4+4+4]\n        LDR             r1,     [sp,    #0+4+4+4+4]\n        ADD             r0,     r0,     r5\n        ADD             r1,     r1,     r4\n        STR             r0,     [sp,    #0+4+4+4]\n        STR             r1,     [sp,    #0+4+4+4+4]\n        LDRB            r3,     [r0],   #1\n        SUB             r12,    r1,     r4\n        STRB            r3,     [r12,   #0]\n        ADD             r1,     r1,     #1\n        LDR             r12,    [sp,    #0+4+4+4+4+4]\n        MOVS            r12,    r12,    lsr #3\n        BEQ             local_denoise_10_0\nlocal_denoise_1_4:\n        VLD1.U8         {d16},  [r0]\n        VLD1.U8         {d17},  [r1]\n        SUB             lr,     r0,     #1\n        VLD1.U8         {d18},  [lr]\n        SUB             lr,     r1,     #1\n        VLD1.U8         {d19},  [lr]\n        SUB             lr,     r0,     r5\n        VLD1.U8         {d20},  [lr]\n        SUB             lr,     r1,     r4\n        VLD1.U8         {d21},  [lr]\n        ADD             lr,     r0,     #1\n        VLD1.U8         {d22},  [lr]\n        ADD             lr,     r1,     #1\n        VLD1.U8         {d23},  [lr]\n        ADD             lr,     r0,     r5\n 
       VLD1.U8         {d24},  [lr]\n        ADD             lr,     r1,     r4\n        VLD1.U8         {d25},  [lr]\n        VABDL.U8                q0,     d16,    d17\n        VADDL.U8                q1,     d18,    d20\n        VADDW.U8                q1,     q1,     d22\n        VADDW.U8                q1,     q1,     d24\n        VADDL.U8                q2,     d19,    d21\n        VADDW.U8                q2,     q2,     d23\n        VADDW.U8                q2,     q2,     d25\n        VABD.U16                q1,     q1,     q2\n        VSHR.U16                q1,     q1,     #2\n        VMOV.I16                q2,     #1\n        VADD.S16                q0,     q0,     q2\n        VADD.S16                q1,     q1,     q2\n        VQSHL.S16               q0,     q0,     #7\n        VCLS.S16                q2,     q0\n        VSHL.S16                q0,     q0,     q2\n        VQDMULH.S16             q0,     q0,     q0\n        VCLS.S16                q15,    q0\n        VSHL.S16                q0,     q0,     q15\n        VADD.S16                q2,     q2,     q2\n        VADD.S16                q2,     q2,     q15\n        VQDMULH.S16             q0,     q0,     q0\n        VCLS.S16                q15,    q0\n        VSHL.S16                q0,     q0,     q15\n        VADD.S16                q2,     q2,     q2\n        VADD.S16                q2,     q2,     q15\n        VQDMULH.S16             q0,     q0,     q0\n        VCLS.S16                q15,    q0\n        VSHL.S16                q0,     q0,     q15\n        VADD.S16                q2,     q2,     q2\n        VADD.S16                q2,     q2,     q15\n        VQDMULH.S16             q0,     q0,     q0\n        VCLS.S16                q15,    q0\n        VADD.S16                q2,     q2,     q2\n        VADD.S16                q2,     q2,     q15\n        VMOV.I16                q15,    #127\n        VSUB.S16                q2,     q15,    q2\n        VQSHL.S16               q1,     q1,     
#7\n        VCLS.S16                q3,     q1\n        VSHL.S16                q1,     q1,     q3\n        VQDMULH.S16             q1,     q1,     q1\n        VCLS.S16                q15,    q1\n        VSHL.S16                q1,     q1,     q15\n        VADD.S16                q3,     q3,     q3\n        VADD.S16                q3,     q3,     q15\n        VQDMULH.S16             q1,     q1,     q1\n        VCLS.S16                q15,    q1\n        VSHL.S16                q1,     q1,     q15\n        VADD.S16                q3,     q3,     q3\n        VADD.S16                q3,     q3,     q15\n        VQDMULH.S16             q1,     q1,     q1\n        VCLS.S16                q15,    q1\n        VSHL.S16                q1,     q1,     q15\n        VADD.S16                q3,     q3,     q3\n        VADD.S16                q3,     q3,     q15\n        VQDMULH.S16             q1,     q1,     q1\n        VCLS.S16                q15,    q1\n        VADD.S16                q3,     q3,     q3\n        VADD.S16                q3,     q3,     q15\n        VMOV.I16                q15,    #127\n        VSUB.S16                q3,     q15,    q3\n        VQSHL.U16               q3,     q3,     #10\n        VSHR.U16                q3,     q3,     #8\n        VMOV.I16                q15,    #255\n        VSUB.S16                q2,     q15,    q2\n        VSUB.S16                q3,     q15,    q3\n        VMUL.U16                q2,     q2,     q3\n        VMOVL.U8                q0,     d17\n        VMULL.U16               q10,    d0,     d4\n        VMULL.U16               q11,    d1,     d5\n        VMOV.I8         q15,    #255\n        VSUB.S16                q2,     q15,    q2\n        VMOVL.U8                q0,     d16\n        VMLAL.U16               q10,    d0,     d4\n        VMLAL.U16               q11,    d1,     d5\n        VRSHRN.I32              d0,     q10,    #16\n        VRSHRN.I32              d1,     q11,    #16\n        VMOVN.I16               d0,   
  q0\n        SUB             r3,     r1,     r4\n        VST1.U8         {d0},   [r3]\n        ADD             r0,     r0,     #8\n        ADD             r1,     r1,     #8\n        SUBS            r12,    r12,    #1\n        BNE             local_denoise_1_4\nlocal_denoise_10_0:\n        LDR             r12,    [sp,    #0+4+4+4+4+4]\n        ANDS            r12,    r12,    #7\n        BNE             tail\ntail_ret:\n        LDRB            r0,     [r0,    #0]\n        SUB             r1,     r1,     r4\n        STRB            r0,     [r1,    #0]\n        LDR             r0,     [sp,    #0+4+4]\n        SUBS            r0,     r0,     #1\n        STR             r0,     [sp,    #0+4+4]\n        BNE             local_denoise_2_0\n        LDR             r0,     [sp,    #0+4+4+4]\n        LDR             r2,     [sp,    #0+4+4+4+4+4]\n        ADD             r1,     r0,     r5\n        LDR             r0,     [sp,    #0+4+4+4+4]\n        ADD             r2,     r2,     #2\n        ADD             r0,     r0,     r4\n        BL              my_memcpy\n        LDR             r11,    [sp,    #0+4+4+4+4+4+4]\n        SUB             r11,    r11,    #2\nlocal_denoise_1_5:\n        LDR             r0,     [sp,    #0+4+4+4+4]\n        SUB             r7,     r0,     r4\n        LDR             r0,     [sp,    #0+4+4+4+4+4]\n        MOV             r1,     r7\n        ADD             r2,     r0,     #2\n        LDR             r0,     [sp,    #0+4+4+4+4]\n        BL              my_memcpy\n        STR             r7,     [sp,    #0+4+4+4+4]\n        SUBS            r11,    r11,    #1\n        BNE             local_denoise_1_5\n        LDR             r0,     [sp,    #0+4+4+4+4+4+4]\n        RSB             r1,     r0,     #2\n        LDR             r0,     [sp,    #0+4+4+4]\n        MLA             r1,     r5,     r1,     r0\n        LDR             r0,     [sp,    #0+4+4+4+4+4]\n        ADD             r2,     r0,     #2\n        LDR             r0,     [sp,    
#0+4+4+4+4]\n        ADD             sp,     sp,     #0x1c\n        POP             {r4-r11,        lr}\n        B               my_memcpy\ntail:\nlocal_denoise_1_6:\n        LDRB            r3,     [r0,    #-1]\n        LDRB            r9,     [r1,    #-1]\n        LDRB            r6,     [r0,    #1]\n        LDRB            r10,    [r1,    #1]\n        SUB             r3,     r3,     r9\n        SUB             r9,     r0,     r5\n        SUB             r6,     r6,     r10\n        ADD             r3,     r3,     r6\n        SUB             r6,     r1,     r4\n        LDRB            r9,     [r9,    #0]\n        LDRB            r10,    [r6,    #0]\n        LDRB            r7,     [r0,    #0]\n        LDRB            r8,     [r1,    #0]\n        LDRB            r11,    [r0,    r5]\n        LDRB            lr,     [r1,    r4]\n        SUB             r9,     r9,     r10\n        SUBS            r2,     r7,     r8\n        RSBLT           r2,     r2,     #0\n        ADD             r3,     r3,     r9\n        SUB             r9,     r11,    lr\n        ADDS            r3,     r3,     r9\n        RSBLT           r3,     r3,     #0\n        MOV             r10,    r3,     asr #2\n        LDR             r3,     =g_diff_to_gainQ8\n        LDRB            r9,     [r3,    r2]\n        LDRB            r2,     [r3,    r10]\n        ADD             r0,     r0,     #1\n        ADD             r1,     r1,     #1\n        MOV             r2,     r2,     lsl #2\n        CMP             r2,     #0xff\n        MOVHI           r2,     #0xff\n        RSB             r3,     r2,     #0xff\n        RSB             r2,     r9,     #0xff\n        MUL             r2,     r3,     r2\n        RSB             r3,     r2,     #0x00010000\n        SUB             r3,     r3,     #1\n        MUL             r3,     r7,     r3\n        MLA             r3,     r8,     r2,     r3\n        ADD             r3,     r3,     #0x00008000\n        MOV             r3,     r3,     lsr     #16\n        
STRB            r3,     [r6,    #0]\n        SUBS            r12,    r12,    #1\n        BNE             local_denoise_1_6\n        B               tail_ret\n        .size  h264e_denoise_run_neon, .-h264e_denoise_run_neon\n"
  },
  {
    "path": "asm/neon/h264e_intra_neon.s",
    "content": "        .arm\n        .text\n        .align 2\n\n        .type  intra_predict_dc4_neon, %function\nintra_predict_dc4_neon:\n        MOV             r3,     #0\n        VEOR            q1,     q1,     q1\n        CMP             r0,     #0x20\n        BCC             local_intra_10_0\n        VLD1.8          {d0},   [r0]\n        ADD             r3,     r3,     #2\n        VPADAL.U8               q1,     q0\nlocal_intra_10_0:\n        CMP             r1,     #0x20\n        BCC             local_intra_10_1\n        VLD1.8          {d0},   [r1]\n        ADD             r3,     r3,     #2\n        VPADAL.U8               q1,     q0\nlocal_intra_10_1:\n        VPADDL.U16              q1,     q1\n        VMOV.32         r12,    d2[0]\n        ADD             r0,     r12,    r3\n        CMP             r3,     #4\n        MOVEQ           r0,     r0,     lsr #1\n        MOV             r0,     r0,     lsr #2\n        CMP             r3,     #0\n        MOVEQ           r0,     #0x80\n        ADD             r0,     r0,     r0,     lsl #16\n        ADD             r0,     r0,     r0,     lsl #8\n        BX              lr\n        .size  intra_predict_dc4_neon, .-intra_predict_dc4_neon\n\n        .type  h264e_intra_predict_16x16_neon, %function\nh264e_intra_predict_16x16_neon:\n        CMP             r3,     #1\n        BEQ             h_pred_16x16\n        BLT             v_pred_16x16\n        MOV             r3,     #0\n        VEOR            q1,     q1,     q1\n        CMP             r1,     #0x20\n        BCC             local_intra_10_2\n        VLD1.8          {q2},   [r1]\n        ADD             r3,     r3,     #8\n        VPADAL.U8               q1,     q2\nlocal_intra_10_2:\n        CMP             r2,     #0x20\n        BCC             local_intra_10_3\n        VLD1.8          {q0},   [r2]\n        ADD             r3,     r3,     #8\n        VPADAL.U8               q1,     q0\nlocal_intra_10_3:\n        VPADDL.U16              q1,     q1\n       
 VPADDL.U32              q1,     q1\n        VADD.I64                d2,     d2,     d3\n        VMOV.32         r12,    d2[0]\n        ADD             r2,     r12,    r3\n        CMP             r3,     #16\n        MOVEQ           r2,     r2,     lsr #1\n        MOV             r2,     r2,     lsr #4\n        CMP             r3,     #0\n        MOVEQ           r2,     #0x80\n        VDUP.I8         q0,     r2\nsave_q0:\n        VMOV            q1,     q0\n        VMOV            q2,     q0\n        VMOV            q3,     q0\n        VSTMIA          r0!,    {q0-q3}\n        VSTMIA          r0!,    {q0-q3}\n        VSTMIA          r0!,    {q0-q3}\n        VSTMIA          r0!,    {q0-q3}\n        BX              lr\nv_pred_16x16:\n        VLD1.8          {q0},   [r2]\n        B               save_q0\nh_pred_16x16:\n        MOV             r2,     #16\nlocal_intra_1_0:\n        LDRB            r3,     [r1],   #1\n        VDUP.I8         q0,     r3\n        SUBS            r2,     r2,     #1\n        VSTMIA          r0!,    {q0}\n        BNE             local_intra_1_0\n        BX              lr\n        .size  h264e_intra_predict_16x16_neon, .-h264e_intra_predict_16x16_neon\n\n        .type  h264e_intra_predict_chroma_neon, %function\nh264e_intra_predict_chroma_neon:\n        PUSH            {r4-r8, lr}\n        MOV             r6,     r2\n        CMP             r3,     #1\n        LDMLT           r6,     {r2,    r3,     r12,    lr}\n        MOV             r4,     r0\n        MOVGT           r7,     #2\n        MOV             r5,     r1\n        MOV             r0,     #8\n        MOVGT           r8,     r7\n        BEQ             h_pred_chroma\n        BGT             dc_pred_chroma\nv_pred_chroma:\n        SUBS            r0,     r0,     #1\n        STMIA           r4!,    {r2,    r3,     r12,    lr}\n        BNE             v_pred_chroma\n        POP             {r4-r8, pc}\nh_pred_chroma:\n        LDRB            r12,    [r5,    #8]\n        LDRB            
r2,     [r5],   #1\n        SUBS            r0,     r0,     #1\n        ADD             r12,    r12,    r12,    lsl #16\n        ADD             r2,     r2,     r2,     lsl #16\n        ADD             r12,    r12,    r12,    lsl #8\n        ADD             r2,     r2,     r2,     lsl #8\n        MOV             lr,     r12\n        MOV             r3,     r2\n        STMIA           r4!,    {r2,    r3,     r12,    lr}\n        BNE             h_pred_chroma\n        POP             {r4-r8, pc}\ndc_pred_chroma:\n        MOV             r1,     r6\n        MOV             r0,     r5\n        BL              intra_predict_dc4_neon\n        STR             r0,     [r4,    #0x40]\n        STR             r0,     [r4,    #4]\n        STR             r0,     [r4,    #0]\n        ADD             r1,     r6,     #4\n        ADD             r0,     r5,     #4\n        BL              intra_predict_dc4_neon\n        CMP             r6,     #0x20\n        STR             r0,     [r4,    #0x44]\n        BCC             local_intra_10_4\n        ADD             r1,     r6,     #4\n        MOV             r0,     #0\n        BL              intra_predict_dc4_neon\n        STR             r0,     [r4,    #4]\nlocal_intra_10_4:\n        CMP             r5,     #0x20\n        BCC             local_intra_11_0\n        ADD             r1,     r5,     #4\n        MOV             r0,     #0\n        BL              intra_predict_dc4_neon\n        STR             r0,     [r4,    #0x40]\nlocal_intra_11_0:\n        SUBS            r8,     r8,     #1\n        ADD             r4,     r4,     #8\n        ADD             r5,     r5,     #8\n        ADD             r6,     r6,     #8\n        BNE             dc_pred_chroma\n        LDMDB           r4,     {r0-r3}\n        STMIA           r4!,    {r0-r3}\n        STMIA           r4!,    {r0-r3}\n        STMIA           r4!,    {r0-r3}\n        LDMIA           r4!,    {r0-r3}\n        STMIA           r4!,    {r0-r3}\n        STMIA           r4!,  
  {r0-r3}\n        STMIA           r4!,    {r0-r3}\n        POP             {r4-r8, pc}\nsave_best:\n        CMP             r1,     r10\n        MOVNE           r0,     r11\n        MOVEQ           r0,     #0\n        VABD.U8         q2,     q1,     q15\n        VPADDL.U8               q2,     q2\n        VPADDL.U16              q2,     q2\n        VPADDL.U32              q2,     q2\n        VADD.I64                d4,     d4,     d5\n        VMOV.32         d5[0],  r0\n        VADD.U32                d4,     d4,     d5\n        VMOV.32         r0,     d4[0]\n        CMP             r0,     r9\n        BXGE            lr\n        VMOV            q3,     q1\n        STR             r1,     [sp,    #0+4+4+4]\n        MOV             r9,     r0\n        BX              lr\n        .size  h264e_intra_predict_chroma_neon, .-h264e_intra_predict_chroma_neon\n\n        .type  h264e_intra_choose_4x4_neon, %function\nh264e_intra_choose_4x4_neon:\n        PUSH            {r0-r11,        lr}\n        SUB             sp,     sp,     #5*4\n        LDR             r9,     [r0],   #0x10\n        LDR             r10,    [r0],   #0x10\n        LDR             r11,    [r0],   #0x10\n        LDR             r12,    [r0],   #0x10\n        VMOV            d30,    r9,     r10\n        VMOV            d31,    r11,    r12\n        LDR             r10,    [sp,    #0+4+4+4+4+4+4+4+4+4+4*8+4]\n        LDR             r11,    [sp,    #0+4+4+4+4+4+4+4+4+4+4*8+4+4]\n        MOV             r9,     #0x10000000\n        TST             r2,     #1\n        MOVNE           r1,     r3\n        MOVEQ           r1,     #0\n        TST             r2,     #2\n        SUBNE           r0,     r3,     #5\n        MOVEQ           r0,     #0\n        BL              intra_predict_dc4_neon\n        VDUP.8          q1,     r0\n        MOV             r1,     #2\n        BL              save_best\n        LDR             r2,     [sp,    #0+4+4+4+4+4+4+4+4]\n        SUB             r12,    r2,     #5\n        
VLD1.8          {q0},   [r12]\n        LDR             r0,     [sp,    #0+4+4+4+4+4+4+4]\n        VMOV.U8         lr,     d1[4]\n        ORR             lr,     lr,     lr,     lsl #8\n        ORR             lr,     lr,     lr,     lsl #16\n        VMOV.32         d1[1],  lr\n        TST             r0,     #1\n        BEQ             not_avail_t\n        TST             r0,     #8\n        BNE             local_intra_10_5\n        VDUP.8          d1,     d1[0]\nlocal_intra_10_5:\n        VEXT.8          q1,     q0,     q0,     #5\n        VMOV            q2,     q1\n        VZIP.32         q1,     q2\n        VMOV            q2,     q1\n        VZIP.32         q1,     q2\n        MOV             r1,     #0\n        BL              save_best\n        VEXT.8          q10,    q0,     q0,     #5\n        VEXT.8          q11,    q0,     q0,     #6\n        VEXT.8          q12,    q0,     q0,     #7\n        VHADD.U8                q1,     q10,    q12\n        VRHADD.U8               q1,     q1,     q11\n        VEXT.8          q10,    q1,     q1,     #1\n        VEXT.8          d3,     d2,     d2,     #2\n        VEXT.8          d4,     d2,     d2,     #3\n        VZIP.32         d2,     d20\n        VZIP.32         d3,     d4\n        VMOV            d24,    d2\n        MOV             r1,     #3\n        BL              save_best\n        VEXT.8          q10,    q0,     q0,     #5\n        VEXT.8          q11,    q0,     q0,     #6\n        VRHADD.U8               q1,     q10,    q11\n        VEXT.8          q10,    q1,     q1,     #1\n        VZIP.32         q1,     q10\n        VZIP.32         q1,     q12\n        MOV             r1,     #7\n        BL              save_best\n        LDR             r0,     [sp,    #0+4+4+4+4+4+4+4]\nlocal_intra_10_6:\nnot_avail_t:\n        TST             r0,     #2\n        BEQ             not_avail_l\n        VREV32.8                q8,     q0\n        VREV32.8                q1,     q0\n        VZIP.8          q8,     q1\n     
   VMOV            q1,     q8\n        VZIP.8          q1,     q8\n        MOV             r1,     #1\n        BL              save_best\n        VREV32.8                q2,     q0\n        VREV32.8                q1,     q0\n        VREV32.8                q8,     q0\n        VZIP.8          q8,     q1\n        VMOV.U16                lr,     d16[3]\n        VMOV.16         d4[2],  lr\n        VMOV.16         d17[0], lr\n        VEXT.8          q9,     q2,     q2,     #14\n        VHADD.U8                q10,    q9,     q2\n        VZIP.8          q9,     q10\n        VEXT.8          q11,    q8,     q8,     #14\n        VRHADD.U8               q10,    q9,     q11\n        ADD             lr,     lr,     lr,     lsl #16\n        VEXT.8          q1,     q10,    q10,    #4\n        VEXT.8          q9,     q10,    q10,    #6\n        VZIP.32         q1,     q9\n        VMOV.32         d3[1],  lr\n        MOV             r1,     #8\n        BL              save_best\n        LDR             r0,     [sp,    #0+4+4+4+4+4+4+4]\nnot_avail_l:\n        AND             r0,     r0,     #7\n        CMP             r0,     #7\n        BNE             not_avail_diag\n        VEXT.8          q10,    q0,     q0,     #1\n        VEXT.8          q11,    q0,     q0,     #2\n        VHADD.U8                q1,     q0,     q11\n        VRHADD.U8               q2,     q1,     q10\n        VMOV            q11,    q2\n        VEXT.8          d3,     d4,     d4,     #1\n        VEXT.8          d5,     d4,     d4,     #2\n        VEXT.8          d2,     d4,     d4,     #3\n        VZIP.32         d3,     d4\n        VZIP.32         d2,     d5\n        MOV             r1,     #4\n        BL              save_best\n        VRHADD.U8               q1,     q0,     q10\n        VMOV            q12,    q1\n        VMOV            q2,     q11\n        VZIP.8          q1,     q2\n        VEXT.8          q2,     q1,     q1,     #2\n        VZIP.32         q1,     q2\n        VREV64.32               
q1,     q1\n        VSWP            d2,     d3\n        VMOV.U16                lr,     d22[2]\n        VMOV.16         d2[1],  lr\n        MOV             r1,     #6\n        BL              save_best\n        VEXT.8          q11,    q11,    q11,    #1\n        VEXT.8          q1,     q12,    q12,    #4\n        VEXT.8          q2,     q11,    q11,    #2\n        VZIP.32         q1,     q2\n        VMOV.U16                lr,     d22[0]\n        VMOV.16         d24[1], lr\n        MOV             lr,     lr,     lsl #8\n        VMOV.16         d22[0], lr\n        VEXT.8          d3,     d24,    d24,    #3\n        VEXT.8          d22,    d22,    d22,    #1\n        VZIP.32         d3,     d22\n        MOV             r1,     #5\n        BL              save_best\nnot_avail_diag:\n        LDR             r0,     [sp,    #0+4+4+4]\n        MOV             r3,     r9\n        LDR             r4,     [sp,    #0+4+4+4+4+4+4]\n        VMOV            r5,     r6,     d6\n        STR             r5,     [r4]\n        STR             r6,     [r4,    #0x10]\n        VMOV            r5,     r6,     d7\n        STR             r5,     [r4,    #0x20]\n        STR             r6,     [r4,    #0x30]\n        ADD             sp,     sp,     #4*9\n        ADD             r0,     r0,     r3,     lsl #4\n        POP             {r4-r11,        pc}\n        .size  h264e_intra_choose_4x4_neon, .-h264e_intra_choose_4x4_neon\n\n        .global         h264e_intra_predict_16x16_neon\n        .global         h264e_intra_predict_chroma_neon\n        .global         h264e_intra_choose_4x4_neon\n"
  },
  {
    "path": "asm/neon/h264e_qpel_neon.s",
    "content": "        .arm\n        .text\n        .align 2\n\n        .global h264e_qpel_average_wh_align_neon\n        .type  h264e_qpel_average_wh_align_neon, %function\nh264e_qpel_average_wh_align_neon:\n        MOVS            r3,     r3,     lsr #5\n        BCC             local_qpel_20_0\nlocal_qpel_1_0:\n        VLDMIA          r0!,    {q0-q3}\n        VLDMIA          r1!,    {q8-q11}\n        SUBS            r3,     r3,     #4<<11\n        VRHADD.U8               q0,     q0,     q8\n        VRHADD.U8               q1,     q1,     q9\n        VRHADD.U8               q2,     q2,     q10\n        VRHADD.U8               q3,     q3,     q11\n        VSTMIA          r2!,    {q0-q3}\n        BNE             local_qpel_1_0\n        BX              lr\nlocal_qpel_20_0:\n        MOV             r12,    #16\nlocal_qpel_1_1:\n        VLD1.8          {d0},   [r0],   r12\n        VLD1.8          {d1},   [r1],   r12\n        SUBS            r3,     r3,     #1<<11\n        VRHADD.U8               d0,     d0,     d1\n        VST1.8          {d0},   [r2],   r12\n        BNE             local_qpel_1_1\n        BX              lr\ncopy_w8or4:\n        MOVS            r12,    r3,     lsr #4\n        MOV             r3,     r3,     asr #16\n        BCS             copy_w8\ncopy_w4:\nlocal_qpel_1_2:\n        LDR             r12,    [r0],   r1\n        SUBS            r3,     r3,     #1\n        STR             r12,    [r2],   #16\n        BNE             local_qpel_1_2\n        BX              lr\ncopy_w16or8:\n        MOVS            r12,    r3,     lsr #5\n        MOV             r3,     r3,     asr #16\n        BCC             copy_w8\ncopy_w16:\n        VLD1.8          {q0},   [r0],   r1\n        VLD1.8          {q1},   [r0],   r1\n        VLD1.8          {q2},   [r0],   r1\n        VLD1.8          {q3},   [r0],   r1\n        SUBS            r3,     r3,     #4\n        VSTMIA          r2!,    {q0-q3}\n        BNE             copy_w16\n        BX              
lr\ncopy_w8:\n        MOV             r12,    #16\nlocal_qpel_1_3:\n        VLD1.8          {d0},   [r0],   r1\n        VLD1.8          {d1},   [r0],   r1\n        SUBS            r3,     r3,     #2\n        VST1.8          {d0},   [r2],   r12\n        VST1.8          {d1},   [r2],   r12\n        BNE             local_qpel_1_3\n        BX              lr\n        .size  h264e_qpel_average_wh_align_neon, .-h264e_qpel_average_wh_align_neon\n\n        .global h264e_qpel_interpolate_chroma_neon\n        .type  h264e_qpel_interpolate_chroma_neon, %function\nh264e_qpel_interpolate_chroma_neon:\n        LDR             r12,    [sp]\n        VMOV.I8         d5,     #8\n        CMP             r12,    #0\n        BEQ             copy_w8or4\n        VDUP.8          d0,     r12\n        MOV             r12,    r12,    asr #16\n        VDUP.8          d1,     r12\n        VSUB.I8         d2,     d5,     d0\n        VSUB.I8         d3,     d5,     d1\n        VMUL.I8         d28,    d2,     d3\n        VMUL.I8         d29,    d0,     d3\n        VMUL.I8         d30,    d2,     d1\n        VMUL.I8         d31,    d0,     d1\n        MOVS            r12,    r3,     lsr #4\n        MOV             r3,     r3,     asr #16\n        BCS             interpolate_chroma_w8\ninterpolate_chroma_w4:\n        VLD1.8          {d0},   [r0],   r1\n        VEXT.8          d1,     d0,     d0,     #1\nlocal_qpel_1_4:\n        VLD1.8          {d2},   [r0],   r1\n        SUBS            r3,     r3,     #1\n        VEXT.8          d3,     d2,     d2,     #1\n        VMULL.U8                q2,     d0,     d28\n        VMLAL.U8                q2,     d1,     d29\n        VMLAL.U8                q2,     d2,     d30\n        VMLAL.U8                q2,     d3,     d31\n        VQRSHRUN.S16            d4,     q2,     #6\n        VMOV            r12,    d4[0]\n        STR             r12,    [r2],   #16\n        VMOV            q0,     q1\n        BNE             local_qpel_1_4\n        BX              
lr\ninterpolate_chroma_w8:\n        VLD1.8          {q0},   [r0],   r1\n        MOV             r12,    #16\n        VEXT.8          d1,     d0,     d1,     #1\nlocal_qpel_1_5:\n        VLD1.8          {q1},   [r0],   r1\n        SUBS            r3,     r3,     #1\n        VEXT.8          d3,     d2,     d3,     #1\n        VMULL.U8                q2,     d0,     d28\n        VMLAL.U8                q2,     d1,     d29\n        VMLAL.U8                q2,     d2,     d30\n        VMLAL.U8                q2,     d3,     d31\n        VQRSHRUN.S16            d4,     q2,     #6\n        VST1.8          {d4},   [r2],   r12\n        VMOV            q0,     q1\n        BNE             local_qpel_1_5\n        BX              lr\n        .size  h264e_qpel_interpolate_chroma_neon, .-h264e_qpel_interpolate_chroma_neon\n\n        .global h264e_qpel_interpolate_luma_neon\n        .type  h264e_qpel_interpolate_luma_neon, %function\nh264e_qpel_interpolate_luma_neon:\n        LDR             r12,    [sp]\n        VMOV.I8         d0,     #5\n        CMP             r12,    #0\n        BEQ             copy_w16or8\n        PUSH            {r4,    r7,     r10,    r11,    lr}\n        MOV             lr,     #16\n        MOV             r4,     sp\n        SUB             sp,     sp,     #16*16\n        MOV             r7,     sp\n        BIC             r7,     r7,     #15\n        MOV             sp,     r7\n        PUSH            {r2,    r4}\n        MOV             r11,    #1\n        ADD             r10,    r12,    #0x00010000\n        ADD             r10,    r10,    r11\n        ADD             r12,    r12,    r12,    lsr #14\n        MOV             r11,    r11,    lsl r12\n        LDR             r12,    =0xbbb0e0ee\n        MOV             r7,     r0\n        TST             r12,    r11\n        BEQ             local_qpel_10_0\n        TST             r10,    #0x00040000\n        ADDNE           r0,     r0,     r1\n        MOVS            r4,     r3,     lsr #5\n        MOV   
          r4,     r3,     asr #16\n        VSHL.I8         d1,     d0,     #2\n        SUB             r0,     r0,     #2\n        BCC             flt_luma_hor_w8\nlocal_qpel_1_6:\n        VLD1.8          {q8,    q9},    [r0],   r1\n        SUBS            r4,     r4,     #1\n        VEXT.8          q11,    q8,     q9,     #1\n        VEXT.8          q12,    q8,     q9,     #2\n        VEXT.8          q13,    q8,     q9,     #3\n        VEXT.8          q14,    q8,     q9,     #4\n        VEXT.8          q15,    q8,     q9,     #5\n        VADDL.U8                q1,     d16,    d30\n        VADDL.U8                q2,     d17,    d31\n        VMLSL.U8                q1,     d22,    d0\n        VMLSL.U8                q2,     d23,    d0\n        VMLAL.U8                q1,     d24,    d1\n        VMLAL.U8                q2,     d25,    d1\n        VMLAL.U8                q1,     d26,    d1\n        VMLAL.U8                q2,     d27,    d1\n        VMLSL.U8                q1,     d28,    d0\n        VMLSL.U8                q2,     d29,    d0\n        VQRSHRUN.S16            d2,     q1,     #5\n        VQRSHRUN.S16            d3,     q2,     #5\n        VSTMIA          r2!,    {q1}\n        BNE             local_qpel_1_6\n        B               flt_luma_hor_end\nflt_luma_hor_w8:\nlocal_qpel_1_7:\n        VLD1.8          {q8},   [r0],   r1\n        SUBS            r4,     r4,     #1\n        VEXT.8          d22,    d16,    d17,    #1\n        VEXT.8          d24,    d16,    d17,    #2\n        VEXT.8          d26,    d16,    d17,    #3\n        VEXT.8          d28,    d16,    d17,    #4\n        VEXT.8          d30,    d16,    d17,    #5\n        VADDL.U8                q1,     d16,    d30\n        VMLSL.U8                q1,     d22,    d0\n        VMLAL.U8                q1,     d24,    d1\n        VMLAL.U8                q1,     d26,    d1\n        VMLSL.U8                q1,     d28,    d0\n        VQRSHRUN.S16            d2,     q1,     #5\n        VST1.8       
   {d2},   [r2],   lr\n        BNE             local_qpel_1_7\nflt_luma_hor_end:\n        SUB             r2,     r3,     asr #12\n        MOV             r0,     r7\n        ADD             r2,     sp,     #4*2\nlocal_qpel_10_0:\n        TST             r11,    r12,    lsr #16\n        BEQ             local_qpel_10_1\n        MOV             r0,     r7\n        TST             r10,    #0x0004\n        ADDNE           r0,     r0,     #1\n        MOVS            r4,     r3,     lsr #5\n        MOV             r4,     r3,     asr #16\n        VMOV.I8         d0,     #5\n        VSHL.I8         d1,     d0,     #2\n        SUB             r0,     r0,     r1,     lsl #1\n        BCC             flt_luma_ver_w8\n        VLD1.8          {q10},  [r0],   r1\n        VLD1.8          {q11},  [r0],   r1\n        VLD1.8          {q12},  [r0],   r1\n        VLD1.8          {q13},  [r0],   r1\n        VLD1.8          {q14},  [r0],   r1\nlocal_qpel_1_8:\n        VLD1.8          {q15},  [r0],   r1\n        VADDL.U8                q1,     d20,    d30\n        VADDL.U8                q2,     d21,    d31\n        VMLSL.U8                q1,     d22,    d0\n        VMLSL.U8                q2,     d23,    d0\n        VMLAL.U8                q1,     d24,    d1\n        VMLAL.U8                q2,     d25,    d1\n        VMLAL.U8                q1,     d26,    d1\n        VMLAL.U8                q2,     d27,    d1\n        VMLSL.U8                q1,     d28,    d0\n        VMLSL.U8                q2,     d29,    d0\n        VQRSHRUN.S16            d2,     q1,     #5\n        VQRSHRUN.S16            d3,     q2,     #5\n        VSTMIA          r2!,    {q1}\n        VMOV            q10,    q11\n        VMOV            q11,    q12\n        VMOV            q12,    q13\n        VMOV            q13,    q14\n        VMOV            q14,    q15\n        SUBS            r4,     r4,     #1\n        BNE             local_qpel_1_8\n        B               flt_luma_ver_end\nflt_luma_ver_w8:\n        
VLD1.8          {d20},  [r0],   r1\n        VLD1.8          {d22},  [r0],   r1\n        VLD1.8          {d24},  [r0],   r1\n        VLD1.8          {d26},  [r0],   r1\n        VLD1.8          {d28},  [r0],   r1\nlocal_qpel_1_9:\n        VLD1.8          {d30},  [r0],   r1\n        VADDL.U8                q1,     d20,    d30\n        VMLSL.U8                q1,     d22,    d0\n        VMLAL.U8                q1,     d24,    d1\n        VMLAL.U8                q1,     d26,    d1\n        VMLSL.U8                q1,     d28,    d0\n        VQRSHRUN.S16            d2,     q1,     #5\n        VST1.8          {d2},   [r2],   lr\n        VMOV            d20,    d22\n        VMOV            d22,    d24\n        VMOV            d24,    d26\n        VMOV            d26,    d28\n        VMOV            d28,    d30\n        SUBS            r4,     r4,     #1\n        BNE             local_qpel_1_9\nflt_luma_ver_end:\n        SUB             r2,     r3,     asr #12\n        MOV             r0,     r7\n        ADD             r2,     sp,     #4*2\nlocal_qpel_10_1:\n        LDR             r12,    =0xfafa4e40\n        TST             r12,    r11\n        BEQ             local_qpel_10_2\n        MOV             r0,     r7\n        SUB             sp,     sp,     #(8)\n        VPUSH           {q4-q7}\n        MOVS            r4,     r3,     lsr #5\n        MOV             r4,     r3,     asr #16\n        VMOV.I8         d0,     #5\n        VSHL.I8         d1,     d0,     #2\n        SUB             r0,     r0,     #2\n        SUB             r0,     r0,     r1,     lsl #1\n        ADD             r2,     r2,     r4,     lsl #4\n        ADD             r4,     r4,     #5\n        BCC             flt_luma_diag_w8\nlocal_qpel_1_10:\n        VLD1.8          {q8,    q9},    [r0],   r1\n        VMOV            q10,    q8\n        VEXT.8          q11,    q8,     q9,     #1\n        VEXT.8          q12,    q8,     q9,     #2\n        VEXT.8          q13,    q8,     q9,     #3\n        
VEXT.8          q14,    q8,     q9,     #4\n        VEXT.8          q15,    q8,     q9,     #5\n        VADDL.U8                q1,     d20,    d30\n        VADDL.U8                q2,     d21,    d31\n        VMLSL.U8                q1,     d22,    d0\n        VMLSL.U8                q2,     d23,    d0\n        VMLAL.U8                q1,     d24,    d1\n        VMLAL.U8                q2,     d25,    d1\n        VMLAL.U8                q1,     d26,    d1\n        VMLAL.U8                q2,     d27,    d1\n        VMLSL.U8                q1,     d28,    d0\n        VMLSL.U8                q2,     d29,    d0\n        VPUSH           {q1,    q2}\n        SUBS            r4,     r4,     #1\n        BNE             local_qpel_1_10\n        MOV             r4,     r3,     asr #16\n        VPOP            {q4-q9}\n        VPOP            {q10-q15}\nlocal_qpel_1_11:\n        SUBS            r4,     r4,     #1\n        SUB             r2,     r2,     #16\n        VADD.S16                q4,     q4,     q14\n        VADD.S16                q5,     q5,     q15\n        VADD.S16                q2,     q6,     q12\n        VADD.S16                q3,     q7,     q13\n        VADD.S16                q0,     q8,     q10\n        VADD.S16                q1,     q9,     q11\n        VSUB.S16                q4,     q4,     q2\n        VSUB.S16                q5,     q5,     q3\n        VSUB.S16                q2,     q2,     q0\n        VSUB.S16                q3,     q3,     q1\n        VSHR.S16                q4,     q4,     #2\n        VSHR.S16                q5,     q5,     #2\n        VSUB.S16                q4,     q4,     q2\n        VSUB.S16                q5,     q5,     q3\n        VSHR.S16                q4,     q4,     #2\n        VSHR.S16                q5,     q5,     #2\n        VADD.S16                q4,     q4,     q0\n        VADD.S16                q5,     q5,     q1\n        VQRSHRUN.S16            d2,     q4,     #6\n        VQRSHRUN.S16            d3,     
q5,     #6\n        VST1.8          {q1},   [r2]\n        VMOV            q4,     q6\n        VMOV            q5,     q7\n        VMOV            q6,     q8\n        VMOV            q7,     q9\n        VMOV            q8,     q10\n        VMOV            q9,     q11\n        VMOV            q10,    q12\n        VMOV            q11,    q13\n        VMOV            q12,    q14\n        VMOV            q13,    q15\n        VPOPNE          {q14,   q15}\n        BNE             local_qpel_1_11\n        B               flt_luma_diag_end\nflt_luma_diag_w8:\nlocal_qpel_1_12:\n        VLD1.8          {q8},   [r0],   r1\n        VMOV            d20,    d16\n        VEXT.8          d22,    d16,    d17,    #1\n        VEXT.8          d24,    d16,    d17,    #2\n        VEXT.8          d26,    d16,    d17,    #3\n        VEXT.8          d28,    d16,    d17,    #4\n        VEXT.8          d30,    d16,    d17,    #5\n        VADDL.U8                q1,     d20,    d30\n        VMLSL.U8                q1,     d22,    d0\n        VMLAL.U8                q1,     d24,    d1\n        VMLAL.U8                q1,     d26,    d1\n        VMLSL.U8                q1,     d28,    d0\n        VPUSH           {q1}\n        SUBS            r4,     r4,     #1\n        BNE             local_qpel_1_12\n        MOV             r4,     r3,     asr #16\n        VPOP            {q4}\n        VPOP            {q6}\n        VPOP            {q8}\n        VPOP            {q10}\n        VPOP            {q12}\nlocal_qpel_1_13:\n        VPOP            {q14}\n        SUBS            r4,     r4,     #1\n        SUB             r2,     r2,     #16\n        VADD.S16                q4,     q4,     q14\n        VADD.S16                q2,     q6,     q12\n        VADD.S16                q0,     q8,     q10\n        VSUB.S16                q4,     q4,     q2\n        VSUB.S16                q2,     q2,     q0\n        VSHR.S16                q4,     q4,     #2\n        VSUB.S16                q4,     q4,     q2\n  
      VSHR.S16                q4,     q4,     #2\n        VADD.S16                q4,     q4,     q0\n        VQRSHRUN.S16            d2,     q4,     #6\n        VST1.8          {d2},   [r2]\n        VMOV            q4,     q6\n        VMOV            q6,     q8\n        VMOV            q8,     q10\n        VMOV            q10,    q12\n        VMOV            q12,    q14\n        BNE             local_qpel_1_13\nflt_luma_diag_end:\n        VPOP            {q4-q7}\n        ADD             sp,     sp,     #(8)\n        ADD             r2,     sp,     #4*2\nlocal_qpel_10_2:\n        TST             r11,    r12,    lsr #16\n        BEQ             local_qpel_10_3\n        LDR             r12,    =0xeae0\n        TST             r12,    r11\n        LDR             r2,     [sp]\n        BEQ             local_qpel_20_1\n        ADD             r0,     sp,     #4*2\n        LDR             r3,     =0x00100010\n        MOV             r1,     r2\n        BL              h264e_qpel_average_wh_align_neon\n        B               local_qpel_10_3\nlocal_qpel_20_1:\n        MOV             r0,     r7\n        TST             r10,    #0x0004\n        ADDNE           r0,     r0,     #1\n        TST             r10,    #0x00040000\n        ADDNE           r0,     r0,     r1\n        LDR             r2,     [sp]\n        MOV             r12,    #4\nlocal_qpel_1_14:\n        VLD1.8          {q8},   [r0],   r1\n        VLD1.8          {q9},   [r0],   r1\n        VLD1.8          {q10},  [r0],   r1\n        VLD1.8          {q11},  [r0],   r1\n        SUBS            r12,    r12,    #1\n        VLDMIA          r2,     {q0-q3}\n        VRHADD.U8               q0,     q8\n        VRHADD.U8               q1,     q9\n        VRHADD.U8               q2,     q10\n        VRHADD.U8               q3,     q11\n        VSTMIA          r2!,    {q0-q3}\n        BNE             local_qpel_1_14\nlocal_qpel_10_3:\n        LDR             sp,     [sp,    #4]\n        POP             {r4,    r7,     
r10,    r11,    pc}\n        .size  h264e_qpel_interpolate_luma_neon, .-h264e_qpel_interpolate_luma_neon\n"
  },
  {
    "path": "asm/neon/h264e_sad_neon.s",
    "content": "        .arm\n        .text\n        .align 2\n\n        .type  h264e_sad_mb_unlaign_wh_neon, %function\nh264e_sad_mb_unlaign_wh_neon:\n        TST             r3,     #0x008\n        BNE             local_sad_2_0\n        VLDMIA          r2!,    {q8-q15}\n        VLD1.8          {d4,    d5},    [r0],   r1\n        VLD1.8          {d6,    d7},    [r0],   r1\n        VABDL.U8                q0,     d16,    d4\n        VABAL.U8                q0,     d17,    d5\n        VABAL.U8                q0,     d18,    d6\n        VABAL.U8                q0,     d19,    d7\n        VLD1.8          {d4,    d5},    [r0],   r1\n        VLD1.8          {d6,    d7},    [r0],   r1\n        VABAL.U8                q0,     d20,    d4\n        VABAL.U8                q0,     d21,    d5\n        VABAL.U8                q0,     d22,    d6\n        VABAL.U8                q0,     d23,    d7\n        VLD1.8          {d4,    d5},    [r0],   r1\n        VLD1.8          {d6,    d7},    [r0],   r1\n        VABAL.U8                q0,     d24,    d4\n        VABAL.U8                q0,     d25,    d5\n        VABAL.U8                q0,     d26,    d6\n        VABAL.U8                q0,     d27,    d7\n        VLD1.8          {d4,    d5},    [r0],   r1\n        VLD1.8          {d6,    d7},    [r0],   r1\n        VABAL.U8                q0,     d28,    d4\n        VABAL.U8                q0,     d29,    d5\n        VABAL.U8                q0,     d30,    d6\n        VABAL.U8                q0,     d31,    d7\n        TST             r3,     #0x00100000\n        BEQ             local_sad_1_0\n        VLDMIA          r2!,    {q8-q15}\n        VLD1.8          {d4,    d5},    [r0],   r1\n        VLD1.8          {d6,    d7},    [r0],   r1\n        VABAL.U8                q0,     d16,    d4\n        VABAL.U8                q0,     d17,    d5\n        VABAL.U8                q0,     d18,    d6\n        VABAL.U8                q0,     d19,    d7\n        VLD1.8          {d4,    d5},    
[r0],   r1\n        VLD1.8          {d6,    d7},    [r0],   r1\n        VABAL.U8                q0,     d20,    d4\n        VABAL.U8                q0,     d21,    d5\n        VABAL.U8                q0,     d22,    d6\n        VABAL.U8                q0,     d23,    d7\n        VLD1.8          {d4,    d5},    [r0],   r1\n        VLD1.8          {d6,    d7},    [r0],   r1\n        VABAL.U8                q0,     d24,    d4\n        VABAL.U8                q0,     d25,    d5\n        VABAL.U8                q0,     d26,    d6\n        VABAL.U8                q0,     d27,    d7\n        VLD1.8          {d4,    d5},    [r0],   r1\n        VLD1.8          {d6,    d7},    [r0],   r1\n        VABAL.U8                q0,     d28,    d4\n        VABAL.U8                q0,     d29,    d5\n        VABAL.U8                q0,     d30,    d6\n        VABAL.U8                q0,     d31,    d7\nlocal_sad_1_0:\n        VPADDL.U16              q0,     q0\n        VPADDL.U32              q0,     q0\n        VADD.U64                d0,     d1\n        VMOV            r0,     r1,     d0\n        BX              lr\nlocal_sad_2_0:\n        VLDMIA          r2!,    {q8-q15}\n        VLD1.8          {d4},   [r0],   r1\n        VLD1.8          {d5},   [r0],   r1\n        VABDL.U8                q0,     d16,    d4\n        VABAL.U8                q0,     d18,    d5\n        VLD1.8          {d4},   [r0],   r1\n        VLD1.8          {d5},   [r0],   r1\n        VABAL.U8                q0,     d20,    d4\n        VABAL.U8                q0,     d22,    d5\n        VLD1.8          {d4},   [r0],   r1\n        VLD1.8          {d5},   [r0],   r1\n        VABAL.U8                q0,     d24,    d4\n        VABAL.U8                q0,     d26,    d5\n        VLD1.8          {d4},   [r0],   r1\n        VLD1.8          {d5},   [r0],   r1\n        VABAL.U8                q0,     d28,    d4\n        VABAL.U8                q0,     d30,    d5\n        TST             r3,     #0x00100000\n        BEQ  
           local_sad_1_1\n        VLDMIA          r2!,    {q8-q15}\n        VLD1.8          {d4},   [r0],   r1\n        VLD1.8          {d5},   [r0],   r1\n        VABAL.U8                q0,     d16,    d4\n        VABAL.U8                q0,     d18,    d5\n        VLD1.8          {d4},   [r0],   r1\n        VLD1.8          {d5},   [r0],   r1\n        VABAL.U8                q0,     d20,    d4\n        VABAL.U8                q0,     d22,    d5\n        VLD1.8          {d4},   [r0],   r1\n        VLD1.8          {d5},   [r0],   r1\n        VABAL.U8                q0,     d24,    d4\n        VABAL.U8                q0,     d26,    d5\n        VLD1.8          {d4},   [r0],   r1\n        VLD1.8          {d5},   [r0],   r1\n        VABAL.U8                q0,     d28,    d4\n        VABAL.U8                q0,     d30,    d5\nlocal_sad_1_1:\n        VPADDL.U16              q0,     q0\n        VPADDL.U32              q0,     q0\n        VADD.U64                d0,     d1\n        VMOV            r0,     r1,     d0\n        BX              lr\n        .size  h264e_sad_mb_unlaign_wh_neon, .-h264e_sad_mb_unlaign_wh_neon\n\n        .type  h264e_sad_mb_unlaign_8x8_neon, %function\nh264e_sad_mb_unlaign_8x8_neon:\n        VLDMIA          r2!,    {q8-q15}\n        VLD1.8          {d4,    d5},    [r0],   r1\n        VLD1.8          {d6,    d7},    [r0],   r1\n        VABDL.U8                q0,     d16,    d4\n        VABDL.U8                q1,     d17,    d5\n        VABAL.U8                q0,     d18,    d6\n        VABAL.U8                q1,     d19,    d7\n        VLD1.8          {d4,    d5},    [r0],   r1\n        VLD1.8          {d6,    d7},    [r0],   r1\n        VABAL.U8                q0,     d20,    d4\n        VABAL.U8                q1,     d21,    d5\n        VABAL.U8                q0,     d22,    d6\n        VABAL.U8                q1,     d23,    d7\n        VLD1.8          {d4,    d5},    [r0],   r1\n        VLD1.8          {d6,    d7},    [r0],   r1\n      
  VABAL.U8                q0,     d24,    d4\n        VABAL.U8                q1,     d25,    d5\n        VABAL.U8                q0,     d26,    d6\n        VABAL.U8                q1,     d27,    d7\n        VLD1.8          {d4,    d5},    [r0],   r1\n        VLD1.8          {d6,    d7},    [r0],   r1\n        VABAL.U8                q0,     d28,    d4\n        VABAL.U8                q1,     d29,    d5\n        VABAL.U8                q0,     d30,    d6\n        VABAL.U8                q1,     d31,    d7\n        VLDMIA          r2!,    {q8-q15}\n        VPADDL.U16              q0,     q0\n        VPADDL.U16              q1,     q1\n        VPADDL.U32              q0,     q0\n        VPADDL.U32              q1,     q1\n        VADD.U64                d0,     d1\n        VADD.U64                d2,     d3\n        VTRN.32         d0,     d2\n        VSTMIA          r3!,    {d0}\n        VLD1.8          {d4,    d5},    [r0],   r1\n        VLD1.8          {d6,    d7},    [r0],   r1\n        VABDL.U8                q0,     d16,    d4\n        VABDL.U8                q1,     d17,    d5\n        VABAL.U8                q0,     d18,    d6\n        VABAL.U8                q1,     d19,    d7\n        VLD1.8          {d4,    d5},    [r0],   r1\n        VLD1.8          {d6,    d7},    [r0],   r1\n        VABAL.U8                q0,     d20,    d4\n        VABAL.U8                q1,     d21,    d5\n        VABAL.U8                q0,     d22,    d6\n        VABAL.U8                q1,     d23,    d7\n        VLD1.8          {d4,    d5},    [r0],   r1\n        VLD1.8          {d6,    d7},    [r0],   r1\n        VABAL.U8                q0,     d24,    d4\n        VABAL.U8                q1,     d25,    d5\n        VABAL.U8                q0,     d26,    d6\n        VABAL.U8                q1,     d27,    d7\n        VLD1.8          {d4,    d5},    [r0],   r1\n        VLD1.8          {d6,    d7},    [r0],   r1\n        VABAL.U8                q0,     d28,    d4\n        
VABAL.U8                q1,     d29,    d5\n        VABAL.U8                q0,     d30,    d6\n        VABAL.U8                q1,     d31,    d7\n        VPADDL.U16              q0,     q0\n        VPADDL.U16              q1,     q1\n        VPADDL.U32              q0,     q0\n        VPADDL.U32              q1,     q1\n        VADD.U64                d0,     d1\n        VADD.U64                d2,     d3\n        VTRN.32         d0,     d2\n        VSTMIA          r3!,    {d0}\n        LDMDB           r3,     {r0-r3}\n        ADD             r0,     r0,     r1\n        ADD             r0,     r0,     r2\n        ADD             r0,     r0,     r3\n        BX              lr\n        .size  h264e_sad_mb_unlaign_8x8_neon, .-h264e_sad_mb_unlaign_8x8_neon\n\n        .type  h264e_copy_8x8_neon, %function\nh264e_copy_8x8_neon:\n        VLDR.64         d0,     [r2,    #0*16]\n        VLDR.64         d1,     [r2,    #1*16]\n        VLDR.64         d2,     [r2,    #2*16]\n        VLDR.64         d3,     [r2,    #3*16]\n        VLDR.64         d4,     [r2,    #4*16]\n        VLDR.64         d5,     [r2,    #5*16]\n        VLDR.64         d6,     [r2,    #6*16]\n        VLDR.64         d7,     [r2,    #7*16]\n        VST1.32         {d0},   [r0:64],        r1\n        VST1.32         {d1},   [r0:64],        r1\n        VST1.32         {d2},   [r0:64],        r1\n        VST1.32         {d3},   [r0:64],        r1\n        VST1.32         {d4},   [r0:64],        r1\n        VST1.32         {d5},   [r0:64],        r1\n        VST1.32         {d6},   [r0:64],        r1\n        VST1.32         {d7},   [r0:64],        r1\n        BX              lr\n        .size  h264e_copy_8x8_neon, .-h264e_copy_8x8_neon\n\n        .type  h264e_copy_16x16_neon, %function\nh264e_copy_16x16_neon:\n        MOV             r12,    #4\nlocal_sad_1_2:\n        VLD2.32         {d0-d1},        [r2:64],        r3\n        VLD2.32         {d2-d3},        [r2:64],        r3\n        VLD2.32         
{d4-d5},        [r2:64],        r3\n        VLD2.32         {d6-d7},        [r2:64],        r3\n        SUBS            r12,    r12,    #1\n        VST2.32         {d0-d1},        [r0:64],        r1\n        VST2.32         {d2-d3},        [r0:64],        r1\n        VST2.32         {d4-d5},        [r0:64],        r1\n        VST2.32         {d6-d7},        [r0:64],        r1\n        BNE             local_sad_1_2\n        BX              lr\n        .size  h264e_copy_16x16_neon, .-h264e_copy_16x16_neon\n\n        .type  h264e_copy_borders_neon, %function\nh264e_copy_borders_neon:\n        PUSH            {r4-r12,        lr}\n        ADD             r4,     r1,     r3,     lsl #1\n        MUL             r5,     r3,     r4\n        MLA             r6,     r2,     r4,     r0\n        SUB             r8,     r1,     #4\n        MOV             lr,     r5\n        ADD             r12,    lr,     #4\n        SUB             r7,     r6,     r4\n        SUB             r5,     r0,     r5\n        ADD             r5,     r5,     r8\n        ADD             r6,     r6,     r8\nlocal_sad_2_1:\n        LDR             r10,    [r0,    r8]\n        LDR             r11,    [r7,    r8]\n        MOV             r9,     r3\nlocal_sad_1_3:\n        SUBS            r9,     r9,     #1\n        STR             r10,    [r5],   r4\n        STR             r11,    [r6],   r4\n        BGT             local_sad_1_3\n        SUBS            r8,     r8,     #4\n        SUB             r5,     r5,     r12\n        SUB             r6,     r6,     r12\n        BGE             local_sad_2_1\n        SUB             r0,     r0,     lr\n        SUB             r5,     r0,     r3\n        ADD             r6,     r0,     r1\n        SUB             r7,     r6,     #1\n        ADD             r9,     r2,     r3,     lsl #1\n        LDR             r1,     =0x1010101\n        RSB             r12,    r3,     r4,     lsl #1\nlocal_sad_2_2:\n        LDRB            lr,     [r0,    r4]\n        LDRB       
     r2,     [r7,    r4]\n        LDRB            r10,    [r0],   r4,     lsl #1\n        LDRB            r11,    [r7],   r4,     lsl #1\n        SUB             r8,     r3,     #4\n        MUL             lr,     lr,     r1\n        MUL             r2,     r2,     r1\n        MUL             r10,    r10,    r1\n        MUL             r11,    r11,    r1\nlocal_sad_1_4:\n        SUBS            r8,     r8,     #4\n        STR             lr,     [r5,    r4]\n        STR             r2,     [r6,    r4]\n        STR             r10,    [r5],   #4\n        STR             r11,    [r6],   #4\n        BGE             local_sad_1_4\n        SUBS            r9,     r9,     #2\n        ADD             r5,     r5,     r12\n        ADD             r6,     r6,     r12\n        BGT             local_sad_2_2\n        POP             {r4-r12,        pc}\n        .size  h264e_copy_borders_neon, .-h264e_copy_borders_neon\n\n        .global         h264e_sad_mb_unlaign_8x8_neon\n        .global         h264e_sad_mb_unlaign_wh_neon\n        .global         h264e_copy_borders_neon\n        .global         h264e_copy_8x8_neon\n        .global         h264e_copy_16x16_neon\n"
  },
  {
    "path": "asm/neon/h264e_transform_neon.s",
    "content": "        .arm\n        .text\n        .align 2\n\n        .type  hadamar4_2d_neon, %function\nhadamar4_2d_neon:\n        VLD4.16         {d0,    d1,     d2,     d3},    [r0]\n        VADD.S16                q2,     q0,     q1\n        VSUB.S16                q3,     q0,     q1\n        VSWP            d5,     d6\n        VADD.S16                q0,     q2,     q3\n        VSUB.S16                q1,     q2,     q3\n        VSWP            d2,     d3\n        VTRN.S16                d0,     d1\n        VTRN.S16                d2,     d3\n        VTRN.S32                q0,     q1\n        VADD.S16                q2,     q0,     q1\n        VSUB.S16                q3,     q0,     q1\n        VSWP            d5,     d6\n        VADD.S16                q0,     q2,     q3\n        VSUB.S16                q1,     q2,     q3\n        VSWP            d2,     d3\n        VSTMIA          r0,     {q0-q1}\n        BX              lr\n        .size  hadamar4_2d_neon, .-hadamar4_2d_neon\n\n        .type  hadamar2_2d_neon, %function\nhadamar2_2d_neon:\n        LDMIA           r0,     {r1,    r2}\n        SADDSUBX                r1,     r1,     r1\n        SADDSUBX                r2,     r2,     r2\n        SSUB16          r3,     r1,     r2\n        SADD16          r2,     r1,     r2\n        MOV             r2,     r2,     ror #16\n        MOV             r3,     r3,     ror #16\n        STMIA           r0,     {r2,    r3}\n        BX              lr\n        .size  hadamar2_2d_neon, .-hadamar2_2d_neon\n\n        .type  h264e_quant_luma_dc_neon, %function\nh264e_quant_luma_dc_neon:\n        PUSH            {r4-r6, lr}\n        SUB             sp,     sp,     #0x28\n        MOV             r6,     r1\n        MOV             r4,     r2\n        MOV             r5,     r0\n        SUB             r0,     r5,     #16*2\n        BL              hadamar4_2d_neon\n        MOV             r3,     #0x20000\n        STR             r3,     [sp,    #0]\n        LDRSH        
   r2,     [r4,    #0]\n        MOV             r3,     #0x10\n        MOV             r1,     r6\n        SUB             r0,     r5,     #16*2\n        BL              quant_dc\n        SUB             r0,     r5,     #16*2\n        BL              hadamar4_2d_neon\n        LDRH            r0,     [r4,    #2]\n        MOV             r3,     #0x10\n        SUB             r1,     r5,     #16*2\n        MOV             r2,     r0,     lsr #2\n        MOV             r0,     r5\n        BL              dequant_dc\n        ADD             sp,     sp,     #0x28\n        POP             {r4-r6, pc}\n        .size  h264e_quant_luma_dc_neon, .-h264e_quant_luma_dc_neon\n\n        .type  h264e_quant_chroma_dc_neon, %function\nh264e_quant_chroma_dc_neon:\n        PUSH            {r3-r7, lr}\n        MOV             r6,     r1\n        MOV             r4,     r2\n        MOV             r5,     r0\n        SUB             r0,     r5,     #16*2\n        BL              hadamar2_2d_neon\n        LDR             r3,     =0x0000aaaa\n        MOV             r1,     r6\n        STR             r3,     [sp,    #0]\n        LDRH            r0,     [r4,    #0]\n        MOV             r3,     #4\n        MOV             r2,     r0,     lsl #17\n        MOV             r2,     r2,     asr #16\n        SUB             r0,     r5,     #16*2\n        BL              quant_dc\n        SUB             r0,     r5,     #16*2\n        BL              hadamar2_2d_neon\n        LDRH            r0,     [r4,    #2]\n        MOV             r3,     #4\n        SUB             r1,     r5,     #16*2\n        MOV             r2,     r0,     lsr #1\n        MOV             r0,     r5\n        BL              dequant_dc\n        SUB             r1,     r5,     #16*2\n        LDMIA           r1,     {r2,    r3}\n        ORRS            r0,     r2,     r3\n        MOVNE           r0,     #1\n        POP             {r3-r7, pc}\n        .size  h264e_quant_chroma_dc_neon, .-h264e_quant_chroma_dc_neon\n\n 
       .type  is_zero4_neon, %function\nis_zero4_neon:\n        PUSH            {r4-r6, lr}\n        MOV             r4,     r0\n        MOV             r5,     r1\n        MOV             r6,     r2\n        ADD             r0,     r0,     #(0+16*2)\n        BL              is_zero_neon\n        POPNE           {r4-r6, pc}\n        MOV             r2,     r6\n        MOV             r1,     r5\n        ADD             r0,     r4,     #(0+16*2)+((0+16*2)+16*2)\n        BL              is_zero_neon\n        POPNE           {r4-r6, pc}\n        MOV             r2,     r6\n        MOV             r1,     r5\n        ADD             r0,     r4,     #(0+16*2)+4*((0+16*2)+16*2)\n        BL              is_zero_neon\n        POPNE           {r4-r6, pc}\n        MOV             r2,     r6\n        MOV             r1,     r5\n        ADD             r0,     r4,     #(0+16*2)+5*((0+16*2)+16*2)\n        BL              is_zero_neon\n        POP             {r4-r6, pc}\n        .size  is_zero4_neon, .-is_zero4_neon\n\n        .type  h264e_transform_sub_quant_dequant_neon, %function\nh264e_transform_sub_quant_dequant_neon:\n        PUSH            {r0-r12,        lr}\n        MOV             r6,     r1\n        MOV             r5,     r0\n        MOV             r8,     r3,     asr #1\n        LDR             r1,     [sp,    #8]\n        MOV             r0,     r3,     asr #1\n        LDR             r4,     [sp,    #0x38]\n        MOV             r9,     r3\n        MOV             r7,     r8\n        SUB             r10,    r1,     r0\n        RSB             r11,    r0,     #0x10\nl0.660:\n        LDR             r2,     [sp,    #8]\n        ADD             r3,     r4,     #0x20\n        MOV             r1,     r6\n        MOV             r0,     r5\n        BL              fwdtransformresidual4x42_neon\n        SUBS            r7,     r7,     #1\n        ADD             r5,     r5,     #4\n        ADD             r6,     r6,     #4\n        ADD             r4,     r4,     
#((0+16*2)+16*2)\n        BNE             l0.660\n        SUBS            r8,     r8,     #1\n        MOV             r7,     r9,     asr #1\n        ADD             r5,     r5,     r10,    lsl #2\n        ADD             r6,     r6,     r11,    lsl #2\n        BNE             l0.660\n        MOVS            r7,     r9,     lsr #1\n        BCC             local_transform_10_0\n        MUL             r7,     r7,     r7\n        LDR             r5,     [sp,    #0x38]\n        SUB             r0,     r5,     #16*2\n        ADD             r1,     r5,     #(0+16*2)\nlocal_transform_1_0:\n        LDRH            r2,     [r1],   #((0+16*2)+16*2)\n        SUBS            r7,     r7,     #1\n        STRH            r2,     [r0],   #2\n        BNE             local_transform_1_0\nlocal_transform_10_0:\n        ADD             r3,     sp,     #0x38\n        MOV             r1,     r9\n        LDMIA           r3,     {r0,    r2}\n        BL              zero_smallq_neon\n        ADD             r4,     sp,     #0x38\n        MOV             r3,     r0\n        MOV             r1,     r9\n        LDMIA           r4,     {r0,    r2}\n        ADD             sp,     sp,     #0x10\n        POP             {r4-r12,        lr}\n        B               quantize_neon\n        .size  h264e_transform_sub_quant_dequant_neon, .-h264e_transform_sub_quant_dequant_neon\n\n        .type  h264e_transform_add_neon, %function\nh264e_transform_add_neon:\n        LDR             r12,    [sp]\n        SUB             r12,    r12,    r12,    lsl #16\n        ADD             r3,     r3,     #(0+16*2)\n        PUSH            {r0-r12,        lr}\nlocal_transform_1_1:\n        LDR             r12,    [sp,    #0+4+4+4+4+4*8+4+4+4]\n        MOV             lr,     #0\n        MOVS            r12,    r12,    lsl #1\n        STR             r12,    [sp,    #0+4+4+4+4+4*8+4+4+4]\n        BCC             copy_block\n        VLD1.16         {d0,    d1,     d2,     d3},    [r3]\n        ADD             r3,   
  r3,     #((0+16*2)+16*2)\n        VTRN.16         d0,     d1\n        VTRN.16         d2,     d3\n        VTRN.32         q0,     q1\n        VADD.S16                d4,     d0,     d2\n        VSUB.S16                d5,     d0,     d2\n        VSHR.S16                d31,    d1,     #1\n        VSHR.S16                d30,    d3,     #1\n        VSUB.S16                d6,     d31,    d3\n        VADD.S16                d7,     d1,     d30\n        VADD.S16                d0,     d4,     d7\n        VADD.S16                d1,     d5,     d6\n        VSUB.S16                d2,     d5,     d6\n        VSUB.S16                d3,     d4,     d7\n        VTRN.16         d0,     d1\n        VTRN.16         d2,     d3\n        VTRN.32         q0,     q1\n        VADD.S16                d4,     d0,     d2\n        VSUB.S16                d5,     d0,     d2\n        VSHR.S16                d31,    d1,     #1\n        VSHR.S16                d30,    d3,     #1\n        VSUB.S16                d6,     d31,    d3\n        VADD.S16                d7,     d1,     d30\n        VADD.S16                d0,     d4,     d7\n        VADD.S16                d1,     d5,     d6\n        VSUB.S16                d2,     d5,     d6\n        VSUB.S16                d3,     d4,     d7\n        LDR             r4,     [r2],   #16\n        LDR             r5,     [r2],   #16\n        VMOV            d20,    r4,     r5\n        LDR             r4,     [r2],   #16\n        LDR             r5,     [r2],   #4-16*3\n        VMOV            d21,    r4,     r5\n        VSHLL.U8                q2,     d20,    #6\n        VADD.S16                q0,     q0,     q2\n        VSHLL.U8                q3,     d21,    #6\n        VADD.S16                q1,     q1,     q3\n        VQRSHRUN.S16            d0,     q0,     #6\n        VQRSHRUN.S16            d1,     q1,     #6\n        VMOV            r4,     r5,     d0\n        STR             r4,     [r0],   r1\n        STR             r5,     [r0],   
r1\n        VMOV            r4,     r5,     d1\n        STR             r4,     [r0],   r1\n        STR             r5,     [r0],   r1\ncopy_block_ret:\n        LDR             lr,     [sp,    #0+4+4+4+4+4*8]\n        SUB             r0,     r0,     r1,     lsl #2\n        ADD             r0,     r0,     #4\n        ADDS            lr,     lr,     #0x10000\n        STRMI           lr,     [sp,    #0+4+4+4+4+4*8]\n        BMI             local_transform_1_1\n        SUBS            lr,     lr,     #1\n        POPEQ           {r0-r12,        pc}\n        LDR             r4,     [sp,    #0+4+4+4+4+4*8+4+4]\n        SUB             lr,     lr,     r4,     lsl #16\n        STR             lr,     [sp,    #0+4+4+4+4+4*8]\n        ADD             r0,     r0,     r1,     lsl #2\n        SUB             r0,     r0,     r4,     lsl #2\n        ADD             r2,     r2,     #16*4\n        SUB             r2,     r2,     r4,     lsl #2\n        B               local_transform_1_1\ncopy_block:\n        LDR             r4,     [r2],   #16\n        LDR             r5,     [r2],   #16\n        LDR             r6,     [r2],   #16\n        LDR             r7,     [r2],   #4-16*3\n        ADD             r3,     r3,     #((0+16*2)+16*2)\n        STR             r4,     [r0],   r1\n        STR             r5,     [r0],   r1\n        STR             r6,     [r0],   r1\n        STR             r7,     [r0],   r1\n        B               copy_block_ret\ndequant_dc:\n        PUSH            {lr}\n        ADD             r0,     r0,     #(0+16*2)\nlocal_transform_1_2:\n        LDR             lr,     [r1],   #4\n        SUBS            r3,     r3,     #2\n        SMULBB          r12,    r2,     lr\n        SMULBT          lr,     r2,     lr\n        STRH            r12,    [r0],   #((0+16*2)+16*2)\n        STRH            lr,     [r0],   #((0+16*2)+16*2)\n        BNE             local_transform_1_2\n        POP             {pc}\nquant_dc:\n        PUSH            {r4-r6, lr}\n        CMP 
            r3,     #4\n        LDR             r5,     [sp,    #0x10]\n        LDRNE           r12,    =iscan16\n        LDREQ           r12,    =iscan4\n        RSB             r6,     r5,     #0x40000\nlocal_transform_1_3:\n        LDRSH           lr,     [r0]\n        CMP             lr,     #0\n        MOVGE           r4,     r5\n        MOVLT           r4,     r6\n        MLA             lr,     r2,     lr,     r4\n        MOV             lr,     lr,     asr #18\n        STRH            lr,     [r0],   #2\n        LDRB            r4,     [r12],  #1\n        SUBS            r3,     r3,     #1\n        ADD             r4,     r1,     r4,     lsl #1\n        STRH            lr,     [r4,    #0]\n        BNE             local_transform_1_3\n        POP             {r4-r6, pc}\n        .size  h264e_transform_add_neon, .-h264e_transform_add_neon\n\n        .type  fwdtransformresidual4x42_neon, %function\nfwdtransformresidual4x42_neon:\n        PUSH            {lr}\n        LDR             r12,    [r0],   r2\n        LDR             lr,     [r0],   r2\n        VMOV            d16,    r12,    lr\n        LDR             r12,    [r0],   r2\n        LDR             lr,     [r0],   r2\n        VMOV            d17,    r12,    lr\n        LDR             r12,    [r1],   #16\n        LDR             lr,     [r1],   #16\n        VMOV            d20,    r12,    lr\n        LDR             r12,    [r1],   #16\n        LDR             lr,     [r1],   #16\n        VMOV            d21,    r12,    lr\n        VSUBL.U8                q0,     d16,    d20\n        VSUBL.U8                q1,     d17,    d21\n        VTRN.16         d0,     d1\n        VTRN.16         d2,     d3\n        VTRN.32         q0,     q1\n        VADD.S16                d4,     d0,     d3\n        VSUB.S16                d5,     d0,     d3\n        VADD.S16                d6,     d1,     d2\n        VSUB.S16                d7,     d1,     d2\n        VADD.S16                q0,     q2,     q3\n        
VADD.S16                d1,     d1,     d5\n        VSUB.S16                q1,     q2,     q3\n        VSUB.S16                d3,     d3,     d7\n        VTRN.16         d0,     d1\n        VTRN.16         d2,     d3\n        VTRN.32         q0,     q1\n        VADD.S16                d4,     d0,     d3\n        VSUB.S16                d5,     d0,     d3\n        VADD.S16                d6,     d1,     d2\n        VSUB.S16                d7,     d1,     d2\n        VADD.S16                q0,     q2,     q3\n        VADD.S16                d1,     d1,     d5\n        VSUB.S16                q1,     q2,     q3\n        VSUB.S16                d3,     d3,     d7\n        VST1.16         {q0,    q1},    [r3]\n        POP             {pc}\n        .size  fwdtransformresidual4x42_neon, .-fwdtransformresidual4x42_neon\n\n        .type  is_zero_neon, %function\nis_zero_neon:\n        VLD1.16         {d0-d3},        [r0]\n        VABS.S16                q0,     q0\n        VABS.S16                q1,     q1\n        VCGT.U16                q0,     q0,     q15\n        VCGT.U16                q1,     q1,     q15\n        VBIC            d0,     d0,     d29\n        VORR            q0,     q0,     q1\n        VORR            d0,     d0,     d1\n        VMOV            r0,     r1,     d0\n        ORRS            r0,     r0,     r1\n        BX              lr\n        .size  is_zero_neon, .-is_zero_neon\n\n        .type  zero_smallq_neon, %function\nzero_smallq_neon:\n        PUSH            {r4-r12,        lr}\n        TST             r1,     #1\n        VMOV.I64                d29,    #0xffff\n        BNE             local_transform_10_1\n        VMOV.I64                d29,    #0\nlocal_transform_10_1:\n        CMP             r1,     #8\n        MOV             r8,     r0\n        MOV             r6,     r1\n        MOV             r0,     r1,     asr #1\n        CMPNE           r6,     #5\n        MOV             r7,     r2\n        ADD             r2,     r2,     
#0x14\n        VLD1.16         {q15},  [r2]\n        MOV             r4,     #0\n        MULEQ           r9,     r0,     r0\n        AND             r10,    r1,     #1\n        MOVEQ           r5,     #0\n        MOVEQ           r11,    #1\n        BNE             l0.1964\n        MOV             r12,    #((0+16*2)+16*2)\n        MLA             r8,     r12,    r9,     r8\n        ADD             r8,     r8,     #(0+16*2)\nlocal_transform_1_4:\n        SUB             r8,     r8,     #(((0+16*2)+16*2))\n        VLD1.16         {d0-d3},        [r8]\n        VABS.S16                q0,     q0\n        VABS.S16                q1,     q1\n        VCGT.U16                q0,     q0,     q15\n        VCGT.U16                q1,     q1,     q15\n        VBIC            d0,     d0,     d29\n        VORR            q0,     q0,     q1\n        VORR            d0,     d0,     d1\n        VMOV            r0,     r1,     d0\n        ORRS            r0,     r0,     r1\n        ADD             r4,     r4,     r4\n        ORREQ           r4,     r4,     #1\n        SUBS            r9,     r9,     #1\n        BNE             local_transform_1_4\n        SUB             r8,     r8,     #(0+16*2)\n        ADD             r2,     r2,     #0x10\n        VLD1.16         {q15},  [r2]\n        CMP             r6,     #8\n        BNE             l0.1964\n        MOV             r0,     #0x33\n        BICS            r0,     r0,     r4\n        BEQ             l0.1856\n        ADD             r2,     r7,     #0x24\n        MOV             r1,     r10\n        MOV             r0,     r8\n        BL              is_zero4_neon\n        ORREQ           r4,     r4,     #0x33\nl0.1856:\n        MOV             r0,     #0xcc\n        BICS            r0,     r0,     r4\n        BEQ             l0.1892\n        ADD             r2,     r7,     #0x24\n        MOV             r1,     r10\n        ADD             r0,     r8,     #2*((0+16*2)+16*2)\n        BL              is_zero4_neon\n        ORREQ    
       r4,     r4,     #0xcc\nl0.1892:\n        MOV             r0,     #0x3300\n        BICS            r0,     r0,     r4\n        BEQ             l0.1928\n        ADD             r2,     r7,     #0x24\n        MOV             r1,     r10\n        ADD             r0,     r8,     #8*((0+16*2)+16*2)\n        BL              is_zero4_neon\n        ORREQ           r4,     r4,     #0x3300\nl0.1928:\n        MOV             r0,     #0xcc00\n        BICS            r0,     r0,     r4\n        BEQ             l0.1964\n        ADD             r2,     r7,     #0x24\n        MOV             r1,     r10\n        ADD             r0,     r8,     #10*((0+16*2)+16*2)\n        BL              is_zero4_neon\n        ORREQ           r4,     r4,     #0xcc00\nl0.1964:\n        MOV             r0,     r4\n        POP             {r4-r12,        pc}\n        .size  zero_smallq_neon, .-zero_smallq_neon\n\n        .type  quantize_neon, %function\nquantize_neon:\n        PUSH            {r3-r11,        lr}\n        AND             r4,     r1,     #1\n        MOV             r5,     r1,     asr #1\n        MOV             r7,     #0\n        MOV             lr,     r5\n        STR             r4,     [sp,    #0]\nlocal_transform_1_5:\n        TST             r3,     #1\n        MOV             r6,     #0\n        BEQ             nonzero\n        VMOV.U8         q0,     #0\n        VMOV.U8         q1,     #0\n        VST1.16         {q0,    q1},    [r0]\nqloop_next:\n        CMP             r6,     #0\n        MOV             r7,     r7,     lsl #1\n        ORRNE           r7,     r7,     #1\n        SUBS            r5,     r5,     #1\n        MOVEQ           r5,     r1,     asr #1\n        SUBEQS          lr,     lr,     #1\n        MOV             r3,     r3,     asr #1\n        ADD             r0,     r0,     #((0+16*2)+16*2)\n        MOVEQ           r0,     r7\n        BNE             local_transform_1_5\n        POP             {r3-r11,        pc}\nnonzero:\n        LDR             r4, 
    [sp,    #0]\n        LDRH            r12,    [r2,    #0xc]\n        CMP             r4,     #0\n        ADD             r4,     r0,     #(0+16*2)\n        VLD1.16         {q0,    q1},    [r4]\n        VDUP.16         q15,    r12\n        VCLT.S16                q8,     q0,     #0\n        VCLT.S16                q9,     q1,     #0\n        VEOR            q8,     q15,    q8\n        VEOR            q9,     q15,    q9\n        LDR             r12,    [r2,    #4]\n        VDUP.16         d4,     r12\n        VDUP.16         d6,     r12\n        MOV             r12,    r12,    asr #16\n        VDUP.16         d5,     r12\n        VDUP.16         d7,     r12\n        LDR             r12,    [r2,    #0]\n        VMOV.16         d4[0],  r12\n        VMOV.16         d4[2],  r12\n        MOV             r12,    r12,    asr #16\n        VMOV.16         d5[0],  r12\n        VMOV.16         d5[2],  r12\n        LDR             r12,    [r2,    #8]\n        VMOV.16         d6[1],  r12\n        VMOV.16         d6[3],  r12\n        MOV             r12,    r12,    asr #16\n        VMOV.16         d7[1],  r12\n        VMOV.16         d7[3],  r12\n        VMULL.S16               q10,    d0,     d4\n        VADDW.U16               q10,    d16\n        VQSHRN.S32              d22,    q10,    #16\n        VMUL.S16                d26,    d22,    d5\n        VMULL.S16               q10,    d1,     d6\n        VADDW.U16               q10,    d17\n        VQSHRN.S32              d23,    q10,    #16\n        VMUL.S16                d27,    d23,    d7\n        VMULL.S16               q10,    d2,     d4\n        VADDW.U16               q10,    d18\n        VQSHRN.S32              d24,    q10,    #16\n        VMUL.S16                d28,    d24,    d5\n        VMULL.S16               q10,    d3,     d6\n        VADDW.U16               q10,    d19\n        VQSHRN.S32              d25,    q10,    #16\n        VMUL.S16                d29,    d25,    d7\n        ADD             r4,     r0,     
#(0+16*2)\n        LDRNEH          r12,    [r4]\n        VST1.16         {d26-d29},      [r4]\n        STRNEH          r12,    [r4]\n        LDR             r4,     [sp,    #0]\n        CMP             r4,     #0\n        LDR             r12,    =iscan16_neon\n        VLD1.8          {q8,    q9},    [r12]\n        VTBL.8          d0,     {d22-d25},      d16\n        VTBL.8          d1,     {d22-d25},      d17\n        VTBL.8          d2,     {d22-d25},      d18\n        VTBL.8          d3,     {d22-d25},      d19\n        LDRNEH          r4,     [r0]\n        VST1.16         {d0-d3},        [r0]\n        STRNEH          r4,     [r0]\n        LDR             r12,    =imask16_neon\n        VLD1.8          {q8,    q9},    [r12]\n        VCEQ.I16                q0,     q0,     #0\n        VCEQ.I16                q1,     q1,     #0\n        VAND            q0,     q0,     q8\n        VAND            q1,     q1,     q9\n        VORR            q0,     q0,     q1\n        VORR            d0,     d0,     d1\n        VPADD.U16               d0,     d0,     d0\n        VPADD.U16               d0,     d0,     d0\n        VMOV.U16                r12,    d0[0]\n        MVN             r6,     r12,    lsl #16\n        MOV             r6,     r6,     lsr #16\n        BICNE           r6,     r6,     #1\n        B               qloop_next\n        .size  quantize_neon, .-quantize_neon\n\n        .section        .rodata\n        .align 2\niscan4:\n        .byte           0x00,   0x01,   0x02,   0x03\niscan16:\n        .byte           0x00,   0x01,   0x05,   0x06\n        .byte           0x02,   0x04,   0x07,   0x0c\n        .byte           0x03,   0x08,   0x0b,   0x0d\n        .byte           0x09,   0x0a,   0x0e,   0x0f\nimask16_neon:\n        .short          0x0001, 0x0002, 0x0004, 0x0008\n        .short          0x0010, 0x0020, 0x0040, 0x0080\n        .short          0x0100, 0x0200, 0x0400, 0x0800\n        .short          0x1000, 0x2000, 0x4000, 0x8000\niscan16_neon:\n        
.byte           0x00,   0x01,   0x02,   0x03,   0x08,   0x09,   0x10,   0x11\n        .byte           0x0a,   0x0b,   0x04,   0x05,   0x06,   0x07,   0x0c,   0x0d\n        .byte           0x12,   0x13,   0x18,   0x19,   0x1a,   0x1b,   0x14,   0x15\n        .byte           0x0e,   0x0f,   0x16,   0x17,   0x1c,   0x1d,   0x1e,   0x1f\n        .global         h264e_quant_luma_dc_neon\n        .global         h264e_quant_chroma_dc_neon\n        .global         h264e_transform_sub_quant_dequant_neon\n        .global         h264e_transform_add_neon\n"
  },
  {
    "path": "minih264e.h",
    "content": "#ifndef MINIH264_H\n#define MINIH264_H\n/*\n    https://github.com/lieff/minih264\n    To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide.\n    This software is distributed without any warranty.\n    See <http://creativecommons.org/publicdomain/zero/1.0/>.\n*/\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#ifndef H264E_SVC_API\n#   define H264E_SVC_API 1\n#endif\n\n#ifndef H264E_MAX_THREADS\n#   define H264E_MAX_THREADS 4\n#endif\n\n/**\n*   API return error codes\n*/\n#define H264E_STATUS_SUCCESS                0\n#define H264E_STATUS_BAD_ARGUMENT           1\n#define H264E_STATUS_BAD_PARAMETER          2\n#define H264E_STATUS_BAD_FRAME_TYPE         3\n#define H264E_STATUS_SIZE_NOT_MULTIPLE_16   4\n#define H264E_STATUS_SIZE_NOT_MULTIPLE_2    5\n#define H264E_STATUS_BAD_LUMA_ALIGN         6\n#define H264E_STATUS_BAD_LUMA_STRIDE        7\n#define H264E_STATUS_BAD_CHROMA_ALIGN       8\n#define H264E_STATUS_BAD_CHROMA_STRIDE      9\n\n/**\n*   Frame type definitions\n*   - Sequence must start with key (IDR) frame.\n*   - P (Predicted) frames are most efficiently coded\n*   - Dropable frames may be safely removed from bitstream, and used\n*     for frame rate scalability\n*   - Golden and Recovery frames used for error recovery. 
These\n*     frames use \"long-term reference\" for prediction, and\n*     can be decoded even if the P-frame sequence is interrupted.\n*     They act similarly to a key frame, but are coded more efficiently.\n*\n*   Type        Refers to   Saved as long-term  Saved as short-term\n*   ---------------------------------------------------------------\n*   Key (IDR) : N/A         Yes                 Yes                |\n*   Golden    : long-term   Yes                 Yes                |\n*   Recovery  : long-term   No                  Yes                |\n*   P         : short-term  No                  Yes                |\n*   Droppable : short-term  No                  No                 |\n*                                                                  |\n*   Example sequence:        K   P   P   G   D   P   R   D   K     |\n*   long-term reference       1K  1K  1K  4G  4G  4G  4G  4G  9K   |\n*                             /         \\ /         \\         /    |\n*   coded frame             1K  2P  3P  4G  5D  6P  7R  8D  9K     |\n*                             \\ / \\ / \\   \\ /   / \\   \\ /     \\    |\n*   short-term reference      1K  2P  3P  4G  4G  6P  7R  7R  9K   |\n*\n*/\n#define H264E_FRAME_TYPE_DEFAULT    0       // Frame type set according to GOP size\n#define H264E_FRAME_TYPE_KEY        6       // Random access point: SPS+PPS+Intra frame\n#define H264E_FRAME_TYPE_I          5       // Intra frame: updates long & short-term reference\n#define H264E_FRAME_TYPE_GOLDEN     4       // Use and update long-term reference\n#define H264E_FRAME_TYPE_RECOVERY   3       // Use long-term reference, updates short-term reference\n#define H264E_FRAME_TYPE_P          2       // Use and update short-term reference\n#define H264E_FRAME_TYPE_DROPPABLE  1       // Use short-term reference, don't update anything\n#define H264E_FRAME_TYPE_CUSTOM     99      // Application specifies reference frame\n\n/**\n*   Speed preset index.\n*   Currently used values are 0, 1, 8 and 
9\n*/\n#define H264E_SPEED_SLOWEST         0       // All coding tools enabled, including denoise filter\n#define H264E_SPEED_BALANCED        5\n#define H264E_SPEED_FASTEST         10      // Minimum tools enabled\n\n/**\n*   Creation parameters\n*/\ntypedef struct H264E_create_param_tag\n{\n    // Frame width: must be multiple of 16\n    int width;\n\n    // Frame height: must be multiple of 16\n    int height;\n\n    // GOP size == key frame period\n    // If 0: no key frames generated except 1st frame (infinite GOP)\n    // If 1: Only intra-frames produced\n    int gop;\n\n    // Video Buffer Verifier size, bits\n    // If 0: VBV model is disabled\n    // Note that this value defines the Level.\n    int vbv_size_bytes;\n\n    // If set: transparent frames produced on VBV overflow\n    // If not set: VBV overflow ignored, producing a bitrate bigger than specified\n    int vbv_overflow_empty_frame_flag;\n\n    // If set: keep minimum bitrate using stuffing, preventing VBV underflow\n    // If not set: ignore VBV underflow, producing a bitrate smaller than specified\n    int vbv_underflow_stuffing_flag;\n\n    // If set: control bitrate at macroblock level (better bitrate precision)\n    // If not set: control bitrate at frame level (better quality)\n    int fine_rate_control_flag;\n\n    // If set: don't change input, but allocate an additional frame buffer\n    // If not set: use input as a scratch\n    int const_input_flag;\n\n    // If 0: golden, recovery, and custom frames are disabled\n    // If >0: Specifies the number of persistent frame buffers used\n    int max_long_term_reference_frames;\n\n    int enableNEON;\n\n    // If set: enable temporal noise suppression\n    int temporal_denoise_flag;\n\n    int sps_id;\n\n#if H264E_SVC_API\n    //          SVC extension\n    // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n    // Number of SVC layers:\n    // 1 = AVC\n    // 2 = SVC with 2 layers of spatial scalability\n    int num_layers;\n\n    // If set, SVC extension layer will 
use predictors from base layer\n    // (sometimes can slightly increase efficiency)\n    int inter_layer_pred_flag;\n#endif\n\n#if H264E_MAX_THREADS\n    //           Multi-thread extension\n    // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n    // Maximum threads, supported by the callback\n    int max_threads;\n\n    // Opaque token, passed to callback\n    void *token;\n\n    // Application-supplied callback function.\n    // This callback runs given jobs, by calling provided job_func(), passing\n    // job_data[i] to each one.\n    //\n    // The h264e_thread_pool_run() can be used here, example:\n    //\n    //      int max_threads = 4;\n    //      void *thread_pool = h264e_thread_pool_init(max_threads);\n    //\n    //      H264E_create_param_t par;\n    //      par.max_threads = max_threads;\n    //      par.token = thread_pool;\n    //      par.run_func_in_thread = h264e_thread_pool_run;\n    //\n    // The reason to use double callbacks is to avoid mixing portable and\n    // system-dependent code, and to avoid close() function in the encoder API.\n    //\n    void (*run_func_in_thread)(void *token, void (*job_func)(void*), void *job_data[], int njobs);\n#endif\n\n} H264E_create_param_t;\n\n/**\n*   Run-time parameters\n*/\ntypedef struct H264E_run_param_tag\n{\n    // Variable, indicating speed/quality tradeoff\n    // 0 means best quality\n    int encode_speed;\n\n    // Frame type override: one of H264E_FRAME_TYPE_* values\n    // if 0: GOP pattern defined by create_param::gop value\n    int frame_type;\n\n    // Used only if frame_type == H264E_FRAME_TYPE_CUSTOM\n    // Reference long-term frame index [1..max_long_term_reference_frames]\n    // 0 = use previous frame (short-term)\n    // -1 = IDR frame, kill all long-term frames\n    int long_term_idx_use;\n\n    // Used only if frame_type == H264E_FRAME_TYPE_CUSTOM\n    // Store decoded frame in long-term buffer with given index in the\n    // range [1..max_long_term_reference_frames]\n    // 0 = 
save to short-term buffer\n    // -1 = Don't save frame (droppable)\n    int long_term_idx_update;\n\n    // Target frame size. Typically = bitrate/framerate\n    int desired_frame_bytes;\n\n    // Minimum quantizer value, 10 indicates good quality\n    // range: [10; qp_max]\n    int qp_min;\n\n    // Maximum quantizer value, 51 indicates very bad quality\n    // range: [qp_min; 51]\n    int qp_max;\n\n    // Desired NALU size. A NALU is produced as soon as its size exceeds this value\n    // if 0: the frame is coded as a single NALU\n    int desired_nalu_bytes;\n\n    // Optional NALU notification callback, called by the encoder\n    // as soon as NALU encoding is complete.\n    void (*nalu_callback)(\n        const unsigned char *nalu_data, // Coded NALU data, w/o start code\n        int sizeof_nalu_data,           // Size of NALU data\n        void *token                     // optional transparent token\n        );\n\n    // token to pass to NALU callback\n    void *nalu_callback_token;\n\n} H264E_run_param_t;\n\n/**\n*    Planar YUV420 descriptor\n*/\ntypedef struct H264E_io_yuv_tag\n{\n    // Pointers to 3 pixel planes of YUV image\n    unsigned char *yuv[3];\n    // Stride for each image plane\n    int stride[3];\n} H264E_io_yuv_t;\n\ntypedef struct H264E_persist_tag H264E_persist_t;\ntypedef struct H264E_scratch_tag H264E_scratch_t;\n\n/**\n*   Return persistent and scratch memory requirements\n*   for given encoding options.\n*\n*   Return value:\n*       - zero in case of success\n*       - error code (H264E_STATUS_*) on failure\n*\n*   example:\n*\n*   int sizeof_persist, sizeof_scratch, error;\n*   H264E_persist_t * enc;\n*   H264E_scratch_t * scratch;\n*\n*   error = H264E_sizeof(param, &sizeof_persist, &sizeof_scratch);\n*   if (!error)\n*   {\n*       enc     = malloc(sizeof_persist);\n*       scratch = malloc(sizeof_scratch);\n*       error = H264E_init(enc, param);\n*   }\n*/\nint H264E_sizeof(\n    const H264E_create_param_t *param,  ///< Encoder 
creation parameters\n    int *sizeof_persist,                ///< [OUT] Size of persistent RAM\n    int *sizeof_scratch                 ///< [OUT] Size of scratch RAM\n);\n\n/**\n*   Initialize encoding session\n*\n*   Return value:\n*       - zero in case of success\n*       - error code (H264E_STATUS_*) on failure\n*/\nint H264E_init(\n    H264E_persist_t *enc,               ///< Encoder object\n    const H264E_create_param_t *param   ///< Encoder creation parameters\n);\n\n/**\n*   Encode single video frame\n*\n*   Output buffer is in the scratch RAM\n*\n*   Return value:\n*       - zero in case of success\n*       - error code (H264E_STATUS_*) on failure\n*/\nint H264E_encode(\n    H264E_persist_t *enc,               ///< Encoder object\n    H264E_scratch_t *scratch,           ///< Scratch memory\n    const H264E_run_param_t *run_param, ///< run-time parameters\n    H264E_io_yuv_t *frame,              ///< Input video frame\n    unsigned char **coded_data,         ///< [OUT] Pointer to coded data\n    int *sizeof_coded_data              ///< [OUT] Size of coded data\n);\n\n/**\n*   This is a \"hack\" function to set internal rate-control state\n*   Note that the encoder allows the application to completely override rate control,\n*   so this function should be used only by lazy coders who just want to change\n*   the VBV size without implementing custom rate control.\n*\n*   Note that the H.264 level is defined by the VBV size at initialization.\n*/\nvoid H264E_set_vbv_state(\n    H264E_persist_t *enc,               ///< Encoder object\n    int vbv_size_bytes,                 ///< New VBV size\n    int vbv_fullness_bytes              ///< New VBV fullness, -1 = no change\n);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif //MINIH264_H\n\n#if defined(MINIH264_IMPLEMENTATION) && !defined(MINIH264_IMPLEMENTATION_GUARD)\n#define MINIH264_IMPLEMENTATION_GUARD\n\n#include <assert.h>\n#include <stddef.h>\n#include <stdint.h>\n#include <stdio.h>\n#include 
<string.h>\n\n/************************************************************************/\n/*      Build configuration                                             */\n/************************************************************************/\n#ifndef H264E_ENABLE_DENOISE\n#define H264E_ENABLE_DENOISE 1 // Built-in noise suppressor\n#endif\n\n#ifndef MAX_LONG_TERM_FRAMES\n#define MAX_LONG_TERM_FRAMES 8 // Max long-term frame count\n#endif\n\n#if !defined(MINIH264_ONLY_SIMD) && (defined(_M_X64) || defined(_M_ARM64) || defined(__x86_64__) || defined(__aarch64__))\n/* x64 always has SSE2, arm64 always has NEON, no need for generic code */\n#define MINIH264_ONLY_SIMD\n#endif /* SIMD checks... */\n\n#if (defined(_MSC_VER) && (defined(_M_IX86) || defined(_M_X64))) || ((defined(__i386__) || defined(__x86_64__)) && defined(__SSE2__))\n#define H264E_ENABLE_SSE2 1\n#if defined(_MSC_VER)\n#include <intrin.h>\n#else\n#include <emmintrin.h>\n#endif\n#elif defined(__ARM_NEON) || defined(__aarch64__)\n#define H264E_ENABLE_NEON 1\n#include <arm_neon.h>\n#else\n#ifdef MINIH264_ONLY_SIMD\n#error MINIH264_ONLY_SIMD used, but SSE/NEON not enabled\n#endif\n#endif\n\n#ifndef MINIH264_ONLY_SIMD\n#define H264E_ENABLE_PLAIN_C 1\n#endif\n\n#define H264E_CONFIGS_COUNT ((H264E_ENABLE_SSE2) + (H264E_ENABLE_PLAIN_C) + (H264E_ENABLE_NEON))\n\n#if defined(__ARMCC_VERSION) || defined(_WIN32) || defined(__EMSCRIPTEN__)\n#define __BYTE_ORDER 0\n#define __BIG_ENDIAN 1\n#elif defined(__linux__) || defined(__CYGWIN__)\n#include <endian.h>\n#elif defined(__APPLE__)\n#include <libkern/OSByteOrder.h>\n#define __BYTE_ORDER BYTE_ORDER\n#define __BIG_ENDIAN BIG_ENDIAN\n#elif defined(__OpenBSD__) || defined(__NetBSD__) || defined(__FreeBSD__) || defined(__DragonFly__)\n#include <sys/endian.h>\n#else\n#error platform not supported\n#endif\n\n#if defined(__aarch64__) && defined(__clang__)\n// uintptr_t broken with aarch64 clang on ubuntu 18\n#define uintptr_t unsigned long\n#endif\n#if defined(__arm__) && 
defined(__clang__)\n#include <arm_acle.h>\n#elif defined(__arm__) && defined(__GNUC__) && !defined(__ARMCC_VERSION)\nstatic inline unsigned int __usad8(unsigned int val1, unsigned int val2)\n{\n    unsigned int result;\n    __asm__ volatile (\"usad8 %0, %1, %2\\n\\t\"\n                      : \"=r\" (result)\n                      : \"r\" (val1), \"r\" (val2));\n    return result;\n}\n\nstatic inline unsigned int __usada8(unsigned int val1, unsigned int val2, unsigned int val3)\n{\n    unsigned int result;\n    __asm__ volatile (\"usada8 %0, %1, %2, %3\\n\\t\"\n                      : \"=r\" (result)\n                      : \"r\" (val1), \"r\" (val2), \"r\" (val3));\n    return result;\n}\n\nstatic inline unsigned int __sadd16(unsigned int val1, unsigned int val2)\n{\n    unsigned int result;\n    __asm__ volatile (\"sadd16 %0, %1, %2\\n\\t\"\n                      : \"=r\" (result)\n                      : \"r\" (val1), \"r\" (val2));\n    return result;\n}\n\nstatic inline unsigned int __ssub16(unsigned int val1, unsigned int val2)\n{\n    unsigned int result;\n    __asm__ volatile (\"ssub16 %0, %1, %2\\n\\t\"\n                      : \"=r\" (result)\n                      : \"r\" (val1), \"r\" (val2));\n    return result;\n}\n\nstatic inline unsigned int __clz(unsigned int val1)\n{\n    unsigned int result;\n    __asm__ volatile (\"clz %0, %1\\n\\t\"\n                      : \"=r\" (result)\n                      : \"r\" (val1));\n    return result;\n}\n#endif\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif  //__cplusplus\n\n#if defined(_MSC_VER) && _MSC_VER >= 1400\n#   define h264e_restrict __restrict\n#elif defined(__arm__)\n#   define h264e_restrict __restrict\n#else\n#   define h264e_restrict\n#endif\n#if defined(_MSC_VER)\n#   define ALIGN(n) __declspec(align(n))\n#   define ALIGN2(n)\n#else\n#   define ALIGN(n)\n#   define ALIGN2(n) __attribute__((aligned(n)))\n#endif\n\n#if __GNUC__ || __clang__\ntypedef int int_u __attribute__ ((__aligned__ 
(1)));\n#else\ntypedef int int_u;\n#endif\n\n#ifndef MAX\n#   define MAX(x, y) ((x) > (y) ? (x) : (y))\n#endif\n\n#ifndef MIN\n#   define MIN(x, y) ((x) < (y) ? (x) : (y))\n#endif\n\n#ifndef ABS\n#   define ABS(x)    ((x) >= 0 ? (x) : -(x))\n#endif\n\n#define IS_ALIGNED(p, n) (!((uintptr_t)(p) & (uintptr_t)((n) - 1)))\n\n// bit-stream\n#if __BYTE_ORDER == __BIG_ENDIAN\n#   define SWAP32(x) (uint32_t)(x)\n#else\n#ifdef _MSC_VER\n#   define SWAP32(x) _byteswap_ulong(x)\n#elif defined(__GNUC__) || defined(__clang__)\n#   define SWAP32(x) __builtin_bswap32(x)\n#else\n#   define SWAP32(x) (uint32_t)((((x) >> 24) & 0xFF) | (((x) >> 8) & 0xFF00) | (((x) << 8) & 0xFF0000) | ((x & 0xFF) << 24))\n#endif\n#endif\n\n#define BS_OPEN(bs) uint32_t cache = bs->cache; int shift = bs->shift; uint32_t *buf = bs->buf;\n#define BS_CLOSE(bs) bs->cache = cache; bs->shift = shift; bs->buf = buf;\n#define BS_PUT(n, val)      \\\nif ((shift -= n) < 0)       \\\n{                           \\\n    cache |= val >> -shift; \\\n    *buf++ = SWAP32(cache); \\\n    shift += 32;            \\\n    cache = 0;              \\\n}                           \\\ncache |= (uint32_t)val << shift;\n\n// Quantizer-dequantizer modes\n#define QDQ_MODE_INTRA_4   2       // intra 4x4\n#define QDQ_MODE_INTER     8       // inter\n#define QDQ_MODE_INTRA_16  (8 + 1) // intra 16x16\n#define QDQ_MODE_CHROMA    (4 + 1) // chroma\n\n// put the most frequently used bits in the lsb, so these can be used as look-up tables\n#define AVAIL_TR    8\n#define AVAIL_TL    4\n#define AVAIL_L     2\n#define AVAIL_T     1\n\ntypedef uint8_t     pix_t;\ntypedef uint32_t    bs_item_t;\n\n/**\n*   Output bitstream\n*/\ntypedef struct\n{\n    int         shift;  // bit position in the cache\n    uint32_t    cache;  // bit cache\n    bs_item_t    *buf;  // current position\n    bs_item_t  *origin; // initial position\n} bs_t;\n\n/**\n*   Tuple for motion vector, or height/width representation\n*/\ntypedef union\n{\n    struct\n    {\n        int16_t 
x;      // horizontal or width\n        int16_t y;      // vertical or height\n    } s;\n    int32_t u32;        // packed representation\n} point_t;\n\n/**\n*   Rectangle\n*/\ntypedef struct\n{\n    point_t tl;         // top-left corner\n    point_t br;         // bottom-right corner\n} rectangle_t;\n\n/**\n*   Quantized/dequantized representation for 4x4 block\n*/\ntypedef struct\n{\n    int16_t qv[16];     // quantized coefficient\n    int16_t dq[16];     // dequantized\n} quant_t;\n\n/**\n*   Scratch RAM, used only for current MB encoding\n*/\ntypedef struct H264E_scratch_tag\n{\n    pix_t mb_pix_inp[256];          // Input MB (cached)\n    pix_t mb_pix_store[4*256];      // Prediction variants\n\n    // Quantized/dequantized\n    int16_t dcy[16];                // Y DC\n    quant_t qy[16];                 // Y 16x4x4 blocks\n\n    int16_t dcu[16];                // U DC: 4 used + align\n    quant_t qu[4];                  // U 4x4x4 blocks\n\n    int16_t dcv[16];                // V DC: 4 used + align\n    quant_t qv[4];                  // V 4x4x4 blocks\n\n    // Quantized DC:\n    int16_t quant_dc[16];           // Y\n    int16_t quant_dc_u[4];          // U\n    int16_t quant_dc_v[4];          // V\n\n    uint16_t nz_mask;               // Bit flags for non-zero 4x4 blocks\n} scratch_t;\n\n/**\n*   Deblock filter frame context\n*/\ntypedef struct\n{\n    // Motion vectors for 4x4 MB internal sub-blocks, top and left border,\n    // 5x5 array without top-left cell:\n    //     T0 T1 T2 T4\n    //  L0 i0 i1 i2 i3\n    //  L1 ...\n    //  ......\n    //\n    point_t df_mv[5*5 - 1];         // MV for current macroblock and neighbors\n    uint8_t *df_qp;                 // QP for current row of macroblocks\n    int8_t *mb_type;                // Macroblock type for current row of macroblocks\n    uint32_t nzflag;                // Bit flags for non-zero 4x4 blocks (left neighbors)\n\n    // Huffman and deblock uses different nnz...\n    uint8_t *df_nzflag;     
        // Bit flags for non-zero 4x4 blocks (top neighbors), only 4 bits used\n} deblock_filter_t;\n\n/**\n*    Deblock filter parameters for current MB\n*/\ntypedef struct\n{\n    uint32_t strength32[4*2];       // Strength for 4 columns and 4 rows\n    uint8_t tc0[16*2];              // TC0 parameter for 4 columns and 4 rows\n    uint8_t alpha[2*2];             // alpha for border/internals\n    uint8_t beta[2*2];              // beta for border/internals\n} deblock_params_t;\n\n/**\n*   Persistent RAM\n*/\ntypedef struct H264E_persist_tag\n{\n    H264E_create_param_t param;     // Copy of create parameters\n    H264E_io_yuv_t inp;             // Input picture\n\n    struct\n    {\n        int pic_init_qp;            // Initial QP\n    } sps;\n\n    struct\n    {\n        int num;                    // Frame number\n        int nmbx;                   // Frame width, macroblocks\n        int nmby;                   // Frame height, macroblocks\n        int nmb;                    // Number of macroblocks in frame\n        int w;                      // Frame width, pixels\n        int h;                      // Frame height, pixels\n        rectangle_t mv_limit;       // Frame MV limits = frame + border extension\n        rectangle_t mv_qpel_limit;  // Reduced MV limits for qpel interpolation filter\n        int cropping_flag;          // Cropping indicator\n    } frame;\n\n    struct\n    {\n        int type;                   // Current slice type (I/P)\n        int start_mb_num;           // # of 1st MB in the current slice\n    } slice;\n\n    struct\n    {\n        int x;                      // MB x position (in MBs)\n        int y;                      // MB y position (in MBs)\n        int num;                    // MB number\n        int skip_run;               // Skip run count\n\n        // according to table 7-13\n        // -1 = skip, 0 = P16x16, 1 = P16x8, 2 = P8x16, 3 = P8x8, 5 = I4x4, >=6 = I16x16\n        int type;                   // MB type\n\n 
        struct
        {
            int pred_mode_luma;     // Intra 16x16 prediction mode
        } i16;

        int8_t i4x4_mode[16];       // Intra 4x4 prediction modes

        int cost;                   // Best coding cost
        int avail;                  // Neighbor availability flags
        point_t mvd[16];            // Delta-MV for each 4x4 sub-part
        point_t mv[16];             // MV for each 4x4 sub-part

        point_t mv_skip_pred;       // Skip MV predictor
    } mb;

    H264E_io_yuv_t ref;             // Current reference picture
    H264E_io_yuv_t dec;             // Reconstructed current macroblock
#if H264E_ENABLE_DENOISE
    H264E_io_yuv_t denoise;         // Noise suppression filter
#endif

    unsigned char *lt_yuv[MAX_LONG_TERM_FRAMES][3]; // Long-term reference pictures
    unsigned char lt_used[MAX_LONG_TERM_FRAMES];    // Long-term "used" flags

    struct
    {
        int qp;                     // Current QP
        int vbv_bits;               // Current VBV fullness, bits
        int qp_smooth;              // Averaged QP
        int dqp_smooth;             // Adaptive QP adjustment, accounts for "compressibility"
        int max_dqp;                // Worst-case DQP, for long-term reference QP adjustment

        int bit_budget;             // Frame bit budget
        int prev_qp;                // Previous MB QP
        int prev_err;               // Accumulated coded size error
        int stable_count;           // Stable/not-stable state machine

        int vbv_target_level;       // Desired VBV fullness after frame encode

        // Quantizer data, passed to low-level functions
        // layout:
        // multiplier_quant0, multiplier_dequant0,
        // multiplier_quant2, multiplier_dequant2,
        // multiplier_quant1, multiplier_dequant1,
        // rounding_factor_pos,
        // zero_thr_inter,
        // zero_thr_inter2,
        // ... and same data for chroma
        //uint16_t qdat[2][(6 + 4)];
#define OFFS_RND_INTER 6
#define OFFS_RND_INTRA 7
#define OFFS_THR_INTER 8
#define OFFS_THR2_INTER 9
#define OFFS_THR_1_OFF 10
#define OFFS_THR_2_OFF 18
#define OFFS_QUANT_VECT 26
#define OFFS_DEQUANT_VECT 34
        //struct
        //{
        //    uint16_t qdq[6];
        //    uint16_t rnd[2]; // inter/intra
        //    uint16_t thr[2]; // thresholds
        //    uint16_t zero_thr[2][8];
        //    uint16_t qfull[8];
        //    uint16_t dqfull[8];
        //} qdat[2];
        uint16_t qdat[2][6 + 2 + 2 + 8 + 8 + 8 + 8];
    } rc;

    deblock_filter_t df;            // Deblock filter

    // Speed/quality trade-off
    struct
    {
        int disable_deblock;        // Deblock filter disable flag
    } speed;

    int most_recent_ref_frame_idx;  // Last updated long-term reference

    // Predictor contexts
    point_t *mv_pred;               // MV for left&top 4x4 blocks
    uint8_t *nnz;                   // Number of non-zero coeffs per 4x4 block for left&top
    int32_t *i4x4mode;              // Intra 4x4 mode for left&top
    pix_t *top_line;                // left&top neighbor pixels

    // Output data
    uint8_t *out;                   // Output data storage (pointer to scratch RAM!)
    unsigned int out_pos;           // Output byte position
    bs_t bs[1];                     // Output bitbuffer

    scratch_t *scratch;             // Pointer to scratch RAM
#if H264E_MAX_THREADS > 1
    scratch_t *scratch_store[H264E_MAX_THREADS];   // Per-thread scratch RAM pointers
    int sizeof_scaratch;
#endif
    H264E_run_param_t run_param;    // Copy of run-time parameters

    // Consecutive IDRs must have different idr_pic_id,
    // unless a P frame occurs between them
    uint8_t next_idr_pic_id;

    pix_t *pbest;                   // Macroblock best predictor
    pix_t *ptest;                   // Macroblock predictor under test

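The OFFS_* constants above describe a running layout over the 42 uint16_t slots of one qdat[] plane: six quant/dequant multipliers, two rounding factors (inter/intra), two zero thresholds, then four 8-entry vectors. A sketch that rebuilds the offsets from the section sizes (qdat_offset is a hypothetical helper, not encoder API):

```c
/* Hypothetical helper: running offset of each qdat[] section, built from
   the section sizes: 6 multipliers, 2 roundings, 2 thresholds, 4x8 vectors. */
static int qdat_offset(int section)
{
    static const int size[] = { 6, 1, 1, 1, 1, 8, 8, 8, 8 };
    int off = 0, i;
    for (i = 0; i < section; i++)
        off += size[i];
    return off;            /* qdat_offset(9) == 42, the total slot count */
}
```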
   point_t mv_clusters[2];         // MV clusterization for prediction\n\n    // Flag to track short-term reference buffer, for MMCO 1 command\n    int short_term_used;\n\n#if H264E_SVC_API\n    //svc ext\n    int   current_layer;\n    int   adaptive_base_mode_flag;\n    void *enc_next;\n#endif\n\n} h264e_enc_t;\n\n#ifdef __cplusplus\n}\n#endif //__cplusplus\n/************************************************************************/\n/*      Constants                                                       */\n/************************************************************************/\n\n// Tunable constants can be adjusted by the \"training\" application\n#ifndef ADJUSTABLE\n#   define ADJUSTABLE static const\n#endif\n\n// Huffman encode tables\n#define CODE8(val, len) (uint8_t)((val << 4) + len)\n#define CODE(val, len) (uint8_t)((val << 4) + (len - 1))\n\nconst uint8_t h264e_g_run_before[57] =\n{\n    15, 17, 20, 24, 29, 35, 42, 42, 42, 42, 42, 42, 42, 42, 42,\n    /**** Table #  0 size  2 ****/\n    CODE8(1, 1), CODE8(0, 1),\n    /**** Table #  1 size  3 ****/\n    CODE8(1, 1), CODE8(1, 2), CODE8(0, 2),\n    /**** Table #  2 size  4 ****/\n    CODE8(3, 2), CODE8(2, 2), CODE8(1, 2), CODE8(0, 2),\n    /**** Table #  3 size  5 ****/\n    CODE8(3, 2), CODE8(2, 2), CODE8(1, 2), CODE8(1, 3), CODE8(0, 3),\n    /**** Table #  4 size  6 ****/\n    CODE8(3, 2), CODE8(2, 2), CODE8(3, 3), CODE8(2, 3), CODE8(1, 3), CODE8(0, 3),\n    /**** Table #  5 size  7 ****/\n    CODE8(3, 2), CODE8(0, 3), CODE8(1, 3), CODE8(3, 3), CODE8(2, 3), CODE8(5, 3), CODE8(4, 3),\n    /**** Table #  6 size 15 ****/\n    CODE8(7, 3), CODE8(6, 3), CODE8(5, 3), CODE8(4, 3), CODE8(3, 3), CODE8(2,  3), CODE8(1,  3), CODE8(1, 4),\n    CODE8(1, 5), CODE8(1, 6), CODE8(1, 7), CODE8(1, 8), CODE8(1, 9), CODE8(1, 10), CODE8(1, 11),\n};\n\nconst uint8_t h264e_g_total_zeros_cr_2x2[12] =\n{\n    3, 7, 10,\n    /**** Table #  0 size  4 ****/\n    CODE8(1, 1), CODE8(1, 2), CODE8(1, 3), CODE8(0, 3),\n    /**** Table # 
 1 size  3 ****/\n    CODE8(1, 1), CODE8(1, 2), CODE8(0, 2),\n    /**** Table #  2 size  2 ****/\n    CODE8(1, 1), CODE8(0, 1),\n};\n\nconst uint8_t h264e_g_total_zeros[150] =\n{\n    15, 31, 46, 60, 73, 85, 96, 106, 115, 123, 130, 136, 141, 145, 148,\n    /**** Table #  0 size 16 ****/\n    CODE8(1, 1), CODE8(3, 3), CODE8(2, 3), CODE8(3, 4), CODE8(2, 4), CODE8(3, 5), CODE8(2, 5), CODE8(3, 6),\n    CODE8(2, 6), CODE8(3, 7), CODE8(2, 7), CODE8(3, 8), CODE8(2, 8), CODE8(3, 9), CODE8(2, 9), CODE8(1, 9),\n    /**** Table #  1 size 15 ****/\n    CODE8(7, 3), CODE8(6, 3), CODE8(5, 3), CODE8(4, 3), CODE8(3, 3), CODE8(5, 4), CODE8(4, 4), CODE8(3, 4),\n    CODE8(2, 4), CODE8(3, 5), CODE8(2, 5), CODE8(3, 6), CODE8(2, 6), CODE8(1, 6), CODE8(0, 6),\n    /**** Table #  2 size 14 ****/\n    CODE8(5, 4), CODE8(7, 3), CODE8(6, 3), CODE8(5, 3), CODE8(4, 4), CODE8(3, 4), CODE8(4, 3), CODE8(3, 3),\n    CODE8(2, 4), CODE8(3, 5), CODE8(2, 5), CODE8(1, 6), CODE8(1, 5), CODE8(0, 6),\n    /**** Table #  3 size 13 ****/\n    CODE8(3, 5), CODE8(7, 3), CODE8(5, 4), CODE8(4, 4), CODE8(6, 3), CODE8(5, 3), CODE8(4, 3), CODE8(3, 4),\n    CODE8(3, 3), CODE8(2, 4), CODE8(2, 5), CODE8(1, 5), CODE8(0, 5),\n    /**** Table #  4 size 12 ****/\n    CODE8(5, 4), CODE8(4, 4), CODE8(3, 4), CODE8(7, 3), CODE8(6, 3), CODE8(5, 3), CODE8(4, 3), CODE8(3, 3),\n    CODE8(2, 4), CODE8(1, 5), CODE8(1, 4), CODE8(0, 5),\n    /**** Table #  5 size 11 ****/\n    CODE8(1, 6), CODE8(1, 5), CODE8(7, 3), CODE8(6, 3), CODE8(5, 3), CODE8(4, 3), CODE8(3, 3), CODE8(2, 3),\n    CODE8(1, 4), CODE8(1, 3), CODE8(0, 6),\n    /**** Table #  6 size 10 ****/\n    CODE8(1, 6), CODE8(1, 5), CODE8(5, 3), CODE8(4, 3), CODE8(3, 3), CODE8(3, 2), CODE8(2, 3), CODE8(1, 4),\n    CODE8(1, 3), CODE8(0, 6),\n    /**** Table #  7 size  9 ****/\n    CODE8(1, 6), CODE8(1, 4), CODE8(1, 5), CODE8(3, 3), CODE8(3, 2), CODE8(2, 2), CODE8(2, 3), CODE8(1, 3),\n    CODE8(0, 6),\n    /**** Table #  8 size  8 ****/\n    CODE8(1, 6), CODE8(0, 6), CODE8(1, 4), 
CODE8(3, 2), CODE8(2, 2), CODE8(1, 3), CODE8(1, 2), CODE8(1, 5),\n    /**** Table #  9 size  7 ****/\n    CODE8(1, 5), CODE8(0, 5), CODE8(1, 3), CODE8(3, 2), CODE8(2, 2), CODE8(1, 2), CODE8(1, 4),\n    /**** Table # 10 size  6 ****/\n    CODE8(0, 4), CODE8(1, 4), CODE8(1, 3), CODE8(2, 3), CODE8(1, 1), CODE8(3, 3),\n    /**** Table # 11 size  5 ****/\n    CODE8(0, 4), CODE8(1, 4), CODE8(1, 2), CODE8(1, 1), CODE8(1, 3),\n    /**** Table # 12 size  4 ****/\n    CODE8(0, 3), CODE8(1, 3), CODE8(1, 1), CODE8(1, 2),\n    /**** Table # 13 size  3 ****/\n    CODE8(0, 2), CODE8(1, 2), CODE8(1, 1),\n    /**** Table # 14 size  2 ****/\n    CODE8(0, 1), CODE8(1, 1),\n};\n\nconst uint8_t h264e_g_coeff_token[277 + 18] =\n{\n    17 + 18, 17 + 18,\n    82 + 18, 82 + 18,\n    147 + 18, 147 + 18, 147 + 18, 147 + 18,\n    212 + 18, 212 + 18, 212 + 18, 212 + 18, 212 + 18, 212 + 18, 212 + 18, 212 + 18, 212 + 18,\n    0 + 18,\n    /**** Table #  4 size 17 ****/     // offs: 0\n    CODE(1, 2), CODE(1, 1), CODE(1, 3), CODE(5, 6), CODE(7, 6), CODE(6, 6), CODE(2, 7), CODE(0, 7), CODE(4, 6),\n    CODE(3, 7), CODE(2, 8), CODE(0, 0), CODE(3, 6), CODE(3, 8), CODE(0, 0), CODE(0, 0), CODE(2, 6),\n    /**** Table #  0 size 65 ****/     // offs: 17\n    CODE( 1,  1), CODE( 1,  2), CODE( 1,  3), CODE( 3,  5), CODE( 5,  6), CODE( 4,  6), CODE( 5,  7), CODE( 3,  6),\n    CODE( 7,  8), CODE( 6,  8), CODE( 5,  8), CODE( 4,  7), CODE( 7,  9), CODE( 6,  9), CODE( 5,  9), CODE( 4,  8),\n    CODE( 7, 10), CODE( 6, 10), CODE( 5, 10), CODE( 4,  9), CODE( 7, 11), CODE( 6, 11), CODE( 5, 11), CODE( 4, 10),\n    CODE(15, 13), CODE(14, 13), CODE(13, 13), CODE( 4, 11), CODE(11, 13), CODE(10, 13), CODE( 9, 13), CODE(12, 13),\n    CODE( 8, 13), CODE(14, 14), CODE(13, 14), CODE(12, 14), CODE(15, 14), CODE(10, 14), CODE( 9, 14), CODE( 8, 14),\n    CODE(11, 14), CODE(14, 15), CODE(13, 15), CODE(12, 15), CODE(15, 15), CODE(10, 15), CODE( 9, 15), CODE( 8, 15),\n    CODE(11, 15), CODE( 1, 15), CODE(13, 16), CODE(12, 16), 
CODE(15, 16), CODE(14, 16), CODE( 9, 16), CODE( 8, 16),\n    CODE(11, 16), CODE(10, 16), CODE( 5, 16), CODE( 0,  0), CODE( 7, 16), CODE( 6, 16), CODE( 0,  0), CODE( 0,  0), CODE( 4, 16),\n    /**** Table #  1 size 65 ****/     // offs: 82\n    CODE( 3,  2), CODE( 2,  2), CODE( 3,  3), CODE( 5,  4), CODE(11,  6), CODE( 7,  5), CODE( 9,  6), CODE( 4,  4),\n    CODE( 7,  6), CODE(10,  6), CODE( 5,  6), CODE( 6,  5), CODE( 7,  7), CODE( 6,  6), CODE( 5,  7), CODE( 8,  6),\n    CODE( 7,  8), CODE( 6,  7), CODE( 5,  8), CODE( 4,  6), CODE( 4,  8), CODE( 6,  8), CODE( 5,  9), CODE( 4,  7),\n    CODE( 7,  9), CODE( 6,  9), CODE(13, 11), CODE( 4,  9), CODE(15, 11), CODE(14, 11), CODE( 9, 11), CODE(12, 11),\n    CODE(11, 11), CODE(10, 11), CODE(13, 12), CODE( 8, 11), CODE(15, 12), CODE(14, 12), CODE( 9, 12), CODE(12, 12),\n    CODE(11, 12), CODE(10, 12), CODE(13, 13), CODE(12, 13), CODE( 8, 12), CODE(14, 13), CODE( 9, 13), CODE( 8, 13),\n    CODE(15, 13), CODE(10, 13), CODE( 6, 13), CODE( 1, 13), CODE(11, 13), CODE(11, 14), CODE(10, 14), CODE( 4, 14),\n    CODE( 7, 13), CODE( 8, 14), CODE( 5, 14), CODE( 0,  0), CODE( 9, 14), CODE( 6, 14), CODE( 0,  0), CODE( 0,  0), CODE( 7, 14),\n    /**** Table #  2 size 65 ****/     // offs: 147\n    CODE(15,  4), CODE(14,  4), CODE(13,  4), CODE(12,  4), CODE(15,  6), CODE(15,  5), CODE(14,  5), CODE(11,  4),\n    CODE(11,  6), CODE(12,  5), CODE(11,  5), CODE(10,  4), CODE( 8,  6), CODE(10,  5), CODE( 9,  5), CODE( 9,  4),\n    CODE(15,  7), CODE( 8,  5), CODE(13,  6), CODE( 8,  4), CODE(11,  7), CODE(14,  6), CODE( 9,  6), CODE(13,  5),\n    CODE( 9,  7), CODE(10,  6), CODE(13,  7), CODE(12,  6), CODE( 8,  7), CODE(14,  7), CODE(10,  7), CODE(12,  7),\n    CODE(15,  8), CODE(14,  8), CODE(13,  8), CODE(12,  8), CODE(11,  8), CODE(10,  8), CODE( 9,  8), CODE( 8,  8),\n    CODE(15,  9), CODE(14,  9), CODE(13,  9), CODE(12,  9), CODE(11,  9), CODE(10,  9), CODE( 9,  9), CODE(10, 10),\n    CODE( 8,  9), CODE( 7,  9), CODE(11, 10), CODE( 6, 
10), CODE(13, 10), CODE(12, 10), CODE( 7, 10), CODE( 2, 10),\n    CODE( 9, 10), CODE( 8, 10), CODE( 3, 10), CODE( 0,  0), CODE( 5, 10), CODE( 4, 10), CODE( 0,  0), CODE( 0,  0), CODE( 1, 10),\n    /**** Table #  3 size 65 ****/     // offs: 212\n     3,  1,  6, 11,  0,  5, 10, 15,  4,  9, 14, 19,  8, 13, 18, 23, 12, 17, 22, 27, 16, 21, 26, 31, 20, 25, 30, 35,\n    24, 29, 34, 39, 28, 33, 38, 43, 32, 37, 42, 47, 36, 41, 46, 51, 40, 45, 50, 55, 44, 49, 54, 59, 48, 53, 58, 63,\n    52, 57, 62,  0, 56, 61,  0,  0, 60\n};\n\n/*\n    Block scan order\n    0 1 4 5\n    2 3 6 7\n    8 9 C D\n    A B E F\n*/\nstatic const uint8_t decode_block_scan[16] = { 0, 1, 4, 5, 2, 3, 6, 7, 8, 9, 12, 13, 10, 11, 14, 15 };\n\nstatic const uint8_t qpy2qpc[52] = {  // todo: [0 - 9] not used\n    0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12,\n   13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,\n   26, 27, 28, 29, 29, 30, 31, 32, 32, 33, 34, 34, 35,\n   35, 36, 36, 37, 37, 37, 38, 38, 38, 39, 39, 39, 39,\n};\n\n/**\n*   Rate-control LUT for intra/inter macroblocks: number of bits per macroblock for given QP\n*   Estimated experimentally\n*/\nstatic const uint16_t bits_per_mb[2][42 - 1] =\n{\n    // 10                                                          20                                                          30                                                          40                                                          50\n    { 664,  597,  530,  484,  432,  384,  341,  297,  262,  235,  198,  173,  153,  131,  114,  102,   84,   74,   64,   54,   47,   42,   35,   31,   26,   22,   20,   17,   15,   13,   12,   10,    9,    9,    7,    7,    6,    5,    4,    1,    1}, // P\n    {1057,  975,  925,  868,  803,  740,  694,  630,  586,  547,  496,  457,  420,  378,  345,  318,  284,  258,  234,  210,  190,  178,  155,  141,  129,  115,  102,   95,   82,   75,   69,   60,   55,   51,   45,   41,   40,   35,   31,   28,   24}  // I\n};\n\n/**\n*   Deblock filter 
constants:\n*   <alpha> <thr[1]> <thr[2]> <thr[3]> <beta>\n*/\nstatic const uint8_t g_a_tc0_b[52 - 10][5] = {\n    {  0,  0,  0,  0,  0},  // 10\n    {  0,  0,  0,  0,  0},  // 11\n    {  0,  0,  0,  0,  0},  // 12\n    {  0,  0,  0,  0,  0},  // 13\n    {  0,  0,  0,  0,  0},  // 14\n    {  0,  0,  0,  0,  0},  // 15\n    {  4,  0,  0,  0,  2},\n    {  4,  0,  0,  1,  2},\n    {  5,  0,  0,  1,  2},\n    {  6,  0,  0,  1,  3},\n    {  7,  0,  0,  1,  3},\n    {  8,  0,  1,  1,  3},\n    {  9,  0,  1,  1,  3},\n    { 10,  1,  1,  1,  4},\n    { 12,  1,  1,  1,  4},\n    { 13,  1,  1,  1,  4},\n    { 15,  1,  1,  1,  6},\n    { 17,  1,  1,  2,  6},\n    { 20,  1,  1,  2,  7},\n    { 22,  1,  1,  2,  7},\n    { 25,  1,  1,  2,  8},\n    { 28,  1,  2,  3,  8},\n    { 32,  1,  2,  3,  9},\n    { 36,  2,  2,  3,  9},\n    { 40,  2,  2,  4, 10},\n    { 45,  2,  3,  4, 10},\n    { 50,  2,  3,  4, 11},\n    { 56,  3,  3,  5, 11},\n    { 63,  3,  4,  6, 12},\n    { 71,  3,  4,  6, 12},\n    { 80,  4,  5,  7, 13},\n    { 90,  4,  5,  8, 13},\n    {101,  4,  6,  9, 14},\n    {113,  5,  7, 10, 14},\n    {127,  6,  8, 11, 15},\n    {144,  6,  8, 13, 15},\n    {162,  7, 10, 14, 16},\n    {182,  8, 11, 16, 16},\n    {203,  9, 12, 18, 17},\n    {226, 10, 13, 20, 17},\n    {255, 11, 15, 23, 18},\n    {255, 13, 17, 25, 18},\n};\n\n/************************************************************************/\n/*  Adjustable encoder parameters. 
Initial MIN_QP values never used     */\n/************************************************************************/\n\nADJUSTABLE uint16_t g_rnd_inter[] = {\n    11665, 11665, 11665, 11665, 11665, 11665, 11665, 11665, 11665, 11665,\n    11665, 12868, 14071, 15273, 16476,\n    17679, 17740, 17801, 17863, 17924,\n    17985, 17445, 16904, 16364, 15823,\n    15283, 15198, 15113, 15027, 14942,\n    14857, 15667, 16478, 17288, 18099,\n    18909, 19213, 19517, 19822, 20126,\n    20430, 16344, 12259, 8173, 4088,\n    4088, 4088, 4088, 4088, 4088,\n    4088, 4088,\n};\n\nADJUSTABLE uint16_t g_thr_inter[] = {\n    31878, 31878, 31878, 31878, 31878, 31878, 31878, 31878, 31878, 31878,\n    31878, 33578, 35278, 36978, 38678,\n    40378, 41471, 42563, 43656, 44748,\n    45841, 46432, 47024, 47615, 48207,\n    48798, 49354, 49911, 50467, 51024,\n    51580, 51580, 51580, 51580, 51580,\n    51580, 52222, 52864, 53506, 54148,\n    54790, 45955, 37120, 28286, 19451,\n    10616, 9326, 8036, 6745, 5455,\n    4165, 4165,\n};\n\nADJUSTABLE uint16_t g_thr_inter2[] = {\n    45352, 45352, 45352, 45352, 45352, 45352, 45352, 45352, 45352, 45352,\n    45352, 41100, 36848, 32597, 28345,\n    24093, 25904, 27715, 29525, 31336,\n    33147, 33429, 33711, 33994, 34276,\n    34558, 32902, 31246, 29590, 27934,\n    26278, 26989, 27700, 28412, 29123,\n    29834, 29038, 28242, 27445, 26649,\n    25853, 23440, 21028, 18615, 16203,\n    13790, 11137, 8484, 5832, 3179,\n    526, 526,\n};\n\nADJUSTABLE uint16_t g_skip_thr_inter[52] =\n{\n    45, 45, 45, 45, 45, 45, 45, 45, 45, 45,\n    45, 45, 45, 44, 44,\n    44, 40, 37, 33, 30,\n    26, 32, 38, 45, 51,\n    57, 58, 58, 59, 59,\n    60, 66, 73, 79, 86,\n    92, 95, 98, 100, 103,\n    106, 200, 300, 400, 500,\n    600, 700, 800, 900, 1000,\n    1377, 1377,\n};\n\nADJUSTABLE uint16_t g_lambda_q4[52] =\n{\n    14, 14, 14, 14, 14, 14, 14, 14, 14, 14,\n    14, 13, 11, 10, 8,\n    7, 11, 15, 20, 24,\n    28, 30, 31, 33, 34,\n    36, 48, 60, 71, 83,\n    95, 95, 
95, 96, 96,\n    96, 113, 130, 147, 164,\n    181, 401, 620, 840, 1059,\n    1279, 1262, 1246, 1229, 1213,\n    1196, 1196,\n};\nADJUSTABLE uint16_t g_lambda_mv_q4[52] =\n{\n    13, 13, 13, 13, 13, 13, 13, 13, 13, 13,\n    13, 14, 15, 15, 16,\n    17, 18, 20, 21, 23,\n    24, 28, 32, 37, 41,\n    45, 53, 62, 70, 79,\n    87, 105, 123, 140, 158,\n    176, 195, 214, 234, 253,\n    272, 406, 541, 675, 810,\n    944, 895, 845, 796, 746,\n    697, 697,\n};\n\nADJUSTABLE uint16_t g_skip_thr_i4x4[52] =\n{\n    0,1,2,3,4,5,6,7,8,9,\n    7, 7, 7, 7, 7, 7, 7, 7, 7, 7,\n    24, 24, 24, 24, 24, 24, 24, 24, 24, 24,\n    44, 44, 44, 44, 44, 44, 44, 44, 44, 44,\n    68, 68, 68, 68, 68, 68, 68, 68, 68, 68,\n    100, 100,\n};\n\nADJUSTABLE uint16_t g_deadzonei[] = {\n    3419, 3419, 3419, 3419, 3419, 3419, 3419, 3419, 3419, 3419,\n    30550, 8845, 14271, 19698, 25124,\n    30550, 29556, 28562, 27569, 26575,\n    25581, 25284, 24988, 24691, 24395,\n    24098, 24116, 24134, 24153, 24171,\n    24189, 24010, 23832, 23653, 23475,\n    23296, 23569, 23842, 24115, 24388,\n    24661, 19729, 14797, 9865, 4933,\n    24661, 3499, 6997, 10495, 13993,\n    17491, 17491,\n};\n\nADJUSTABLE uint16_t g_lambda_i4_q4[] = {\n    27, 27, 27, 27, 27, 27, 27, 27, 27, 27,\n    27, 31, 34, 38, 41,\n    45, 76, 106, 137, 167,\n    198, 220, 243, 265, 288,\n    310, 347, 384, 421, 458,\n    495, 584, 673, 763, 852,\n    941, 1053, 1165, 1276, 1388,\n    1500, 1205, 910, 614, 319,\n    5000, 1448, 2872, 4296, 5720,\n    7144, 7144,\n};\n\nADJUSTABLE uint16_t g_lambda_i16_q4[] = {\n    0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n    0, 0, 0, 0, 0,\n    0, 3, 7, 10, 14,\n    17, 14, 10, 7, 3,\n    50, 20, 39, 59, 78,\n    98, 94, 89, 85, 80,\n    76, 118, 161, 203, 246,\n    288, 349, 410, 470, 531,\n    592, 575, 558, 540, 523,\n    506, 506,\n};\n\nconst uint8_t g_diff_to_gainQ8[256] =\n{\n    0, 16, 25, 32, 37, 41, 44, 48, 50, 53, 55, 57, 59, 60, 62, 64, 65,\n    66, 67, 69, 70, 71, 72, 73, 74, 75, 76, 76, 77, 78, 79, 
80, 80,\n    81, 82, 82, 83, 83, 84, 85, 85, 86, 86, 87, 87, 88, 88, 89, 89,\n    90, 90, 91, 91, 92, 92, 92, 93, 93, 94, 94, 94, 95, 95, 96, 96,\n    96, 97, 97, 97, 98, 98, 98, 99, 99, 99, 99, 100, 100, 100, 101, 101,\n    101, 102, 102, 102, 102, 103, 103, 103, 103, 104, 104, 104, 104, 105, 105, 105,\n    105, 106, 106, 106, 106, 106, 107, 107, 107, 107, 108, 108, 108, 108, 108, 109,\n    109, 109, 109, 109, 110, 110, 110, 110, 110, 111, 111, 111, 111, 111, 112, 112,\n    112, 112, 112, 112, 113, 113, 113, 113, 113, 113, 114, 114, 114, 114, 114, 114,\n    115, 115, 115, 115, 115, 115, 115, 116, 116, 116, 116, 116, 116, 117, 117, 117,\n    117, 117, 117, 117, 118, 118, 118, 118, 118, 118, 118, 118, 119, 119, 119, 119,\n    119, 119, 119, 119, 120, 120, 120, 120, 120, 120, 120, 120, 121, 121, 121, 121,\n    121, 121, 121, 121, 122, 122, 122, 122, 122, 122, 122, 122, 122, 123, 123, 123,\n    123, 123, 123, 123, 123, 123, 124, 124, 124, 124, 124, 124, 124, 124, 124, 125,\n    125, 125, 125, 125, 125, 125, 125, 125, 125, 126, 126, 126, 126, 126, 126, 126,\n    126, 126, 126, 126, 127, 127, 127, 127, 127, 127, 127, 127, 127, 127, 128,\n};\n\n#if H264E_ENABLE_SSE2 && !defined(MINIH264_ASM)\n#define BS_BITS 32\n\nstatic void h264e_bs_put_bits_sse2(bs_t *bs, unsigned n, unsigned val)\n{\n    assert(!(val >> n));\n    bs->shift -= n;\n    assert((unsigned)n <= 32);\n    if (bs->shift < 0)\n    {\n        assert(-bs->shift < 32);\n        bs->cache |= val >> -bs->shift;\n        *bs->buf++ = SWAP32(bs->cache);\n        bs->shift = 32 + bs->shift;\n        bs->cache = 0;\n    }\n    bs->cache |= val << bs->shift;\n}\n\nstatic void h264e_bs_flush_sse2(bs_t *bs)\n{\n    *bs->buf = SWAP32(bs->cache);\n}\n\nstatic unsigned h264e_bs_get_pos_bits_sse2(const bs_t *bs)\n{\n    unsigned pos_bits = (unsigned)((bs->buf - bs->origin)*BS_BITS);\n    pos_bits += BS_BITS - bs->shift;\n    assert((int)pos_bits >= 0);\n    return pos_bits;\n}\n\nstatic unsigned h264e_bs_byte_align_sse2(bs_t 
*bs)\n{\n    int pos = h264e_bs_get_pos_bits_sse2(bs);\n    h264e_bs_put_bits_sse2(bs, -pos & 7, 0);\n    return pos + (-pos & 7);\n}\n\n/**\n*   Golomb code\n*   0 => 1\n*   1 => 01 0\n*   2 => 01 1\n*   3 => 001 00\n*   4 => 001 01\n*\n*   [0]     => 1\n*   [1..2]  => 01x\n*   [3..6]  => 001xx\n*   [7..14] => 0001xxx\n*\n*/\nstatic void h264e_bs_put_golomb_sse2(bs_t *bs, unsigned val)\n{\n    int size;\n#if defined(_MSC_VER)\n    unsigned long nbit;\n    _BitScanReverse(&nbit, val + 1);\n    size = 1 + nbit;\n#else\n    size = 32 - __builtin_clz(val + 1);\n#endif\n    h264e_bs_put_bits_sse2(bs, 2*size - 1, val + 1);\n}\n\n/**\n*   signed Golomb code.\n*   mapping to unsigned code:\n*       0 => 0\n*       1 => 1\n*      -1 => 2\n*       2 => 3\n*      -2 => 4\n*       3 => 5\n*      -3 => 6\n*/\nstatic void h264e_bs_put_sgolomb_sse2(bs_t *bs, int val)\n{\n    val = 2*val - 1;\n    val ^= val >> 31;\n    h264e_bs_put_golomb_sse2(bs, val);\n}\n\nstatic void h264e_bs_init_bits_sse2(bs_t *bs, void *data)\n{\n    bs->origin = data;\n    bs->buf = bs->origin;\n    bs->shift = BS_BITS;\n    bs->cache = 0;\n}\n\nstatic unsigned __clz_cavlc(unsigned v)\n{\n#if defined(_MSC_VER)\n    unsigned long nbit;\n    _BitScanReverse(&nbit, v);\n    return 31 - nbit;\n#else\n    return __builtin_clz(v);\n#endif\n}\n\nstatic void h264e_vlc_encode_sse2(bs_t *bs, int16_t *quant, int maxNumCoeff, uint8_t *nz_ctx)\n{\n    int nnz_context, nlevels, nnz; // nnz = nlevels + trailing_ones\n    unsigned trailing_ones = 0;\n    unsigned trailing_ones_sign = 0;\n    uint8_t runs[16];\n    uint8_t *prun = runs;\n    int16_t *levels;\n    int cloop = maxNumCoeff;\n    int v, drun;\n    unsigned zmask;\n    BS_OPEN(bs)\n\n    ALIGN(16) int16_t zzquant[16] ALIGN2(16);\n    levels = zzquant + ((maxNumCoeff == 4) ? 
4 : 16);\n    if (maxNumCoeff != 4)\n    {\n        __m128i y0, y1;\n        __m128i x0 = _mm_load_si128((__m128i *)quant);\n        __m128i x1 = _mm_load_si128((__m128i *)(quant + 8));\n#define SWAP_XMM(x, i, j)     { int t0 = _mm_extract_epi16(x, i); int t1 = _mm_extract_epi16(x, j); x = _mm_insert_epi16(x, t0, j); x = _mm_insert_epi16(x, t1, i); }\n#define SWAP_XMM2(x, y, i, j) { int t0 = _mm_extract_epi16(x, i); int t1 = _mm_extract_epi16(y, j); y = _mm_insert_epi16(y, t0, j); x = _mm_insert_epi16(x, t1, i); }\n        SWAP_XMM(x0, 3, 4);\n        SWAP_XMM(x1, 3, 4);\n        SWAP_XMM2(x0, x1, 5, 2);\n        x0 = _mm_shufflelo_epi16(x0, 0 + (3 << 2) + (1 << 4) + (2 << 6));\n        x0 = _mm_shufflehi_epi16(x0, 2 + (0 << 2) + (3 << 4) + (1 << 6));\n        x1 = _mm_shufflelo_epi16(x1, 2 + (0 << 2) + (3 << 4) + (1 << 6));\n        x1 = _mm_shufflehi_epi16(x1, 1 + (2 << 2) + (0 << 4) + (3 << 6));\n        y0 = _mm_unpacklo_epi64(x0, x1);\n        y1 = _mm_unpackhi_epi64(x0, x1);\n        y0 = _mm_slli_epi16(y0, 1);\n        y1 = _mm_slli_epi16(y1, 1);\n        zmask = _mm_movemask_epi8(_mm_cmpeq_epi8(_mm_packs_epi16(y0, y1), _mm_setzero_si128()));\n        _mm_store_si128((__m128i *)zzquant, y0);\n        _mm_store_si128((__m128i *)(zzquant + 8), y1);\n\n        if (maxNumCoeff == 15)\n            zmask |= 1;\n        zmask = (~zmask) << 16;\n\n        v = 15;\n        drun = (maxNumCoeff == 16) ? 
1 : 0;\n    } else\n    {\n        __m128i x0 = _mm_loadl_epi64((__m128i *)quant);\n        x0 = _mm_slli_epi16(x0, 1);\n        zmask = _mm_movemask_epi8(_mm_cmpeq_epi8(_mm_packs_epi16(x0, x0), _mm_setzero_si128()));\n        _mm_storel_epi64((__m128i *)zzquant, x0);\n        zmask = (~zmask) << 28;\n        drun = 1;\n        v = 3;\n    }\n\n    if (zmask)\n    {\n        do\n        {\n            int i = __clz_cavlc(zmask);\n            *--levels = zzquant[v -= i];\n            *prun++ = (uint8_t)(v + drun);\n            zmask <<= (i + 1);\n            v--;\n        } while(zmask);\n        quant = zzquant + ((maxNumCoeff == 4) ? 4 : 16);\n        nnz = (int)(quant - levels);\n\n        cloop = MIN(3, nnz);\n        levels = quant - 1;\n        do\n        {\n            if ((unsigned)(*levels + 2) > 4u)\n            {\n                break;\n            }\n            trailing_ones_sign = (trailing_ones_sign << 1) | (*levels-- < 0);\n            trailing_ones++;\n        } while (--cloop);\n    } else\n    {\n        nnz = trailing_ones = 0;\n    }\n    nlevels = nnz - trailing_ones;\n\n    nnz_context = nz_ctx[-1] + nz_ctx[1];\n\n    nz_ctx[0] = (uint8_t)nnz;\n    if (nnz_context <= 34)\n    {\n        nnz_context = (nnz_context + 1) >> 1;\n    }\n    nnz_context &= 31;\n\n    // 9.2.1 Parsing process for total number of transform coefficient levels and trailing ones\n    {\n        int off = h264e_g_coeff_token[nnz_context];\n        unsigned n = 6, val = h264e_g_coeff_token[off + trailing_ones + 4*nlevels];\n        if (off != 230)\n        {\n            n = (val & 15) + 1;\n            val >>= 4;\n        }\n        BS_PUT(n, val);\n    }\n\n    if (nnz)\n    {\n        if (trailing_ones)\n        {\n            BS_PUT(trailing_ones, trailing_ones_sign);\n        }\n        if (nlevels)\n        {\n            int vlcnum = 1;\n            int sym_len, prefix_len;\n\n            int sym = *levels-- - 2;\n            if (sym < 0) sym = -3 - sym;\n         
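Levels reach this point pre-doubled (note the _mm_slli_epi16(..., 1) in the zigzag stage), so the break test above, (unsigned)(*levels + 2) > 4u, passes a doubled level only when the original coefficient lies in {-1, 0, +1}; since zeros never enter the level list, it detects exactly the trailing +/-1 levels. A scalar sketch of that test (the helper name is hypothetical):

```c
/* Hypothetical helper: trailing-ones test on a doubled level 2*c.
   (unsigned)(2*c + 2) <= 4u holds exactly for c in {-1, 0, +1}. */
static int is_trailing_one(int doubled_level)
{
    return (unsigned)(doubled_level + 2) <= 4u;
}
```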
   if (sym >= 6) vlcnum++;\n            if (trailing_ones < 3)\n            {\n                sym -= 2;\n                if (nnz > 10)\n                {\n                    sym_len = 1;\n                    prefix_len = sym >> 1;\n                    if (prefix_len >= 15)\n                    {\n                        // or vlcnum = 1;  goto escape;\n                        prefix_len = 15;\n                        sym_len = 12;\n                    }\n                    sym -= prefix_len << 1;\n                    // bypass vlcnum advance due to sym -= 2; above\n                    goto loop_enter;\n                }\n            }\n\n            if (sym < 14)\n            {\n                prefix_len = sym;\n                sym = 0; // to avoid side effect in bitbuf\n                sym_len = 0;\n            } else if (sym < 30)\n            {\n                prefix_len = 14;\n                sym_len = 4;\n                sym -= 14;\n            } else\n            {\n                vlcnum = 1;\n                goto escape;\n            }\n            goto loop_enter;\n\n            for (;;)\n            {\n                sym_len = vlcnum;\n                prefix_len = sym >> vlcnum;\n                if (prefix_len >= 15)\n                {\nescape:\n                    prefix_len = 15;\n                    sym_len = 12;\n                }\n                sym -= prefix_len << vlcnum;\n\n                if (prefix_len >= 3 && vlcnum < 6) vlcnum++;\nloop_enter:\n                sym |= 1 << sym_len;\n                sym_len += prefix_len+1;\n                BS_PUT(sym_len, (unsigned)sym);\n                if (!--nlevels) break;\n                sym = *levels-- - 2;\n                if (sym < 0) sym = -3 - sym;\n            }\n        }\n\n        if (nnz < maxNumCoeff)\n        {\n            const uint8_t *vlc = (maxNumCoeff == 4) ? 
h264e_g_total_zeros_cr_2x2 : h264e_g_total_zeros;\n            uint8_t *run = runs;\n            int run_prev = *run++;\n            int nzeros = run_prev - nnz;\n            int zeros_left = 2*nzeros - 1;\n            int ctx = nnz - 1;\n            run[nnz - 1] = (uint8_t)maxNumCoeff; // terminator\n            for(;;)\n            {\n                int t;\n                //encode_huff8(bs, vlc, ctx, nzeros);\n\n                unsigned val = vlc[vlc[ctx] + nzeros];\n                unsigned n = val & 15;\n                val >>= 4;\n                BS_PUT(n, val);\n\n                zeros_left -= nzeros;\n                if (zeros_left < 0)\n                {\n                    break;\n                }\n\n                t = *run++;\n                nzeros = run_prev - t - 1;\n                if (nzeros < 0)\n                {\n                    break;\n                }\n                run_prev = t;\n                assert(zeros_left < 14);\n                vlc = h264e_g_run_before;\n                ctx = zeros_left;\n            }\n        }\n    }\n    BS_CLOSE(bs);\n}\n\n#define MM_LOAD_8TO16_2(p) _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i*)(p)), _mm_setzero_si128())\nstatic __inline __m128i subabs128_16(__m128i a, __m128i b)\n{\n    return _mm_or_si128(_mm_subs_epu16(a, b), _mm_subs_epu16(b, a));\n}\nstatic __inline __m128i clone2x16(const void *p)\n{\n    __m128i tmp = MM_LOAD_8TO16_2(p);\n    return _mm_unpacklo_epi16(tmp, tmp);\n}\nstatic __inline __m128i subabs128(__m128i a, __m128i b)\n{\n    return _mm_or_si128(_mm_subs_epu8(a, b), _mm_subs_epu8(b, a));\n}\n\nstatic void transpose8x8_sse(uint8_t *dst, int dst_stride, uint8_t *src, int src_stride)\n{\n    __m128i a = _mm_loadl_epi64((__m128i *)(src));\n    __m128i b = _mm_loadl_epi64((__m128i *)(src += src_stride));\n    __m128i c = _mm_loadl_epi64((__m128i *)(src += src_stride));\n    __m128i d = _mm_loadl_epi64((__m128i *)(src += src_stride));\n    __m128i e = _mm_loadl_epi64((__m128i *)(src 
+= src_stride));\n    __m128i f = _mm_loadl_epi64((__m128i *)(src += src_stride));\n    __m128i g = _mm_loadl_epi64((__m128i *)(src += src_stride));\n    __m128i h = _mm_loadl_epi64((__m128i *)(src += src_stride));\n\n    __m128i p0 = _mm_unpacklo_epi8(a,b);  // b7 a7 b6 a6 ... b0 a0\n    __m128i p1 = _mm_unpacklo_epi8(c,d);  // d7 c7 d6 c6 ... d0 c0\n    __m128i p2 = _mm_unpacklo_epi8(e,f);  // f7 e7 f6 e6 ... f0 e0\n    __m128i p3 = _mm_unpacklo_epi8(g,h);  // h7 g7 h6 g6 ... h0 g0\n\n    __m128i q0 = _mm_unpacklo_epi16(p0, p1);  // d3c3 b3a3 ... d0c0 b0a0\n    __m128i q1 = _mm_unpackhi_epi16(p0, p1);  // d7c7 b7a7 ... d4c4 b4a4\n    __m128i q2 = _mm_unpacklo_epi16(p2, p3);  // h3g3 f3e3 ... h0g0 f0e0\n    __m128i q3 = _mm_unpackhi_epi16(p2, p3);  // h7g7 f7e7 ... h4g4 f4e4\n\n    __m128i r0 = _mm_unpacklo_epi32(q0, q2);  // h1g1f1e1 d1c1b1a1 h0g0f0e0 d0c0b0a0\n    __m128i r1 = _mm_unpackhi_epi32(q0, q2);  // h3g3f3e3 d3c3b3a3 h2g2f2e2 d2c2b2a2\n    __m128i r2 = _mm_unpacklo_epi32(q1, q3);\n    __m128i r3 = _mm_unpackhi_epi32(q1, q3);\n    _mm_storel_epi64((__m128i *)(dst), r0); dst += dst_stride; _mm_storel_epi64((__m128i *)(dst), _mm_unpackhi_epi64(r0, r0)); dst += dst_stride;\n    _mm_storel_epi64((__m128i *)(dst), r1); dst += dst_stride; _mm_storel_epi64((__m128i *)(dst), _mm_unpackhi_epi64(r1, r1)); dst += dst_stride;\n    _mm_storel_epi64((__m128i *)(dst), r2); dst += dst_stride; _mm_storel_epi64((__m128i *)(dst), _mm_unpackhi_epi64(r2, r2)); dst += dst_stride;\n    _mm_storel_epi64((__m128i *)(dst), r3); dst += dst_stride; _mm_storel_epi64((__m128i *)(dst), _mm_unpackhi_epi64(r3, r3)); dst += dst_stride;\n}\n\nstatic void deblock_chroma_h_s4_sse(uint8_t *pq0, int stride, const void* threshold, int alpha, int beta, uint32_t argstr)\n{\n    __m128i thr, str, d;\n    __m128i p1 = MM_LOAD_8TO16_2(pq0 - 2*stride);\n    __m128i p0 = MM_LOAD_8TO16_2(pq0 - stride);\n    __m128i q0 = MM_LOAD_8TO16_2(pq0);\n    __m128i q1 = MM_LOAD_8TO16_2(pq0 + stride);\n    
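For the normal-strength branch below (low byte of argstr != 4), the SIMD expression computes the standard H.264 chroma delta d = clip(((q0 - p0)*4 + p1 - q1 + 4) >> 3, -tc, tc) with tc = tc0 + 1. A scalar sketch of the same arithmetic (helper names are hypothetical):

```c
/* Hypothetical scalar model of the normal-strength chroma filter delta. */
static int clip3(int lo, int hi, int v)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

static int chroma_delta(int p1, int p0, int q0, int q1, int tc0)
{
    int tc = tc0 + 1;                              /* chroma uses tc0 + 1 */
    int d = ((q0 - p0) * 4 + p1 - q1 + 4) >> 3;    /* matches the SIMD op */
    return clip3(-tc, tc, d);
}
```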
__m128i zero = _mm_setzero_si128();\n    __m128i _alpha = _mm_set1_epi16((short)alpha);\n    __m128i _beta = _mm_set1_epi16((short)beta);\n    __m128i tmp;\n\n    str =                    _mm_cmplt_epi16(subabs128_16(p0, q0), _alpha);\n    str = _mm_and_si128(str, _mm_cmplt_epi16(_mm_max_epi16(subabs128_16(p1, p0), subabs128_16(q1, q0)), _beta));\n\n    if ((uint8_t)argstr != 4)\n    {\n        d = _mm_srai_epi16(_mm_add_epi16(_mm_sub_epi16(_mm_add_epi16(_mm_slli_epi16(_mm_sub_epi16(q0, p0), 2), p1), q1),_mm_set1_epi16(4)), 3);\n        thr = _mm_add_epi16(clone2x16(threshold), _mm_set1_epi16(1));\n        d = _mm_min_epi16(_mm_max_epi16(d, _mm_sub_epi16(zero, thr)), thr);\n\n        tmp = _mm_unpacklo_epi8(_mm_cvtsi32_si128(argstr), _mm_setzero_si128());\n        tmp = _mm_unpacklo_epi16(tmp, tmp);\n\n//        str = _mm_and_si128(str, _mm_cmpgt_epi16(clone2x16(strength), zero));\n        str = _mm_and_si128(str, _mm_cmpgt_epi16(tmp, zero));\n        d = _mm_and_si128(str, d);\n        p0 = _mm_add_epi16(p0, d);\n        q0 = _mm_sub_epi16(q0, d);\n    } else\n    {\n        __m128i pq = _mm_add_epi16(p1, q1);\n        __m128i newp = _mm_srai_epi16(_mm_add_epi16(_mm_add_epi16(pq, p1), p0), 1);\n        __m128i newq = _mm_srai_epi16(_mm_add_epi16(_mm_add_epi16(pq, q1), q0), 1);\n        p0 = _mm_xor_si128(_mm_and_si128(_mm_xor_si128(_mm_avg_epu16(newp,zero), p0), str), p0);\n        q0 = _mm_xor_si128(_mm_and_si128(_mm_xor_si128(_mm_avg_epu16(newq,zero), q0), str), q0);\n    }\n    _mm_storel_epi64((__m128i*)(pq0 - stride), _mm_packus_epi16(p0, zero));\n    _mm_storel_epi64((__m128i*)(pq0         ), _mm_packus_epi16(q0, zero));\n}\n\nstatic void deblock_chroma_v_s4_sse(uint8_t *pix, int stride, const void* threshold, int alpha, int beta, uint32_t str)\n{\n    uint8_t t8x4[8*4];\n    int i;\n    uint8_t *p = pix - 2;\n    __m128i t0 =_mm_unpacklo_epi16(\n        _mm_unpacklo_epi8(_mm_cvtsi32_si128(*(int_u*)p),              _mm_cvtsi32_si128(*(int_u*)(p + 
stride))),\n        _mm_unpacklo_epi8(_mm_cvtsi32_si128(*(int_u*)(p + 2*stride)), _mm_cvtsi32_si128(*(int_u*)(p + 3*stride)))\n        );\n    __m128i t1 =_mm_unpacklo_epi16(\n        _mm_unpacklo_epi8(_mm_cvtsi32_si128(*(int_u*)(p + 4*stride)), _mm_cvtsi32_si128(*(int_u*)(p + 5*stride))),\n        _mm_unpacklo_epi8(_mm_cvtsi32_si128(*(int_u*)(p + 6*stride)), _mm_cvtsi32_si128(*(int_u*)(p + 7*stride)))\n        );\n    __m128i p1 = _mm_unpacklo_epi32(t0, t1);\n    __m128i p0 = _mm_shuffle_epi32 (p1, 0x4E); // 01001110b\n    __m128i q0 = _mm_unpackhi_epi32(t0, t1);\n    __m128i q1 = _mm_shuffle_epi32 (q0, 0x4E);\n    _mm_storel_epi64((__m128i*)(t8x4), p1);\n    _mm_storel_epi64((__m128i*)(t8x4 + 8), p0);\n    _mm_storel_epi64((__m128i*)(t8x4 + 16), q0);\n    _mm_storel_epi64((__m128i*)(t8x4 + 24), q1);\n    deblock_chroma_h_s4_sse(t8x4 + 16, 8, threshold, alpha, beta, str);\n\n    for (i = 0; i < 8; i++)\n    {\n        pix[-1] = t8x4[8  + i];\n        pix[ 0] = t8x4[16 + i];\n        pix += stride;\n    }\n}\n\n#define CMP_BETA(p, q, beta)   _mm_cmpeq_epi8(_mm_subs_epu8(_mm_subs_epu8(p, q), beta), _mm_subs_epu8(_mm_subs_epu8(q, p), beta))\n#define CMP_1(p, q, beta)     (_mm_subs_epu8(subabs128(p, q), beta))\n\nstatic void deblock_luma_h_s4_sse(uint8_t *pix, int stride, int alpha, int beta)\n{\n    int ccloop = 2;\n    do\n    {\n        __m128i p3 = MM_LOAD_8TO16_2(pix - 4*stride);\n        __m128i p2 = MM_LOAD_8TO16_2(pix - 3*stride);\n        __m128i p1 = MM_LOAD_8TO16_2(pix - 2*stride);\n        __m128i p0 = MM_LOAD_8TO16_2(pix - stride);\n        __m128i q0 = MM_LOAD_8TO16_2(pix);\n        __m128i q1 = MM_LOAD_8TO16_2(pix + stride);\n        __m128i q2 = MM_LOAD_8TO16_2(pix + 2*stride);\n        __m128i q3 = MM_LOAD_8TO16_2(pix + 3*stride);\n        __m128i zero = _mm_setzero_si128();\n        __m128i _alpha = _mm_set1_epi16((short)alpha);\n        __m128i _quarteralpha = _mm_set1_epi16((short)((alpha >> 2) + 2));\n        __m128i _beta = 
_mm_set1_epi16((short)beta);\n        __m128i ap_less_beta;\n        __m128i aq_less_beta;\n        __m128i str;\n        __m128i pq;\n        __m128i short_p;\n        __m128i short_q;\n        __m128i long_p;\n        __m128i long_q;\n        __m128i t;\n        __m128i p0q0_less__quarteralpha;\n\n        __m128i absdif_p0_q0 = subabs128_16(p0, q0);\n        __m128i p0_plus_q0 = _mm_add_epi16(_mm_add_epi16(p0, q0), _mm_set1_epi16(2));\n\n        // if (abs_p0_q0 < alpha && abs_p1_p0 < beta && abs_q1_q0 < beta)\n        str = _mm_cmplt_epi16(absdif_p0_q0, _alpha);\n        //str = _mm_and_si128(str, _mm_cmplt_epi16(subabs128_16(p1, p0), _beta));\n        //str = _mm_and_si128(str, _mm_cmplt_epi16(subabs128_16(q1, q0), _beta));\n        str = _mm_and_si128(str, _mm_cmplt_epi16(_mm_max_epi16(subabs128_16(p1, p0), subabs128_16(q1, q0)), _beta));\n        p0q0_less__quarteralpha = _mm_and_si128(_mm_cmplt_epi16(absdif_p0_q0, _quarteralpha), str);\n\n        //int short_p = (2*p1 + p0 + q1 + 2);\n        //int short_q = (2*q1 + q0 + p1 + 2);\n        pq = _mm_add_epi16(_mm_add_epi16(p1, q1), _mm_set1_epi16(2));\n        short_p = _mm_add_epi16(_mm_add_epi16(pq, p1), p0);\n        short_q = _mm_add_epi16(_mm_add_epi16(pq, q1), q0);\n\n        ap_less_beta = _mm_and_si128(_mm_cmplt_epi16(subabs128_16(p2, p0), _beta), p0q0_less__quarteralpha);\n        t = _mm_add_epi16(_mm_add_epi16(p2, p1), p0_plus_q0);\n        // short_p += t - p1 + q0;\n        long_p = _mm_srai_epi16(_mm_add_epi16(_mm_sub_epi16(_mm_add_epi16(short_p, t), p1), q0), 1);\n\n        _mm_storel_epi64((__m128i*)(pix - 2*stride), _mm_packus_epi16(_mm_or_si128(_mm_and_si128(ap_less_beta, _mm_srai_epi16(t, 2)), _mm_andnot_si128(ap_less_beta, p1)), zero));\n        t = _mm_add_epi16(_mm_add_epi16(_mm_slli_epi16(_mm_add_epi16(p3, p2), 1), t), _mm_set1_epi16(2));\n        _mm_storel_epi64((__m128i*)(pix - 3*stride), 
_mm_packus_epi16(_mm_or_si128(_mm_and_si128(ap_less_beta, _mm_srai_epi16(t, 3)), _mm_andnot_si128(ap_less_beta, p2)), zero));\n\n        aq_less_beta = _mm_and_si128(_mm_cmplt_epi16(subabs128_16(q2, q0), _beta), p0q0_less__quarteralpha);\n        t = _mm_add_epi16(_mm_add_epi16(q2, q1), p0_plus_q0);\n        long_q = _mm_srai_epi16(_mm_add_epi16(_mm_sub_epi16(_mm_add_epi16(short_q, t), q1), p0), 1);\n        _mm_storel_epi64((__m128i*)(pix + 1*stride), _mm_packus_epi16(_mm_or_si128(_mm_and_si128(aq_less_beta, _mm_srai_epi16(t, 2)), _mm_andnot_si128(aq_less_beta, q1)), zero));\n\n        t = _mm_add_epi16(_mm_add_epi16(_mm_slli_epi16(_mm_add_epi16(q3, q2), 1), t), _mm_set1_epi16(2));\n        _mm_storel_epi64((__m128i*)(pix + 2*stride), _mm_packus_epi16(_mm_or_si128(_mm_and_si128(aq_less_beta, _mm_srai_epi16(t, 3)), _mm_andnot_si128(aq_less_beta, q2)), zero));\n\n        short_p = _mm_srai_epi16(_mm_or_si128(_mm_and_si128(ap_less_beta, long_p), _mm_andnot_si128(ap_less_beta, short_p)), 2);\n        short_q = _mm_srai_epi16(_mm_or_si128(_mm_and_si128(aq_less_beta, long_q), _mm_andnot_si128(aq_less_beta, short_q)), 2);\n\n        _mm_storel_epi64((__m128i*)(pix - stride), _mm_packus_epi16(_mm_or_si128(_mm_and_si128(str, short_p), _mm_andnot_si128(str, p0)), zero));\n        _mm_storel_epi64((__m128i*)(pix         ), _mm_packus_epi16(_mm_or_si128(_mm_and_si128(str, short_q), _mm_andnot_si128(str, q0)), zero));\n\n        pix += 8;\n    } while (--ccloop);\n}\n\nstatic void deblock_luma_v_s4_sse(uint8_t *pix, int stride, int alpha, int beta)\n{\n    __m128i scratch[8];\n    uint8_t *s = pix - 4;\n    uint8_t *dst = (uint8_t *)scratch;\n    int cloop = 2;\n    do\n    {\n        transpose8x8_sse(dst, 16, s, stride);\n        s += 8*stride;\n        dst += 8;\n    } while(--cloop);\n\n    deblock_luma_h_s4_sse((uint8_t *)(scratch+4), 16, alpha, beta);\n    s = pix - 4;\n    dst = (uint8_t *)scratch;\n    cloop = 2;\n    do\n    {\n        transpose8x8_sse(s, stride, dst, 
16);\n        s += 8*stride;\n        dst += 8;\n    } while(--cloop);\n}\n\n// (a-b) >> 1s == ((a + ~b + 1) >> 1u) - 128;\n//\n// delta = (((q0-p0)<<2) + (p1-q1) + 4) >> 3 =\n//          (4*q0 - 4*p0 + p1 - q1 + 4) >> 3 =\n//          ((p1-p0) - (q1-q0) - 3*(p0-q0) + 4) >> 3 =\n//          ((p1-p0) - (q1-q0) - 3*p0 + 3*q0 + 4) >> 3 ~=\n//          ((((p1-p0)-p0)>>1 - ((q1-q0)-q0)>>1 - p0 + q0) + 2) >> 2 ~=\n//          (((((p1-p0)-p0)>>1 - p0)>>1 - (((q1-q0)-q0)>>1 - q0)>>1) + 1) >> 1\nstatic void deblock_luma_h_s3_sse(uint8_t *h264e_restrict pix, int stride, int alpha, int beta, const void* threshold, uint32_t strength)\n{\n    __m128i p1 = _mm_loadu_si128((__m128i *)(pix - 2*stride));\n    __m128i p0 = _mm_loadu_si128((__m128i *)(pix - stride));\n    __m128i q0 = _mm_loadu_si128((__m128i *)pix);\n    __m128i q1 = _mm_loadu_si128((__m128i *)(pix + stride));\n    __m128i maskp, maskq, zeromask, thr;\n    __m128i tc0tmp, p2, q2, p0q0avg, _beta;\n\n#define HALFSUM(x, y) _mm_sub_epi8(_mm_avg_epu8(x, y), _mm_and_si128(_mm_xor_si128(y, x), _mm_set1_epi8(1)))\n\n    // if (ABS(p0-q0) - alpha) ...\n    zeromask = _mm_subs_epu8(subabs128(p0, q0), _mm_set1_epi8((int8_t)(alpha - 1)));\n    //  & (ABS(p1-p0) - beta) & (ABS(q1-q0) - beta)\n    _beta = _mm_set1_epi8((int8_t)(beta - 1));\n    zeromask = _mm_or_si128(zeromask, _mm_subs_epu8(_mm_max_epu8(subabs128(p1, p0), subabs128(q1, q0)), _beta));\n    zeromask = _mm_cmpeq_epi8(zeromask, _mm_setzero_si128());\n\n    {\n        __m128i str_x = _mm_cvtsi32_si128(strength);\n        str_x = _mm_unpacklo_epi8(str_x, str_x);\n        str_x = _mm_cmpgt_epi8(_mm_unpacklo_epi8(str_x, str_x), _mm_setzero_si128());\n        zeromask = _mm_and_si128(zeromask, str_x);\n    }\n\n    thr = _mm_cvtsi32_si128(*(int*)threshold);//_mm_loadl_epi64((__m128i *)(threshold));\n    thr = _mm_unpacklo_epi8(thr, thr);\n    thr = _mm_unpacklo_epi8(thr, thr);\n    thr = _mm_and_si128(thr, zeromask);\n\n    p2 = _mm_loadu_si128((__m128i *)(pix - 3*stride));\n    
maskp = CMP_BETA(p2, p0, _beta);\n    tc0tmp = _mm_and_si128(thr, maskp);\n    p0q0avg = _mm_avg_epu8(p0, q0);     // (p0+q0+1)>>1\n    _mm_storeu_si128((__m128i *)(pix - 2*stride), _mm_min_epu8(_mm_max_epu8(HALFSUM(p2, p0q0avg), _mm_subs_epu8(p1, tc0tmp)), _mm_adds_epu8(p1, tc0tmp)));\n\n    q2 = _mm_loadu_si128((__m128i *)(pix + 2*stride));\n    maskq = CMP_BETA(q2, q0, _beta);\n    tc0tmp = _mm_and_si128(thr, maskq);\n    _mm_storeu_si128((__m128i *)(pix + stride),  _mm_min_epu8(_mm_max_epu8(HALFSUM(q2, p0q0avg), _mm_subs_epu8(q1, tc0tmp)), _mm_adds_epu8(q1, tc0tmp)));\n\n    thr = _mm_sub_epi8(thr, maskp);\n    thr = _mm_sub_epi8(thr, maskq);\n    thr = _mm_and_si128(thr, zeromask);\n\n    {\n    __m128i ff = _mm_set1_epi8(0xff);\n    __m128i part1 = _mm_avg_epu8(q0, _mm_xor_si128(p0, ff));\n    __m128i part2 = _mm_avg_epu8(p1, _mm_xor_si128(q1, ff));\n    __m128i carry = _mm_and_si128(_mm_xor_si128(p0, q0), _mm_set1_epi8(1));\n    __m128i d = _mm_adds_epu8(part1, _mm_avg_epu8(_mm_avg_epu8(part2, _mm_set1_epi8(3)), carry));\n    __m128i delta_p = _mm_subs_epu8(d, _mm_set1_epi8((char)(128 + 33)));\n    __m128i delta_n = _mm_subs_epu8(_mm_set1_epi8((char)(128 + 33)), d);\n    delta_p = _mm_min_epu8(delta_p, thr);\n    delta_n = _mm_min_epu8(delta_n, thr);\n\n    q0 =  _mm_adds_epu8(_mm_subs_epu8(q0, delta_p), delta_n);\n    p0 =  _mm_subs_epu8(_mm_adds_epu8(p0, delta_p), delta_n);\n\n    _mm_storeu_si128 ((__m128i *)(pix - stride), p0);\n    _mm_storeu_si128 ((__m128i *)pix,            q0);\n    }\n}\n\nstatic void deblock_luma_v_s3_sse(uint8_t *pix, int stride, int alpha, int beta, const void* thr, uint32_t strength)\n{\n    __m128i scratch[8];\n    uint8_t *s = pix - 4;\n    uint8_t *dst = (uint8_t *)scratch;\n    int cloop = 2;\n    do\n    {\n        transpose8x8_sse(dst, 16, s, stride);\n        s += 8*stride;\n        dst += 8;\n    } while(--cloop);\n\n    deblock_luma_h_s3_sse((uint8_t*)(scratch + 4), 16, alpha, beta, thr, strength);\n    s = pix - 4;\n   
 dst = (uint8_t *)scratch;\n    cloop = 2;\n    do\n    {\n        transpose8x8_sse(s, stride, dst, 16);\n        s += 8*stride;\n        dst += 8;\n    } while(--cloop);\n}\n\nstatic void h264e_deblock_chroma_sse2(uint8_t *pix, int32_t stride, const deblock_params_t *par)\n{\n    const uint8_t *alpha = par->alpha;\n    const uint8_t *beta = par->beta;\n    const uint8_t *thr = par->tc0;\n    const uint8_t *strength = (uint8_t *)par->strength32;\n    int a, b, x, y;\n    a = alpha[0];\n    b = beta[0];\n    for (x = 0; x < 16; x += 8)\n    {\n        uint32_t str = *(uint32_t*)&strength[x];\n        if (str && a)\n        {\n            deblock_chroma_v_s4_sse(pix + (x >> 1), stride, thr + x, a, b, str);\n        }\n        a = alpha[1];\n        b = beta[1];\n    }\n    thr += 16;\n    strength += 16;\n    a = alpha[2];\n    b = beta[2];\n    for (y = 0; y < 16; y += 8)\n    {\n        uint32_t str = *(uint32_t*)&strength[y];\n        if (str && a)\n        {\n            deblock_chroma_h_s4_sse(pix, stride, thr + y, a, b, str);\n        }\n        pix += 4*stride;\n        a = alpha[3];\n        b = beta[3];\n    }\n}\n\nstatic void h264e_deblock_luma_sse2(uint8_t *pix, int32_t stride, const deblock_params_t *par)\n{\n    const uint8_t *alpha = par->alpha;\n    const uint8_t *beta = par->beta;\n    const uint8_t *thr = par->tc0;\n    const uint8_t *strength = (uint8_t *)par->strength32;\n    int a, b, x, y;\n    a = alpha[0];\n    b = beta[0];\n    for (x = 0; x < 16; x += 4)\n    {\n        uint32_t str = *(uint32_t*)&strength[x];\n        if ((uint8_t)str == 4)\n        {\n            deblock_luma_v_s4_sse(pix + x, stride, a, b);\n        } else if (str && a)\n        {\n            deblock_luma_v_s3_sse(pix + x, stride, a, b, thr + x, str);\n        }\n        a = alpha[1];\n        b = beta[1];\n    }\n    thr += 16;\n    strength += 16;\n    a = alpha[2];\n    b = beta[2];\n    for (y = 0; y < 16; y += 4)\n    {\n        uint32_t str = 
*(uint32_t*)&strength[y];\n        if ((uint8_t)str == 4)\n        {\n            deblock_luma_h_s4_sse(pix, stride, a, b);\n        } else if (str && a)\n        {\n            deblock_luma_h_s3_sse(pix, stride, a, b, thr + y, str);\n        }\n        a = alpha[3];\n        b = beta[3];\n        pix += 4*stride;\n    }\n}\n\nstatic void h264e_denoise_run_sse2(unsigned char *frm, unsigned char *frmprev, int w, int h_arg, int stride_frm, int stride_frmprev)\n{\n#define MM_LOAD_8TO16(p) _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i*)(p)), zero)\n    int cloop, h = h_arg;\n    __m128i zero = _mm_setzero_si128();\n    __m128i exp  = _mm_set1_epi32(0x7F800000);\n\n    w -= 2;\n    h -= 2;\n    if (w <= 2 || h <= 2)\n    {\n        return;\n    }\n\n    do\n    {\n        unsigned char *pf = frm += stride_frm;\n        unsigned char *pp = frmprev += stride_frmprev;\n        cloop = w >> 3;\n        pp[-stride_frmprev] = *pf++;\n        pp++;\n\n        while (cloop--)\n        {\n            __m128 float_val;\n            __m128i log_neighbour, log_d;\n            __m128i log_neighbour_h, log_neighbour_l, log_d_h, log_d_l;\n            __m128i a, b;\n            __m128i gain;\n            __m128i abs_d, abs_neighbour;\n            a = MM_LOAD_8TO16(pf);\n            b = MM_LOAD_8TO16(pp);\n            abs_d   = _mm_or_si128(_mm_subs_epu16(a, b), _mm_subs_epu16(b, a));\n            a = MM_LOAD_8TO16(pf-stride_frm);\n            a = _mm_add_epi16(a, MM_LOAD_8TO16(pf - 1));\n            a = _mm_add_epi16(a, MM_LOAD_8TO16(pf + 1));\n            a = _mm_add_epi16(a, MM_LOAD_8TO16(pf + stride_frm));\n            b = MM_LOAD_8TO16(pp-stride_frmprev);\n            b = _mm_add_epi16(b, MM_LOAD_8TO16(pp - 1));\n            b = _mm_add_epi16(b, MM_LOAD_8TO16(pp + 1));\n            b = _mm_add_epi16(b, MM_LOAD_8TO16(pp + stride_frmprev));\n\n            abs_neighbour = _mm_or_si128(_mm_subs_epu16(a, b), _mm_subs_epu16(b, a));\n\n            abs_neighbour = 
_mm_srai_epi16(abs_neighbour, 2);\n\n            abs_d = _mm_add_epi16(abs_d, _mm_set1_epi16(1));\n            abs_neighbour = _mm_add_epi16(abs_neighbour, _mm_set1_epi16(1));\n\n            float_val = _mm_cvtepi32_ps(_mm_srai_epi32(_mm_slli_epi32(_mm_unpacklo_epi16(abs_neighbour, zero), 16), 16));\n            float_val = _mm_mul_ps(float_val, float_val);\n            float_val = _mm_mul_ps(float_val, float_val);\n            float_val = _mm_mul_ps(float_val, float_val);\n            float_val = _mm_mul_ps(float_val, float_val);\n            log_neighbour_l  = _mm_sub_epi32(_mm_srli_epi32(_mm_and_si128(_mm_castps_si128(float_val), exp), 23), _mm_set1_epi32(127));\n\n            float_val = _mm_cvtepi32_ps(_mm_srai_epi32(_mm_slli_epi32(_mm_unpackhi_epi16(abs_neighbour, zero), 16), 16));\n            float_val = _mm_mul_ps(float_val, float_val);\n            float_val = _mm_mul_ps(float_val, float_val);\n            float_val = _mm_mul_ps(float_val, float_val);\n            float_val = _mm_mul_ps(float_val, float_val);\n            log_neighbour_h  = _mm_sub_epi32(_mm_srli_epi32(_mm_and_si128(_mm_castps_si128(float_val), exp), 23), _mm_set1_epi32(127));\n\n            float_val = _mm_cvtepi32_ps(_mm_srai_epi32(_mm_slli_epi32(_mm_unpacklo_epi16(abs_d, zero), 16), 16));\n            float_val = _mm_mul_ps(float_val, float_val);\n            float_val = _mm_mul_ps(float_val, float_val);\n            float_val = _mm_mul_ps(float_val, float_val);\n            float_val = _mm_mul_ps(float_val, float_val);\n            log_d_l = _mm_sub_epi32(_mm_srli_epi32(_mm_and_si128(_mm_castps_si128(float_val), exp), 23), _mm_set1_epi32(127));\n\n            float_val = _mm_cvtepi32_ps(_mm_srai_epi32(_mm_slli_epi32(_mm_unpackhi_epi16(abs_d, zero), 16), 16));\n            float_val = _mm_mul_ps(float_val, float_val);\n            float_val = _mm_mul_ps(float_val, float_val);\n            float_val = _mm_mul_ps(float_val, float_val);\n            float_val = _mm_mul_ps(float_val, 
float_val);\n            log_d_h = _mm_sub_epi32(_mm_srli_epi32(_mm_and_si128(_mm_castps_si128(float_val), exp), 23), _mm_set1_epi32(127));\n\n            log_d = _mm_packs_epi32(log_d_l, log_d_h);\n            log_neighbour = _mm_packs_epi32(log_neighbour_l, log_neighbour_h);\n\n            log_neighbour = _mm_slli_epi16(log_neighbour, 8);\n            log_neighbour = _mm_adds_epu16(log_neighbour, log_neighbour);\n            log_neighbour = _mm_adds_epu16(log_neighbour, log_neighbour);\n            log_neighbour = _mm_srli_epi16(log_neighbour, 8);\n\n            log_neighbour = _mm_subs_epu16(_mm_set1_epi16(255), log_neighbour);\n            log_d = _mm_subs_epu16(_mm_set1_epi16(255), log_d);\n\n            gain = _mm_mullo_epi16(log_d, log_neighbour);\n\n            a = MM_LOAD_8TO16(pf);\n            b = MM_LOAD_8TO16(pp);\n{\n            __m128i s;\n            __m128i gain_inv;\n            gain_inv = _mm_sub_epi16(_mm_set1_epi8((char)255), gain);\n            s = _mm_add_epi16(_mm_mulhi_epu16(a, gain_inv), _mm_mulhi_epu16(b, gain));\n            b = _mm_mullo_epi16(b, gain);\n            a = _mm_mullo_epi16(a, gain_inv);\n            a = _mm_sub_epi16(_mm_avg_epu16(a, b), _mm_and_si128(_mm_xor_si128(a, b), _mm_set1_epi16(1)));\n            a = _mm_avg_epu16(_mm_srli_epi16(a, 14), _mm_set1_epi16(0));\n            a = _mm_add_epi16(a, s);\n            _mm_storel_epi64((__m128i *)(pp-stride_frmprev), _mm_packus_epi16(a,zero));\n}\n            pf += 8;\n            pp += 8;\n        }\n\n        cloop = w & 7;\n        while (cloop--)\n        {\n            int d, neighbourhood;\n            unsigned g, gd, gn, out_val;\n            d = pf[0] - pp[0];\n            neighbourhood  = pf[-1]      - pp[-1];\n            neighbourhood += pf[+1]      - pp[+1];\n            neighbourhood += pf[-stride_frm] - pp[-stride_frmprev];\n            neighbourhood += pf[+stride_frm] - pp[+stride_frmprev];\n\n            if (d < 0)\n            {\n                d = -d;\n       
     }\n            if (neighbourhood < 0)\n            {\n                neighbourhood = -neighbourhood;\n            }\n            neighbourhood >>= 2;\n\n            gd = g_diff_to_gainQ8[d];\n            gn = g_diff_to_gainQ8[neighbourhood];\n\n            gn <<= 2;\n            if (gn > 255)\n            {\n                gn = 255;\n            }\n\n            gn = 255 - gn;\n            gd = 255 - gd;\n            g = gn*gd;  // Q8*Q8 = Q16;\n\n            //out_val = ((pp[0]*g ) >> 16) + (((0xffff-g)*pf[0] ) >> 16);\n            out_val = (pp[0]*g + (0xffff-g)*pf[0]  + (1<<15)) >> 16;\n\n            assert(out_val <= 255);\n\n            pp[-stride_frmprev] = (unsigned char)out_val;\n\n            pf++, pp++;\n        }\n        pp[-stride_frmprev] = *pf++;\n    } while(--h);\n\n    memcpy(frmprev + stride_frmprev, frm + stride_frm, w+2);\n    h = h_arg - 2;\n    do\n    {\n        memcpy(frmprev, frmprev - stride_frmprev, w+2);\n        frmprev -= stride_frmprev;\n    } while(--h);\n    memcpy(frmprev, frm - stride_frm*(h_arg-2), w+2);\n}\n\n#define IS_NULL(p) ((p) < (pix_t *)(uintptr_t)32)\n\nstatic uint32_t intra_predict_dc_sse(const pix_t *left, const pix_t *top, int log_side)\n{\n    unsigned dc = 0, side = 1u << log_side, round = 0;\n    __m128i sum = _mm_setzero_si128();\n    if (!IS_NULL(left))\n    {\n        int cloop = side;\n        round += side >> 1;\n        do\n        {\n            sum = _mm_add_epi64(sum, _mm_sad_epu8(_mm_cvtsi32_si128(*(int*)left), _mm_setzero_si128()));\n            left += 4;\n        } while (cloop -= 4);\n    }\n    if (!IS_NULL(top))\n    {\n        int cloop = side;\n        round += side >> 1;\n        do\n        {\n            sum = _mm_add_epi64(sum, _mm_sad_epu8(_mm_cvtsi32_si128(*(int*)top), _mm_setzero_si128()));\n            top += 4;\n        } while (cloop -= 4);\n    }\n    dc = _mm_cvtsi128_si32(sum);\n    dc += round;\n    if (round == side) dc >>= 1;\n    dc >>= log_side;\n    if (!round) dc = 
128;\n    return dc * 0x01010101;\n}\n\n/*\n * Note: To make the code more readable we refer to the neighboring pixels\n * in variables named as below:\n *\n *    UL U0 U1 U2 U3 U4 U5 U6 U7\n *    L0 xx xx xx xx\n *    L1 xx xx xx xx\n *    L2 xx xx xx xx\n *    L3 xx xx xx xx\n */\n#define UL edge[-1]\n#define U0 edge[0]\n#define U1 edge[1]\n#define U2 edge[2]\n#define U3 edge[3]\n#define U4 edge[4]\n#define U5 edge[5]\n#define U6 edge[6]\n#define U7 edge[7]\n#define L0 edge[-2]\n#define L1 edge[-3]\n#define L2 edge[-4]\n#define L3 edge[-5]\n\nstatic void h264e_intra_predict_16x16_sse2(pix_t *predict,  const pix_t *left, const pix_t *top, int mode)\n{\n    int cloop = 16;\n    if (mode < 1)\n    {\n        __m128i a = _mm_load_si128((__m128i *)top);\n        do\n        {\n            _mm_store_si128((__m128i *)predict, a);\n            predict += 16;\n        } while(--cloop);\n    } else if (mode == 1)\n    {\n        const __m128i c1111 = _mm_set1_epi8(1);\n        do\n        {\n            _mm_store_si128((__m128i *)predict, _mm_shuffle_epi32(_mm_mul_epu32(_mm_cvtsi32_si128(*left++), c1111), 0));\n            predict += 16;\n        } while(--cloop);\n    } else //if (mode == 2)\n    {\n        __m128i dc128;\n        int dc = intra_predict_dc_sse(left, top, 4);\n        dc128 = _mm_shuffle_epi32(_mm_cvtsi32_si128(dc), 0);\n        do\n        {\n            _mm_store_si128((__m128i *)predict, dc128);\n            predict += 16;\n        } while(--cloop);\n    }\n}\n\nstatic void h264e_intra_predict_chroma_sse2(pix_t *predict, const pix_t *left, const pix_t *top, int mode)\n{\n    int cloop = 8;\n    if (mode < 1)\n    {\n        __m128i a = _mm_load_si128((__m128i *)top);\n        do\n        {\n            _mm_store_si128((__m128i *)predict, a);\n            predict += 16;\n        } while(--cloop);\n    } else if (mode == 1)\n    {\n        do\n        {\n            __m128i t = _mm_unpacklo_epi32(_mm_cvtsi32_si128(left[0]*0x01010101u), 
_mm_cvtsi32_si128(left[8]*0x01010101u));\n            t = _mm_unpacklo_epi32(t, t);\n            _mm_store_si128((__m128i *)predict, t);\n            left++;\n            predict += 16;\n        } while(--cloop);\n    } else //if (mode == 2)\n    {\n        // chroma\n        uint32_t *d = (uint32_t*)predict;\n        __m128i *d128 = (__m128i *)predict;\n        __m128i tmp;\n        cloop = 2;\n        do\n        {\n            d[0] = d[1] = d[16] = intra_predict_dc_sse(left, top, 2);\n            d[17] = intra_predict_dc_sse(left + 4, top + 4, 2);\n            if (!IS_NULL(top))\n            {\n                d[1] = intra_predict_dc_sse(NULL, top + 4, 2);\n            }\n            if (!IS_NULL(left))\n            {\n                d[16] = intra_predict_dc_sse(NULL, left + 4, 2);\n            }\n            d += 2;\n            left += 8;\n            top += 8;\n        } while(--cloop);\n        tmp = _mm_load_si128(d128++);\n        _mm_store_si128(d128++, tmp);\n        _mm_store_si128(d128++, tmp);\n        _mm_store_si128(d128++, tmp);\n        tmp = _mm_load_si128(d128++);\n        _mm_store_si128(d128++, tmp);\n        _mm_store_si128(d128++, tmp);\n        _mm_store_si128(d128++, tmp);\n    }\n}\n\nstatic int h264e_intra_choose_4x4_sse2(const pix_t *blockin, pix_t *blockpred, int avail, const pix_t *edge, int mpred, int penalty)\n{\n    int best_m = 0;\n    int sad, best_sad = 0x10000;\n\n    __m128i b0 = _mm_loadl_epi64((__m128i *)blockin);\n    __m128i b1 = _mm_loadl_epi64((__m128i *)(blockin + 16));\n    __m128i b2 = _mm_loadl_epi64((__m128i *)(blockin + 32));\n    __m128i b3 = _mm_loadl_epi64((__m128i *)(blockin + 48));\n    __m128i c  = _mm_unpacklo_epi32(b0, b1);\n    __m128i d  = _mm_unpacklo_epi32(b2, b3);\n    __m128i sse_blockin = _mm_unpacklo_epi64(c, d);\n    __m128i t, t0, t1, t2, res, sad128, best128;\n\n#define TEST(mode) sad128 = _mm_sad_epu8(res, sse_blockin);                 \\\n            sad128 = _mm_adds_epu16 (sad128, 
_mm_shuffle_epi32(sad128, 2)); \\\n            sad  = _mm_cvtsi128_si32(sad128);                               \\\n            if (mode != mpred) sad += penalty;                              \\\n            if (sad < best_sad)                                             \\\n            {                                                               \\\n                best128 = res;                                              \\\n                best_sad = sad;                                             \\\n                best_m = mode;                                              \\\n            }\n\n    __m128i border = _mm_loadu_si128((__m128i *)(&L3));\n    int topright = 0x01010101u*U7;\n\n    if (!(avail & AVAIL_TR))\n    {\n        topright = 0x01010101u*U3;\n        //border = _mm_insert_epi32 (border, topright, 2);\n        border = _mm_insert_epi16 (border, topright, 4);\n        border = _mm_insert_epi16 (border, topright, 5);\n    }\n    //border = _mm_insert_epi32 (border, topright, 3);\n    border = _mm_insert_epi16 (border, topright, 6);\n    border = _mm_insert_epi16 (border, topright, 7);\n\n    // DC\n    {\n        unsigned dc = 0, round = 0;\n\n        if (avail & AVAIL_L)\n        {\n            dc += _mm_cvtsi128_si32(_mm_sad_epu8(_mm_and_si128(border, _mm_set_epi32(0, 0, 0, ~0)), _mm_setzero_si128()));\n            round += 2;\n        }\n        if (avail & AVAIL_T)\n        {\n            dc += _mm_cvtsi128_si32(_mm_sad_epu8(_mm_and_si128(_mm_srli_si128(border, 5), _mm_set_epi32(0, 0, 0, ~0)), _mm_setzero_si128()));\n            round += 2;\n        }\n        dc += round;\n        if (round == 4) dc >>= 1;\n        dc >>= 2;\n        if (!round) dc = 128;\n        t = _mm_cvtsi32_si128(dc * 0x01010101);\n        t = _mm_unpacklo_epi32(t, t);\n        best128 =_mm_unpacklo_epi32(t, t);\n\n        //TEST(2)\n        sad128 = _mm_sad_epu8(best128, sse_blockin);\n        sad128 = _mm_adds_epu16 (sad128, _mm_shuffle_epi32(sad128, 2));\n      
  best_sad = _mm_cvtsi128_si32(sad128);\n\n        if (2 != mpred) best_sad += penalty;\n        best_m = 2;\n    }\n\n    if (avail & AVAIL_T)\n    {\n        t = _mm_srli_si128(border, 5);\n        t = _mm_unpacklo_epi32(t, t);\n        res =  _mm_unpacklo_epi32(t, t);\n        TEST(0)\n\n        t0 = _mm_srli_si128(border, 5);\n        t1 = _mm_srli_si128(border, 6);\n        t2 = _mm_srli_si128(border, 7);\n        t = _mm_sub_epi8(_mm_avg_epu8(t0, t2), _mm_and_si128(_mm_xor_si128(t0, t2), _mm_set1_epi8(1)));\n        t = _mm_avg_epu8(t, t1);\n        t2 = _mm_unpacklo_epi32(t, _mm_srli_si128(t, 1));\n\n        res = _mm_unpacklo_epi64(t2, _mm_unpacklo_epi32(_mm_srli_si128(t, 2), _mm_srli_si128(t, 3)));\n        TEST(3)\n\n        t0 = _mm_avg_epu8(t0,t1);\n        t0  = _mm_unpacklo_epi32(t0, _mm_srli_si128(t0, 1));\n        res = _mm_unpacklo_epi32(t0, t2);\n        TEST(7)\n    }\n\n    if (avail & AVAIL_L)\n    {\n        int ext;\n        t = _mm_unpacklo_epi8(border, border);\n        t = _mm_shufflelo_epi16(t, 3 + (2 << 2) + (1 << 4) + (0 << 6));\n        res = _mm_unpacklo_epi8(t, t);\n        TEST(1)\n\n        t0 = _mm_unpacklo_epi8(border, _mm_setzero_si128());\n        t0 = _mm_shufflelo_epi16(t0, 3 + (2 << 2) + (1 << 4) + (0 << 6));\n        t0 = _mm_packus_epi16(t0, t0);       // 0 1 2 3\n\n        t1 = _mm_unpacklo_epi8(t0, t0);      // 0 0 1 1 2 2 3 3\n\n        ext = _mm_extract_epi16(t1, 3);\n        t0 = _mm_insert_epi16 (t0, ext, 2);  // 0 1 2 3 3 3\n        t1 = _mm_insert_epi16 (t1, ext, 4);  // 0 0 1 1 2 2 3 3 33\n        t2 = _mm_slli_si128(t0, 2);          // x x 0 1 2 3 3 3\n        t = _mm_sub_epi8(_mm_avg_epu8(t0, t2), _mm_and_si128(_mm_xor_si128(t0, t2), _mm_set1_epi8(1)));\n        // 0 1 2 3 3 3\n        // x x 0 1 2 3\n        t = _mm_unpacklo_epi8(t2, t);\n        // 0   1   2   3   3   3\n        // x   x   0   1   2   3\n        // x   x   0   1   2   3\n        t = _mm_avg_epu8(t, _mm_slli_si128(t1, 2));\n        // 0 0 1 1 2 
2 3 3\n\n        res = _mm_unpacklo_epi32(_mm_srli_si128(t, 4), _mm_srli_si128(t, 6));\n        //res = _mm_insert_epi32 (res, ext|(ext<<16),3);\n        res = _mm_insert_epi16 (res, ext, 6);\n        res = _mm_insert_epi16 (res, ext, 7);\n        TEST(8)\n    }\n\n    if ((avail & (AVAIL_T | AVAIL_L | AVAIL_TL)) == (AVAIL_T | AVAIL_L | AVAIL_TL))\n    {\n        int t16;\n        t0 = _mm_srli_si128(border, 1);\n        t1 = _mm_srli_si128(border, 2);\n        t = _mm_sub_epi8(_mm_avg_epu8(border, t1), _mm_and_si128(_mm_xor_si128(border, t1), _mm_set1_epi8(1)));\n        t = _mm_avg_epu8(t, t0);\n\n        res = _mm_unpacklo_epi64(_mm_unpacklo_epi32(_mm_srli_si128(t, 3), _mm_srli_si128(t, 2)), _mm_unpacklo_epi32(_mm_srli_si128(t, 1), t));\n        TEST(4)\n\n        t1 = _mm_unpacklo_epi8(t2 = _mm_avg_epu8(t0,border), t);\n        t1 = _mm_unpacklo_epi32(t1, _mm_srli_si128(t1, 2));\n        res = _mm_shuffle_epi32(t1, 3 | (2 << 2) | (1 << 4) | (0 << 6));\n        res = _mm_insert_epi16 (res, _mm_extract_epi16 (t, 2), 1);\n        TEST(6)\n\n        t = _mm_srli_si128(t, 1);\n        res = _mm_unpacklo_epi32(_mm_srli_si128(t2, 4), _mm_srli_si128(t, 2));\n        t2 =  _mm_insert_epi16 (t2, t16 = _mm_extract_epi16 (t, 0), 1);\n        t  =  _mm_insert_epi16 (t, (t16 << 8), 0);\n        res = _mm_unpacklo_epi64(res, _mm_unpacklo_epi32(_mm_srli_si128(t2, 3), _mm_srli_si128(t, 1)));\n        TEST(5)\n    }\n\n    ((uint32_t *)blockpred)[ 0] = _mm_extract_epi16(best128, 0) | ((unsigned)_mm_extract_epi16(best128, 1) << 16);\n    ((uint32_t *)blockpred)[ 4] = _mm_extract_epi16(best128, 2) | ((unsigned)_mm_extract_epi16(best128, 3) << 16);\n    ((uint32_t *)blockpred)[ 8] = _mm_extract_epi16(best128, 4) | ((unsigned)_mm_extract_epi16(best128, 5) << 16);\n    ((uint32_t *)blockpred)[12] = _mm_extract_epi16(best128, 6) | ((unsigned)_mm_extract_epi16(best128, 7) << 16);\n\n    return best_m + (best_sad << 4);    // pack result\n}\n\n#define MM_LOAD_8TO16(p) 
_mm_unpacklo_epi8(_mm_loadl_epi64((__m128i*)(p)), zero)\n#define MM_LOAD_REG(p, sh) _mm_unpacklo_epi8(_mm_srli_si128(p, sh), zero)\n#define __inline\nstatic __inline void copy_wh_sse(const uint8_t *src, int src_stride, uint8_t *h264e_restrict dst, int w, int h)\n{\n    assert(h % 4 == 0);\n    if (w == 16)\n    {\n        do\n        {\n            _mm_store_si128((__m128i *)dst, _mm_loadu_si128((__m128i *)src)); src += src_stride; dst += 16;\n            _mm_store_si128((__m128i *)dst, _mm_loadu_si128((__m128i *)src)); src += src_stride; dst += 16;\n            _mm_store_si128((__m128i *)dst, _mm_loadu_si128((__m128i *)src)); src += src_stride; dst += 16;\n            _mm_store_si128((__m128i *)dst, _mm_loadu_si128((__m128i *)src)); src += src_stride; dst += 16;\n            _mm_store_si128((__m128i *)dst, _mm_loadu_si128((__m128i *)src)); src += src_stride; dst += 16;\n            _mm_store_si128((__m128i *)dst, _mm_loadu_si128((__m128i *)src)); src += src_stride; dst += 16;\n            _mm_store_si128((__m128i *)dst, _mm_loadu_si128((__m128i *)src)); src += src_stride; dst += 16;\n            _mm_store_si128((__m128i *)dst, _mm_loadu_si128((__m128i *)src)); src += src_stride; dst += 16;\n        } while(h -= 8);\n    } else //if (w == 8)\n    {\n        do\n        {\n            _mm_storel_epi64((__m128i *)dst, _mm_loadl_epi64((__m128i *)src)); src += src_stride; dst += 16;\n            _mm_storel_epi64((__m128i *)dst, _mm_loadl_epi64((__m128i *)src)); src += src_stride; dst += 16;\n            _mm_storel_epi64((__m128i *)dst, _mm_loadl_epi64((__m128i *)src)); src += src_stride; dst += 16;\n            _mm_storel_epi64((__m128i *)dst, _mm_loadl_epi64((__m128i *)src)); src += src_stride; dst += 16;\n        } while(h -= 4);\n    }\n}\n\nstatic __inline void hpel_lpf_diag_sse(const uint8_t *src, int src_stride, uint8_t *h264e_restrict dst, int w, int h)\n{\n    ALIGN(16) int16_t scratch[21 * 16] ALIGN2(16);  /* 21 rows by 16 pixels per row */\n\n    /*\n     * 
Intermediate values will be 1/2 pel at Horizontal direction\n     * Starting at (0.5, -2) at top extending to (0.5, height + 3) at bottom\n     * scratch contains a 2D array of size (w)X(h + 5)\n     */\n    __m128i zero = _mm_setzero_si128();\n    __m128i c32,c5 = _mm_set1_epi16(5);\n    int cloop = h + 5;\n    int16_t *h264e_restrict dst16 = scratch;\n    const int16_t *src16 = scratch + 2*16;\n    src -= 2*src_stride;\n    if (w == 8)\n    {\n        src16 = scratch + 2*8;\n        do\n        {\n            __m128i inp = _mm_loadu_si128((__m128i*)(src - 2));\n            _mm_store_si128((__m128i*)dst16, _mm_add_epi16(\n                _mm_mullo_epi16(\n                    _mm_sub_epi16(\n                        _mm_slli_epi16(\n                            _mm_add_epi16(MM_LOAD_REG(inp, 2), MM_LOAD_REG(inp, 3)),\n                            2),\n                        _mm_add_epi16(MM_LOAD_REG(inp, 1), MM_LOAD_REG(inp, 4))),\n                    c5),\n                _mm_add_epi16(_mm_unpacklo_epi8(inp, zero), MM_LOAD_REG(inp, 5))\n            ));\n            src += src_stride;\n            dst16 += 8;\n        } while (--cloop);\n\n        c32 = _mm_set1_epi16(32);\n        cloop = h;\n        do\n        {\n            // (20*x2 - 5*x1 + x0 + 512) >> 10 =>\n            // (16*x2 + 4*x2 - 4*x1 - x1 + x0 + 512) >> 10 =>\n            // ((((x0 - x1) >> 2) + (x2 - x1)) >> 2) + x2 + 32 >> 6\n            __m128i x1 = _mm_add_epi16(_mm_load_si128((__m128i*)(src16 - 1*8)), _mm_load_si128((__m128i*)(src16 + 2*8)));\n            __m128i x2 = _mm_add_epi16(_mm_load_si128((__m128i*)(src16 - 0*8)), _mm_load_si128((__m128i*)(src16 + 1*8)));\n            _mm_storel_epi64((__m128i*)dst,\n                _mm_packus_epi16(\n                    _mm_srai_epi16(\n                        _mm_add_epi16(\n                            _mm_srai_epi16(\n                                _mm_sub_epi16(\n                                    _mm_srai_epi16(\n                                  
      _mm_sub_epi16(\n                                            _mm_add_epi16(_mm_load_si128((__m128i*)(src16 - 2*8)), _mm_load_si128((__m128i*)(src16 + 3*8))),\n                                            x1),\n                                        2),\n                                    _mm_sub_epi16(x1, x2)),\n                                2),\n                            _mm_add_epi16(x2, c32)),\n                        6),\n                    zero));\n            src16 += 8;\n            dst += 16;\n        } while(--cloop);\n    } else\n    {\n        do\n        {\n            _mm_store_si128((__m128i*)dst16, _mm_add_epi16(\n                _mm_mullo_epi16(\n                    _mm_sub_epi16(\n                        _mm_slli_epi16(\n                            _mm_add_epi16(MM_LOAD_8TO16(src - 0), MM_LOAD_8TO16(src + 1)),\n                            2),\n                        _mm_add_epi16(MM_LOAD_8TO16(src - 1), MM_LOAD_8TO16(src + 2))),\n                    c5),\n                _mm_add_epi16(MM_LOAD_8TO16(src - 2), MM_LOAD_8TO16(src + 3))\n            ));\n            _mm_store_si128((__m128i*)(dst16 + 8), _mm_add_epi16(\n                _mm_mullo_epi16(\n                    _mm_sub_epi16(\n                        _mm_slli_epi16(\n                            _mm_add_epi16(MM_LOAD_8TO16(src + 8 - 0), MM_LOAD_8TO16(src + 8 + 1)),\n                            2),\n                        _mm_add_epi16(MM_LOAD_8TO16(src + 8 - 1), MM_LOAD_8TO16(src + 8 + 2))),\n                    c5),\n                _mm_add_epi16(MM_LOAD_8TO16(src + 8 - 2), MM_LOAD_8TO16(src + 8 + 3))\n            ));\n            src += src_stride;\n            dst16 += 8*2;\n        } while (--cloop);\n\n        c32 = _mm_set1_epi16(32);\n        cloop = 2*h;\n        do\n        {\n            // (20*x2 - 5*x1 + x0 + 512) >> 10 =>\n            // (16*x2 + 4*x2 - 4*x1 - x1 + x0 + 512) >> 10 =>\n            // ((((x0 - x1) >> 2) + (x2 - x1)) >> 2) + x2 + 32 >> 6\n            
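            // Illustrative scalar equivalent of the strength reduction above
            // (not used by the encoder): with x0/x1/x2 holding the paired tap
            // sums r[-2]+r[3], r[-1]+r[2], r[0]+r[1] of the 16-bit first-pass
            // rows, the second pass computes
            //     out = (((((x0 - x1) >> 2) + (x2 - x1)) >> 2) + x2 + 32) >> 6;
            // (clipping to [0,255] is done by _mm_packus_epi16). The early
            // arithmetic shifts keep every intermediate inside a signed 16-bit
            // lane, and since floor(floor(n/4)/4) == floor(n/16) the result is
            // bit-exact with the direct (x0 - 5*x1 + 20*x2 + 512) >> 10.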
__m128i x1 = _mm_add_epi16(_mm_load_si128((__m128i*)(src16 - 1*16)), _mm_load_si128((__m128i*)(src16 + 2*16)));\n            __m128i x2 = _mm_add_epi16(_mm_load_si128((__m128i*)(src16 - 0*16)), _mm_load_si128((__m128i*)(src16 + 1*16)));\n            _mm_storel_epi64((__m128i*)dst,\n                _mm_packus_epi16(\n                    _mm_srai_epi16(\n                        _mm_add_epi16(\n                            _mm_srai_epi16(\n                                _mm_sub_epi16(\n                                    _mm_srai_epi16(\n                                        _mm_sub_epi16(\n                                            _mm_add_epi16(_mm_load_si128((__m128i*)(src16 - 2*16)), _mm_load_si128((__m128i*)(src16 + 3*16))),\n                                            x1),\n                                        2),\n                                    _mm_sub_epi16(x1, x2)),\n                                2),\n                            _mm_add_epi16(x2, c32)),\n                        6),\n                    zero));\n            src16 += 8;\n            dst += 8;\n        } while(--cloop);\n    }\n}\n\nstatic __inline void hpel_lpf_hor_sse(const uint8_t *src, int src_stride, uint8_t *h264e_restrict dst, int w, int h)\n{\n    __m128i zero = _mm_setzero_si128();\n    const __m128i five = _mm_set1_epi16(5);\n    if (w == 8)\n    {\n        do\n        {\n            __m128i inp = _mm_loadu_si128((__m128i*)(src - 2));\n            _mm_storel_epi64((__m128i*)dst, _mm_packus_epi16(\n                _mm_srai_epi16(\n                    _mm_add_epi16(\n                        _mm_add_epi16(\n                            _mm_mullo_epi16(\n                                _mm_sub_epi16(\n                                    _mm_slli_epi16(_mm_add_epi16(MM_LOAD_REG(inp, 2), MM_LOAD_REG(inp, 3)), 2),\n                                    _mm_add_epi16(MM_LOAD_REG(inp, 1), MM_LOAD_REG(inp, 4))),\n                                 five),\n                            
_mm_add_epi16(_mm_unpacklo_epi8(inp, zero), MM_LOAD_REG(inp, 5))),\n                        _mm_set1_epi16(16)),\n                    5),\n                zero));\n            src += src_stride;\n            dst += 16;\n        } while (--h);\n    } else do\n    {\n        __m128i inp = _mm_loadu_si128((__m128i*)(src - 2));\n        _mm_storel_epi64((__m128i*)dst, _mm_packus_epi16(\n            _mm_srai_epi16(\n                _mm_add_epi16(\n                    _mm_add_epi16(\n                        _mm_mullo_epi16(\n                            _mm_sub_epi16(\n                                _mm_slli_epi16(_mm_add_epi16(MM_LOAD_REG(inp, 2), MM_LOAD_REG(inp, 3)), 2),\n                                _mm_add_epi16(MM_LOAD_REG(inp, 1), MM_LOAD_REG(inp, 4))),\n                             five),\n                        _mm_add_epi16(_mm_unpacklo_epi8(inp, zero), MM_LOAD_REG(inp, 5))),\n                    _mm_set1_epi16(16)),\n                5),\n            zero));\n        inp = _mm_loadu_si128((__m128i*)(src + 8 - 2));\n        _mm_storel_epi64((__m128i*)(dst + 8), _mm_packus_epi16(\n            _mm_srai_epi16(\n                _mm_add_epi16(\n                    _mm_add_epi16(\n                        _mm_mullo_epi16(\n                            _mm_sub_epi16(\n                                _mm_slli_epi16(_mm_add_epi16(MM_LOAD_REG(inp, 2), MM_LOAD_REG(inp, 3)), 2),\n                                _mm_add_epi16(MM_LOAD_REG(inp, 1), MM_LOAD_REG(inp, 4))),\n                             five),\n                        _mm_add_epi16(_mm_unpacklo_epi8(inp, zero), MM_LOAD_REG(inp, 5))),\n                    _mm_set1_epi16(16)),\n                5),\n            zero));\n        src += src_stride;\n        dst += 16;\n    } while (--h);\n}\n\nstatic __inline void hpel_lpf_ver_sse(const uint8_t *src, int src_stride, uint8_t *h264e_restrict dst, int w, int h)\n{\n    __m128i zero = _mm_setzero_si128();\n    __m128i five = _mm_set1_epi16(5);\n    __m128i const16 = 
_mm_set1_epi16(16);\n\n    do\n    {\n        int cloop = h;\n        do\n        {\n            _mm_storel_epi64((__m128i*)dst, _mm_packus_epi16(\n                _mm_srai_epi16(\n                    _mm_add_epi16(\n                        _mm_add_epi16(\n                            _mm_mullo_epi16(\n                                _mm_sub_epi16(\n                                     _mm_slli_epi16(_mm_add_epi16(MM_LOAD_8TO16(src - 0*src_stride), MM_LOAD_8TO16(src + 1*src_stride)), 2),\n                                    _mm_add_epi16(MM_LOAD_8TO16(src - 1*src_stride), MM_LOAD_8TO16(src + 2*src_stride))),\n                                five),\n                            _mm_add_epi16(MM_LOAD_8TO16(src - 2*src_stride), MM_LOAD_8TO16(src + 3*src_stride))),\n                        const16),\n                    5),\n                zero));\n            src += src_stride;\n            dst += 16;\n        } while(--cloop);\n        src += 8 - src_stride*h;\n        dst += 8 - 16*h;\n    } while ((w -= 8) > 0);\n}\n\nstatic void average_16x16_unalign_sse(uint8_t *dst, const uint8_t *src, int src_stride)\n{\n    __m128i *d = (__m128i *)dst;\n    _mm_store_si128(d, _mm_avg_epu8(_mm_load_si128(d), _mm_loadu_si128((__m128i *)src))); src += src_stride; d++;\n    _mm_store_si128(d, _mm_avg_epu8(_mm_load_si128(d), _mm_loadu_si128((__m128i *)src))); src += src_stride; d++;\n    _mm_store_si128(d, _mm_avg_epu8(_mm_load_si128(d), _mm_loadu_si128((__m128i *)src))); src += src_stride; d++;\n    _mm_store_si128(d, _mm_avg_epu8(_mm_load_si128(d), _mm_loadu_si128((__m128i *)src))); src += src_stride; d++;\n    _mm_store_si128(d, _mm_avg_epu8(_mm_load_si128(d), _mm_loadu_si128((__m128i *)src))); src += src_stride; d++;\n    _mm_store_si128(d, _mm_avg_epu8(_mm_load_si128(d), _mm_loadu_si128((__m128i *)src))); src += src_stride; d++;\n    _mm_store_si128(d, _mm_avg_epu8(_mm_load_si128(d), _mm_loadu_si128((__m128i *)src))); src += src_stride; d++;\n    _mm_store_si128(d, 
_mm_avg_epu8(_mm_load_si128(d), _mm_loadu_si128((__m128i *)src))); src += src_stride; d++;\n    _mm_store_si128(d, _mm_avg_epu8(_mm_load_si128(d), _mm_loadu_si128((__m128i *)src))); src += src_stride; d++;\n    _mm_store_si128(d, _mm_avg_epu8(_mm_load_si128(d), _mm_loadu_si128((__m128i *)src))); src += src_stride; d++;\n    _mm_store_si128(d, _mm_avg_epu8(_mm_load_si128(d), _mm_loadu_si128((__m128i *)src))); src += src_stride; d++;\n    _mm_store_si128(d, _mm_avg_epu8(_mm_load_si128(d), _mm_loadu_si128((__m128i *)src))); src += src_stride; d++;\n    _mm_store_si128(d, _mm_avg_epu8(_mm_load_si128(d), _mm_loadu_si128((__m128i *)src))); src += src_stride; d++;\n    _mm_store_si128(d, _mm_avg_epu8(_mm_load_si128(d), _mm_loadu_si128((__m128i *)src))); src += src_stride; d++;\n    _mm_store_si128(d, _mm_avg_epu8(_mm_load_si128(d), _mm_loadu_si128((__m128i *)src))); src += src_stride; d++;\n    _mm_store_si128(d, _mm_avg_epu8(_mm_load_si128(d), _mm_loadu_si128((__m128i *)src))); src += src_stride; d++;\n}\n\nstatic void h264e_qpel_average_wh_align_sse2(const uint8_t *src0, const uint8_t *src1, uint8_t *h264e_restrict dst, point_t wh)\n{\n    int w = wh.s.x;\n    int h = wh.s.y;\n    __m128i *d = (__m128i *)dst;\n    const __m128i *s0 = (const __m128i *)src0;\n    const __m128i *s1 = (const __m128i *)src1;\n    if (w == 16)\n    {\n        do\n        {\n            _mm_store_si128(d++, _mm_avg_epu8(_mm_load_si128(s0++), _mm_load_si128(s1++)));\n            _mm_store_si128(d++, _mm_avg_epu8(_mm_load_si128(s0++), _mm_load_si128(s1++)));\n            _mm_store_si128(d++, _mm_avg_epu8(_mm_load_si128(s0++), _mm_load_si128(s1++)));\n            _mm_store_si128(d++, _mm_avg_epu8(_mm_load_si128(s0++), _mm_load_si128(s1++)));\n            _mm_store_si128(d++, _mm_avg_epu8(_mm_load_si128(s0++), _mm_load_si128(s1++)));\n            _mm_store_si128(d++, _mm_avg_epu8(_mm_load_si128(s0++), _mm_load_si128(s1++)));\n            _mm_store_si128(d++, _mm_avg_epu8(_mm_load_si128(s0++), 
_mm_load_si128(s1++)));
            _mm_store_si128(d++, _mm_avg_epu8(_mm_load_si128(s0++), _mm_load_si128(s1++)));
        } while((h -= 8) > 0);
    } else
    {
        do
        {
            _mm_storel_epi64(d++, _mm_avg_epu8(_mm_loadl_epi64(s0++), _mm_loadl_epi64(s1++)));
            _mm_storel_epi64(d++, _mm_avg_epu8(_mm_loadl_epi64(s0++), _mm_loadl_epi64(s1++)));
            _mm_storel_epi64(d++, _mm_avg_epu8(_mm_loadl_epi64(s0++), _mm_loadl_epi64(s1++)));
            _mm_storel_epi64(d++, _mm_avg_epu8(_mm_loadl_epi64(s0++), _mm_loadl_epi64(s1++)));
            _mm_storel_epi64(d++, _mm_avg_epu8(_mm_loadl_epi64(s0++), _mm_loadl_epi64(s1++)));
            _mm_storel_epi64(d++, _mm_avg_epu8(_mm_loadl_epi64(s0++), _mm_loadl_epi64(s1++)));
            _mm_storel_epi64(d++, _mm_avg_epu8(_mm_loadl_epi64(s0++), _mm_loadl_epi64(s1++)));
            _mm_storel_epi64(d++, _mm_avg_epu8(_mm_loadl_epi64(s0++), _mm_loadl_epi64(s1++)));
        } while((h -= 8) > 0);
    }
}

static void h264e_qpel_interpolate_luma_sse2(const uint8_t *src, int src_stride, uint8_t *h264e_restrict dst, point_t wh, point_t dxdy)
{
    ALIGN(16) uint8_t scratch[16*16] ALIGN2(16);
//    src += ((dx + 1) >> 2) + ((dy + 1) >> 2)*src_stride;            // dx == 3 ? next pixel; dy == 3 ? 
next line\n//    dxdy              actions: Horizontal, Vertical, Diagonal, Average\n//    0 1 2 3 +1        -   ha    h    ha+\n//    1                 va  hva   hda  hv+a\n//    2                 v   vda   d    v+da\n//    3                 va+ h+va h+da  h+v+a\n//    +stride\n    int32_t pos = 1 << (dxdy.s.x + 4*dxdy.s.y);\n    uint8_t *h264e_restrict dst0 = dst;\n\n    if (pos == 1)\n    {\n        copy_wh_sse(src, src_stride, dst, wh.s.x, wh.s.y);\n        return;\n    }\n    if (pos & 0xe0ee) // 1110 0000 1110 1110\n    {\n        hpel_lpf_hor_sse(src + ((dxdy.s.y + 1) >> 2)*src_stride, src_stride, dst, wh.s.x, wh.s.y);\n        dst = scratch;\n    }\n    if (pos & 0xbbb0) // 1011 1011 1011 0000\n    {\n        hpel_lpf_ver_sse(src + ((dxdy.s.x + 1) >> 2), src_stride, dst, wh.s.x, wh.s.y);\n        dst = scratch;\n    }\n    if (pos & 0x4e40) // 0100 1110 0100 0000\n    {\n        hpel_lpf_diag_sse(src, src_stride, dst, wh.s.x, wh.s.y);\n        dst = scratch;\n    }\n    if (pos & 0xfafa) // 1111 1010 1111 1010\n    {\n        assert(wh.s.x == 16 && wh.s.y == 16);\n        if (pos & 0xeae0)// 1110 1010 1110 0000\n        {\n            point_t p;\n            p.u32 = 16 + (16 << 16);\n            h264e_qpel_average_wh_align_sse2(scratch, dst0, dst0, p);\n        } else\n        {\n            src += ((dxdy.s.x + 1) >> 2) + ((dxdy.s.y + 1) >> 2)*src_stride;\n            average_16x16_unalign_sse(dst0, src, src_stride);\n        }\n    }\n}\n\nstatic void h264e_qpel_interpolate_chroma_sse2(const uint8_t *src, int src_stride, uint8_t *h264e_restrict dst, point_t wh, point_t dxdy)\n{\n    __m128i zero = _mm_setzero_si128();\n    int w = wh.s.x;\n    int h = wh.s.y;\n    __m128i a, b, c, d;\n\n//        __m128i a = _mm_set1_epi16((short)((8-dx) * (8-dy)));\n//        __m128i b = _mm_set1_epi16((short)(dx * (8-dy)));\n//        __m128i c = _mm_set1_epi16((short)((8-dx) * dy));\n//        __m128i d = _mm_set1_epi16((short)(dx * dy));\n    __m128i c8 = 
_mm_set1_epi16(8);\n    __m128i y,x = _mm_cvtsi32_si128(dxdy.u32);\n    x = _mm_unpacklo_epi16(x, x);\n    x = _mm_unpacklo_epi32(x, x);\n    y = _mm_unpackhi_epi64(x, x);\n    x = _mm_unpacklo_epi64(x, x);\n    a = _mm_mullo_epi16(_mm_sub_epi16(c8, x), _mm_sub_epi16(c8, y));\n    b = _mm_mullo_epi16(x, _mm_sub_epi16(c8, y));\n    c = _mm_mullo_epi16(_mm_sub_epi16(c8, x), y);\n    d = _mm_mullo_epi16(x, y);\n\n    if (!dxdy.u32)\n    {\n        // 10%\n        if (w == 8) do\n        {\n            _mm_storel_epi64((__m128i *)dst, _mm_loadl_epi64((__m128i *)src)); src += src_stride; dst += 16;\n            _mm_storel_epi64((__m128i *)dst, _mm_loadl_epi64((__m128i *)src)); src += src_stride; dst += 16;\n            _mm_storel_epi64((__m128i *)dst, _mm_loadl_epi64((__m128i *)src)); src += src_stride; dst += 16;\n            _mm_storel_epi64((__m128i *)dst, _mm_loadl_epi64((__m128i *)src)); src += src_stride; dst += 16;\n        } while(h -= 4);\n        else\n        {\n            do\n            {\n                *(int *)dst = *(int_u *)src; src += src_stride; dst += 16;\n                *(int *)dst = *(int_u *)src; src += src_stride; dst += 16;\n                *(int *)dst = *(int_u *)src; src += src_stride; dst += 16;\n                *(int *)dst = *(int_u *)src; src += src_stride; dst += 16;\n            } while(h -= 4);\n        }\n    } else\n    if (!dxdy.s.x || !dxdy.s.y)\n    {\n        // 40%\n        int dsrc = dxdy.s.x?1:src_stride;\n        c = _mm_or_si128(c,b);\n\n        if (w==8)\n        {\n            do\n            {\n                _mm_storel_epi64((__m128i *)dst,\n                _mm_packus_epi16(\n                    _mm_srai_epi16(\n                        _mm_add_epi16(\n                            _mm_add_epi16(\n                                    _mm_mullo_epi16(a, MM_LOAD_8TO16(src)),\n                                    _mm_mullo_epi16(c, MM_LOAD_8TO16(src + dsrc))),\n                            _mm_set1_epi16(32)),\n                 
       6),\n                    zero)) ;\n                dst += 16;\n                src += src_stride;\n            } while (--h);\n        } else\n        {\n            do\n            {\n                *(int* )(dst) = _mm_cvtsi128_si32 (\n                _mm_packus_epi16(\n                    _mm_srai_epi16(\n                        _mm_add_epi16(\n                            _mm_add_epi16(\n                                    _mm_mullo_epi16(a, MM_LOAD_8TO16(src)),\n                                    _mm_mullo_epi16(c, MM_LOAD_8TO16(src + dsrc))),\n                            _mm_set1_epi16(32)),\n                        6),\n                    zero));\n                dst += 16;\n                src += src_stride;\n            } while (--h);\n        }\n    } else\n    {\n        // 50%\n        if (w == 8)\n        {\n            __m128i x1,x0;\n            x0 = _mm_loadl_epi64((__m128i*)(src));\n            x1 = _mm_loadl_epi64((__m128i*)(src + 1));\n            x0 = _mm_unpacklo_epi8(x0, zero);\n            x1 = _mm_unpacklo_epi8(x1, zero);\n            do\n            {\n                __m128i y0, y1;\n                src += src_stride;\n                y0 = _mm_loadl_epi64((__m128i*)(src));\n                y1 = _mm_loadl_epi64((__m128i*)(src + 1));\n                y0 = _mm_unpacklo_epi8(y0, zero);\n                y1 = _mm_unpacklo_epi8(y1, zero);\n                _mm_storel_epi64((__m128i *)dst,\n                    _mm_packus_epi16(\n                        _mm_srai_epi16(\n                            _mm_add_epi16(\n                                _mm_add_epi16(\n                                    _mm_add_epi16(\n                                        _mm_mullo_epi16(x0, a),\n                                        _mm_mullo_epi16(x1, b)),\n                                    _mm_add_epi16(\n                                        _mm_mullo_epi16(y0, c),\n                                        _mm_mullo_epi16(y1, d))),\n                      
          _mm_set1_epi16(32)),\n                            6),\n                        zero));\n                x0 = y0;\n                x1 = y1;\n                dst += 16;\n            } while (--h);\n        } else\n        {\n            // TODO: load 32!\n            __m128i x1, x0 = MM_LOAD_8TO16(src);\n            do\n            {\n                src += src_stride;\n                x1 = MM_LOAD_8TO16(src);\n                *(int*)(dst) = _mm_cvtsi128_si32(\n                    _mm_packus_epi16(\n                        _mm_srai_epi16(\n                            _mm_add_epi16(\n                                _mm_add_epi16(\n                                    _mm_add_epi16(\n                                        _mm_mullo_epi16(x0, a),\n                                        _mm_mullo_epi16(_mm_srli_si128(x0, 2), b)),\n                                    _mm_add_epi16(\n                                        _mm_mullo_epi16(x1, c),\n                                        _mm_mullo_epi16(_mm_srli_si128(x1, 2), d))),\n                                _mm_set1_epi16(32)),\n                            6),\n                        zero));\n                x0 = x1;\n                dst += 16;\n            } while (--h);\n        }\n    }\n}\n\nstatic int h264e_sad_mb_unlaign_8x8_sse2(const pix_t *a, int a_stride, const pix_t *b, int sad[4])\n{\n    __m128i *mb = (__m128i *)b;\n    __m128i s01, s23;\n    s01 = _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++)); a += a_stride;\n    s01 = _mm_add_epi64(s01, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    s01 = _mm_add_epi64(s01, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    s01 = _mm_add_epi64(s01, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    s01 = _mm_add_epi64(s01, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    s01 = 
_mm_add_epi64(s01, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    s01 = _mm_add_epi64(s01, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    s01 = _mm_add_epi64(s01, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n\n    s23 = _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++)); a += a_stride;\n    s23 = _mm_add_epi64(s23, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    s23 = _mm_add_epi64(s23, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    s23 = _mm_add_epi64(s23, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    s23 = _mm_add_epi64(s23, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    s23 = _mm_add_epi64(s23, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    s23 = _mm_add_epi64(s23, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    s23 = _mm_add_epi64(s23, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n\n    sad[0] = _mm_cvtsi128_si32(s01);\n    sad[1] = _mm_extract_epi16(s01, 4);\n    sad[2] = _mm_cvtsi128_si32(s23);\n    sad[3] = _mm_extract_epi16(s23, 4);\n    return sad[0] + sad[1] + sad[2] + sad[3];\n}\n\n\nstatic int h264e_sad_mb_unlaign_wh_sse2(const pix_t *a, int a_stride, const pix_t *b, point_t wh)\n{\n    __m128i *mb = (__m128i *)b;\n    __m128i s;\n\n    assert(wh.s.x == 8 || wh.s.x == 16);\n    assert(wh.s.y == 8 || wh.s.y == 16);\n\n    if (wh.s.x == 8)\n    {\n        s =                  _mm_sad_epu8(_mm_loadl_epi64((__m128i *)a), _mm_loadl_epi64(mb++));  a += a_stride;\n        s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadl_epi64((__m128i *)a), _mm_loadl_epi64(mb++))); a += a_stride;\n        s = _mm_add_epi16(s, 
_mm_sad_epu8(_mm_loadl_epi64((__m128i *)a), _mm_loadl_epi64(mb++))); a += a_stride;\n        s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadl_epi64((__m128i *)a), _mm_loadl_epi64(mb++))); a += a_stride;\n        s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadl_epi64((__m128i *)a), _mm_loadl_epi64(mb++))); a += a_stride;\n        s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadl_epi64((__m128i *)a), _mm_loadl_epi64(mb++))); a += a_stride;\n        s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadl_epi64((__m128i *)a), _mm_loadl_epi64(mb++))); a += a_stride;\n        s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadl_epi64((__m128i *)a), _mm_loadl_epi64(mb++))); a += a_stride;\n\n        if (wh.s.y == 16)\n        {\n            s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadl_epi64((__m128i *)a), _mm_loadl_epi64(mb++))); a += a_stride;\n            s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadl_epi64((__m128i *)a), _mm_loadl_epi64(mb++))); a += a_stride;\n            s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadl_epi64((__m128i *)a), _mm_loadl_epi64(mb++))); a += a_stride;\n            s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadl_epi64((__m128i *)a), _mm_loadl_epi64(mb++))); a += a_stride;\n            s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadl_epi64((__m128i *)a), _mm_loadl_epi64(mb++))); a += a_stride;\n            s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadl_epi64((__m128i *)a), _mm_loadl_epi64(mb++))); a += a_stride;\n            s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadl_epi64((__m128i *)a), _mm_loadl_epi64(mb++))); a += a_stride;\n            s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadl_epi64((__m128i *)a), _mm_loadl_epi64(mb++))); a += a_stride;\n        }\n        return _mm_extract_epi16 (s, 0);\n    }\n\n    s =                  _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++));  a += a_stride;\n    s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), 
_mm_loadu_si128(mb++))); a += a_stride;\n    s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n\n    if (wh.s.y == 16)\n    {\n        s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n        s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n        s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n        s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n        s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n        s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n        s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n        s = _mm_add_epi16(s, _mm_sad_epu8(_mm_loadu_si128((__m128i *)a), _mm_loadu_si128(mb++))); a += a_stride;\n    }\n\n    s = _mm_adds_epu16(s, _mm_shuffle_epi32(s, 2));\n    return _mm_cvtsi128_si32(s);\n}\n\nstatic void h264e_copy_8x8_sse2(pix_t *d, int d_stride, const pix_t *s)\n{\n    assert(IS_ALIGNED(d, 8));\n    assert(IS_ALIGNED(s, 8));\n    _mm_storel_epi64((__m128i*)(d), _mm_loadl_epi64((__m128i*)(s))); s += 16; d += d_stride;\n    _mm_storel_epi64((__m128i*)(d), _mm_loadl_epi64((__m128i*)(s))); s += 16; d += d_stride;\n    _mm_storel_epi64((__m128i*)(d), 
_mm_loadl_epi64((__m128i*)(s))); s += 16; d += d_stride;\n    _mm_storel_epi64((__m128i*)(d), _mm_loadl_epi64((__m128i*)(s))); s += 16; d += d_stride;\n    _mm_storel_epi64((__m128i*)(d), _mm_loadl_epi64((__m128i*)(s))); s += 16; d += d_stride;\n    _mm_storel_epi64((__m128i*)(d), _mm_loadl_epi64((__m128i*)(s))); s += 16; d += d_stride;\n    _mm_storel_epi64((__m128i*)(d), _mm_loadl_epi64((__m128i*)(s))); s += 16; d += d_stride;\n    _mm_storel_epi64((__m128i*)(d), _mm_loadl_epi64((__m128i*)(s)));\n}\n\nstatic void h264e_copy_16x16_sse2(pix_t *d, int d_stride, const pix_t *s, int s_stride)\n{\n    assert(IS_ALIGNED(d, 8));\n    assert(IS_ALIGNED(s, 8));\n    _mm_storeu_si128((__m128i*)(d), _mm_loadu_si128((__m128i*)(s))); s += s_stride; d += d_stride;\n    _mm_storeu_si128((__m128i*)(d), _mm_loadu_si128((__m128i*)(s))); s += s_stride; d += d_stride;\n    _mm_storeu_si128((__m128i*)(d), _mm_loadu_si128((__m128i*)(s))); s += s_stride; d += d_stride;\n    _mm_storeu_si128((__m128i*)(d), _mm_loadu_si128((__m128i*)(s))); s += s_stride; d += d_stride;\n    _mm_storeu_si128((__m128i*)(d), _mm_loadu_si128((__m128i*)(s))); s += s_stride; d += d_stride;\n    _mm_storeu_si128((__m128i*)(d), _mm_loadu_si128((__m128i*)(s))); s += s_stride; d += d_stride;\n    _mm_storeu_si128((__m128i*)(d), _mm_loadu_si128((__m128i*)(s))); s += s_stride; d += d_stride;\n    _mm_storeu_si128((__m128i*)(d), _mm_loadu_si128((__m128i*)(s))); s += s_stride; d += d_stride;\n    _mm_storeu_si128((__m128i*)(d), _mm_loadu_si128((__m128i*)(s))); s += s_stride; d += d_stride;\n    _mm_storeu_si128((__m128i*)(d), _mm_loadu_si128((__m128i*)(s))); s += s_stride; d += d_stride;\n    _mm_storeu_si128((__m128i*)(d), _mm_loadu_si128((__m128i*)(s))); s += s_stride; d += d_stride;\n    _mm_storeu_si128((__m128i*)(d), _mm_loadu_si128((__m128i*)(s))); s += s_stride; d += d_stride;\n    _mm_storeu_si128((__m128i*)(d), _mm_loadu_si128((__m128i*)(s))); s += s_stride; d += d_stride;\n    _mm_storeu_si128((__m128i*)(d), 
_mm_loadu_si128((__m128i*)(s))); s += s_stride; d += d_stride;\n    _mm_storeu_si128((__m128i*)(d), _mm_loadu_si128((__m128i*)(s))); s += s_stride; d += d_stride;\n    _mm_storeu_si128((__m128i*)(d), _mm_loadu_si128((__m128i*)(s)));\n}\n\nstatic void h264e_copy_borders_sse2(unsigned char *pic, int w, int h, int guard)\n{\n    int rowbytes = w + 2*guard;\n    int topbot = 2;\n    pix_t *s = pic;\n    pix_t *d = pic - guard*rowbytes;\n    assert(guard == 8 || guard == 16);\n    assert((w % 8) == 0);\n    do\n    {\n        int cloop = w;\n        do\n        {\n            __m128i t = _mm_loadu_si128((__m128i*)(s));\n            _mm_storeu_si128((__m128i*)d, t); d += rowbytes;\n            _mm_storeu_si128((__m128i*)d, t); d += rowbytes;\n            _mm_storeu_si128((__m128i*)d, t); d += rowbytes;\n            _mm_storeu_si128((__m128i*)d, t); d += rowbytes;\n            _mm_storeu_si128((__m128i*)d, t); d += rowbytes;\n            _mm_storeu_si128((__m128i*)d, t); d += rowbytes;\n            _mm_storeu_si128((__m128i*)d, t); d += rowbytes;\n            _mm_storeu_si128((__m128i*)d, t); d += rowbytes;\n            if (guard == 16)\n            {\n                _mm_storeu_si128((__m128i*)d, t); d += rowbytes;\n                _mm_storeu_si128((__m128i*)d, t); d += rowbytes;\n                _mm_storeu_si128((__m128i*)d, t); d += rowbytes;\n                _mm_storeu_si128((__m128i*)d, t); d += rowbytes;\n                _mm_storeu_si128((__m128i*)d, t); d += rowbytes;\n                _mm_storeu_si128((__m128i*)d, t); d += rowbytes;\n                _mm_storeu_si128((__m128i*)d, t); d += rowbytes;\n                _mm_storeu_si128((__m128i*)d, t); d += rowbytes;\n            }\n            s += 16;\n            d += 16 - guard*rowbytes;\n        } while((cloop -= 16) > 0);\n        s = pic + (h - 1)*rowbytes;\n        d = s + rowbytes;\n    } while(--topbot);\n\n    {\n        pix_t *s0 = pic - guard*rowbytes;\n        pix_t *s1 = pic - guard*rowbytes + w - 1;\n    
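    /* The loop below replicates the left and right edge columns into the
     * guard band; a scalar equivalent (illustrative only) would be:
     *     for each of the h + 2*guard rows:
     *         memset(row - guard, row[0],     guard);   // left apron
     *         memset(row + w,     row[w - 1], guard);   // right apron
     * s0/s1 start guard rows above the picture, so the corners are filled
     * from the already-replicated top/bottom guard rows. */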
    int cloop = 2*guard + h;
        if (guard == 8) do
        {
            _mm_storel_epi64((__m128i*)(s0-8), _mm_set1_epi8(*s0));
            _mm_storel_epi64((__m128i*)(s1+1), _mm_set1_epi8(*s1));
            s0 += rowbytes;
            s1 += rowbytes;
        } while(--cloop); else do
        {
            _mm_storeu_si128((__m128i*)(s0-16), _mm_set1_epi8(*s0));
            _mm_storeu_si128((__m128i*)(s1+1), _mm_set1_epi8(*s1));
            s0 += rowbytes;
            s1 += rowbytes;
        } while(--cloop);
    }
}

static void hadamar4_2d_sse(int16_t *x)
{
    __m128i a = _mm_loadl_epi64((__m128i*)x);
    __m128i b = _mm_loadl_epi64((__m128i*)(x + 4));
    __m128i c = _mm_loadl_epi64((__m128i*)(x + 8));
    __m128i d = _mm_loadl_epi64((__m128i*)(x + 12));

    __m128i u0 = _mm_add_epi16(a, c);
    __m128i u1 = _mm_sub_epi16(a, c);
    __m128i u2 = _mm_add_epi16(b, d);
    __m128i u3 = _mm_sub_epi16(b, d);
    __m128i v0 = _mm_add_epi16(u0, u2);
    __m128i v3 = _mm_sub_epi16(u0, u2);
    __m128i v1 = _mm_add_epi16(u1, u3);
    __m128i v2 = _mm_sub_epi16(u1, u3);

    //    v0: a0 a1 a2 a3
    //    v1: b0 ......
    //    v2: c0 ......
    //    v3: d0 d1 .. 
d3
    //
    __m128i t0 = _mm_unpacklo_epi16(v0, v1);    // a0, b0, a1, b1, a2, b2, a3, b3
    __m128i t2 = _mm_unpacklo_epi16(v2, v3);    // c0, d0, c1, d1, c2, d2, c3, d3
    a = _mm_unpacklo_epi32(t0, t2);    // a0, b0, c0, d0, a1, b1, c1, d1
    c = _mm_unpackhi_epi32(t0, t2);    // a2, b2, c2, d2, a3, b3, c3, d3
    u0 = _mm_add_epi16(a, c); // u0 u2
    u1 = _mm_sub_epi16(a, c); // u1 u3
    v0 = _mm_unpacklo_epi64(u0, u1); // u0 u1
    v1 = _mm_unpackhi_epi64(u0, u1); // u2 u3
    u0 = _mm_add_epi16(v0, v1); // v0 v1
    u1 = _mm_sub_epi16(v0, v1); // v3 v2

    v1 = _mm_shuffle_epi32(u1, 0x4e); // swap 64-bit halves: v2 v3      (0x4e = 01001110)
    _mm_store_si128((__m128i*)x, u0);
    _mm_store_si128((__m128i*)(x + 8), v1);
}

static void dequant_dc_sse(quant_t *q, int16_t *qval, int dequant, int n)
{
    do
    {
        q->dq[0] = (int16_t)(*qval++ * (int16_t)dequant);
        q++;
    } while (--n);
}

static void quant_dc_sse(int16_t *qval, int16_t *deq, int16_t quant, int n, int round_q18)
{
    int r_minus = (1 << 18) - round_q18;
    do
    {
        int v = *qval;
        int r = v < 0 ? 
r_minus : round_q18;
        *deq++ = *qval++ = (v * quant + r) >> 18;
    } while (--n);
}

static void hadamar2_2d_sse(int16_t *x)
{
    int a = x[0];
    int b = x[1];
    int c = x[2];
    int d = x[3];
    x[0] = (int16_t)(a + b + c + d);
    x[1] = (int16_t)(a - b + c - d);
    x[2] = (int16_t)(a + b - c - d);
    x[3] = (int16_t)(a - b - c + d);
}

static void h264e_quant_luma_dc_sse2(quant_t *q, int16_t *deq, const uint16_t *qdat)
{
    int16_t *tmp = ((int16_t*)q) - 16;
    hadamar4_2d_sse(tmp);
    quant_dc_sse(tmp, deq, qdat[0], 16, 0x20000); // was: 0x15555
    hadamar4_2d_sse(tmp);
    assert(!(qdat[1] & 3));
    // dirty trick here: shift without rounding, since it has no effect for qp >= 10 (or, to be precise, for qp >= 9)
    dequant_dc_sse(q, tmp, qdat[1] >> 2, 16);
}

static int h264e_quant_chroma_dc_sse2(quant_t *q, int16_t *deq, const uint16_t *qdat)
{
    int16_t *tmp = ((int16_t*)q) - 16;
    hadamar2_2d_sse(tmp);
    quant_dc_sse(tmp, deq, (int16_t)(qdat[0] << 1), 4, 0xAAAA);
    hadamar2_2d_sse(tmp);
    assert(!(qdat[1] & 1));
    dequant_dc_sse(q, tmp, qdat[1] >> 1, 4);
    return !!(tmp[0] | tmp[1] | tmp[2] | tmp[3]);
}

static int is_zero_sse(const int16_t *dat, int i0, const uint16_t *thr)
{
    __m128i t = _mm_loadu_si128((__m128i*)(thr));
    __m128i d = _mm_load_si128((__m128i*)(dat));
    __m128i z = _mm_setzero_si128();
    __m128i m, sign;
    if (i0) d = _mm_insert_epi16 (d, 0, 0);

    // compute |d| with the sign-flip trick: (d ^ sign) - sign
    sign = _mm_cmpgt_epi16(z, d);
    d = _mm_sub_epi16(_mm_xor_si128(d, sign), sign);

    m = _mm_cmpgt_epi16(d, t);
    d = _mm_loadu_si128((__m128i*)(dat + 8));
    sign = _mm_cmpgt_epi16(z, d);
    d = _mm_sub_epi16(_mm_xor_si128(d, sign), sign);
    m = _mm_or_si128(m, _mm_cmpgt_epi16(d, t));
    return !_mm_movemask_epi8(m);
}

static int is_zero4_sse(const quant_t *q, int i0, const uint16_t *thr)
{
    return is_zero_sse(q[0].dq, i0, thr) &&
           is_zero_sse(q[1].dq, i0, thr) &&
     
      is_zero_sse(q[4].dq, i0, thr) &&\n           is_zero_sse(q[5].dq, i0, thr);\n}\n\nstatic int h264e_transform_sub_quant_dequant_sse2(const pix_t *inp, const pix_t *pred, int inp_stride, int mode, quant_t *q, const uint16_t *qdat)\n{\n    int crow = mode >> 1;\n    int ccol = crow;\n    int i, i0 = mode & 1;\n    int nz_block_mask = 0;\n    int zmask = 0;\n    quant_t *q_0 = q;\n\n    int y, x;\n    for (y = 0; y < crow; y++)\n    {\n        for (x = 0; x < ccol; x += 2)\n        {\n            const pix_t *pinp  = inp  + inp_stride*4*y + 4*x;\n            const pix_t *ppred = pred +         16*4*y + 4*x;\n\n            __m128i d0, d1, d2, d3;\n            __m128i t0, t1, t2, t3;\n            __m128i q0, q1, q2, q3;\n            __m128i zero = _mm_setzero_si128();\n            __m128i inp8 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i*)pinp),  zero);\n            __m128i pred8 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i*)ppred), zero);\n\n            d0 =_mm_sub_epi16(inp8, pred8);\n            pinp += inp_stride;\n            inp8 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i*)pinp),  zero);\n            pred8 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i*)(ppred + 16)), zero);\n            d1 =_mm_sub_epi16(inp8, pred8);\n            pinp += inp_stride;\n            inp8 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i*)pinp),  zero);\n            pred8 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i*)(ppred + 32)), zero);\n            d2 =_mm_sub_epi16(inp8, pred8);\n            pinp += inp_stride;\n            inp8 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i*)pinp),  zero);\n            pred8 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i*)(ppred + 48)), zero);\n            d3 =_mm_sub_epi16(inp8, pred8);\n            t0 = _mm_add_epi16(d0, d3);\n            t1 = _mm_sub_epi16(d0, d3);\n            t2 = _mm_add_epi16(d1, d2);\n            t3 = _mm_sub_epi16(d1, d2);\n            q0 = _mm_add_epi16(t0, t2);\n            q1 = _mm_add_epi16(_mm_add_epi16(t1, t1), t3);\n 
           q2 = _mm_sub_epi16(t0, t2);\n            q3 = _mm_sub_epi16(t1, _mm_add_epi16(t3, t3));\n\n            //    q0: a0 a1 ....... a7\n            //    q1: b0 .............\n            //    q2: c0 .............\n            //    q3: d0 d1 ....... d7\n            //\n            t0 = _mm_unpacklo_epi16(q0, q1);    // a0, b0, a1, b1, a2, b2, a3, b3\n            t1 = _mm_unpackhi_epi16(q0, q1);    // a4, b4, a5, b5, a6, b6, a7, b7\n            t2 = _mm_unpacklo_epi16(q2, q3);    // c0, d0\n            t3 = _mm_unpackhi_epi16(q2, q3);    // c4, d4\n\n            q0 = _mm_unpacklo_epi32(t0, t2);    // a0, b0, c0, d0, a1, b1, c1, d1\n            q1 = _mm_unpackhi_epi32(t0, t2);    // a2, b2,\n            q2 = _mm_unpacklo_epi32(t1, t3);    // a4, b4\n            q3 = _mm_unpackhi_epi32(t1, t3);    // a6, b6\n\n            d0 = _mm_unpacklo_epi64(q0, q2);    // a0, b0, c0, d0, a4, b4, c4, d4\n            d1 = _mm_unpackhi_epi64(q0, q2);    // a1, b1, c1, d1\n            d2 = _mm_unpacklo_epi64(q1, q3);    // a2, b2,\n            d3 = _mm_unpackhi_epi64(q1, q3);    // a3, b3,\n\n            t0 = _mm_add_epi16(d0, d3);\n            t1 = _mm_sub_epi16(d0, d3);\n            t2 = _mm_add_epi16(d1, d2);\n            t3 = _mm_sub_epi16(d1, d2);\n            q0 = _mm_add_epi16(t0, t2);\n            q1 = _mm_add_epi16(_mm_add_epi16(t1, t1), t3);\n            q2 = _mm_sub_epi16(t0, t2);\n            q3 = _mm_sub_epi16(t1, _mm_add_epi16(t3, t3));\n\n            _mm_storel_epi64((__m128i*)(q[0].dq    ), q0);\n            _mm_storel_epi64((__m128i*)(q[0].dq + 4), q1);\n            _mm_storel_epi64((__m128i*)(q[0].dq + 8), q2);\n            _mm_storel_epi64((__m128i*)(q[0].dq + 12), q3);\n            if (ccol > 1)\n            {\n                q0 = _mm_unpackhi_epi64(q0, q0); _mm_storel_epi64((__m128i*)(q[1].dq    ), q0);\n                q1 = _mm_unpackhi_epi64(q1, q1); _mm_storel_epi64((__m128i*)(q[1].dq + 4), q1);\n                q2 = _mm_unpackhi_epi64(q2, q2); 
_mm_storel_epi64((__m128i*)(q[1].dq + 8), q2);\n                q3 = _mm_unpackhi_epi64(q3, q3); _mm_storel_epi64((__m128i*)(q[1].dq + 12), q3);\n            }\n            q += 2;\n        }\n    }\n    q = q_0;\n    crow = mode >> 1;\n    ccol = crow;\n\n    if (mode & 1) // QDQ_MODE_INTRA_16 || QDQ_MODE_CHROMA\n    {\n        int cloop = (mode >> 1)*(mode >> 1);\n        short *dc = ((short *)q) - 16;\n        quant_t *pq = q;\n        do\n        {\n            *dc++ = pq->dq[0];\n            pq++;\n        } while (--cloop);\n    }\n\n    if (mode == QDQ_MODE_INTER || mode == QDQ_MODE_CHROMA)\n    {\n        for (i = 0; i < crow*ccol; i++)\n        {\n            if (is_zero_sse(q[i].dq, i0, qdat + OFFS_THR_1_OFF))\n            {\n                zmask |= (1 << i);\n            }\n        }\n\n        if (mode == QDQ_MODE_INTER)\n        {\n            if ((~zmask & 0x0033) && is_zero4_sse(q +  0, i0, qdat + OFFS_THR_2_OFF)) zmask |= 0x33;\n            if ((~zmask & 0x00CC) && is_zero4_sse(q +  2, i0, qdat + OFFS_THR_2_OFF)) zmask |= (0x33 << 2);\n            if ((~zmask & 0x3300) && is_zero4_sse(q +  8, i0, qdat + OFFS_THR_2_OFF)) zmask |= (0x33 << 8);\n            if ((~zmask & 0xCC00) && is_zero4_sse(q + 10, i0, qdat + OFFS_THR_2_OFF)) zmask |= (0x33 << 10);\n        }\n    }\n\n    do\n    {\n        do\n        {\n            int nz_mask = 0;\n            if (zmask & 1)\n            {\n                _mm_store_si128((__m128i*)(q->qv),     _mm_setzero_si128());\n                _mm_store_si128((__m128i*)(q->qv) + 1, _mm_setzero_si128());\n            } else\n            {\n                int16_t *qv_tmp = q->qv;//[16];\n                __m128i t;\n                const __m128i const_q  = _mm_loadu_si128((__m128i*)(qdat + OFFS_QUANT_VECT));\n                const __m128i const_dq = _mm_loadu_si128((__m128i*)(qdat + OFFS_DEQUANT_VECT));\n\n                __m128i src = _mm_load_si128((__m128i*)(q[0].dq));\n                __m128i r = 
_mm_xor_si128(_mm_set1_epi16(qdat[OFFS_RND_INTER]), _mm_cmpgt_epi16(_mm_setzero_si128(), src));\n                __m128i lo = _mm_mullo_epi16(src, const_q);\n                __m128i hi = _mm_mulhi_epi16(src, const_q);\n                __m128i dst0 = _mm_unpacklo_epi16(lo, hi);\n                __m128i dst1 = _mm_unpackhi_epi16(lo, hi);\n                dst0 = _mm_srai_epi32(_mm_add_epi32(dst0, _mm_unpacklo_epi16(r, _mm_setzero_si128())), 16);\n                dst1 = _mm_srai_epi32(_mm_add_epi32(dst1, _mm_unpackhi_epi16(r, _mm_setzero_si128())), 16);\n                dst0 = _mm_packs_epi32(dst0, dst1);\n                _mm_store_si128((__m128i*)(qv_tmp), dst0);\n\n                t = _mm_cmpeq_epi16(_mm_setzero_si128(), dst0);\n                nz_mask = _mm_movemask_epi8( _mm_packs_epi16(t, t)) & 0xff;\n                dst0 = _mm_mullo_epi16(dst0, const_dq);\n                _mm_store_si128((__m128i*)(q[0].dq), dst0);\n\n\n                src = _mm_load_si128((__m128i*)(q[0].dq + 8));\n                r = _mm_xor_si128(_mm_set1_epi16(qdat[OFFS_RND_INTER]), _mm_cmpgt_epi16(_mm_setzero_si128(), src));\n                lo = _mm_mullo_epi16(src, const_q);\n                hi = _mm_mulhi_epi16(src, const_q);\n                dst0 = _mm_unpacklo_epi16(lo, hi);\n                dst1 = _mm_unpackhi_epi16(lo, hi);\n\n                dst0 = _mm_srai_epi32(_mm_add_epi32(dst0, _mm_unpacklo_epi16(r, _mm_setzero_si128())), 16);\n                dst1 = _mm_srai_epi32(_mm_add_epi32(dst1, _mm_unpackhi_epi16(r, _mm_setzero_si128())), 16);\n                dst0 = _mm_packs_epi32(dst0, dst1);\n                _mm_store_si128((__m128i*)(qv_tmp + 8), dst0);\n\n                t = _mm_cmpeq_epi16(_mm_setzero_si128(), dst0);\n                nz_mask |= _mm_movemask_epi8( _mm_packs_epi16(t, t)) << 8;\n                dst0 = _mm_mullo_epi16(dst0, const_dq);\n                _mm_store_si128((__m128i*)(q[0].dq + 8), dst0);\n                nz_mask = ~nz_mask & 0xffff;\n                if 
(i0)\n                {\n                    nz_mask &= ~1;\n                }\n            }\n\n            zmask >>= 1;\n            nz_block_mask <<= 1;\n            if (nz_mask)\n                nz_block_mask |= 1;\n            q++;\n        } while (--ccol);\n        ccol = mode >> 1;\n    } while (--crow);\n    return nz_block_mask;\n}\n\nstatic void h264e_transform_add_sse2(pix_t *out, int out_stride, const pix_t *pred, quant_t *q, int side, int32_t mask)\n{\n    int crow = side;\n    int ccol = crow;\n\n    assert(IS_ALIGNED(out, 4));\n    assert(IS_ALIGNED(pred, 4));\n    assert(!(out_stride % 4));\n\n    do\n    {\n        do\n        {\n            if (mask >= 0)\n            {\n                // copy 4x4\n                pix_t *dst = out;\n                *(uint32_t*)dst = *(uint32_t*)(pred + 0 * 16); dst += out_stride;\n                *(uint32_t*)dst = *(uint32_t*)(pred + 1 * 16); dst += out_stride;\n                *(uint32_t*)dst = *(uint32_t*)(pred + 2 * 16); dst += out_stride;\n                *(uint32_t*)dst = *(uint32_t*)(pred + 3 * 16);\n            }\n            else\n            {\n                __m128i zero = _mm_setzero_si128();\n                __m128i c32 = _mm_set1_epi16(32);\n                __m128i d0, d1, d2, d3;\n                __m128i e0, e1, e2, e3;\n                d0 = _mm_load_si128((__m128i*)(q->dq + 0));\n                d2 = _mm_load_si128((__m128i*)(q->dq + 8));\n                d1 = _mm_unpackhi_epi64(d0, d2);\n                d3 = _mm_unpackhi_epi64(d2, d0);\n\n                e0 = _mm_add_epi16(d0, d2);\n                e1 = _mm_sub_epi16(d0, d2);\n\n                e2 = _mm_srai_epi16(d1, 1);\n                e2 = _mm_sub_epi16(e2, d3);\n                e3 = _mm_srai_epi16(d3, 1);\n                e3 = _mm_add_epi16(e3, d1);\n\n                d0 = _mm_add_epi16(e0, e3);\n                d1 = _mm_add_epi16(e1, e2);\n                d2 = _mm_sub_epi16(e1, e2);\n                d3 = _mm_sub_epi16(e0, e3);\n\n          
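The add/sub/shift butterfly computed here (and repeated after the transpose) is the H.264 inverse 4x4 core transform. A scalar reference sketch of what the SIMD lanes evaluate; `itrans4_1d` and `itrans4` are hypothetical names, not functions of this library:

```c
#include <assert.h>
#include <stdint.h>

/* One-dimensional inverse butterfly, applied first to rows, then to
   columns; mirrors the e0..e3 / d0..d3 computation in the vector code. */
static void itrans4_1d(int16_t b[4])
{
    int e0 = b[0] + b[2];
    int e1 = b[0] - b[2];
    int e2 = (b[1] >> 1) - b[3];
    int e3 = b[1] + (b[3] >> 1);
    b[0] = (int16_t)(e0 + e3);
    b[1] = (int16_t)(e1 + e2);
    b[2] = (int16_t)(e1 - e2);
    b[3] = (int16_t)(e0 - e3);
}

/* Full 4x4 inverse: rows, columns, then round with (x + 32) >> 6,
   matching the c32 bias and 6-bit shift above. */
static void itrans4(int16_t blk[16])
{
    int i, j;
    int16_t col[4];
    for (i = 0; i < 4; i++)
        itrans4_1d(blk + 4*i);              /* horizontal pass */
    for (i = 0; i < 4; i++)
    {
        for (j = 0; j < 4; j++) col[j] = blk[4*j + i];
        itrans4_1d(col);                    /* vertical pass */
        for (j = 0; j < 4; j++) blk[4*j + i] = (int16_t)((col[j] + 32) >> 6);
    }
}
```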
      e1 = _mm_unpacklo_epi16(d0, d1);    // a0, b0, a1, b1, a2, b2, a3, b3\n                e3 = _mm_unpacklo_epi16(d2, d3);    // c0, d0\n\n                e0 = _mm_unpacklo_epi32(e1, e3);    // a0, b0, c0, d0, a1, b1, c1, d1\n                e2 = _mm_unpackhi_epi32(e1, e3);    // a2, b2,\n\n                e1 = _mm_unpackhi_epi64(e0, e2);\n                e3 = _mm_unpackhi_epi64(e2, e0);\n\n                d0 = _mm_add_epi16(e0, e2);\n                d1 = _mm_sub_epi16(e0, e2);\n                d2 = _mm_srai_epi16(e1, 1);\n                d2 = _mm_sub_epi16(d2, e3);\n                d3 = _mm_srai_epi16(e3, 1);\n                d3 = _mm_add_epi16(d3, e1);\n\n                // Pack 4x64 to 2x128\n                e0 = _mm_unpacklo_epi64(d0, d1);\n                e1 = _mm_unpacklo_epi64(d3, d2);\n\n                e0 = _mm_add_epi16(e0, c32);\n                d0 = _mm_srai_epi16(_mm_add_epi16(e0, e1), 6);\n                d3 = _mm_srai_epi16(_mm_sub_epi16(e0, e1), 6);\n                // Unpack back to 4x64\n                d1 = _mm_unpackhi_epi64(d0, zero);\n                d2 = _mm_unpackhi_epi64(d3, zero);\n\n                *(int* )(out)                = _mm_cvtsi128_si32(_mm_packus_epi16(_mm_add_epi16(_mm_unpacklo_epi8(_mm_cvtsi32_si128(*(int*)(pred +  0)), zero), d0), zero));\n                *(int* )(out + 1*out_stride) = _mm_cvtsi128_si32(_mm_packus_epi16(_mm_add_epi16(_mm_unpacklo_epi8(_mm_cvtsi32_si128(*(int*)(pred + 16)), zero), d1), zero));\n                *(int* )(out + 2*out_stride) = _mm_cvtsi128_si32(_mm_packus_epi16(_mm_add_epi16(_mm_unpacklo_epi8(_mm_cvtsi32_si128(*(int*)(pred + 32)), zero), d2), zero));\n                *(int* )(out + 3*out_stride) = _mm_cvtsi128_si32(_mm_packus_epi16(_mm_add_epi16(_mm_unpacklo_epi8(_mm_cvtsi32_si128(*(int*)(pred + 48)), zero), d3), zero));\n\n            }\n            mask = (uint32_t)mask << 1;\n            q++;\n            out += 4;\n            pred += 4;\n        } while (--ccol);\n        ccol = side;\n  
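The per-block dispatch above walks the coefficient mask MSB-first: `mask >= 0` means the top bit is clear (no coefficients, prediction copied through), and `mask = (uint32_t)mask << 1` advances to the next 4x4 block with an unsigned shift to avoid signed-overflow UB. A minimal sketch of the same idiom; `count_coded_blocks` is a hypothetical helper, not part of this library:

```c
#include <assert.h>
#include <stdint.h>

/* Count how many of the first nblocks 4x4 blocks are coded, testing the
   sign bit and shifting left once per block, as the loop above does. */
static int count_coded_blocks(int32_t mask, int nblocks)
{
    int coded = 0;
    while (nblocks--)
    {
        if (mask < 0)                           /* bit 31 set: block is coded */
            coded++;
        mask = (int32_t)((uint32_t)mask << 1);  /* unsigned shift avoids UB */
    }
    return coded;
}
```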
      out += 4*(out_stride - ccol);\n        pred += 4*(16 - ccol);\n    } while (--crow);\n}\n#endif\n\n#if H264E_ENABLE_NEON && !defined(MINIH264_ASM)\n#define TR32(x, y) tr0 = vtrnq_u32(vreinterpretq_u32_u8(x), vreinterpretq_u32_u8(y)); x = vreinterpretq_u8_u32(tr0.val[0]); y = vreinterpretq_u8_u32(tr0.val[1]);\n#define TR16(x, y) tr1 = vtrnq_u16(vreinterpretq_u16_u8(x), vreinterpretq_u16_u8(y)); x = vreinterpretq_u8_u16(tr1.val[0]); y = vreinterpretq_u8_u16(tr1.val[1]);\n#define TR8(x, y)  tr2 = vtrnq_u8((x), (y)); x = (tr2.val[0]); y = (tr2.val[1]);\n\nstatic void deblock_luma_v_neon(uint8_t *pix, int stride, int alpha, int beta, const uint8_t *pthr, const uint8_t *pstr)\n{\n    uint8x16_t q0, q1, q2, q3, q4, q5, q6, q7, q8, q9, q10, q11, q12, q13, q14, q15;\n    uint8x16_t tmp;\n    uint32x4x2_t tr0;\n    uint16x8x2_t tr1;\n    uint8x16x2_t tr2;\n    q8 = vcombine_u8(vld1_u8(pix - 4), vld1_u8(pix - 4 + 8*stride)); pix += stride;\n    q9 = vcombine_u8(vld1_u8(pix - 4), vld1_u8(pix - 4 + 8*stride)); pix += stride;\n    q10= vcombine_u8(vld1_u8(pix - 4), vld1_u8(pix - 4 + 8*stride)); pix += stride;\n    q11= vcombine_u8(vld1_u8(pix - 4), vld1_u8(pix - 4 + 8*stride)); pix += stride;\n    q12= vcombine_u8(vld1_u8(pix - 4), vld1_u8(pix - 4 + 8*stride)); pix += stride;\n    q13= vcombine_u8(vld1_u8(pix - 4), vld1_u8(pix - 4 + 8*stride)); pix += stride;\n    q14= vcombine_u8(vld1_u8(pix - 4), vld1_u8(pix - 4 + 8*stride)); pix += stride;\n    q15= vcombine_u8(vld1_u8(pix - 4), vld1_u8(pix - 4 + 8*stride)); pix += stride;\n\n    TR32(q8,  q12);\n    TR32(q9,  q13);\n    TR32(q10, q14);\n    TR32(q11, q15);\n    TR16(q8,  q10);\n    TR16(q9,  q11);\n    TR16(q12, q14);\n    TR16(q13, q15);\n    TR8(q8,   q9 );\n    TR8(q10,  q11);\n    TR8(q12,  q13);\n    TR8(q14,  q15);\n\n    q1  = vabdq_u8(q11, q12);\n    q2  = vcltq_u8(q1, vdupq_n_u8(alpha));\n    q1  = vcltq_u8(vmaxq_u8(vabdq_u8(q11, q10), vabdq_u8(q12, q13)), vdupq_n_u8(beta));\n    q2  = vandq_u8(q2, q1);\n\n    
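The mask in q2 just built is the standard H.264 filter-enable test: |p0-q0| < alpha and both |p1-p0|, |q1-q0| < beta. A scalar sketch of the per-pixel normal (bS < 4) filter step the NEON code evaluates for 16 pixels at once; `filter_edge`, `iabs`, and `clip3` are hypothetical names, `tc` plays the role of the threshold loaded from `pthr`, and only the p0/q0 update is shown (the vector code additionally filters p1/q1 and widens tc when |p2-p0| or |q2-q0| < beta):

```c
#include <assert.h>

static int iabs(int v)                  { return v < 0 ? -v : v; }
static int clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }

/* Normal-strength deblocking of one edge pixel pair. */
static void filter_edge(int p1, int *p0, int *q0, int q1,
                        int alpha, int beta, int tc)
{
    int d;
    if (iabs(*p0 - *q0) >= alpha)
        return;                         /* too big a step: likely a real edge */
    if (iabs(p1 - *p0) >= beta || iabs(q1 - *q0) >= beta)
        return;                         /* sides not smooth enough to filter */
    d = clip3(-tc, tc, (((*q0 - *p0) << 2) + (p1 - q1) + 4) >> 3);
    *p0 = clip3(0, 255, *p0 + d);
    *q0 = clip3(0, 255, *q0 - d);
}
```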
tmp = vreinterpretq_u8_u32(vdupq_n_u32(*(uint32_t*)pstr));\n    tmp = vzipq_u8(tmp, tmp).val[0];\n    tmp = vzipq_u8(tmp, tmp).val[0];\n    q1  = tmp;\n\n    q1  = vcgtq_s8(vreinterpretq_s8_u8(q1), vdupq_n_s8(0));\n    q2  = vandq_u8(q2, q1);\n    q7 = vhsubq_u8(q10, q13);\n    q7 = vreinterpretq_u8_s8(vshrq_n_s8(vreinterpretq_s8_u8(q7), 1));\n    q0 = veorq_u8(q12, q11);\n    q6 = vandq_u8(vdupq_n_u8(1), q0);\n\n    q0 = vhsubq_u8(q12, q11);// ;(q0-p0))>>1\n\n    q7 = vreinterpretq_u8_s8(vrhaddq_s8(vreinterpretq_s8_u8(q7), vreinterpretq_s8_u8(q6))); //((p1-q1)>>2 + carry + 1) >> 1\n    q7 = vreinterpretq_u8_s8(vqaddq_s8(vreinterpretq_s8_u8(q0),  vreinterpretq_s8_u8(q7))); //=delta = (((q0-p0)<<2) + (p1-q1) + 4) >> 3;\n    q7 = vandq_u8(q7, q2);\n\n    tmp = vreinterpretq_u8_u32(vdupq_n_u32(*(uint32_t*)pthr));\n    tmp = vzipq_u8(tmp, tmp).val[0];\n    tmp = vzipq_u8(tmp, tmp).val[0];\n    q1  = tmp;\n\n    q1  = vandq_u8(q2, q1);\n\n    q0 = vabdq_u8(q9,  q11); // ap = ABS(p2 - p0);\n    q0 = vcltq_u8(q0,  vdupq_n_u8(beta)); //sp = (ap - beta) >> 31;\n    q4 = vandq_u8(q0,  q2);  // & sp\n    q0 = vabdq_u8(q14, q12); //aq = ABS(q2 - q0);\n    q0 = vcltq_u8(q0,  vdupq_n_u8(beta))  ;//sq = (aq - beta) >> 31;\n    q3 = vandq_u8(q0,  q2);  //  & sq\n\n    q0  = vrhaddq_u8(q11, q12);//((p0+q0+1)>>1)\n    q0  = vhaddq_u8 (q0,  q9 );//((p2 + ((p0+q0+1)>>1))>>1)\n    q5  = vandq_u8  (q1,  q4 );\n    q6  = vqaddq_u8 (q10, q5 );//{p1+thr}\n    q0  = vminq_u8  (q0,  q6 );\n    q6  = vqsubq_u8 (q10, q5 );//{p1-thr}\n    q10 = vmaxq_u8  (q0,  q6 );\n\n    q0  = vrhaddq_u8(q11, q12);// ;((p0+q0+1)>>1)\n    q0  = vhaddq_u8 (q0,  q14);// ;((q2 + ((p0+q0+1)>>1))>>1)\n    q5  = vandq_u8  (q1,  q3 );\n    q6  = vqaddq_u8 (q13, q5 );// ;{q1+thr}\n    q0  = vminq_u8  (q0,  q6 );\n    q6  = vqsubq_u8 (q13, q5 );// ;{q1-thr}\n    q13 = vmaxq_u8  (q0,  q6 );\n\n    q1  = vreinterpretq_u8_s8(vsubq_s8(vreinterpretq_s8_u8(q1), vreinterpretq_s8_u8(q3)));\n    q1  = 
vreinterpretq_u8_s8(vsubq_s8(vreinterpretq_s8_u8(q1), vreinterpretq_s8_u8(q4))); //tC = thr - sp - sq;\n    q1  = vandq_u8(q1, q2);// ; set thr = 0 if str==0\n\n    q6  = veorq_u8(q6, q6);\n    q5  = vreinterpretq_u8_s8(vmaxq_s8(vreinterpretq_s8_u8(q6), vreinterpretq_s8_u8(q7))); //delta > 0\n    q7  = vreinterpretq_u8_s8(vsubq_s8(vreinterpretq_s8_u8(q6), vreinterpretq_s8_u8(q7)));\n    q6  = vreinterpretq_u8_s8(vmaxq_s8(vreinterpretq_s8_u8(q6), vreinterpretq_s8_u8(q7))); //-(delta < 0)\n    q5  =  vminq_u8(q1, q5);\n    q6  =  vminq_u8(q1, q6);\n\n    q11 = vqaddq_u8(q11, q5);\n    q11 = vqsubq_u8(q11, q6);\n    q12 = vqsubq_u8(q12, q5);\n    q12 = vqaddq_u8(q12, q6);\n\n    TR8(q8,   q9 );\n    TR8(q10,  q11);\n    TR8(q12,  q13);\n    TR8(q14,  q15);\n    TR16(q8,  q10);\n    TR16(q9,  q11);\n    TR16(q12, q14);\n    TR16(q13, q15);\n    TR32(q8,  q12);\n    TR32(q9,  q13);\n    TR32(q10, q14);\n    TR32(q11, q15);\n\n    pix -= 8*stride + 4;\n    vst1_u8(pix, vget_low_u8(q8));  pix += stride;\n    vst1_u8(pix, vget_low_u8(q9));  pix += stride;\n    vst1_u8(pix, vget_low_u8(q10)); pix += stride;\n    vst1_u8(pix, vget_low_u8(q11)); pix += stride;\n    vst1_u8(pix, vget_low_u8(q12)); pix += stride;\n    vst1_u8(pix, vget_low_u8(q13)); pix += stride;\n    vst1_u8(pix, vget_low_u8(q14)); pix += stride;\n    vst1_u8(pix, vget_low_u8(q15)); pix += stride;\n\n    vst1_u8(pix, vget_high_u8(q8)); pix += stride;\n    vst1_u8(pix, vget_high_u8(q9)); pix += stride;\n    vst1_u8(pix, vget_high_u8(q10)); pix += stride;\n    vst1_u8(pix, vget_high_u8(q11)); pix += stride;\n    vst1_u8(pix, vget_high_u8(q12)); pix += stride;\n    vst1_u8(pix, vget_high_u8(q13)); pix += stride;\n    vst1_u8(pix, vget_high_u8(q14)); pix += stride;\n    vst1_u8(pix, vget_high_u8(q15)); pix += stride;\n}\n\nstatic void deblock_luma_h_s4_neon(uint8_t *pix, int stride, int alpha, int beta)\n{\n    uint8x16_t q0, q1, q2, q3, q4, q5, q6, q7, q8, q9, q10, q11, q12, q13, q14, q15, vspill0, vspill1;\n    
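deblock_luma_h_s4_neon implements the strong (bS == 4) luma filter; the vhaddq/vrhaddq averaging trees below evaluate the standard weighted means while staying inside 8-bit lanes. A scalar sketch of the p-side outputs (the q-side is symmetric); `strong_filter_p` is a hypothetical name, and the function assumes the enabling conditions |p0-q0| < (alpha>>2)+2 and |p2-p0| < beta already hold:

```c
#include <assert.h>

/* H.264 strong-filter weighted means for the p side of one edge pixel. */
static void strong_filter_p(int p3, int p2, int p1, int p0, int q0, int q1,
                            int out[3] /* p2', p1', p0' */)
{
    out[0] = (2*p3 + 3*p2 + p1 + p0 + q0 + 4) >> 3;   /* p2' */
    out[1] = (p2 + p1 + p0 + q0 + 2) >> 2;            /* p1' */
    out[2] = (p2 + 2*p1 + 2*p0 + 2*q0 + q1 + 4) >> 3; /* p0' */
}
```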
q8  = vld1q_u8(pix - 4*stride);\n    q9  = vld1q_u8(pix - 3*stride);\n    q10 = vld1q_u8(pix - 2*stride);\n    q11 = vld1q_u8(pix - 1*stride);\n    q12 = vld1q_u8(pix);\n    q13 = vld1q_u8(pix + 1*stride);\n    q14 = vld1q_u8(pix + 2*stride);\n    q15 = vld1q_u8(pix + 3*stride);\n    q0  = vabdq_u8(q11, q12);\n    q2  = vcltq_u8(q0, vdupq_n_u8(alpha));\n    q2  = vandq_u8(q2, vcltq_u8(vabdq_u8(q11, q10), vdupq_n_u8(beta)));\n    q2  = vandq_u8(q2, vcltq_u8(vabdq_u8(q12, q13), vdupq_n_u8(beta)));\n    q1  = vandq_u8(q2, vcltq_u8(q0, vdupq_n_u8(((alpha >> 2) + 2))));\n    q0  = vandq_u8(q1, vcltq_u8(vabdq_u8(q9,  q11), vdupq_n_u8(beta)));\n    q3  = vandq_u8(q1, vcltq_u8(vabdq_u8(q14, q12), vdupq_n_u8(beta)));\n    q4 = vhaddq_u8(q9,  q10);\n    q5 = vhaddq_u8(q11, q12);\n    q6 = vsubq_u8(vrhaddq_u8(q9,  q10), q4);\n    q7 = vsubq_u8(vrhaddq_u8(q11, q12), q5);\n    q6 = vhaddq_u8(q6, q7);\n    q7 = vrhaddq_u8(q4, q8);\n    q4 = vhaddq_u8(q4, q8);\n    q7 = vsubq_u8(q7, q4);\n    q6 = vaddq_u8(q6, q7);\n    q7 = vrhaddq_u8(q5, q9);\n    q5 = vhaddq_u8(q5, q9);\n    q7 = vsubq_u8(q7, q5);\n    q6 = vhaddq_u8(q6, q7);\n\n    q7 = vrhaddq_u8(q4, q5);\n    q4 = vhaddq_u8(q4, q5);\n    q7 = vsubq_u8(q7, q4);\n    q6 = vrhaddq_u8(q6, q7);\n    q4 = vaddq_u8(q4, q6);\n    vspill0 =  vbslq_u8(q0, q4, q9);   // VMOV        q6,     q9   VBIT        q6,     q4,     q0\n\n    q4 = vhaddq_u8(q14, q13);\n    q5 = vhaddq_u8(q12, q11);\n    q6 = vsubq_u8(vrhaddq_u8(q14, q13), q4);\n    q7 = vsubq_u8(vrhaddq_u8(q12, q11), q5);\n    q6 = vhaddq_u8(q6, q7);\n    q7 = vrhaddq_u8(q4, q15);\n    q4 = vhaddq_u8(q4, q15);\n    q7 = vsubq_u8(q7, q4);\n    q6 = vaddq_u8(q6, q7);\n    q7 = vrhaddq_u8(q5, q14);\n    q5 = vhaddq_u8(q5, q14);\n    q7 = vsubq_u8(q7, q5);\n    q6 = vhaddq_u8(q6, q7);\n\n    q7 = vrhaddq_u8(q4, q5);\n    q4 = vhaddq_u8(q4, q5);\n    q7 = vsubq_u8(q7, q4);\n    q6 = vrhaddq_u8(q6, q7);\n    q4 = vaddq_u8(q4, q6);\n    vspill1 =  vbslq_u8(q3, q4, q14);   //     VMOV   
     q6,     q14    VBIT        q6,     q4,     q3\n\n    q1 = vhaddq_u8 (q9,  q13);\n    q4 = vrhaddq_u8(q1,  q10);\n    q5 = vrhaddq_u8(q11, q12);\n    q6 = vhaddq_u8 (q1,  q10);\n    q7 = vhaddq_u8 (q11, q12);\n    q4 = vhaddq_u8 (q4,  q5);\n    q6 = vrhaddq_u8(q6,  q7);\n    q1 = vrhaddq_u8(q4,  q6);\n    q4 = vrhaddq_u8(q9,  q10);\n    q5 = vrhaddq_u8(q11, q12);\n    q6 = vhaddq_u8 (q9,  q10);\n    q7 = vhaddq_u8 (q11, q12);\n    q4 = vhaddq_u8 (q4,  q5);\n    q6 = vrhaddq_u8(q6,  q7);\n    q4 = vrhaddq_u8(q4,  q6);\n    q5 = vhaddq_u8 (q11, q13);\n    q5 = vrhaddq_u8(q5,  q10);\n\n    q1 = vbslq_u8(q0, q1, q5); //VBIF        q1,     q5,     q0\n    q0 = vbslq_u8(q0, q4, q10);//VBSL        q0,     q4,     q10\n\n    q7 = vhaddq_u8 (q14, q10);\n    q4 = vrhaddq_u8(q7,  q13);\n    q5 = vrhaddq_u8(q11, q12);\n    q6 = vhaddq_u8 (q7,  q13);\n    q7 = vhaddq_u8 (q11, q12);\n    q4 = vhaddq_u8 (q4,  q5 );\n    q6 = vrhaddq_u8(q6,  q7 );\n    q4 = vrhaddq_u8(q4,  q6 );\n    q6 = vrhaddq_u8(q14, q13);\n    q5 = vrhaddq_u8(q11, q12);\n    q5 = vhaddq_u8 (q6,  q5 );\n    q6 = vhaddq_u8 (q14, q13);\n    q7 = vhaddq_u8 (q11, q12);\n    q6 = vrhaddq_u8(q6,  q7 );\n    q5 = vrhaddq_u8(q5,  q6 );\n    q6 = vhaddq_u8 (q12, q10);\n    q6 = vrhaddq_u8(q6,  q13);\n\n    q4 = vbslq_u8(q3, q4, q6); //    VBIF        q4,     q6,     q3    ;q0\n    q3 = vbslq_u8(q3, q5, q13);//    VBSL        q3,     q5,     q13   ;q1\n\n    q10 = vbslq_u8(q2, q0, q10);\n    q11 = vbslq_u8(q2, q1, q11);\n    q12 = vbslq_u8(q2, q4, q12);\n    q13 = vbslq_u8(q2, q3, q13);\n\n    vst1q_u8(pix - 3*stride, vspill0);\n    vst1q_u8(pix - 2*stride, q10);\n    vst1q_u8(pix - 1*stride, q11);\n    vst1q_u8(pix           , q12);\n    vst1q_u8(pix + 1*stride, q13);\n    vst1q_u8(pix + 2*stride, vspill1);\n\n}\n\nstatic void deblock_luma_v_s4_neon(uint8_t *pix, int stride, int alpha, int beta)\n{\n    uint32x4x2_t tr0;\n    uint16x8x2_t tr1;\n    uint8x16x2_t tr2;\n    uint8x16_t q0, q1, q2, q3, q4, q5, q6, q7, 
q8, q9, q10, q11, q12, q13, q14, q15, vspill0, vspill1;\n    q8 = vcombine_u8(vld1_u8(pix - 4), vld1_u8(pix - 4 + 8*stride)); pix += stride;\n    q9 = vcombine_u8(vld1_u8(pix - 4), vld1_u8(pix - 4 + 8*stride)); pix += stride;\n    q10= vcombine_u8(vld1_u8(pix - 4), vld1_u8(pix - 4 + 8*stride)); pix += stride;\n    q11= vcombine_u8(vld1_u8(pix - 4), vld1_u8(pix - 4 + 8*stride)); pix += stride;\n    q12= vcombine_u8(vld1_u8(pix - 4), vld1_u8(pix - 4 + 8*stride)); pix += stride;\n    q13= vcombine_u8(vld1_u8(pix - 4), vld1_u8(pix - 4 + 8*stride)); pix += stride;\n    q14= vcombine_u8(vld1_u8(pix - 4), vld1_u8(pix - 4 + 8*stride)); pix += stride;\n    q15= vcombine_u8(vld1_u8(pix - 4), vld1_u8(pix - 4 + 8*stride)); pix += stride;\n\n    TR32(q8,  q12);\n    TR32(q9,  q13);\n    TR32(q10, q14);\n    TR32(q11, q15);\n    TR16(q8,  q10);\n    TR16(q9,  q11);\n    TR16(q12, q14);\n    TR16(q13, q15);\n    TR8(q8,   q9 );\n    TR8(q10,  q11);\n    TR8(q12,  q13);\n    TR8(q14,  q15);\n\n    q0 = vabdq_u8(q11, q12);\n    q2 = vcltq_u8(q0, vdupq_n_u8(alpha));\n    q2 = vandq_u8(q2, vcltq_u8(vabdq_u8(q11,    q10), vdupq_n_u8(beta)));\n    q2 = vandq_u8(q2, vcltq_u8(vabdq_u8(q12,    q13), vdupq_n_u8(beta)));\n    q1 = vandq_u8(q2, vcltq_u8(q0, vdupq_n_u8(((alpha >> 2) + 2))));\n    q0 = vandq_u8(q1, vcltq_u8(vabdq_u8(q9,     q11), vdupq_n_u8(beta)));\n    q3 = vandq_u8(q1, vcltq_u8(vabdq_u8(q14,    q12), vdupq_n_u8(beta)));\n    q4 = vhaddq_u8(q9,  q10);\n    q5 = vhaddq_u8(q11, q12);\n    q6 = vsubq_u8(vrhaddq_u8(q9,  q10), q4);\n    q7 = vsubq_u8(vrhaddq_u8(q11, q12), q5);\n    q6 = vhaddq_u8(q6, q7);\n    q7 = vrhaddq_u8(q4, q8);\n    q4 = vhaddq_u8(q4, q8);\n    q7 = vsubq_u8(q7, q4);\n    q6 = vaddq_u8(q6, q7);\n    q7 = vrhaddq_u8(q5, q9);\n    q5 = vhaddq_u8(q5, q9);\n    q7 = vsubq_u8(q7, q5);\n    q6 = vhaddq_u8(q6, q7);\n\n    q7 = vrhaddq_u8(q4, q5);\n    q4 = vhaddq_u8(q4, q5);\n    q7 = vsubq_u8(q7, q4);\n    q6 = vrhaddq_u8(q6, q7);\n    q4 = vaddq_u8(q4, q6);\n   
 vspill0 =  vbslq_u8(q0, q4, q9);   // VMOV        q6,     q9   VBIT        q6,     q4,     q0\n\n    q4 = vhaddq_u8(q14, q13);\n    q5 = vhaddq_u8(q12, q11);\n    q6 = vsubq_u8(vrhaddq_u8(q14, q13), q4);\n    q7 = vsubq_u8(vrhaddq_u8(q12, q11), q5);\n    q6 = vhaddq_u8(q6, q7);\n    q7 = vrhaddq_u8(q4, q15);\n    q4 = vhaddq_u8(q4, q15);\n    q7 = vsubq_u8(q7, q4);\n    q6 = vaddq_u8(q6, q7);\n    q7 = vrhaddq_u8(q5, q14);\n    q5 = vhaddq_u8(q5, q14);\n    q7 = vsubq_u8(q7, q5);\n    q6 = vhaddq_u8(q6, q7);\n\n    q7 = vrhaddq_u8(q4, q5);\n    q4 = vhaddq_u8(q4, q5);\n    q7 = vsubq_u8(q7, q4);\n    q6 = vrhaddq_u8(q6, q7);\n    q4 = vaddq_u8(q4, q6);\n    vspill1 =  vbslq_u8(q3, q4, q14);   //     VMOV        q6,     q14    VBIT        q6,     q4,     q3\n\n    q1 = vhaddq_u8 (q9,  q13);\n    q4 = vrhaddq_u8(q1,  q10);\n    q5 = vrhaddq_u8(q11, q12);\n    q6 = vhaddq_u8 (q1,  q10);\n    q7 = vhaddq_u8 (q11, q12);\n    q4 = vhaddq_u8 (q4,  q5);\n    q6 = vrhaddq_u8(q6,  q7);\n    q1 = vrhaddq_u8(q4,  q6);\n    q4 = vrhaddq_u8(q9,  q10);\n    q5 = vrhaddq_u8(q11, q12);\n    q6 = vhaddq_u8 (q9,  q10);\n    q7 = vhaddq_u8 (q11, q12);\n    q4 = vhaddq_u8 (q4,  q5);\n    q6 = vrhaddq_u8(q6,  q7);\n    q4 = vrhaddq_u8(q4,  q6);\n    q5 = vhaddq_u8 (q11, q13);\n    q5 = vrhaddq_u8(q5,  q10);\n\n    q1 = vbslq_u8(q0, q1, q5); //VBIF        q1,     q5,     q0\n    q0 = vbslq_u8(q0, q4, q10);//VBSL        q0,     q4,     q10\n\n    q7 = vhaddq_u8 (q14, q10);\n    q4 = vrhaddq_u8(q7,  q13);\n    q5 = vrhaddq_u8(q11, q12);\n    q6 = vhaddq_u8 (q7,  q13);\n    q7 = vhaddq_u8 (q11, q12);\n    q4 = vhaddq_u8 (q4,  q5 );\n    q6 = vrhaddq_u8(q6,  q7 );\n    q4 = vrhaddq_u8(q4,  q6 );\n    q6 = vrhaddq_u8(q14, q13);\n    q5 = vrhaddq_u8(q11, q12);\n    q5 = vhaddq_u8 (q6,  q5 );\n    q6 = vhaddq_u8 (q14, q13);\n    q7 = vhaddq_u8 (q11, q12);\n    q6 = vrhaddq_u8(q6,  q7 );\n    q5 = vrhaddq_u8(q5,  q6 );\n    q6 = vhaddq_u8 (q12, q10);\n    q6 = vrhaddq_u8(q6,  q13);\n\n    q4 = 
vbslq_u8(q3, q4, q6); //    VBIF        q4,     q6,     q3    ;q0\n    q3 = vbslq_u8(q3, q5, q13);//    VBSL        q3,     q5,     q13   ;q1\n\n    q10 = vbslq_u8(q2,q0, q10);\n    q11 = vbslq_u8(q2,q1, q11);\n    q12 = vbslq_u8(q2,q4, q12);\n    q13 = vbslq_u8(q2,q3, q13);\n\n    q9 = vspill0;\n    q14 = vspill1;\n\n    TR8(q8,   q9 );\n    TR8(q10,  q11);\n    TR8(q12,  q13);\n    TR8(q14,  q15);\n    TR16(q8,  q10);\n    TR16(q9,  q11);\n    TR16(q12, q14);\n    TR16(q13, q15);\n    TR32(q8,  q12);\n    TR32(q9,  q13);\n    TR32(q10, q14);\n    TR32(q11, q15);\n\n    pix -= 8*stride + 4;\n    vst1_u8(pix, vget_low_u8(q8)); pix += stride;\n    vst1_u8(pix, vget_low_u8(q9)); pix += stride;\n    vst1_u8(pix, vget_low_u8(q10)); pix += stride;\n    vst1_u8(pix, vget_low_u8(q11)); pix += stride;\n    vst1_u8(pix, vget_low_u8(q12)); pix += stride;\n    vst1_u8(pix, vget_low_u8(q13)); pix += stride;\n    vst1_u8(pix, vget_low_u8(q14)); pix += stride;\n    vst1_u8(pix, vget_low_u8(q15)); pix += stride;\n\n    vst1_u8(pix, vget_high_u8(q8)); pix += stride;\n    vst1_u8(pix, vget_high_u8(q9)); pix += stride;\n    vst1_u8(pix, vget_high_u8(q10)); pix += stride;\n    vst1_u8(pix, vget_high_u8(q11)); pix += stride;\n    vst1_u8(pix, vget_high_u8(q12)); pix += stride;\n    vst1_u8(pix, vget_high_u8(q13)); pix += stride;\n    vst1_u8(pix, vget_high_u8(q14)); pix += stride;\n    vst1_u8(pix, vget_high_u8(q15)); pix += stride;\n}\n\nstatic void deblock_luma_h_neon(uint8_t *pix, int stride, int alpha, int beta, const uint8_t *pthr, const uint8_t *pstr)\n{\n    uint8x16_t q0, q1, q2, q3, q4, q5, q6, q7, q9, q10, q11, q12, q13, q14;\n    uint8x16_t tmp;\n\n    q9  = vld1q_u8(pix - 3*stride);\n    q10 = vld1q_u8(pix - 2*stride);\n    q11 = vld1q_u8(pix - 1*stride);\n    q12 = vld1q_u8(pix);\n    q13 = vld1q_u8(pix + 1*stride);\n    q14 = vld1q_u8(pix + 2*stride);\n\n    q1  = vabdq_u8(q11, q12);\n    q2  = vcltq_u8(q1, vdupq_n_u8(alpha));\n    q1  = vcltq_u8(vmaxq_u8(vabdq_u8(q11, 
q10), vabdq_u8(q12, q13)), vdupq_n_u8(beta));\n    q2  = vandq_u8(q2, q1);\n\n    tmp = vreinterpretq_u8_u32(vdupq_n_u32(*(uint32_t*)pstr));\n    tmp = vzipq_u8(tmp, tmp).val[0];\n    tmp = vzipq_u8(tmp, tmp).val[0];\n    q1  = tmp;\n\n    q1  = vcgtq_s8(vreinterpretq_s8_u8(q1), vdupq_n_s8(0));\n    q2  = vandq_u8(q2, q1);\n    q7 = vhsubq_u8(q10, q13);\n    q7 = vreinterpretq_u8_s8(vshrq_n_s8(vreinterpretq_s8_u8(q7), 1));\n    q0 = veorq_u8(q12, q11);\n    q6 = vandq_u8(vdupq_n_u8(1), q0);\n\n    q0 = vhsubq_u8(q12, q11);// ;(q0-p0))>>1\n\n    q7 = vreinterpretq_u8_s8(vrhaddq_s8(vreinterpretq_s8_u8(q7), vreinterpretq_s8_u8(q6))); //((p1-q1)>>2 + carry + 1) >> 1\n    q7 = vreinterpretq_u8_s8(vqaddq_s8(vreinterpretq_s8_u8(q0),  vreinterpretq_s8_u8(q7))); //=delta = (((q0-p0)<<2) + (p1-q1) + 4) >> 3;\n    q7 = vandq_u8(q7, q2);\n\n    tmp = vreinterpretq_u8_u32(vdupq_n_u32(*(uint32_t*)pthr));\n    tmp = vzipq_u8(tmp, tmp).val[0];\n    tmp = vzipq_u8(tmp, tmp).val[0];\n    q1  = tmp;\n\n    q1  = vandq_u8(q2, q1);\n\n    q0 = vabdq_u8(q9,  q11); // ap = ABS(p2 - p0);\n    q0 = vcltq_u8(q0,  vdupq_n_u8(beta)); //sp = (ap - beta) >> 31;\n    q4 = vandq_u8(q0,  q2); // & sp\n    q0 = vabdq_u8(q14, q12);//aq = ABS(q2 - q0);\n    q0 = vcltq_u8(q0,  vdupq_n_u8(beta));//sq = (aq - beta) >> 31;\n    q3 = vandq_u8(q0,  q2); // & sq\n\n    q0  = vrhaddq_u8(q11, q12);//((p0+q0+1)>>1)\n    q0  = vhaddq_u8 (q0,  q9 );//((p2 + ((p0+q0+1)>>1))>>1)\n    q5  = vandq_u8  (q1,  q4 );\n    q6  = vqaddq_u8 (q10, q5 );//{p1+thr}\n    q0  = vminq_u8  (q0,  q6 );\n    q6  = vqsubq_u8 (q10, q5 );//{p1-thr}\n    q10 = vmaxq_u8  (q0,  q6 );\n\n    q0   = vrhaddq_u8(q11, q12);// ;((p0+q0+1)>>1)\n    q0   = vhaddq_u8 (q0,  q14);// ;((q2 + ((p0+q0+1)>>1))>>1)\n    q5   = vandq_u8  (q1,  q3 );\n    q6   = vqaddq_u8 (q13, q5 );// ;{q1+thr}\n    q0   = vminq_u8  (q0,  q6 );\n    q6   = vqsubq_u8 (q13, q5 );// ;{q1-thr}\n    q13  = vmaxq_u8  (q0,  q6 );\n\n    q1  = 
vreinterpretq_u8_s8(vsubq_s8(vreinterpretq_s8_u8(q1), vreinterpretq_s8_u8(q3)));\n    q1  = vreinterpretq_u8_s8(vsubq_s8(vreinterpretq_s8_u8(q1), vreinterpretq_s8_u8(q4))); //tC = thr - sp - sq;\n    q1  = vandq_u8(q1, q2);// ; set thr = 0 if str==0\n\n    q6  = veorq_u8(q6, q6);\n    q5  = vreinterpretq_u8_s8(vmaxq_s8(vreinterpretq_s8_u8(q6), vreinterpretq_s8_u8(q7))); //delta > 0\n    q7  = vreinterpretq_u8_s8(vsubq_s8(vreinterpretq_s8_u8(q6), vreinterpretq_s8_u8(q7)));\n    q6  = vreinterpretq_u8_s8(vmaxq_s8(vreinterpretq_s8_u8(q6), vreinterpretq_s8_u8(q7))); //-(delta < 0)\n    q5  =  vminq_u8(q1, q5);\n    q6  =  vminq_u8(q1, q6);\n\n    q11 = vqaddq_u8(q11, q5);\n    q11 = vqsubq_u8(q11, q6);\n    q12 = vqsubq_u8(q12, q5);\n    q12 = vqaddq_u8(q12, q6);\n\n    vst1q_u8(pix - 2*stride, q10);\n    vst1q_u8(pix - 1*stride, q11);\n    vst1q_u8(pix           , q12);\n    vst1q_u8(pix + 1*stride, q13);\n}\n\nstatic void deblock_chroma_v_neon(uint8_t *pix, int32_t stride, int a, int b, const uint8_t *thr, const uint8_t *str)\n{\n    int32x2_t d16 = vld1_s32((int32_t*)(pix - 2 + 0*stride));\n    int32x2_t d18 = vld1_s32((int32_t*)(pix - 2 + 1*stride));\n    int32x2_t d20 = vld1_s32((int32_t*)(pix - 2 + 2*stride));\n    int32x2_t d22 = vld1_s32((int32_t*)(pix - 2 + 3*stride));\n    int32x2_t d17 = vld1_s32((int32_t*)(pix - 2 + 4*stride));\n    int32x2_t d19 = vld1_s32((int32_t*)(pix - 2 + 5*stride));\n    int32x2_t d21 = vld1_s32((int32_t*)(pix - 2 + 6*stride));\n    int32x2_t d23 = vld1_s32((int32_t*)(pix - 2 + 7*stride));\n    int32x2x2_t tr0 = vtrn_s32(d16, d17);\n    int32x2x2_t tr1 = vtrn_s32(d18, d19);\n    int32x2x2_t tr2 = vtrn_s32(d20, d21);\n    int32x2x2_t tr3 = vtrn_s32(d22, d23);\n    int16x8x2_t tr4 = vtrnq_s16(vreinterpretq_s16_s32(vcombine_s32(tr0.val[0], tr0.val[1])), vreinterpretq_s16_s32(vcombine_s32(tr2.val[0], tr2.val[1])));\n    int16x8x2_t tr5 = vtrnq_s16(vreinterpretq_s16_s32(vcombine_s32(tr1.val[0], tr1.val[1])), 
vreinterpretq_s16_s32(vcombine_s32(tr3.val[0], tr3.val[1])));\n    uint8x16x2_t tr6 = vtrnq_u8(vreinterpretq_u8_s16(tr4.val[0]), vreinterpretq_u8_s16(tr5.val[0]));\n    uint8x16x2_t tr7 = vtrnq_u8(vreinterpretq_u8_s16(tr4.val[1]), vreinterpretq_u8_s16(tr5.val[1]));\n\n{\n    uint8x16_t q8  = tr6.val[0];\n    uint8x16_t q9  = tr6.val[1];\n    uint8x16_t q10 = tr7.val[0];\n    uint8x16_t q11 = tr7.val[1];\n\n    uint8x16_t q1  = vabdq_u8(q9, q10);\n    uint8x16_t q2  = vcltq_u8(q1, vdupq_n_u8(a));\n    uint8x16_t q4  = vmaxq_u8(vabdq_u8(q10, q11), vabdq_u8(q8, q9));\n    uint8x16_t q0;\n    uint8x16_t q3;\n    uint8x16_t q6;\n     int8x16_t q4s;\n     int8x16_t q7;\n    uint8x16_t q7u;\n    uint8x16_t q5;\n    uint8x16_t vstr = vld1q_u8(str);\n    uint8x16_t vthr = vld1q_u8(thr);\n\n    q4 = vcltq_u8(q4, vdupq_n_u8(b));\n    q2 = vandq_u8(q2, q4);\n    q1 = vzipq_u8(vstr, vstr).val[0];\n    q3 = vcgtq_s8(vreinterpretq_s8_u8(q1), vdupq_n_s8(0));\n    q1 = vshrq_n_u8(q1, 2);\n    q1 = vcgtq_s8(vreinterpretq_s8_u8(q1), vdupq_n_s8(0));\n    q2 = vandq_u8(q2, q3);\n\n    q0 = vzipq_u8(vthr, vthr).val[0];\n    q0 = vaddq_u8(q0, vdupq_n_u8(1));\n    q0 = vandq_u8(q0, q2);\n\n    q7 = vshrq_n_s8(vreinterpretq_s8_u8(vhsubq_u8(q8, q11)), 1);\n    q6 = vandq_u8(vdupq_n_u8(1), veorq_u8(q10, q9));\n    q4 = vhsubq_u8(q10, q9);\n    q7 = vrhaddq_s8(q7, vreinterpretq_s8_u8(q6));\n    q7 = vqaddq_s8(vreinterpretq_s8_u8(q4), q7);\n\n    q4s = vdupq_n_s8(0);\n    q5 = vreinterpretq_u8_s8(vmaxq_s8(q4s,               q7));\n    q4 = vreinterpretq_u8_s8(vmaxq_s8(q4s, vsubq_s8(q4s, q7)));\n    q5 = vminq_u8(q0, q5);\n    q4 = vminq_u8(q0, q4);\n\n    q0 = vqaddq_u8(q9,  q5);\n    q0 = vqsubq_u8(q0,  q4);\n    q3 = vqsubq_u8(q10, q5);\n    q3 = vqaddq_u8(q3,  q4);\n\n    q6  = vrhaddq_u8(vhaddq_u8(q9, q11), q8);\n    q7u = vrhaddq_u8(vhaddq_u8(q8, q10), q11);\n\n    q0 = vbslq_u8(q1,  q6, q0 );\n    q3 = vbslq_u8(q1, q7u, q3 );\n    q9 = vbslq_u8(q2,  q0, q9 );\n    q10= vbslq_u8(q2,  q3, 
q10);\n\n    tr6 = vtrnq_u8(q8,  q9);\n    tr7 = vtrnq_u8(q10, q11);\n\n    tr4 = vtrnq_s16(vreinterpretq_s16_u8(tr6.val[0]), vreinterpretq_s16_u8(tr7.val[0]));\n    tr5 = vtrnq_s16(vreinterpretq_s16_u8(tr6.val[1]), vreinterpretq_s16_u8(tr7.val[1]));\n\n    tr0 = vtrn_s32(vget_low_s32(vreinterpretq_s32_s16(tr4.val[0])), vget_high_s32(vreinterpretq_s32_s16(tr4.val[0])));\n    tr1 = vtrn_s32(vget_low_s32(vreinterpretq_s32_s16(tr5.val[0])), vget_high_s32(vreinterpretq_s32_s16(tr5.val[0])));\n    tr2 = vtrn_s32(vget_low_s32(vreinterpretq_s32_s16(tr4.val[1])), vget_high_s32(vreinterpretq_s32_s16(tr4.val[1])));\n    tr3 = vtrn_s32(vget_low_s32(vreinterpretq_s32_s16(tr5.val[1])), vget_high_s32(vreinterpretq_s32_s16(tr5.val[1])));\n\n#if 0\n    // unaligned store fools Android NDK 15 optimizer\n    *(int32_t*)(uint8_t*)(pix - 2 + 0*stride) = vget_lane_s32(tr0.val[0], 0);\n    *(int32_t*)(uint8_t*)(pix - 2 + 1*stride) = vget_lane_s32(tr1.val[0], 0);\n    *(int32_t*)(uint8_t*)(pix - 2 + 2*stride) = vget_lane_s32(tr2.val[0], 0);\n    *(int32_t*)(uint8_t*)(pix - 2 + 3*stride) = vget_lane_s32(tr3.val[0], 0);\n    *(int32_t*)(uint8_t*)(pix - 2 + 4*stride) = vget_lane_s32(tr0.val[1], 0);\n    *(int32_t*)(uint8_t*)(pix - 2 + 5*stride) = vget_lane_s32(tr1.val[1], 0);\n    *(int32_t*)(uint8_t*)(pix - 2 + 6*stride) = vget_lane_s32(tr2.val[1], 0);\n    *(int32_t*)(uint8_t*)(pix - 2 + 7*stride) = vget_lane_s32(tr3.val[1], 0);\n#else\n    vst1_lane_s16((int16_t*)(pix - 2 + 0*stride),     vreinterpret_s16_s32(tr0.val[0]), 0);\n    vst1_lane_s16((int16_t*)(pix - 2 + 0*stride) + 1, vreinterpret_s16_s32(tr0.val[0]), 1);\n    vst1_lane_s16((int16_t*)(pix - 2 + 1*stride),     vreinterpret_s16_s32(tr1.val[0]), 0);\n    vst1_lane_s16((int16_t*)(pix - 2 + 1*stride) + 1, vreinterpret_s16_s32(tr1.val[0]), 1);\n    vst1_lane_s16((int16_t*)(pix - 2 + 2*stride),     vreinterpret_s16_s32(tr2.val[0]), 0);\n    vst1_lane_s16((int16_t*)(pix - 2 + 2*stride) + 1, vreinterpret_s16_s32(tr2.val[0]), 1);\n    
vst1_lane_s16((int16_t*)(pix - 2 + 3*stride),     vreinterpret_s16_s32(tr3.val[0]), 0);\n    vst1_lane_s16((int16_t*)(pix - 2 + 3*stride) + 1, vreinterpret_s16_s32(tr3.val[0]), 1);\n    vst1_lane_s16((int16_t*)(pix - 2 + 4*stride),     vreinterpret_s16_s32(tr0.val[1]), 0);\n    vst1_lane_s16((int16_t*)(pix - 2 + 4*stride) + 1, vreinterpret_s16_s32(tr0.val[1]), 1);\n    vst1_lane_s16((int16_t*)(pix - 2 + 5*stride),     vreinterpret_s16_s32(tr1.val[1]), 0);\n    vst1_lane_s16((int16_t*)(pix - 2 + 5*stride) + 1, vreinterpret_s16_s32(tr1.val[1]), 1);\n    vst1_lane_s16((int16_t*)(pix - 2 + 6*stride),     vreinterpret_s16_s32(tr2.val[1]), 0);\n    vst1_lane_s16((int16_t*)(pix - 2 + 6*stride) + 1, vreinterpret_s16_s32(tr2.val[1]), 1);\n    vst1_lane_s16((int16_t*)(pix - 2 + 7*stride),     vreinterpret_s16_s32(tr3.val[1]), 0);\n    vst1_lane_s16((int16_t*)(pix - 2 + 7*stride) + 1, vreinterpret_s16_s32(tr3.val[1]), 1);\n#endif\n}\n}\n\nstatic void deblock_chroma_h_neon(uint8_t *pix, int32_t stride, int a, int b, const uint8_t *thr, const uint8_t *str)\n{\n    uint8x16_t q0;\n    uint8x16_t q8  = vld1q_u8(pix - 2*stride);\n    uint8x16_t q9  = vld1q_u8(pix - 1*stride);\n    uint8x16_t q10 = vld1q_u8(pix);\n    uint8x16_t q11 = vld1q_u8(pix + stride);\n    uint8x16_t q1  = vabdq_u8(q9, q10);\n    uint8x16_t q2  = vcltq_u8(q1, vdupq_n_u8(a));\n    uint8x16_t q4  = vmaxq_u8(vabdq_u8(q10, q11), vabdq_u8(q8, q9));\n    uint8x16_t q3;\n    uint8x16_t q6;\n     int8x16_t q4s;\n     int8x16_t q7;\n    uint8x16_t q7u;\n    uint8x16_t q5;\n    uint8x16_t vstr = vld1q_u8(str);\n    uint8x16_t vthr = vld1q_u8(thr);\n\n    q4 = vcltq_u8(q4, vdupq_n_u8(b));\n    q2 = vandq_u8(q2, q4);\n    q1 = vzipq_u8(vstr, vstr).val[0];\n    q3 = vcgtq_s8(vreinterpretq_s8_u8(q1), vdupq_n_s8(0));\n    q1 = vshrq_n_u8(q1, 2);\n    q1 = vcgtq_s8(vreinterpretq_s8_u8(q1), vdupq_n_s8(0));\n    q2 = vandq_u8(q2, q3);\n\n    q0 = vzipq_u8(vthr, vthr).val[0];\n    q0 = vaddq_u8(q0, vdupq_n_u8(1));\n    q0 = 
vandq_u8(q0, q2);\n\n    q7 = vshrq_n_s8(vreinterpretq_s8_u8(vhsubq_u8(q8, q11)), 1);\n    q6 = vandq_u8(vdupq_n_u8(1), veorq_u8(q10, q9));\n    q4 = vhsubq_u8(q10, q9);\n    q7 = vrhaddq_s8(q7, vreinterpretq_s8_u8(q6));\n    q7 = vqaddq_s8(vreinterpretq_s8_u8(q4), q7);\n\n    q4s = vdupq_n_s8(0);\n    q5 = vreinterpretq_u8_s8(vmaxq_s8(q4s,               q7));\n    q4 = vreinterpretq_u8_s8(vmaxq_s8(q4s, vsubq_s8(q4s, q7)));\n    q5 = vminq_u8(q0, q5);\n    q4 = vminq_u8(q0, q4);\n\n    q0 = vqaddq_u8(q9,  q5);\n    q0 = vqsubq_u8(q0,  q4);\n    q3 = vqsubq_u8(q10, q5);\n    q3 = vqaddq_u8(q3,  q4);\n\n    q6  = vrhaddq_u8(vhaddq_u8(q9, q11), q8);\n    q7u = vrhaddq_u8(vhaddq_u8(q8, q10), q11);\n\n    q0 = vbslq_u8(q1,  q6, q0 );\n    q3 = vbslq_u8(q1, q7u, q3 );\n    q9 = vbslq_u8(q2,  q0, q9 );\n    q10= vbslq_u8(q2,  q3, q10);\n\n    vst1_u8(pix - stride, vget_low_u8(q9));\n    vst1_u8(pix,          vget_low_u8(q10));\n}\n\nstatic void h264e_deblock_chroma_neon(uint8_t *pix, int32_t stride, const deblock_params_t *par)\n{\n    const uint8_t *alpha = par->alpha;\n    const uint8_t *beta = par->beta;\n    const uint8_t *thr = par->tc0;\n    const uint8_t *strength = (uint8_t *)par->strength32;\n    int a, b, x, y;\n    a = alpha[0];\n    b = beta[0];\n    for (x = 0; x < 16; x += 8)\n    {\n        uint32_t str = *(uint32_t*)&strength[x];\n        if (str && a)\n        {\n            deblock_chroma_v_neon(pix + (x >> 1), stride, a, b, thr + x, strength + x);\n        }\n        a = alpha[1];\n        b = beta[1];\n    }\n    thr += 16;\n    strength += 16;\n    a = alpha[2];\n    b = beta[2];\n    for (y = 0; y < 16; y += 8)\n    {\n        uint32_t str = *(uint32_t*)&strength[y];\n        if (str && a)\n        {\n            deblock_chroma_h_neon(pix, stride, a, b, thr + y, strength + y);\n        }\n        pix += 4*stride;\n        a = alpha[3];\n        b = beta[3];\n    }\n}\n\nstatic void h264e_deblock_luma_neon(uint8_t *pix, int32_t stride, const 
deblock_params_t *par)\n{\n    const uint8_t *alpha = par->alpha;\n    const uint8_t *beta = par->beta;\n    const uint8_t *thr = par->tc0;\n    const uint8_t *strength = (uint8_t *)par->strength32;\n    int a = alpha[0];\n    int b = beta[0];\n    int x, y;\n    for (x = 0; x < 16; x += 4)\n    {\n        uint32_t str = *(uint32_t*)&strength[x];\n        if ((uint8_t)str == 4)\n        {\n            deblock_luma_v_s4_neon(pix + x, stride, a, b);\n        } else if (str && a)\n        {\n            deblock_luma_v_neon(pix + x, stride, a, b, thr + x, strength + x);\n        }\n        a = alpha[1];\n        b = beta[1];\n    }\n    a = alpha[2];\n    b = beta[2];\n    thr += 16;\n    strength += 16;\n    for (y = 0; y < 16; y += 4)\n    {\n        uint32_t str = *(uint32_t*)&strength[y];\n        if ((uint8_t)str == 4)\n        {\n            deblock_luma_h_s4_neon(pix, stride, a, b);\n        } else if (str && a)\n        {\n            deblock_luma_h_neon(pix, stride, a, b, thr + y, strength + y);\n        }\n        a = alpha[3];\n        b = beta[3];\n        pix += 4*stride;\n    }\n}\n\nstatic void h264e_denoise_run_neon(unsigned char *frm, unsigned char *frmprev, int w, int h_arg, int stride_frm, int stride_frmprev)\n{\n    int cloop, h = h_arg;\n    if (w <= 2 || h <= 2)\n    {\n        return;\n    }\n    w -= 2;\n    h -= 2;\n\n    do\n    {\n        unsigned char *pf = frm += stride_frm;\n        unsigned char *pp = frmprev += stride_frmprev;\n        cloop = w;\n        pp[-stride_frmprev] = *pf++;\n        pp++;\n\n        for (;cloop >= 8; cloop -= 8, pf += 8, pp += 8)\n        {\n            uint16x8_t vp0w;\n            uint32x4_t vpr0;\n            uint32x4_t vpr1;\n            uint16x8_t vf0w;\n            int16x8_t vcls, vt, vcl, vgn, vgd;\n            uint16x8_t vg;\n            uint8x8_t vf0 = vld1_u8(pf);\n            uint8x8_t vft = vld1_u8(pf - stride_frm);\n            uint8x8_t vfb = vld1_u8(pf + stride_frm);\n            uint8x8_t vfl = 
vld1_u8(pf - 1);\n            uint8x8_t vfr = vld1_u8(pf + 1);\n            uint8x8_t vp0 = vld1_u8(pp);\n            uint8x8_t vpt = vld1_u8(pp - stride_frmprev);\n            uint8x8_t vpb = vld1_u8(pp + stride_frmprev);\n            uint8x8_t vpl = vld1_u8(pp - 1);\n            uint8x8_t vpr = vld1_u8(pp + 1);\n            uint16x8_t vd  = vabdl_u8(vf0, vp0);\n            uint16x8_t vfs = vaddw_u8(vaddw_u8(vaddl_u8(vft, vfb), vfl), vfr);\n            uint16x8_t vps = vaddw_u8(vaddw_u8(vaddl_u8(vpt, vpb), vpl), vpr);\n            uint16x8_t vneighbourhood = vshrq_n_u16(vabdq_u16(vfs, vps), 2);\n\n            vt = vaddq_s16(vreinterpretq_s16_u16(vd), vdupq_n_s16(1));\n\n            vt = vqshlq_n_s16(vt, 7);\n            vcls = vclsq_s16(vt);\n            vt = vshlq_s16(vt, vcls);\n            vt = vqdmulhq_s16(vt,vt);                             // 1\n\n            vcl = vclsq_s16(vt);\n            vt = vshlq_s16(vt, vcl);\n            vcls = vaddq_s16(vaddq_s16(vcls, vcls), vcl);\n            vt = vqdmulhq_s16(vt,vt);                             // 2\n            vcl = vclsq_s16(vt);\n            vt = vshlq_s16(vt, vcl);\n            vcls = vaddq_s16(vaddq_s16(vcls, vcls), vcl);\n            vt = vqdmulhq_s16(vt,vt);                             // 3\n            vcl = vclsq_s16(vt);\n            vt = vshlq_s16(vt, vcl);\n            vcls = vaddq_s16(vaddq_s16(vcls, vcls), vcl);\n            vt = vqdmulhq_s16(vt,vt);                             // 4\n            vcl = vclsq_s16(vt);\n            // vt = vshlq_s16(vt, vcl);\n            vcls = vaddq_s16(vaddq_s16(vcls, vcls), vcl);\n\n            vgd = vsubq_s16(vdupq_n_s16(127), vcls);\n\n            // same as above \n            vt = vaddq_s16(vreinterpretq_s16_u16(vneighbourhood), vdupq_n_s16(1));\n            \n            vt = vqshlq_n_s16(vt, 7);\n            vcls = vclsq_s16(vt);\n            vt = vshlq_s16(vt, vcls);\n            vt = vqdmulhq_s16(vt,vt);                             // 1\n            vcl = 
vclsq_s16(vt);\n            vt = vshlq_s16(vt, vcl);\n            vcls = vaddq_s16(vaddq_s16(vcls, vcls), vcl);\n            vt = vqdmulhq_s16(vt,vt);                             // 2\n            vcl = vclsq_s16(vt);\n            vt = vshlq_s16(vt, vcl);\n            vcls = vaddq_s16(vaddq_s16(vcls, vcls), vcl);\n            vt = vqdmulhq_s16(vt,vt);                             // 3\n            vcl = vclsq_s16(vt);\n            vt = vshlq_s16(vt, vcl);\n            vcls = vaddq_s16(vaddq_s16(vcls, vcls), vcl);\n            vt = vqdmulhq_s16(vt,vt);                             // 4\n            vcl = vclsq_s16(vt);\n            // vt = vshlq_s16(vt, vcl);\n            vcls = vaddq_s16(vaddq_s16(vcls, vcls), vcl);\n\n            vgn = vsubq_s16(vdupq_n_s16(127), vcls);\n\n            vgn = vreinterpretq_s16_u16(vshrq_n_u16(vqshlq_n_u16(vreinterpretq_u16_s16(vgn), 10), 8));            // <<=2, saturated\n\n            vgd = vsubq_s16(vdupq_n_s16(255), vgd);\n            vgn = vsubq_s16(vdupq_n_s16(255), vgn);\n\n            //vst1_u8(pp - stride_frmprev, vreinterpret_u8_s8(vmovn_s16(vgn)));\n            //vst1_u8(pp - stride_frmprev, vreinterpret_u8_s8(vmovn_s16(vreinterpretq_s16_u16(vneighbourhood))));\n            //vst1_u8(pp - stride_frmprev, vp0);\n\n            vg  = vmulq_u16(vreinterpretq_u16_s16(vgn), vreinterpretq_u16_s16(vgd));\n\n            vp0w = vmovl_u8(vp0);\n            vpr0 = vmull_u16(vget_low_u16(vp0w), vget_low_u16(vg));\n            vpr1 = vmull_u16(vget_high_u16(vp0w), vget_high_u16(vg));\n            vg = vreinterpretq_u16_s16(vsubq_s16(vreinterpretq_s16_u8(vdupq_n_u8(255)), vreinterpretq_s16_u16(vg)));\n\n            vf0w = vmovl_u8(vf0);\n            vpr0 = vmlal_u16(vpr0, vget_low_u16(vf0w), vget_low_u16(vg));\n            vpr1 = vmlal_u16(vpr1, vget_high_u16(vf0w), vget_high_u16(vg));\n\n            vst1_u8(pp - stride_frmprev, vmovn_u16(vcombine_u16(vrshrn_n_u32(vpr0, 16), vrshrn_n_u32(vpr1, 16))));\n        }                    \n\n    
    while (cloop--)\n        {\n            int d, neighbourhood;\n            unsigned g, gd, gn, out_val;\n            d = pf[0] - pp[0];\n            neighbourhood  = pf[-1] - pp[-1];\n            neighbourhood += pf[+1] - pp[+1];\n            neighbourhood += pf[-stride_frm] - pp[-stride_frmprev];\n            neighbourhood += pf[+stride_frm] - pp[+stride_frmprev];\n\n            if (d < 0) \n            {\n                d = -d;\n            }\n            if (neighbourhood < 0) \n            {\n                neighbourhood = -neighbourhood;\n            }\n            neighbourhood >>= 2;\n\n            gd = g_diff_to_gainQ8[d];\n            gn = g_diff_to_gainQ8[neighbourhood];\n\n            gn <<= 2;\n            if (gn > 255) \n            {\n                gn = 255;\n            }\n\n            gn = 255 - gn;\n            gd = 255 - gd;\n            g = gn*gd;  // Q8*Q8 = Q16;\n\n            //out_val = ((pp[0]*g ) >> 16) + (((0xffff-g)*pf[0] ) >> 16);\n            //out_val = ((pp[0]*g + (1<<15)) >> 16) + (((0xffff-g)*pf[0]  + (1<<15)) >> 16);\n            out_val = (pp[0]*g + (0xffff - g)*pf[0]  + (1 << 15)) >> 16;\n            \n            assert(out_val <= 255);\n            \n            pp[-stride_frmprev] = (unsigned char)out_val;\n            //pp[-stride_frmprev] = gn;\n            //pp[-stride_frmprev] = neighbourhood;\n            //pp[-stride_frmprev] = pp[0];\n\n            pf++, pp++;\n        } \n\n        pp[-stride_frmprev] = *pf++;\n    } while(--h);\n\n    memcpy(frmprev + stride_frmprev, frm + stride_frm, w + 2);\n    h = h_arg - 2;\n    do\n    {\n        memcpy(frmprev, frmprev - stride_frmprev, w + 2);\n        frmprev -= stride_frmprev;\n    } while(--h);\n    memcpy(frmprev, frm - stride_frm*(h_arg - 2), w + 2);\n}\n\n#undef IS_NULL\n#define IS_NULL(p) ((p) < (pix_t *)32)\n\nstatic uint32_t intra_predict_dc4_neon(const pix_t *left, const pix_t *top)\n{\n    unsigned dc = 0, side = 4, round = 0;\n    uint32x2_t s = 
vdup_n_u32(0);\n\n    if (!IS_NULL(left))\n    {\n        s = vpaddl_u16(vpaddl_u8(vld1_u8(left)));\n        round += side >> 1;\n    }\n    if (!IS_NULL(top))\n    {\n        s = vadd_u32(s, vpaddl_u16(vpaddl_u8(vld1_u8(top))));\n        round += side >> 1;\n    }\n    dc = vget_lane_u32(s, 0);\n\n    dc += round;\n    if (round == side) dc >>= 1;\n    dc >>= 2;\n    if (!round) dc = 128;\n    return dc * 0x01010101;\n}\n\nstatic uint8x16_t intra_predict_dc16_neon(const pix_t *left, const pix_t *top)\n{\n    unsigned dc = 0, side = 16, round = 0;\n\n    if (!IS_NULL(left))\n    {\n        uint8x16_t v = vld1q_u8(left);\n        uint64x2_t s = vpaddlq_u32(vpaddlq_u16(vpaddlq_u8(v)));\n        uint64x1_t q = vadd_u64(vget_high_u64(s), vget_low_u64(s));\n        dc += vget_lane_u32(vreinterpret_u32_u64(q), 0);\n        round += side >> 1;\n    }\n    if (!IS_NULL(top))\n    {\n        uint8x16_t v = vld1q_u8(top);\n        uint64x2_t s = vpaddlq_u32(vpaddlq_u16(vpaddlq_u8(v)));\n        uint64x1_t q = vadd_u64(vget_high_u64(s), vget_low_u64(s));\n        dc += vget_lane_u32(vreinterpret_u32_u64(q), 0);\n        round += side >> 1;\n    }\n    dc += round;\n    if (round == side) dc >>= 1;\n    dc >>= 4;\n    if (!round) dc = 128;\n    return vdupq_n_u8(dc);\n}\n\n/*\n * Note: To make the code more readable we refer to the neighboring pixels\n * in macros named as below:\n *\n *    UL U0 U1 U2 U3 U4 U5 U6 U7\n *    L0 xx xx xx xx\n *    L1 xx xx xx xx\n *    L2 xx xx xx xx\n *    L3 xx xx xx xx\n */\n#define UL edge[-1]\n#define U0 edge[0]\n#define U1 edge[1]\n#define U2 edge[2]\n#define U3 edge[3]\n#define U4 edge[4]\n#define U5 edge[5]\n#define U6 edge[6]\n#define U7 edge[7]\n#define L0 edge[-2]\n#define L1 edge[-3]\n#define L2 edge[-4]\n#define L3 edge[-5]\n\nstatic void h264e_intra_predict_16x16_neon(pix_t *predict, const pix_t *left, const pix_t *top, int mode)\n{\n    int cloop = 4;\n    uint32_t *d = (uint32_t*)predict;\n    uint32x4_t v;\n    
assert(IS_ALIGNED(predict, 4));\n    assert(IS_ALIGNED(top, 4));\n    if (mode != 1)\n    {\n        if (mode < 1)\n        {\n            v = vld1q_u32((uint32_t*)top);\n        } else //(mode == 2)\n        {\n            v = vreinterpretq_u32_u8(intra_predict_dc16_neon(left, top));\n        }\n        do\n        {\n            vst1q_u32(d, v); d += 4;\n            vst1q_u32(d, v); d += 4;\n            vst1q_u32(d, v); d += 4;\n            vst1q_u32(d, v); d += 4;\n        } while (--cloop);\n    } else //if (mode == 1)\n    {\n        do\n        {\n            vst1q_u8((uint8_t*)d, vdupq_n_u8(*left++)); d += 4;\n            vst1q_u8((uint8_t*)d, vdupq_n_u8(*left++)); d += 4;\n            vst1q_u8((uint8_t*)d, vdupq_n_u8(*left++)); d += 4;\n            vst1q_u8((uint8_t*)d, vdupq_n_u8(*left++)); d += 4;\n        } while (--cloop);\n    }\n}\n\nstatic void h264e_intra_predict_chroma_neon(pix_t *predict, const pix_t *left, const pix_t *top, int mode)\n{\n    int cloop = 8;\n    uint32_t *d = (uint32_t*)predict;\n    uint32x4_t v;\n    assert(IS_ALIGNED(predict, 4));\n    assert(IS_ALIGNED(top, 4));\n    if (mode < 1)\n    {\n        v = vld1q_u32((uint32_t*)top);\n        vst1q_u32(d, v); d += 4;\n        vst1q_u32(d, v); d += 4;\n        vst1q_u32(d, v); d += 4;\n        vst1q_u32(d, v); d += 4;\n        vst1q_u32(d, v); d += 4;\n        vst1q_u32(d, v); d += 4;\n        vst1q_u32(d, v); d += 4;\n        vst1q_u32(d, v); d += 4;\n    } else if (mode == 1)\n    {\n        do \n        {\n            v = vreinterpretq_u32_u8(vcombine_u8(vdup_n_u8(left[0]), vdup_n_u8(left[8])));\n            vst1q_u32(d, v); d += 4;\n            left++;\n        } while(--cloop);\n    } else //if (mode == 2)\n    {\n        int ccloop = 2;\n        cloop = 2;\n        do\n        {\n            d[0] = d[1] = d[16] = intra_predict_dc4_neon(left, top);\n            d[17] = intra_predict_dc4_neon(left + 4, top + 4);\n            if (!IS_NULL(top))\n            {\n                d[1] 
= intra_predict_dc4_neon(NULL, top + 4);\n            }\n            if (!IS_NULL(left))\n            {\n                d[16] = intra_predict_dc4_neon(NULL, left + 4);\n            }\n            d += 2;\n            left += 8;\n            top += 8;\n        } while(--cloop);\n\n        do\n        {\n            v = vld1q_u32(d - 4);\n            vst1q_u32(d, v); d += 4;\n            vst1q_u32(d, v); d += 4;\n            vst1q_u32(d, v); d += 4;\n            d += 4;\n        } while(--ccloop);\n    }\n}\n\nstatic __inline int vsad_neon(uint8x16_t a, uint8x16_t b)\n{\n    uint64x2_t s = vpaddlq_u32(vpaddlq_u16(vpaddlq_u8(vabdq_u8(a, b))));\n    uint64x1_t q = vadd_u64(vget_high_u64(s), vget_low_u64(s));\n    return vget_lane_u32(vreinterpret_u32_u64(q), 0);\n}\n\nstatic int h264e_intra_choose_4x4_neon(const pix_t *blockin, pix_t *blockpred, int avail, const pix_t *edge, int mpred, int penalty)\n{\n    int sad, best_sad, best_m = 2;\n\n    uint32_t r0, r1, r2, r3;\n    uint8x16_t vx, vt, vr, vpred, v1, v2, v8, v9, q1, q2, q10, q11, q12;\n    uint8x8_t d2, d3;\n\n    r0 = ((uint32_t *)blockin)[ 0];\n    r1 = ((uint32_t *)blockin)[ 4];\n    r2 = ((uint32_t *)blockin)[ 8];\n    r3 = ((uint32_t *)blockin)[12];\n    vr = vcombine_u8(vcreate_u8(((uint64_t)r1 << 32) | r0), vcreate_u8(((uint64_t)r3 << 32) | r2));\n\n#define VTEST(mode) sad = vsad_neon(vr,vx);    \\\n            if (mode != mpred) sad += penalty; \\\n            if (sad < best_sad)                \\\n            {                                  \\\n                vpred = vx;                    \\\n                best_sad = sad;                \\\n                best_m = mode;                 \\\n            }\n\n    // DC\n    vx = vdupq_n_u8(intra_predict_dc4_neon((avail & AVAIL_L) ? &L3 : 0, (avail & AVAIL_T) ? 
&U0 : 0));\n\n    best_sad = vsad_neon(vx, vr);\n    if (2 != mpred) \n    {   \n        best_sad += penalty;\n    }\n    vpred = vx;\n\n    vt = vld1q_u8(&L3);\n    vt = vreinterpretq_u8_u32(vsetq_lane_u32(U7*0x01010101, vreinterpretq_u32_u8(vt), 3));\n    if (avail & AVAIL_T)\n    {\n        uint32x2_t t2;\n        if (!(avail & AVAIL_TR))\n        {\n            vt = vcombine_u8(vget_low_u8(vt), vdup_n_u8(U3));\n        }\n\n        vx =  vreinterpretq_u8_u32(vdupq_n_u32(*(uint32_t*)&U0));\n        VTEST(0);\n\n        vx = vt;\n        vx = vrhaddq_u8(vhaddq_u8(vextq_u8(vx, vx, 5), vextq_u8(vx, vx, 7)), vextq_u8(vx, vx, 6));\n\n        v1 = vextq_u8(vx, vx, 1);\n        d2 = vext_u8(vget_low_u8(vx), vget_low_u8(vx), 2);\n        d3 = vext_u8(vget_low_u8(vx), vget_low_u8(vx), 3);\n        vx = vreinterpretq_u8_u32(vcombine_u32(\n            t2 = vzip_u32(vreinterpret_u32_u8(vget_low_u8(vx)), vreinterpret_u32_u8(vget_low_u8(v1))).val[0], \n            vzip_u32(vreinterpret_u32_u8(d2), vreinterpret_u32_u8(d3)).val[0]));\n        VTEST(3);\n\n        vx = vt;\n        vx = vrhaddq_u8(vextq_u8(vt, vt, 5), vextq_u8(vt, vt, 6));\n        vx = vreinterpretq_u8_u32(vzipq_u32(vreinterpretq_u32_u8(vx), vreinterpretq_u32_u8(vextq_u8(vx, vx, 1))).val[0]);\n        vx = vreinterpretq_u8_u32(vzipq_u32(vreinterpretq_u32_u8(vx), \n        vreinterpretq_u32_u8(vcombine_u8(vreinterpret_u8_u32(t2), vget_high_u8(vextq_u8(vt, vt, 7))))).val[0]);\n\n        VTEST(7);\n    }\n\n    if (avail & AVAIL_L)\n    {\n        vx = vrev32q_u8(vt);\n        vx = vzipq_u8(vx, vx).val[0];\n        vx = vzipq_u8(vx, vx).val[0];\n        VTEST(1);\n\n        v2 = vrev32q_u8(vt);\n        v8 = vrev32q_u8(vt);\n        vx = vrev32q_u8(vt);\n        v8 = vzipq_u8(vx, vx).val[0];\n        {\n            int tmp = vgetq_lane_u16(vreinterpretq_u16_u8(v8), 3);\n            v2 = vreinterpretq_u8_u16(vsetq_lane_u16(tmp, vreinterpretq_u16_u8(v2), 2));\n            v8 = 
vreinterpretq_u8_u16(vsetq_lane_u16(tmp, vreinterpretq_u16_u8(v8), 4));\n            v9 = vextq_u8(v2, v2, 14);\n            v9 = vzipq_u8(v9, vhaddq_u8(v9, v2)).val[0]; \n            v9 = vrhaddq_u8(v9, vextq_u8(v8, v8, 14));\n            tmp |= tmp << 16;\n            vx = vreinterpretq_u8_u32(vzipq_u32(vreinterpretq_u32_u8(vextq_u8(v9, v9, 4)),\n                                                vreinterpretq_u32_u8(vextq_u8(v9, v9, 6))).val[0]);\n            vx = vreinterpretq_u8_u32(vsetq_lane_u32(tmp, vreinterpretq_u32_u8(vx), 3));\n        }\n        VTEST(8);\n    }\n\n    if ((avail & (AVAIL_T | AVAIL_L | AVAIL_TL)) == (AVAIL_T | AVAIL_L | AVAIL_TL))\n    {\n        uint32x2x2_t pair;\n        uint8x8_t d4, d6;\n        int lr;\n        q11 = q2 = vrhaddq_u8(vhaddq_u8(vt, vextq_u8(vt, vt, 2)), q10 = vextq_u8(vt, vt, 1));\n        d4 = vget_low_u8(q2);\n        d6 = vreinterpret_u8_u32(vzip_u32(vreinterpret_u32_u8(vext_u8(d4, d4, 3)), vreinterpret_u32_u8(vext_u8(d4, d4, 1))).val[0]);\n        d4 = vreinterpret_u8_u32(vzip_u32(vreinterpret_u32_u8(vext_u8(d4, d4, 2)), vreinterpret_u32_u8(d4)).val[0]);\n        pair = vzip_u32(vreinterpret_u32_u8(d6), vreinterpret_u32_u8(d4));\n        vx = vcombine_u8(vreinterpret_u8_u32(pair.val[0]), vreinterpret_u8_u32(pair.val[1]));\n        VTEST(4);\n\n        vx  = q12 = vrhaddq_u8(vt, q10);\n        q1  = vzipq_u8(vx, q11).val[0];\n        q1  = vreinterpretq_u8_u32(vzipq_u32(vreinterpretq_u32_u8(q1), vreinterpretq_u32_u8(vextq_u8(q1, q1, 2))).val[0]);\n        q1  = vreinterpretq_u8_u32(vrev64q_u32(vreinterpretq_u32_u8(q1)));\n        vx  = vcombine_u8(vget_high_u8(q1), vget_low_u8(q1));\n        vx = vreinterpretq_u8_u16(\n            vsetq_lane_u16(vgetq_lane_u16(vreinterpretq_u16_u8(q11), 2), vreinterpretq_u16_u8(vx), 1));\n        VTEST(6);\n\n        q11 = vextq_u8(q11, q11, 1);\n        q1  = vextq_u8(q12, q12, 4);\n        q2  = vextq_u8(q11, q11, 2);\n        q1  = 
vreinterpretq_u8_u32(vzipq_u32(vreinterpretq_u32_u8(q1), vreinterpretq_u32_u8(q2)).val[0]);\n        q12 = vreinterpretq_u8_u16(vsetq_lane_u16(lr = vgetq_lane_u16(vreinterpretq_u16_u8(q11), 0), vreinterpretq_u16_u8(q12), 1));\n        q11 = vreinterpretq_u8_u16(vsetq_lane_u16((lr << 8) & 0xffff, vreinterpretq_u16_u8(q11), 0));\n        vx = vcombine_u8(vget_low_u8(q1), vreinterpret_u8_u32(vzip_u32(\n            vreinterpret_u32_u8(vext_u8(vget_low_u8(q12), vget_low_u8(q12), 3)),\n            vreinterpret_u32_u8(vext_u8(vget_low_u8(q11), vget_low_u8(q11), 1))\n            ).val[0]));\n        VTEST(5);\n    }\n\n    vst1q_lane_u32(((uint32_t *)blockpred) + 0, vreinterpretq_u32_u8(vpred ), 0);\n    vst1q_lane_u32(((uint32_t *)blockpred) + 4, vreinterpretq_u32_u8(vpred ), 1);\n    vst1q_lane_u32(((uint32_t *)blockpred) + 8, vreinterpretq_u32_u8(vpred ), 2);\n    vst1q_lane_u32(((uint32_t *)blockpred) +12, vreinterpretq_u32_u8(vpred ), 3);\n    return best_m + (best_sad << 4); // pack result\n}\n\nstatic void copy_wh_neon(const uint8_t *src, int src_stride, uint8_t *h264e_restrict dst, int w, int h)\n{\n    if (w == 4)\n    {\n        do\n        {\n            *(int32_t*)dst = *(int32_t*)src; dst += 16; src += src_stride;\n            *(int32_t*)dst = *(int32_t*)src; dst += 16; src += src_stride;\n            *(int32_t*)dst = *(int32_t*)src; dst += 16; src += src_stride;\n            *(int32_t*)dst = *(int32_t*)src; dst += 16; src += src_stride;\n        } while (h -= 4);\n    } else if (w == 8)\n    {\n        do\n        {\n            vst1_u8(dst, vld1_u8(src)); dst += 16; src += src_stride;\n            vst1_u8(dst, vld1_u8(src)); dst += 16; src += src_stride;\n            vst1_u8(dst, vld1_u8(src)); dst += 16; src += src_stride;\n            vst1_u8(dst, vld1_u8(src)); dst += 16; src += src_stride;\n        } while (h -= 4);\n    } else\n    {\n        do\n        {\n            uint8x16_t v0, v1, v2, v3;\n            v0 = vld1q_u8(src); src += src_stride;\n      
      v1 = vld1q_u8(src); src += src_stride;\n            v2 = vld1q_u8(src); src += src_stride;\n            v3 = vld1q_u8(src); src += src_stride;\n\n            vst1q_u8(dst, v0); dst += 16; \n            vst1q_u8(dst, v1); dst += 16; \n            vst1q_u8(dst, v2); dst += 16; \n            vst1q_u8(dst, v3); dst += 16; \n        } while (h -= 4);\n    }\n}\n\nstatic void hpel_lpf_hor_neon(const uint8_t *src, int src_stride, uint8_t *h264e_restrict dst, int w, int h)\n{\n    uint8x8_t c5 = vdup_n_u8(5);\n    uint8x8_t c20 = vshl_n_u8(c5, 2);\n    if (w == 16)\n    {\n        do\n        {\n            uint8x16_t s0 = vld1q_u8(src - 2);\n            uint8x16_t s1 = vld1q_u8(src - 2 + 16);\n            uint8x16_t v0 = s0;\n            uint8x16_t v1 = vextq_u8(s0, s1, 1);\n            uint8x16_t v2 = vextq_u8(s0, s1, 2);\n            uint8x16_t v3 = vextq_u8(s0, s1, 3);\n            uint8x16_t v4 = vextq_u8(s0, s1, 4);\n            uint8x16_t v5 = vextq_u8(s0, s1, 5);\n\n            uint16x8_t q, s = vaddl_u8(vget_low_u8(v0), vget_low_u8(v5));\n            s = vmlsl_u8(s, vget_low_u8(v1), c5);\n            s = vmlsl_u8(s, vget_low_u8(v4), c5);\n            s = vmlal_u8(s, vget_low_u8(v2), c20);\n            s = vmlal_u8(s, vget_low_u8(v3), c20);\n\n            q = vaddl_u8(vget_high_u8(v0), vget_high_u8(v5));\n            q = vmlsl_u8(q, vget_high_u8(v1), c5);\n            q = vmlsl_u8(q, vget_high_u8(v4), c5);\n            q = vmlal_u8(q, vget_high_u8(v2), c20);\n            q = vmlal_u8(q, vget_high_u8(v3), c20);\n\n            vst1q_u8(dst, vcombine_u8(\n                vqrshrun_n_s16(vreinterpretq_s16_u16(s), 5),\n                vqrshrun_n_s16(vreinterpretq_s16_u16(q), 5)));\n\n            dst += 16;\n            src += src_stride;\n        } while (--h);\n    } else\n    {\n        do\n        {\n            uint8x16_t line = vld1q_u8(src - 2);\n            uint8x8_t s0 = vget_low_u8(line);\n            uint8x8_t s1 = vget_high_u8(line);\n            
uint8x8_t v0 = s0;\n            uint8x8_t v1 = vext_u8(s0, s1, 1);\n            uint8x8_t v2 = vext_u8(s0, s1, 2);\n            uint8x8_t v3 = vext_u8(s0, s1, 3);\n            uint8x8_t v4 = vext_u8(s0, s1, 4);\n            uint8x8_t v5 = vext_u8(s0, s1, 5);\n\n            uint16x8_t s = vaddl_u8(v0, v5);\n            s = vmlsl_u8(s, v1, c5);\n            s = vmlsl_u8(s, v4, c5);\n            s = vmlal_u8(s, v2, c20);\n            s = vmlal_u8(s, v3, c20);\n\n            vst1_u8(dst, vqrshrun_n_s16(vreinterpretq_s16_u16(s), 5));\n\n            dst += 16;\n            src += src_stride;\n        } while (--h);\n    }\n}\n\nstatic void hpel_lpf_hor16_neon(const uint8_t *src, int src_stride, int16_t *h264e_restrict dst, int w, int h)\n{\n    uint8x8_t c5 = vdup_n_u8(5);\n    uint8x8_t c20 = vshl_n_u8(c5, 2);\n    if (w == 16)\n    {\n        do\n        {\n            uint8x16_t s0 = vld1q_u8(src - 2);\n            uint8x16_t s1 = vld1q_u8(src - 2 + 16);\n            uint8x16_t v0 = s0;\n            uint8x16_t v1 = vextq_u8(s0, s1, 1);\n            uint8x16_t v2 = vextq_u8(s0, s1, 2);\n            uint8x16_t v3 = vextq_u8(s0, s1, 3);\n            uint8x16_t v4 = vextq_u8(s0, s1, 4);\n            uint8x16_t v5 = vextq_u8(s0, s1, 5);\n\n            uint16x8_t q, s = vaddl_u8(vget_low_u8(v0), vget_low_u8(v5));\n            s = vmlsl_u8(s, vget_low_u8(v1), c5);\n            s = vmlsl_u8(s, vget_low_u8(v4), c5);\n            s = vmlal_u8(s, vget_low_u8(v2), c20);\n            s = vmlal_u8(s, vget_low_u8(v3), c20);\n\n            q = vaddl_u8(vget_high_u8(v0), vget_high_u8(v5));\n            q = vmlsl_u8(q, vget_high_u8(v1), c5);\n            q = vmlsl_u8(q, vget_high_u8(v4), c5);\n            q = vmlal_u8(q, vget_high_u8(v2), c20);\n            q = vmlal_u8(q, vget_high_u8(v3), c20);\n\n            vst1q_s16(dst, vreinterpretq_s16_u16(s));\n            vst1q_s16(dst + 8, vreinterpretq_s16_u16(q));\n\n            dst += 16;\n            src += src_stride;\n        } while 
(--h);\n    } else\n    {\n        do\n        {\n            uint8x16_t line = vld1q_u8(src - 2);\n            uint8x8_t s0 = vget_low_u8(line);\n            uint8x8_t s1 = vget_high_u8(line);\n            uint8x8_t v0 = s0;\n            uint8x8_t v1 = vext_u8(s0, s1,  1);\n            uint8x8_t v2 = vext_u8(s0, s1, 2);\n            uint8x8_t v3 = vext_u8(s0, s1, 3);\n            uint8x8_t v4 = vext_u8(s0, s1, 4);\n            uint8x8_t v5 = vext_u8(s0, s1, 5);\n\n            uint16x8_t s = vaddl_u8(v0, v5);\n            s = vmlsl_u8(s, v1, c5);\n            s = vmlsl_u8(s, v4, c5);\n            s = vmlal_u8(s, v2, c20);\n            s = vmlal_u8(s, v3, c20);\n\n            vst1q_s16(dst, vreinterpretq_s16_u16(s));\n\n            dst += 16;\n            src += src_stride;\n        } while (--h);\n    }\n}\n\nstatic void hpel_lpf_ver_neon(const uint8_t *src, int src_stride, uint8_t *h264e_restrict dst, int w, int h)\n{\n    uint8x8_t c5 = vdup_n_u8(5);\n    uint8x8_t c20 = vshl_n_u8(c5, 2);\n\n    if (w == 16)\n    {\n        uint8x16_t v0 = vld1q_u8(src - 2*src_stride);\n        uint8x16_t v1 = vld1q_u8(src - 1*src_stride);\n        uint8x16_t v2 = vld1q_u8(src);\n        uint8x16_t v3 = vld1q_u8(src + 1*src_stride);\n        uint8x16_t v4 = vld1q_u8(src + 2*src_stride);\n        do\n        {\n            uint8x16_t v5 = vld1q_u8(src + 3*src_stride);\n            uint16x8_t q, s = vaddl_u8(vget_low_u8(v0), vget_low_u8(v5));\n            s = vmlsl_u8(s, vget_low_u8(v1), c5);\n            s = vmlsl_u8(s, vget_low_u8(v4), c5);\n            s = vmlal_u8(s, vget_low_u8(v2), c20);\n            s = vmlal_u8(s, vget_low_u8(v3), c20);\n\n            q = vaddl_u8(vget_high_u8(v0), vget_high_u8(v5));\n            q = vmlsl_u8(q, vget_high_u8(v1), c5);\n            q = vmlsl_u8(q, vget_high_u8(v4), c5);\n            q = vmlal_u8(q, vget_high_u8(v2), c20);\n            q = vmlal_u8(q, vget_high_u8(v3), c20);\n\n            vst1q_u8(dst, vcombine_u8(\n                
vqrshrun_n_s16(vreinterpretq_s16_u16(s), 5),\n                vqrshrun_n_s16(vreinterpretq_s16_u16(q), 5)));\n            dst += 16;\n            src += src_stride;\n            v0 = v1;\n            v1 = v2;\n            v2 = v3;\n            v3 = v4;\n            v4 = v5;\n        } while (--h);\n    } else\n    {\n        uint8x8_t v0 = vld1_u8(src - 2*src_stride);\n        uint8x8_t v1 = vld1_u8(src - 1*src_stride);\n        uint8x8_t v2 = vld1_u8(src);\n        uint8x8_t v3 = vld1_u8(src + 1*src_stride);\n        uint8x8_t v4 = vld1_u8(src + 2*src_stride);\n        do\n        {\n            uint8x8_t v5 = vld1_u8(src + 3*src_stride);\n            uint16x8_t s = vaddl_u8(v0, v5);\n            s = vmlsl_u8(s, v1, c5);\n            s = vmlsl_u8(s, v4, c5);\n            s = vmlal_u8(s, v2, c20);\n            s = vmlal_u8(s, v3, c20);\n\n            vst1_u8(dst, vqrshrun_n_s16(vreinterpretq_s16_u16(s), 5));\n            dst += 16;\n            src += src_stride;\n            v0 = v1;\n            v1 = v2;\n            v2 = v3;\n            v3 = v4;\n            v4 = v5;\n        } while (--h);\n    }\n}\n\nstatic void hpel_lpf_ver16_neon(const int16_t *src, uint8_t *h264e_restrict dst, int w, int h)\n{\n    do\n    {\n        int cloop = h;\n        int16x8_t v0 = vld1q_s16(src);\n        int16x8_t v1 = vld1q_s16(src + 16);\n        int16x8_t v2 = vld1q_s16(src + 16*2);\n        int16x8_t v3 = vld1q_s16(src + 16*3);\n        int16x8_t v4 = vld1q_s16(src + 16*4);\n        do\n        {\n            int16x8_t v5 = vld1q_s16(src+16*5);\n\n            int16x8_t s0 = vaddq_s16(v0, v5);\n            int16x8_t s1 = vaddq_s16(v1, v4);\n            int16x8_t s2 = vaddq_s16(v2, v3);\n\n            int16x8_t vs = vshrq_n_s16(vsubq_s16(s0, s1), 2);\n            int16x8_t vq = vsubq_s16(s2, s1);\n            s0 = vshrq_n_s16(vaddq_s16(vq, vs), 2);\n            s0 = vaddq_s16(s0, s2);\n\n            vst1_u8(dst, vqrshrun_n_s16(s0, 6));\n\n            dst += 16;\n            src 
+= 16;\n            v0 = v1;\n            v1 = v2;\n            v2 = v3;\n            v3 = v4;\n            v4 = v5;\n        } while (--cloop);\n\n        src -= 16*h - 8;\n        dst -= 16*h - 8;\n    } while (w -= 8);\n}\n\nstatic void hpel_lpf_diag_neon(const uint8_t *src, int src_stride, uint8_t *h264e_restrict dst, int w, int h)\n{\n    ALIGN(16) int16_t scratch[21 * 16] ALIGN2(16);  /* 21 rows by 16 pixels per row */\n\n    /*\n     * Intermediate values will be 1/2 pel at Horizontal direction\n     * Starting at (0.5, -2) at top extending to (0.5, height + 3) at bottom\n     * scratch contains a 2D array of size (w)X(h + 5)\n     */\n    hpel_lpf_hor16_neon(src - 2*src_stride, src_stride, scratch, w, h + 5);\n    hpel_lpf_ver16_neon(scratch, dst, w, h);\n}\n\nstatic void average_16x16_unalign_neon(uint8_t *dst, const uint8_t *src, int src_stride)\n{\n    vst1q_u8(dst, vrhaddq_u8(vld1q_u8(dst), vld1q_u8(src)));  src += src_stride; dst += 16;\n    vst1q_u8(dst, vrhaddq_u8(vld1q_u8(dst), vld1q_u8(src)));  src += src_stride; dst += 16;\n    vst1q_u8(dst, vrhaddq_u8(vld1q_u8(dst), vld1q_u8(src)));  src += src_stride; dst += 16;\n    vst1q_u8(dst, vrhaddq_u8(vld1q_u8(dst), vld1q_u8(src)));  src += src_stride; dst += 16;\n    vst1q_u8(dst, vrhaddq_u8(vld1q_u8(dst), vld1q_u8(src)));  src += src_stride; dst += 16;\n    vst1q_u8(dst, vrhaddq_u8(vld1q_u8(dst), vld1q_u8(src)));  src += src_stride; dst += 16;\n    vst1q_u8(dst, vrhaddq_u8(vld1q_u8(dst), vld1q_u8(src)));  src += src_stride; dst += 16;\n    vst1q_u8(dst, vrhaddq_u8(vld1q_u8(dst), vld1q_u8(src)));  src += src_stride; dst += 16;\n    vst1q_u8(dst, vrhaddq_u8(vld1q_u8(dst), vld1q_u8(src)));  src += src_stride; dst += 16;\n    vst1q_u8(dst, vrhaddq_u8(vld1q_u8(dst), vld1q_u8(src)));  src += src_stride; dst += 16;\n    vst1q_u8(dst, vrhaddq_u8(vld1q_u8(dst), vld1q_u8(src)));  src += src_stride; dst += 16;\n    vst1q_u8(dst, vrhaddq_u8(vld1q_u8(dst), vld1q_u8(src)));  src += src_stride; dst += 16;\n    
vst1q_u8(dst, vrhaddq_u8(vld1q_u8(dst), vld1q_u8(src)));  src += src_stride; dst += 16;\n    vst1q_u8(dst, vrhaddq_u8(vld1q_u8(dst), vld1q_u8(src)));  src += src_stride; dst += 16;\n    vst1q_u8(dst, vrhaddq_u8(vld1q_u8(dst), vld1q_u8(src)));  src += src_stride; dst += 16;\n    vst1q_u8(dst, vrhaddq_u8(vld1q_u8(dst), vld1q_u8(src)));  src += src_stride; dst += 16;\n}\n\nstatic void h264e_qpel_average_wh_align_neon(const uint8_t *src0, const uint8_t *src1, uint8_t *dst, point_t wh)\n{\n    int w = wh.s.x;\n    int h = wh.s.y;\n    int cloop = h;\n    if (w == 8)\n    {\n        do\n        {\n            vst1_u8(dst, vrhadd_u8(vld1_u8(src0), vld1_u8(src1)));\n            dst += 16;\n            src0 += 16;\n            src1 += 16;\n        } while (--cloop);\n    } else\n    {\n        do\n        {\n            vst1q_u8(dst, vrhaddq_u8(vld1q_u8(src0), vld1q_u8(src1)));\n            dst += 16;\n            src0 += 16;\n            src1 += 16;\n        } while (--cloop);\n    }\n}\n\nstatic void h264e_qpel_interpolate_luma_neon(const uint8_t *src, int src_stride, uint8_t *h264e_restrict dst, point_t wh, point_t dxdy)\n{\n//    src += ((dx + 1) >> 2) + ((dy + 1) >> 2)*src_stride;            // dx == 3 ? next row; dy == 3 ? next line\n//    dxdy              actions: Horizontal, Vertical, Diagonal, Average\n//    0 1 2 3 +1        -   ha    h    ha+\n//    1                 va  hva   hda  hv+a\n//    2                 v   vda   d    v+da\n//    3                 va+ h+va h+da  h+v+a\n//    +stride\n    int32_t pos = 1 << (dxdy.s.x + 4*dxdy.s.y);\n\n    if (pos == 1)\n    {\n        copy_wh_neon(src, src_stride, dst, wh.s.x, wh.s.y);\n    } else\n    {\n        ALIGN(16) uint8_t scratch[16*16] ALIGN2(16);\n        int dstused = 0;\n        if (pos & 0xe0ee)// 1110 0000 1110 1110\n        {\n            hpel_lpf_hor_neon(src + ((pos & 0xe000) ? 
src_stride : 0), src_stride, dst, wh.s.x, wh.s.y);\n            dstused++;\n        }\n        if (pos & 0xbbb0)// 1011 1011 1011 0000\n        {\n            hpel_lpf_ver_neon(src + ((pos & 0x8880) ? 1 : 0), src_stride, dstused ? scratch : dst, wh.s.x, wh.s.y);\n            dstused++;\n        }\n        if (pos & 0x4e40)// 0100 1110 0100 0000\n        {\n            hpel_lpf_diag_neon(src, src_stride, dstused ? scratch : dst, wh.s.x, wh.s.y);\n            dstused++;\n        }\n        if (pos & 0xfafa)// 1111 1010 1111 1010\n        {\n            assert(wh.s.x == 16 && wh.s.y == 16);\n            if (dstused == 2)\n            {\n                point_t p;\n\n                src = scratch;\n                src_stride = 16;\n                p.u32 = 16 + (16 << 16);\n\n                h264e_qpel_average_wh_align_neon(src, dst, dst, p);\n            } else\n            {\n                src += ((dxdy.s.x + 1) >> 2) + ((dxdy.s.y + 1) >> 2)*src_stride;\n                average_16x16_unalign_neon(dst, src, src_stride);\n            }\n        }\n    }\n}\n\nstatic void h264e_qpel_interpolate_chroma_neon(const uint8_t *src, int src_stride, uint8_t *h264e_restrict dst, point_t wh, point_t dxdy)\n{\n    /* interpolate only if the fractional MV is not (0, 0) */\n    if (dxdy.u32)\n    {\n        uint8x8_t v8 = vdup_n_u8(8);\n        uint8x8_t vx = vdup_n_u8(dxdy.s.x);\n        uint8x8_t vy = vdup_n_u8(dxdy.s.y);\n        uint8x8_t v8x = vsub_u8(v8, vx);\n        uint8x8_t v8y = vsub_u8(v8, vy);\n        uint8x8_t va = vmul_u8(v8x, v8y);\n        uint8x8_t vb = vmul_u8(vx, v8y);\n        uint8x8_t vc = vmul_u8(v8x, vy);\n        uint8x8_t vd = vmul_u8(vx, vy);\n        int h = wh.s.y;\n        if (wh.s.x == 8)\n        {\n            uint8x16_t vt0 = vld1q_u8(src);\n            uint8x16_t vt1 = vextq_u8(vt0, vt0, 1);\n            src += src_stride;\n            do\n            {\n                uint8x16_t vb0 = vld1q_u8(src);\n                uint8x16_t vb1 = vextq_u8(vb0, vb0, 1);\n         
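       /* Each output pixel is the standard H.264 bilinear chroma blend\n                 * (dx = dxdy.s.x, dy = dxdy.s.y):\n                 *   out = (va*vt0 + vb*vt1 + vc*vb0 + vd*vb1 + 32) >> 6\n                 * with va = (8-dx)*(8-dy), vb = dx*(8-dy), vc = (8-dx)*dy, vd = dx*dy.\n                 * The four weights always sum to 64, so the rounding narrowing\n                 * shift by 6 (vqrshrun_n_s16 below) restores the 8-bit range. */\n         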
       uint16x8_t vs = vmull_u8(vget_low_u8(vt0), va);\n                vs = vmlal_u8(vs, vget_low_u8(vt1), vb);\n                vs = vmlal_u8(vs, vget_low_u8(vb0), vc);\n                vs = vmlal_u8(vs, vget_low_u8(vb1), vd);\n                vst1_u8(dst, vqrshrun_n_s16(vreinterpretq_s16_u16(vs), 6));\n                vt0 = vb0;\n                vt1 = vb1;\n                dst += 16;\n                src += src_stride;\n             } while(--h);\n         } else\n         {\n            uint8x8_t vt0 = vld1_u8(src);\n            uint8x8_t vt1 = vext_u8(vt0, vt0, 1);\n            src += src_stride;\n            do\n            {\n                uint8x8_t vb0 = vld1_u8(src);\n                uint8x8_t vb1 = vext_u8(vb0, vb0, 1);\n                uint16x8_t vs = vmull_u8(vt0, va);\n                vs = vmlal_u8(vs, vt1, vb);\n                vs = vmlal_u8(vs, vb0, vc);\n                vs = vmlal_u8(vs, vb1, vd);\n                *(int32_t*)dst = vget_lane_s32(vreinterpret_s32_u8(vqrshrun_n_s16(vreinterpretq_s16_u16(vs), 6)), 0);\n                vt0 = vb0;\n                vt1 = vb1;\n                dst += 16;\n                src += src_stride;\n             } while(--h);\n         }\n    } else\n    {\n        copy_wh_neon(src, src_stride, dst, wh.s.x, wh.s.y);\n    }\n}\n\nstatic int h264e_sad_mb_unlaign_8x8_neon(const pix_t *a, int a_stride, const pix_t *b, int _sad[4])\n{\n    uint16x8_t s0, s1;\n    uint8x16_t va, vb;\n    int cloop = 2, sum = 0;\n    do\n    {\n        va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n        s0 = vabdl_u8(    vget_low_u8(va), vget_low_u8(vb));   s1 = vabdl_u8(    vget_high_u8(va), vget_high_u8(vb)); \n        va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n        s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));   s1 = vabal_u8(s1, vget_high_u8(va), vget_high_u8(vb)); \n        va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n        s0 = vabal_u8(s0, vget_low_u8(va), 
vget_low_u8(vb));   s1 = vabal_u8(s1, vget_high_u8(va), vget_high_u8(vb)); \n        va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n        s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));   s1 = vabal_u8(s1, vget_high_u8(va), vget_high_u8(vb)); \n        va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n        s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));   s1 = vabal_u8(s1, vget_high_u8(va), vget_high_u8(vb)); \n        va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n        s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));   s1 = vabal_u8(s1, vget_high_u8(va), vget_high_u8(vb)); \n        va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n        s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));   s1 = vabal_u8(s1, vget_high_u8(va), vget_high_u8(vb)); \n        va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n        s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));   s1 = vabal_u8(s1, vget_high_u8(va), vget_high_u8(vb)); \n        {\n        uint32x4_t v0 = vpaddlq_u16(s0);\n        uint64x2_t v1 = vpaddlq_u32(v0);\n        sum += _sad[0] = (int)(vgetq_lane_u64(v1, 0)+vgetq_lane_u64(v1, 1));\n        v0 = vpaddlq_u16(s1);\n        v1 = vpaddlq_u32(v0);\n        sum += _sad[1] = (int)(vgetq_lane_u64(v1, 0)+vgetq_lane_u64(v1, 1));\n        _sad += 2;\n        }\n    } while(--cloop);\n    return sum;\n}\n\nstatic int h264e_sad_mb_unlaign_wh_neon(const pix_t *a, int a_stride, const pix_t *b, point_t wh)\n{\n    uint16x8_t s0, s1;\n    uint8x16_t va, vb;\n    int cloop = wh.s.y/8, sum = 0;\n    if (wh.s.x == 16)\n    {\n        do\n        {\n            va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n            s0 = vabdl_u8(    vget_low_u8(va), vget_low_u8(vb));   s1 = vabdl_u8(    vget_high_u8(va), vget_high_u8(vb)); \n            va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n            s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));   s1 = 
vabal_u8(s1, vget_high_u8(va), vget_high_u8(vb)); \n            va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n            s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));   s1 = vabal_u8(s1, vget_high_u8(va), vget_high_u8(vb)); \n            va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n            s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));   s1 = vabal_u8(s1, vget_high_u8(va), vget_high_u8(vb)); \n            va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n            s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));   s1 = vabal_u8(s1, vget_high_u8(va), vget_high_u8(vb)); \n            va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n            s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));   s1 = vabal_u8(s1, vget_high_u8(va), vget_high_u8(vb)); \n            va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n            s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));   s1 = vabal_u8(s1, vget_high_u8(va), vget_high_u8(vb)); \n            va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n            s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));   s1 = vabal_u8(s1, vget_high_u8(va), vget_high_u8(vb)); \n\n            uint32x4_t v0 = vpaddlq_u16(s0);\n            uint64x2_t v1 = vpaddlq_u32(v0);\n            sum += vgetq_lane_u64(v1, 0) + vgetq_lane_u64(v1, 1);\n\n            v0 = vpaddlq_u16(s1);\n            v1 = vpaddlq_u32(v0);\n            sum += vgetq_lane_u64(v1, 0) + vgetq_lane_u64(v1, 1);\n        } while(--cloop);\n    } else\n    {\n        do\n        {\n            va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n            s0 = vabdl_u8(    vget_low_u8(va), vget_low_u8(vb));\n            va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n            s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));\n            va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n            s0 = vabal_u8(s0, 
vget_low_u8(va), vget_low_u8(vb));\n            va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n            s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));\n            va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n            s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));\n            va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n            s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));\n            va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n            s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));\n            va = vld1q_u8(a), vb = vld1q_u8(b);  a += a_stride, b += 16;\n            s0 = vabal_u8(s0, vget_low_u8(va), vget_low_u8(vb));\n\n            uint32x4_t v0 = vpaddlq_u16(s0);\n            uint64x2_t v1 = vpaddlq_u32(v0);\n            sum += vgetq_lane_u64(v1, 0) + vgetq_lane_u64(v1, 1);\n        } while(--cloop);\n    }\n    return sum;\n}\n\nstatic void h264e_copy_8x8_neon(pix_t *d, int d_stride, const pix_t *s)\n{\n    vst1_u8(d, vld1_u8(s)); s += 16;  d += d_stride;\n    vst1_u8(d, vld1_u8(s)); s += 16;  d += d_stride;\n    vst1_u8(d, vld1_u8(s)); s += 16;  d += d_stride;\n    vst1_u8(d, vld1_u8(s)); s += 16;  d += d_stride;\n\n    vst1_u8(d, vld1_u8(s)); s += 16;  d += d_stride;\n    vst1_u8(d, vld1_u8(s)); s += 16;  d += d_stride;\n    vst1_u8(d, vld1_u8(s)); s += 16;  d += d_stride;\n    vst1_u8(d, vld1_u8(s)); s += 16;  d += d_stride;\n}\n\n\nstatic void h264e_copy_16x16_neon(pix_t *d, int d_stride, const pix_t *s, int s_stride)\n{\n    assert(!((unsigned)d & 7));\n    assert(!((unsigned)s & 7));\n    vst1q_u8(d, vld1q_u8(s)); s += s_stride; d += d_stride;\n    vst1q_u8(d, vld1q_u8(s)); s += s_stride; d += d_stride;\n    vst1q_u8(d, vld1q_u8(s)); s += s_stride; d += d_stride;\n    vst1q_u8(d, vld1q_u8(s)); s += s_stride; d += d_stride;\n\n    vst1q_u8(d, vld1q_u8(s)); s += s_stride; d += d_stride;\n    vst1q_u8(d, vld1q_u8(s)); s += s_stride; d += d_stride;\n    
vst1q_u8(d, vld1q_u8(s)); s += s_stride; d += d_stride;\n    vst1q_u8(d, vld1q_u8(s)); s += s_stride; d += d_stride;\n\n    vst1q_u8(d, vld1q_u8(s)); s += s_stride; d += d_stride;\n    vst1q_u8(d, vld1q_u8(s)); s += s_stride; d += d_stride;\n    vst1q_u8(d, vld1q_u8(s)); s += s_stride; d += d_stride;\n    vst1q_u8(d, vld1q_u8(s)); s += s_stride; d += d_stride;\n\n    vst1q_u8(d, vld1q_u8(s)); s += s_stride; d += d_stride;\n    vst1q_u8(d, vld1q_u8(s)); s += s_stride; d += d_stride;\n    vst1q_u8(d, vld1q_u8(s)); s += s_stride; d += d_stride;\n    vst1q_u8(d, vld1q_u8(s)); s += s_stride; d += d_stride;\n}\n\n// Keep intermediate data in transposed format.\n// Save transpose for vectorized implementation\n// TODO: TRANSPOSE_BLOCK==0 broken\n#define TRANSPOSE_BLOCK     0\n#define UNZIGSAG_IN_QUANT   0\n\n#define SUM_DIF(a, b) { int t = a + b; b = a - b; a = t; }\n\nstatic void hadamar4_2d_neon(int16_t *x)\n{\n    int16x8_t q0 = vld1q_s16(x);\n    int16x8_t q1 = vld1q_s16(x + 8);\n    int16x8_t s = vaddq_s16(q0, q1);\n    int16x8_t d = vsubq_s16(q0, q1);\n    int16x8_t q2 = vcombine_s16(vget_low_s16(s), vget_low_s16(d));\n    int16x8_t q3 = vcombine_s16(vget_high_s16(s), vget_high_s16(d));\n    q0 = vaddq_s16(q2, q3);\n    d  = vsubq_s16(q2, q3);\n    q1 = vcombine_s16(vget_high_s16(d), vget_low_s16(d));\n{\n    int16x4x2_t t0 = vtrn_s16(vget_low_s16(q0), vget_high_s16(q0));\n    int16x4x2_t t1 = vtrn_s16(vget_low_s16(q1), vget_high_s16(q1));\n    int32x4x2_t tq = vtrnq_s32(vreinterpretq_s32_s16(vcombine_s16(t0.val[0], t0.val[1])), vreinterpretq_s32_s16(vcombine_s16(t1.val[0], t1.val[1])));\n\n    q0 = vcombine_s16(vget_low_s16(vreinterpretq_s16_s32(tq.val[0])), vget_high_s16(vreinterpretq_s16_s32(tq.val[0])));\n    q1 = vcombine_s16(vget_low_s16(vreinterpretq_s16_s32(tq.val[1])), vget_high_s16(vreinterpretq_s16_s32(tq.val[1])));\n\n    s = vaddq_s16(q0, q1);\n    d = vsubq_s16(q0, q1);\n    q2 = vcombine_s16(vget_low_s16(s), vget_low_s16(d));\n    q3 = 
vcombine_s16(vget_high_s16(s), vget_high_s16(d));\n    q0 = vaddq_s16(q2, q3);\n    d = vsubq_s16(q2, q3);\n    q1 = vcombine_s16(vget_high_s16(d), vget_low_s16(d));\n    vst1q_s16(x, q0);\n    vst1q_s16(x + 8, q1);\n}\n}\n\nstatic void dequant_dc_neon(quant_t *q, int16_t *qval, int dequant, int n)\n{\n    do q++->dq[0] = (int16_t)(*qval++*(int16_t)dequant); while (--n);\n}\n\nstatic void quant_dc_neon(int16_t *qval, int16_t *deq, int16_t quant, int n, int round_q18)\n{\n#if 1\n    int r_minus =  (1 << 18) - round_q18;\n    static const uint8_t iscan16[16] = {0, 2, 3, 9, 1, 4, 8, 10, 5, 7, 11, 14, 6, 12, 13, 15};\n    static const uint8_t iscan4[4] = {0, 1, 2, 3};\n    const uint8_t *scan = n == 4 ? iscan4 : iscan16;\n    do\n    {\n        int v = *qval;\n        int r = v < 0 ? r_minus : round_q18;\n        deq[*scan++] = *qval++ = (v * quant + r) >> 18;\n    } while (--n);\n#else\n    int r_minus =  (1 << 18) - round_q18;\n    do\n    {\n        int v = *qval;\n        int r = v < 0 ? 
r_minus : round_q18;\n        *deq++ = *qval++ = (v * quant + r) >> 18;\n    } while (--n);\n#endif\n}\n\nstatic void hadamar2_2d_neon(int16_t *x)\n{\n    int a = x[0];\n    int b = x[1];\n    int c = x[2];\n    int d = x[3];\n    x[0] = (int16_t)(a + b + c + d);\n    x[1] = (int16_t)(a - b + c - d);\n    x[2] = (int16_t)(a + b - c - d);\n    x[3] = (int16_t)(a - b - c + d);\n}\n\nstatic void h264e_quant_luma_dc_neon(quant_t *q, int16_t *deq, const uint16_t *qdat)\n{\n    int16_t *tmp = ((int16_t*)q) - 16;\n    hadamar4_2d_neon(tmp);\n    quant_dc_neon(tmp, deq, qdat[0], 16, 0x20000);//0x15555);\n    hadamar4_2d_neon(tmp);\n    assert(!(qdat[1] & 3));\n    // dirty trick here: shift without rounding, since it has no effect for qp >= 10 (or, to be precise, for qp >= 9)\n    dequant_dc_neon(q, tmp, qdat[1] >> 2, 16);\n}\n\nstatic int h264e_quant_chroma_dc_neon(quant_t *q, int16_t *deq, const uint16_t *qdat)\n{\n    int16_t *tmp = ((int16_t*)q) - 16;\n    hadamar2_2d_neon(tmp);\n    quant_dc_neon(tmp, deq, (int16_t)(qdat[0] << 1), 4, 0xAAAA);\n    hadamar2_2d_neon(tmp);\n    assert(!(qdat[1] & 1));\n    dequant_dc_neon(q, tmp, qdat[1] >> 1, 4);\n    return !!(tmp[0] | tmp[1] | tmp[2] | tmp[3]);\n}\n\n#define TRANSFORM(x0, x1, x2, x3, p, s) { \\\n    int t0 = x0 + x3;                     \\\n    int t1 = x0 - x3;                     \\\n    int t2 = x1 + x2;                     \\\n    int t3 = x1 - x2;                     \\\n    (p)[  0] = (int16_t)(t0 + t2);        \\\n    (p)[  s] = (int16_t)(t1*2 + t3);      \\\n    (p)[2*s] = (int16_t)(t0 - t2);        \\\n    (p)[3*s] = (int16_t)(t1 - t3*2);      \\\n}\n\nstatic void FwdTransformResidual4x42_neon(const uint8_t *inp, const uint8_t *pred, uint32_t inp_stride, int16_t *out)\n{\n#if TRANSPOSE_BLOCK\n    int i;\n    int16_t tmp[16];\n    // Transform columns\n    for (i = 0; i < 4; i++, pred++, inp++)\n    {\n        int f0 = inp[0] - pred[0];\n        int f1 = inp[1*inp_stride] - pred[1*16];\n        int f2 = 
inp[2*inp_stride] - pred[2*16];\n        int f3 = inp[3*inp_stride] - pred[3*16];\n        TRANSFORM(f0, f1, f2, f3, tmp + i*4, 1);\n    }\n    // Transform rows\n    for (i = 0; i < 4; i++)\n    {\n        int d0 = tmp[i + 0];\n        int d1 = tmp[i + 4];\n        int d2 = tmp[i + 8];\n        int d3 = tmp[i + 12];\n        TRANSFORM(d0, d1, d2, d3, out + i, 4);\n    }\n#else\n    /* Transform rows */\n    uint8x8_t inp0  = vreinterpret_u8_s32(vtrn_s32(vreinterpret_s32_u8(vld1_u8(inp)),  vreinterpret_s32_u8(vld1_u8(inp + inp_stride))).val[0]);\n    uint8x8_t inp1  = vreinterpret_u8_s32(vtrn_s32(vreinterpret_s32_u8(vld1_u8(inp + 2*inp_stride)), vreinterpret_s32_u8(vld1_u8(inp + 3*inp_stride))).val[0]);\n    uint8x8_t pred0 = vreinterpret_u8_s32(vtrn_s32(vreinterpret_s32_u8(vld1_u8(pred)),  vreinterpret_s32_u8(vld1_u8(pred + 16))).val[0]);\n    uint8x8_t pred1 = vreinterpret_u8_s32(vtrn_s32(vreinterpret_s32_u8(vld1_u8(pred + 2*16)), vreinterpret_s32_u8(vld1_u8(pred + 3*16))).val[0]);\n    int16x8_t q0 = vreinterpretq_s16_u16(vsubl_u8(inp0, pred0));\n    int16x8_t q1 = vreinterpretq_s16_u16(vsubl_u8(inp1, pred1));\n\n    int16x4x2_t  t0 = vtrn_s16(vget_low_s16(q0), vget_high_s16(q0));\n    int16x4x2_t  t1 = vtrn_s16(vget_low_s16(q1), vget_high_s16(q1));\n    int32x4x2_t  tq = vtrnq_s32(vreinterpretq_s32_s16(vcombine_s16(t0.val[0], t0.val[1])), vreinterpretq_s32_s16(vcombine_s16(t1.val[0], t1.val[1])));\n\n    int16x4_t d4 = vadd_s16(vget_low_s16(vreinterpretq_s16_s32(tq.val[0])), vget_high_s16(vreinterpretq_s16_s32(tq.val[1])));\n    int16x4_t d5 = vsub_s16(vget_low_s16(vreinterpretq_s16_s32(tq.val[0])), vget_high_s16(vreinterpretq_s16_s32(tq.val[1])));\n    int16x4_t d6 = vadd_s16(vget_high_s16(vreinterpretq_s16_s32(tq.val[0])), vget_low_s16(vreinterpretq_s16_s32(tq.val[1])));\n    int16x4_t d7 = vsub_s16(vget_high_s16(vreinterpretq_s16_s32(tq.val[0])), vget_low_s16(vreinterpretq_s16_s32(tq.val[1])));\n    int16x8_t q2 = vcombine_s16(d4, d5);\n    int16x8_t q3 = 
vcombine_s16(d6, d7);\n    q0 = vaddq_s16(q2, q3);\n    q0 = vcombine_s16(vget_low_s16(q0), vadd_s16(vget_high_s16(q0), d5));\n    q1 = vsubq_s16(q2, q3);\n    q1 = vcombine_s16(vget_low_s16(q1), vsub_s16(vget_high_s16(q1), d7));\n\n    t0 = vtrn_s16(vget_low_s16(q0), vget_high_s16(q0));\n    t1 = vtrn_s16(vget_low_s16(q1), vget_high_s16(q1));\n    tq = vtrnq_s32(vreinterpretq_s32_s16(vcombine_s16(t0.val[0], t0.val[1])), vreinterpretq_s32_s16(vcombine_s16(t1.val[0], t1.val[1])));\n\n    d4 = vadd_s16(vget_low_s16(vreinterpretq_s16_s32(tq.val[0])), vget_high_s16(vreinterpretq_s16_s32(tq.val[1])));\n    d5 = vsub_s16(vget_low_s16(vreinterpretq_s16_s32(tq.val[0])), vget_high_s16(vreinterpretq_s16_s32(tq.val[1])));\n    d6 = vadd_s16(vget_high_s16(vreinterpretq_s16_s32(tq.val[0])), vget_low_s16(vreinterpretq_s16_s32(tq.val[1])));\n    d7 = vsub_s16(vget_high_s16(vreinterpretq_s16_s32(tq.val[0])), vget_low_s16(vreinterpretq_s16_s32(tq.val[1])));\n    q2 = vcombine_s16(d4, d5);\n    q3 = vcombine_s16(d6, d7);\n    q0 = vaddq_s16(q2, q3);\n    q0 = vcombine_s16(vget_low_s16(q0), vadd_s16(vget_high_s16(q0), d5));\n    q1 = vsubq_s16(q2, q3);\n    q1 = vcombine_s16(vget_low_s16(q1), vsub_s16(vget_high_s16(q1), d7));\n\n    vst1q_s16(out, q0);\n    vst1q_s16(out + 8, q1);\n#endif\n}\n\nstatic void TransformResidual4x4_neon(const int16_t *pSrc, const pix_t *pred, pix_t *out, int out_stride)\n{\n    int16x4_t e0, e1, e2, e3;\n    int16x4_t f0, f1, f2, f3;\n    int16x4_t g0, g1, g2, g3;\n    int16x4_t h0, h1, h2, h3;\n    int16x4_t d0 = vld1_s16(pSrc);\n    int16x4_t d1 = vld1_s16(pSrc + 4);\n    int16x4_t d2 = vld1_s16(pSrc + 8);\n    int16x4_t d3 = vld1_s16(pSrc + 12);\n    int16x4x2_t dd0 = vtrn_s16(d0, d1);\n    int16x4x2_t dd1 = vtrn_s16(d2, d3);\n    int32x4x2_t d = vtrnq_s32(vreinterpretq_s32_s16(vcombine_s16(dd0.val[0], dd0.val[1])), vreinterpretq_s32_s16(vcombine_s16(dd1.val[0], dd1.val[1])));\n    d0 = vreinterpret_s16_s32(vget_low_s32(d.val[0]));\n    d1 = 
vreinterpret_s16_s32(vget_high_s32(d.val[0]));\n    d2 = vreinterpret_s16_s32(vget_low_s32(d.val[1]));\n    d3 = vreinterpret_s16_s32(vget_high_s32(d.val[1]));\n\n    e0 = vadd_s16(d0, d2);\n    e1 = vsub_s16(d0, d2);\n    e2 = vsub_s16(vshr_n_s16(d1, 1), d3);\n    e3 = vadd_s16(d1, vshr_n_s16(d3, 1));\n    f0 = vadd_s16(e0, e3);\n    f1 = vadd_s16(e1, e2);\n    f2 = vsub_s16(e1, e2);\n    f3 = vsub_s16(e0, e3);\n\n    dd0 = vtrn_s16(f0, f1);\n    dd1 = vtrn_s16(f2, f3);\n    d = vtrnq_s32(vreinterpretq_s32_s16(vcombine_s16(dd0.val[0], dd0.val[1])), vreinterpretq_s32_s16(vcombine_s16(dd1.val[0], dd1.val[1])));\n    f0 = vreinterpret_s16_s32(vget_low_s32(d.val[0]));\n    f1 = vreinterpret_s16_s32(vget_high_s32(d.val[0]));\n    f2 = vreinterpret_s16_s32(vget_low_s32(d.val[1]));\n    f3 = vreinterpret_s16_s32(vget_high_s32(d.val[1]));\n\n    g0 = vadd_s16(f0, f2);\n    g1 = vsub_s16(f0, f2);\n    g2 = vsub_s16(vshr_n_s16(f1, 1), f3);\n    g3 = vadd_s16(f1, vshr_n_s16(f3, 1));\n    h0 = vadd_s16(g0, g3);\n    h1 = vadd_s16(g1, g2);\n    h2 = vsub_s16(g1, g2);\n    h3 = vsub_s16(g0, g3);\n\n    {\n        uint8x8_t inp0 = vreinterpret_u8_s32(vtrn_s32(vreinterpret_s32_u8(vld1_u8(pred)),  vreinterpret_s32_u8(vld1_u8(pred + 16))).val[0]);\n        uint8x8_t inp1 = vreinterpret_u8_s32(vtrn_s32(vreinterpret_s32_u8(vld1_u8(pred + 2*16)), vreinterpret_s32_u8(vld1_u8(pred + 3*16))).val[0]);\n        int16x8_t a0 = vaddq_s16(vcombine_s16(h0, h1), vreinterpretq_s16_u16(vshll_n_u8(inp0, 6)));\n        int16x8_t a1 = vaddq_s16(vcombine_s16(h2, h3), vreinterpretq_s16_u16(vshll_n_u8(inp1, 6)));\n        uint8x8_t r0 = vqrshrun_n_s16(a0, 6);\n        uint8x8_t r1 = vqrshrun_n_s16(a1, 6);\n        *(uint32_t*)(&out[0*out_stride]) = vget_lane_u32(vreinterpret_u32_u8(r0), 0);\n        *(uint32_t*)(&out[1*out_stride]) = vget_lane_u32(vreinterpret_u32_u8(r0), 1);\n        *(uint32_t*)(&out[2*out_stride]) = vget_lane_u32(vreinterpret_u32_u8(r1), 0);\n        *(uint32_t*)(&out[3*out_stride]) 
= vget_lane_u32(vreinterpret_u32_u8(r1), 1);\n    }\n}\n\nstatic int is_zero_neon(const int16_t *dat, int i0, const uint16_t *thr)\n{\n    static const uint16x8_t g_ign_first = { 0, 0xffff, 0xffff, 0xffff, 0xffff, 0xffff, 0xffff, 0xffff };\n    int16x8_t v0 = vabsq_s16(*(int16x8_t *)dat);\n    int16x8_t v1 = vabsq_s16(*(int16x8_t *)(dat + 8));\n    int16x8_t t = *(int16x8_t *)thr;\n    uint16x8_t m0 = vcgtq_s16(v0, t);\n    uint16x8_t m1 = vcgtq_s16(v1, t);\n    if (i0)\n        m0 = vandq_u16(m0, g_ign_first);\n    m0 = vorrq_u16(m0, m1);\n    uint16x4_t m4 = vorr_u16(vget_low_u16(m0), vget_high_u16(m0));\n    return !(vget_lane_u32(vreinterpret_u32_u16(m4), 0) | vget_lane_u32(vreinterpret_u32_u16(m4), 1));\n}\n\nstatic int is_zero4_neon(const quant_t *q, int i0, const uint16_t *thr)\n{\n    return is_zero_neon(q[0].dq, i0, thr) &&\n           is_zero_neon(q[1].dq, i0, thr) &&\n           is_zero_neon(q[4].dq, i0, thr) &&\n           is_zero_neon(q[5].dq, i0, thr);\n}\n\nstatic int zero_smallq_neon(quant_t *q, int mode, const uint16_t *qdat)\n{\n    int zmask = 0;\n    int i, i0 = mode & 1, n = mode >> 1;\n    if (mode == QDQ_MODE_INTER || mode == QDQ_MODE_CHROMA)\n    {\n        for (i = 0; i < n*n; i++)\n        {\n            if (is_zero_neon(q[i].dq, i0, qdat + OFFS_THR_1_OFF))\n            {\n                zmask |= (1 << i); //9.19\n            }\n        }\n        if (mode == QDQ_MODE_INTER)   //8.27\n        {\n            if ((~zmask & 0x0033) && is_zero4_neon(q +  0, i0, qdat + OFFS_THR_2_OFF)) zmask |= 0x33;\n            if ((~zmask & 0x00CC) && is_zero4_neon(q +  2, i0, qdat + OFFS_THR_2_OFF)) zmask |= (0x33 << 2);\n            if ((~zmask & 0x3300) && is_zero4_neon(q +  8, i0, qdat + OFFS_THR_2_OFF)) zmask |= (0x33 << 8);\n            if ((~zmask & 0xCC00) && is_zero4_neon(q + 10, i0, qdat + OFFS_THR_2_OFF)) zmask |= (0x33 << 10);\n        }\n    }\n    return zmask;\n}\n\nstatic int quantize_neon(quant_t *q, int mode, const uint16_t *qdat, int 
zmask)\n{\n#if UNZIGSAG_IN_QUANT\n#if TRANSPOSE_BLOCK\n    //         ; Zig-zag scan      Transposed zig-zag\n    //         ;    0 1 5 6        0 2 3 9\n    //         ;    2 4 7 C        1 4 8 A\n    //         ;    3 8 B D        5 7 B E\n    //         ;    9 A E F        6 C D F\n    static const unsigned char iscan16[16] = {0, 2, 3, 9, 1, 4, 8, 10, 5, 7, 11, 14, 6, 12, 13, 15};\n#else\n    static const unsigned char iscan16[16] = {0, 1, 5, 6, 2, 4, 7, 12, 3, 8, 11, 13, 9, 10, 14, 15};\n#endif\n#endif\n    int ccol, crow, nz_block_mask = 0;\n    ccol = mode >> 1;\n    crow = ccol;\n    do\n    {\n        do\n        {\n            int nz_mask = 0;\n\n            if (zmask & 1)\n            {\n                int32_t *p = (int32_t *)q->qv;\n                *p++ = 0; *p++ = 0; *p++ = 0; *p++ = 0;\n                *p++ = 0; *p++ = 0; *p++ = 0; *p++ = 0;\n            } else\n            {\n                static const uint8_t iscan16_neon [] = {\n                    0x00,0x01,0x02,0x03,0x08,0x09,0x10,0x11,\n                    0x0A,0x0B,0x04,0x05,0x06,0x07,0x0C,0x0D,\n                    0x12,0x13,0x18,0x19,0x1A,0x1B,0x14,0x15,\n                    0x0E,0x0F,0x16,0x17,0x1C,0x1D,0x1E,0x1F};\n                static const uint16_t imask16_neon [] = {\n                    0x0001,0x0002,0x0004,0x0008,\n                    0x0010,0x0020,0x0040,0x0080,\n                    0x0100,0x0200,0x0400,0x0800,\n                    0x1000,0x2000,0x4000,0x8000};\n                short save = 0;\n                uint8x16_t q8,q9;\n                int16x8_t q0 = vld1q_s16(q->dq);\n                int16x8_t q1 = vld1q_s16(q->dq + 8);\n                uint16x8_t r =  vdupq_n_u16(qdat[OFFS_RND_INTER]);\n                uint16x8_t r0 = veorq_u16(r, vcltq_s16(q0, vdupq_n_s16(0)));\n                uint16x8_t r1 = veorq_u16(r, vcltq_s16(q1, vdupq_n_s16(0)));\n                int16x4_t d4, d5, d6, d7;\n                int16x4_t d22, d23, d24, d25;\n                int16x4_t d26, d27, d28, 
d29;\n\n                d4 = d6 = vdup_n_s16(qdat[2]);\n                d5 = d7 = vdup_n_s16(qdat[3]);\n                d4 = vset_lane_s16(qdat[0], d4, 0);\n                d4 = vset_lane_s16(qdat[0], d4, 2);\n                d5 = vset_lane_s16(qdat[1], d5, 0);\n                d5 = vset_lane_s16(qdat[1], d5, 2);\n                d6 = vset_lane_s16(qdat[4], d6, 1);\n                d6 = vset_lane_s16(qdat[4], d6, 3);\n                d7 = vset_lane_s16(qdat[5], d7, 1);\n                d7 = vset_lane_s16(qdat[5], d7, 3);\n\n                d22 = vqshrn_n_s32(vreinterpretq_s32_u32(vaddw_u16(vreinterpretq_u32_s32(vmull_s16(vget_low_s16(q0), d4)), vget_low_u16(r0))), 16);\n                d26 = vmul_s16(d22, d5);\n                d23 = vqshrn_n_s32(vreinterpretq_s32_u32(vaddw_u16(vreinterpretq_u32_s32(vmull_s16(vget_high_s16(q0), d6)), vget_high_u16(r0))), 16);\n                d27 = vmul_s16(d23, d7);\n                d24 = vqshrn_n_s32(vreinterpretq_s32_u32(vaddw_u16(vreinterpretq_u32_s32(vmull_s16(vget_low_s16(q1), d4)), vget_low_u16(r1))), 16);\n                d28 = vmul_s16(d24, d5);\n                d25 = vqshrn_n_s32(vreinterpretq_s32_u32(vaddw_u16(vreinterpretq_u32_s32(vmull_s16(vget_high_s16(q1), d6)), vget_high_u16(r1))), 16);\n                d29 = vmul_s16(d25, d7);\n                if (mode & 1)\n                {\n                    save = q->dq[0];\n                }\n                vst1q_s16(q->dq,     vcombine_s16(d26, d27));\n                vst1q_s16(q->dq + 8, vcombine_s16(d28, d29));\n                if (mode & 1)\n                {\n                    q->dq[0] = save;\n                }\n\n                if (mode & 1)\n                {\n                    save = q->qv[0];\n                }\n                q8 = vld1q_u8(iscan16_neon);\n                q9 = vld1q_u8(iscan16_neon + 16);\n\n                {\n// vtbl4_u8 is marked unavailable for iOS arm64, use wider versions there.\n#if defined(__APPLE__) && defined(__aarch64__) &&  
defined(__apple_build_version__)\n                uint8x16x2_t vlut;\n                vlut.val[0] = vreinterpretq_u8_s16(vcombine_s16(d22, d23));\n                vlut.val[1] = vreinterpretq_u8_s16(vcombine_s16(d24, d25));\n                vst1_s16(q->qv + 0, d4 = vreinterpret_s16_u8(vtbl2q_u8(vlut, vget_low_u8(q8))));\n                vst1_s16(q->qv + 4, d5 = vreinterpret_s16_u8(vtbl2q_u8(vlut, vget_high_u8(q8))));\n                vst1_s16(q->qv + 8, d6 = vreinterpret_s16_u8(vtbl2q_u8(vlut, vget_low_u8(q9))));\n                vst1_s16(q->qv +12, d7 = vreinterpret_s16_u8(vtbl2q_u8(vlut, vget_high_u8(q9))));\n#else\n                uint8x8x4_t vlut;\n                vlut.val[0] = vreinterpret_u8_s16(d22);\n                vlut.val[1] = vreinterpret_u8_s16(d23);\n                vlut.val[2] = vreinterpret_u8_s16(d24);\n                vlut.val[3] = vreinterpret_u8_s16(d25);\n                vst1_s16(q->qv + 0, d4 = vreinterpret_s16_u8(vtbl4_u8(vlut, vget_low_u8(q8))));\n                vst1_s16(q->qv + 4, d5 = vreinterpret_s16_u8(vtbl4_u8(vlut, vget_high_u8(q8))));\n                vst1_s16(q->qv + 8, d6 = vreinterpret_s16_u8(vtbl4_u8(vlut, vget_low_u8(q9))));\n                vst1_s16(q->qv +12, d7 = vreinterpret_s16_u8(vtbl4_u8(vlut, vget_high_u8(q9))));\n#endif\n                }\n                {\n                    uint16x8_t bm0 = vld1q_u16(imask16_neon);\n                    uint16x8_t bm1 = vld1q_u16(imask16_neon + 8);\n                    uint16x4_t m;\n                    bm0 = vandq_u16(bm0, vceqq_s16(vcombine_s16(d4, d5), vdupq_n_s16(0)));\n                    bm1 = vandq_u16(bm1, vceqq_s16(vcombine_s16(d6, d7), vdupq_n_s16(0)));\n                    bm0 = vorrq_u16(bm0, bm1);\n                    m = vorr_u16(vget_low_u16(bm0), vget_high_u16(bm0));\n                    m = vpadd_u16(m, m);\n                    m = vpadd_u16(m, m);\n                    nz_mask = vget_lane_u16(vmvn_u16(m), 0);\n                }\n\n                if (mode & 1)\n       
         {\n                    q->qv[0] = save;\n                    nz_mask &= ~1;\n                }\n            }\n\n            zmask >>= 1;\n            nz_block_mask <<= 1;\n            if (nz_mask)\n                nz_block_mask |= 1;\n            q++;\n        } while (--ccol);\n        ccol = mode >> 1;\n    } while (--crow);\n    return nz_block_mask;\n}\n\nstatic void transform_neon(const pix_t *inp, const pix_t *pred, int inp_stride, int mode, quant_t *q)\n{\n    int crow = mode >> 1;\n    int ccol = crow;\n\n    do\n    {\n        do\n        {\n            FwdTransformResidual4x42_neon(inp, pred, inp_stride, q->dq);\n            q++;\n            inp += 4;\n            pred += 4;\n        } while (--ccol);\n        ccol = mode >> 1;\n        inp += 4*(inp_stride - ccol);\n        pred += 4*(16 - ccol);\n    } while (--crow);\n}\n\nstatic int h264e_transform_sub_quant_dequant_neon(const pix_t *inp, const pix_t *pred, int inp_stride, int mode, quant_t *q, const uint16_t *qdat)\n{\n    int zmask;\n    transform_neon(inp, pred, inp_stride, mode, q);\n    if (mode & 1) // QDQ_MODE_INTRA_16 || QDQ_MODE_CHROMA\n    {\n        int cloop = (mode >> 1)*(mode >> 1);\n        short *dc = ((short *)q) - 16;\n        quant_t *pq = q;\n        do\n        {\n            *dc++ = pq->dq[0];\n            pq++;\n        } while (--cloop);\n    }\n    zmask = zero_smallq_neon(q, mode, qdat);\n    return quantize_neon(q, mode, qdat, zmask);\n}\n\nstatic void h264e_transform_add_neon(pix_t *out, int out_stride, const pix_t *pred, quant_t *q, int side, int32_t mask)\n{\n    int crow = side;\n    int ccol = crow;\n\n    assert(!((unsigned)out % 4));\n    assert(!((unsigned)pred % 4));\n    assert(!(out_stride % 4));\n    do\n    {\n        do\n        {\n            if (mask >= 0)\n            {\n                // copy 4x4\n                pix_t *dst = out;\n                *(uint32_t*)dst = *(uint32_t*)(pred + 0 * 16); dst += out_stride;\n                *(uint32_t*)dst 
= *(uint32_t*)(pred + 1 * 16); dst += out_stride;\n                *(uint32_t*)dst = *(uint32_t*)(pred + 2 * 16); dst += out_stride;\n                *(uint32_t*)dst = *(uint32_t*)(pred + 3 * 16);\n            } else\n            {\n                TransformResidual4x4_neon(q->dq, pred, out, out_stride);\n            }\n            mask <<= 1;\n            q++;\n            out += 4;\n            pred += 4;\n        } while (--ccol);\n        ccol = side;\n        out += 4*(out_stride - ccol);\n        pred += 4*(16 - ccol);\n    } while (--crow);\n}\n#endif\n\n#if H264E_ENABLE_PLAIN_C\n\nstatic uint8_t byteclip_deblock(int x)\n{\n    if (x > 255)\n    {\n        return 255;\n    }\n    if (x < 0)\n    {\n        return 0;\n    }\n    return (uint8_t)x;\n}\n\nstatic int clip_range(int range, int src)\n{\n    if (src > range)\n    {\n        src = range;\n    }\n    if (src < -range)\n    {\n        src = -range;\n    }\n    return src;\n}\n\nstatic void deblock_chroma(uint8_t *pix, int stride, int alpha, int beta, int thr, int strength)\n{\n    int p1, p0, q0, q1;\n    int delta;\n\n    if (strength == 0)\n    {\n        return;\n    }\n\n    p1 = pix[-2*stride];\n    p0 = pix[-1*stride];\n    q0 = pix[ 0*stride];\n    q1 = pix[ 1*stride];\n\n    if (ABS(p0 - q0) >= alpha || ABS(p1 - p0) >= beta || ABS(q1 - q0) >= beta)\n    {\n        return;\n    }\n\n    if (strength < 4)\n    {\n        int tC = thr + 1;\n        delta = (((q0 - p0)*4) + (p1 - q1) + 4) >> 3;\n        delta = clip_range(tC, delta);\n        pix[-1*stride] = byteclip_deblock(p0 + delta);\n        pix[ 0*stride] = byteclip_deblock(q0 - delta);\n    } else\n    {\n        pix[-1*stride] = (pix_t)((2*p1 + p0 + q1 + 2) >> 2);\n        pix[ 0*stride] = (pix_t)((2*q1 + q0 + p1 + 2) >> 2);\n    }\n}\n\nstatic void deblock_luma_v(uint8_t *pix, int stride, int alpha, int beta, const uint8_t *pthr, const uint8_t *pstr)\n{\n    int p2, p1, p0, q0, q1, q2, thr;\n    int ap, aq, delta, cloop, i;\n    for (i = 
0; i < 4; i++)\n    {\n        cloop = 4;\n        if (pstr[i])\n        {\n            thr = pthr[i];\n            do\n            {\n                p1 = pix[-2];\n                p0 = pix[-1];\n                q0 = pix[ 0];\n                q1 = pix[ 1];\n\n                //if (ABS(p0 - q0) < alpha && ABS(p1 - p0) < beta && ABS(q1 - q0) < beta)\n                if (((ABS(p0 - q0) - alpha) & (ABS(p1 - p0) - beta) & (ABS(q1 - q0) - beta)) < 0)\n                {\n                    int tC = thr;\n                    // avoid conditional branches\n                    int sp, sq, d2;\n                    p2 = pix[-3];\n                    q2 = pix[ 2];\n                    ap = ABS(p2 - p0);\n                    aq = ABS(q2 - q0);\n                    delta = (((q0 - p0)*4) + (p1 - q1) + 4) >> 3;\n\n                    sp = (ap - beta) >> 31;\n                    sq = (aq - beta) >> 31;\n                    d2 = (((p2 + ((p0 + q0 + 1) >> 1)) >> 1) - p1) & sp;\n                    d2 = clip_range(thr, d2);\n                    pix[-2] = (pix_t)(p1 + d2);\n                    d2 = (((q2 + ((p0 + q0 + 1) >> 1)) >> 1) - q1) & sq;\n                    d2 = clip_range(thr, d2);\n                    pix[ 1] = (pix_t)(q1 + d2);\n                    tC = thr - sp - sq;\n                    delta = clip_range(tC, delta);\n                    pix[-1] = byteclip_deblock(p0 + delta);\n                    pix[ 0] = byteclip_deblock(q0 - delta);\n                }\n                pix += stride;\n            } while (--cloop);\n        } else\n        {\n            pix += 4*stride;\n        }\n    }\n}\n\nstatic void deblock_luma_h_s4(uint8_t *pix, int stride, int alpha, int beta)\n{\n    int p3, p2, p1, p0, q0, q1, q2, q3;\n    int ap, aq, cloop = 16;\n    do\n    {\n        int abs_p0_q0, abs_p1_p0, abs_q1_q0;\n        p1 = pix[-2*stride];\n        p0 = pix[-1*stride];\n        q0 = pix[ 0*stride];\n        q1 = pix[ 1*stride];\n        abs_p0_q0 = ABS(p0 - q0);\n        abs_p1_p0 = 
ABS(p1 - p0);\n        abs_q1_q0 = ABS(q1 - q0);\n        if (abs_p0_q0 < alpha && abs_p1_p0 < beta && abs_q1_q0 < beta)\n        {\n            int short_p = (2*p1 + p0 + q1 + 2);\n            int short_q = (2*q1 + q0 + p1 + 2);\n\n            if (abs_p0_q0 < ((alpha>>2)+2))\n            {\n                p2 = pix[-3*stride];\n                q2 = pix[ 2*stride];\n                ap = ABS(p2 - p0);\n                aq = ABS(q2 - q0);\n                if (ap < beta)\n                {\n                    int t = p2 + p1 + p0 + q0 + 2;\n                    p3 = pix[-4*stride];\n                    short_p += t - p1 + q0; //(p2 + 2*p1 + 2*p0 + 2*q0 + q1 + 4) >> 3);\n                    short_p >>= 1;\n                    pix[-2*stride] = (pix_t)(t >> 2);\n                    pix[-3*stride] = (pix_t)((2*p3 + 2*p2 + t + 2) >> 3); //(2*p3 + 3*p2 + p1 + p0 + q0 + 4) >> 3);\n                }\n                if (aq < beta)\n                {\n                    int t = q2 + q1 + p0 + q0 + 2;\n                    q3 = pix[ 3*stride];\n                    short_q += (t - q1 + p0);//(q2 + 2*q1 + 2*q0 + 2*p0 + p1 + 4)>>3);\n                    short_q >>= 1;\n                    pix[ 1*stride] = (pix_t)(t >> 2);\n                    pix[ 2*stride] = (pix_t)((2*q3 + 2*q2 + t + 2) >> 3); //((2*q3 + 3*q2 + q1 + q0 + p0 + 4) >> 3);\n                }\n            }\n            pix[-1*stride] = (pix_t)(short_p >> 2);\n            pix[ 0*stride] = (pix_t)(short_q >> 2);\n        }\n        pix += 1;\n    } while (--cloop);\n}\n\nstatic void deblock_luma_v_s4(uint8_t *pix, int stride, int alpha, int beta)\n{\n    int p3, p2, p1, p0, q0, q1, q2, q3;\n    int ap, aq, cloop = 16;\n    do\n    {\n        p2 = pix[-3];\n        p1 = pix[-2];\n        p0 = pix[-1];\n        q0 = pix[ 0];\n        q1 = pix[ 1];\n        q2 = pix[ 2];\n        if (ABS(p0 - q0) < alpha && ABS(p1 - p0) < beta && ABS(q1 - q0) < beta)\n        {\n            ap = ABS(p2 - p0);\n            aq = ABS(q2 - 
q0);\n\n            if (ap < beta && ABS(p0 - q0) < ((alpha >> 2) + 2))\n            {\n                p3 = pix[-4];\n                pix[-1] = (pix_t)((p2 + 2*p1 + 2*p0 + 2*q0 + q1 + 4) >> 3);\n                pix[-2] = (pix_t)((p2 + p1 + p0 + q0 + 2) >> 2);\n                pix[-3] = (pix_t)((2*p3 + 3*p2 + p1 + p0 + q0 + 4) >> 3);\n            } else\n            {\n                pix[-1] = (pix_t)((2*p1 + p0 + q1 + 2) >> 2);\n            }\n\n            if (aq < beta && ABS(p0 - q0) < ((alpha >> 2) + 2))\n            {\n                q3 = pix[ 3];\n                pix[ 0] = (pix_t)((q2 + 2*q1 + 2*q0 + 2*p0 + p1 + 4) >> 3);\n                pix[ 1] = (pix_t)((q2 + q1 + p0 + q0 + 2) >> 2);\n                pix[ 2] = (pix_t)((2*q3 + 3*q2 + q1 + q0 + p0 + 4) >> 3);\n            } else\n            {\n                pix[ 0] = (pix_t)((2*q1 + q0 + p1 + 2) >> 2);\n            }\n        }\n        pix += stride;\n    } while (--cloop);\n}\n\nstatic void deblock_luma_h(uint8_t *pix, int stride, int alpha, int beta, const uint8_t *pthr, const uint8_t *pstr)\n{\n    int p2, p1, p0, q0, q1, q2;\n    int ap, aq, delta, i;\n    for (i = 0; i < 4; i++)\n    {\n        if (pstr[i])\n        {\n            int cloop = 4;\n            int thr = pthr[i];\n            do\n            {\n                p1 = pix[-2*stride];\n                p0 = pix[-1*stride];\n                q0 = pix[ 0*stride];\n                q1 = pix[ 1*stride];\n\n                //if (ABS(p0-q0) < alpha && ABS(p1-p0) < beta && ABS(q1-q0) < beta)\n                if (((ABS(p0-q0) - alpha) & (ABS(p1-p0) - beta) & (ABS(q1-q0) - beta)) < 0)\n                {\n                    int tC = thr;\n                    int sp, sq, d2;\n                    p2 = pix[-3*stride];\n                    q2 = pix[ 2*stride];\n                    ap = ABS(p2 - p0);\n                    aq = ABS(q2 - q0);\n                    delta = (((q0 - p0)*4) + (p1 - q1) + 4) >> 3;\n\n                    sp = (ap - beta) >> 31;\n 
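/* Branchless clipping note (a reading aid, not a behavior change): (x - beta) >> 31\n                       yields the all-ones mask ~0 when x < beta and 0 otherwise, so the "& sp" and\n                       "& sq" masks apply the p/q side corrections only when ap/aq pass the beta\n                       test, and tC = thr - sp - sq reproduces the standard's\n                       tC0 + (ap < beta) + (aq < beta). */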
                   d2 = (((p2 + ((p0 + q0 + 1) >> 1)) >> 1) - p1) & sp;\n                    d2 = clip_range(thr, d2);\n                    pix[-2*stride] = (pix_t)(p1 + d2);\n\n                    sq = (aq - beta) >> 31;\n                    d2 = (((q2 + ((p0 + q0 + 1) >> 1)) >> 1) - q1) & sq;\n                    d2 = clip_range(thr, d2);\n                    pix[ 1*stride] = (pix_t)(q1 + d2);\n\n                    tC = thr - sp - sq;\n                    delta = clip_range(tC, delta);\n\n                    pix[-1*stride] = byteclip_deblock(p0 + delta);\n                    pix[ 0*stride] = byteclip_deblock(q0 - delta);\n                }\n                pix += 1;\n            } while (--cloop);\n        } else\n        {\n            pix += 4;\n        }\n    }\n}\n\nstatic void deblock_chroma_v(uint8_t *pix, int32_t stride, int a, int b, const uint8_t *thr, const uint8_t *str)\n{\n    int i;\n    for (i = 0; i < 8; i++)\n    {\n        deblock_chroma(pix, 1, a, b, thr[i >> 1], str[i >> 1]);\n        pix += stride;\n    }\n}\n\nstatic void deblock_chroma_h(uint8_t *pix, int32_t stride, int a, int b, const uint8_t *thr, const uint8_t *str)\n{\n    int i;\n    for (i = 0; i < 8; i++)\n    {\n        deblock_chroma(pix, stride, a, b, thr[i >> 1], str[i >> 1]);\n        pix += 1;\n    }\n}\n\nstatic void h264e_deblock_chroma(uint8_t *pix, int32_t stride, const deblock_params_t *par)\n{\n    const uint8_t *alpha = par->alpha;\n    const uint8_t *beta  = par->beta;\n    const uint8_t *thr   = par->tc0;\n    const uint8_t *strength = (uint8_t *)par->strength32;\n    int a,b,x,y;\n    a = alpha[0];\n    b = beta[0];\n    for (x = 0; x < 16; x += 8)\n    {\n        uint32_t str = *(uint32_t*)&strength[x];\n        if (str && a)\n        {\n            deblock_chroma_v(pix + (x >> 1), stride, a, b, thr + x, strength + x);\n        }\n        a = alpha[1];\n        b = beta[1];\n    }\n    thr += 16;\n    strength += 16;\n    a = alpha[2];\n    b = beta[2];\n    for (y 
= 0; y < 16; y += 8)\n    {\n        uint32_t str = *(uint32_t*)&strength[y];\n        if (str && a)\n        {\n            deblock_chroma_h(pix, stride, a, b, thr + y, strength + y);\n        }\n        pix += 4*stride;\n        a = alpha[3];\n        b = beta[3];\n    }\n}\n\nstatic void h264e_deblock_luma(uint8_t *pix, int32_t stride, const deblock_params_t *par)\n{\n    const uint8_t *alpha = par->alpha;\n    const uint8_t *beta  = par->beta;\n    const uint8_t *thr   = par->tc0;\n    const uint8_t *strength = (uint8_t *)par->strength32;\n    int a = alpha[0];\n    int b = beta[0];\n    int x, y;\n    for (x = 0; x < 16; x += 4)\n    {\n        uint32_t str = *(uint32_t*)&strength[x];\n        if ((uint8_t)str == 4)\n        {\n            deblock_luma_v_s4(pix + x, stride, a, b);\n        } else if (str && a)\n        {\n            deblock_luma_v(pix + x, stride, a, b, thr + x, strength + x);\n        }\n        a = alpha[1];\n        b = beta[1];\n    }\n    a = alpha[2];\n    b = beta[2];\n    thr += 16;\n    strength += 16;\n    for (y = 0; y < 16; y += 4)\n    {\n        uint32_t str = *(uint32_t*)&strength[y];\n        if ((uint8_t)str == 4)\n        {\n            deblock_luma_h_s4(pix, stride, a, b);\n        } else if (str && a)\n        {\n            deblock_luma_h(pix, stride, a, b, thr + y, strength + y);\n        }\n        a = alpha[3];\n        b = beta[3];\n        pix += 4*stride;\n    }\n}\n\nstatic void h264e_denoise_run(unsigned char *frm, unsigned char *frmprev, int w, int h_arg, int stride_frm, int stride_frmprev)\n{\n    int cloop, h = h_arg;\n    if (w <= 2 || h <= 2)\n    {\n        return;\n    }\n    w -= 2;\n    h -= 2;\n\n    do\n    {\n        unsigned char *pf = frm += stride_frm;\n        unsigned char *pp = frmprev += stride_frmprev;\n        cloop = w;\n        pp[-stride_frmprev] = *pf++;\n        pp++;\n        do\n        {\n            int d, neighbourhood;\n            unsigned g, gd, gn, out_val;\n            d = pf[0] 
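/* Temporal-blend sketch (terms inferred from the table use below): gd and gn are\n               Q8 attenuations looked up from the pixel and neighbourhood differences,\n               g = gn*gd is a Q16 gain, and out_val = (pp*g + (0xffff - g)*pf + (1 << 15)) >> 16\n               fades between the previous denoised pixel (pp) and the current input (pf). */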
- pp[0];\n            neighbourhood  = pf[-1]      - pp[-1];\n            neighbourhood += pf[+1]      - pp[+1];\n            neighbourhood += pf[-stride_frm] - pp[-stride_frmprev];\n            neighbourhood += pf[+stride_frm] - pp[+stride_frmprev];\n\n            if (d < 0)\n            {\n                d = -d;\n            }\n            if (neighbourhood < 0)\n            {\n                neighbourhood = -neighbourhood;\n            }\n            neighbourhood >>= 2;\n\n            gd = g_diff_to_gainQ8[d];\n            gn = g_diff_to_gainQ8[neighbourhood];\n\n            gn <<= 2;\n            if (gn > 255)\n            {\n                gn = 255;\n            }\n\n            gn = 255 - gn;\n            gd = 255 - gd;\n            g = gn*gd;  // Q8*Q8 = Q16;\n\n            //out_val = ((pp[0]*g ) >> 16) + (((0xffff-g)*pf[0] ) >> 16);\n            //out_val = ((pp[0]*g + (1<<15)) >> 16) + (((0xffff-g)*pf[0]  + (1<<15)) >> 16);\n            out_val = (pp[0]*g + (0xffff - g)*pf[0]  + (1 << 15)) >> 16;\n\n            assert(out_val <= 255);\n\n            pp[-stride_frmprev] = (unsigned char)out_val;\n            //pp[-stride_frmprev] = gd;//(unsigned char)((neighbourhood+1)>255?255:(neighbourhood+1));\n\n            pf++, pp++;\n        } while (--cloop);\n\n        pp[-stride_frmprev] = *pf++;\n    } while(--h);\n\n    memcpy(frmprev + stride_frmprev, frm + stride_frm, w + 2);\n    h = h_arg - 2;\n    do\n    {\n        memcpy(frmprev, frmprev - stride_frmprev, w + 2);\n        frmprev -= stride_frmprev;\n    } while(--h);\n    memcpy(frmprev, frm - stride_frm*(h_arg - 2), w + 2);\n}\n\n#undef IS_NULL\n#define IS_NULL(p) ((p) < (pix_t *)32)\n\nstatic uint32_t intra_predict_dc(const pix_t *left, const pix_t *top, int log_side)\n{\n    unsigned dc = 0, side = 1u << log_side, round = 0;\n    do\n    {\n        if (!IS_NULL(left))\n        {\n            int cloop = side;\n            round += side >> 1;\n            do\n            {\n                dc += 
*left++;\n                dc += *left++;\n                dc += *left++;\n                dc += *left++;\n            } while(cloop -= 4);\n        }\n        left = top;\n        top = NULL;\n    } while (left);\n    dc += round;\n    if (round == side)\n        dc >>= 1;\n    dc >>= log_side;\n    if (!round) dc = 128;\n    return dc * 0x01010101;\n}\n\n/*\n * Note: To make the code more readable we refer to the neighboring pixels\n * in variables named as below:\n *\n *    UL U0 T1 U2 U3 U4 U5 U6 U7\n *    L0 xx xx xx xx\n *    L1 xx xx xx xx\n *    L2 xx xx xx xx\n *    L3 xx xx xx xx\n */\n#define UL edge[-1]\n#define U0 edge[0]\n#define T1 edge[1]\n#define U2 edge[2]\n#define U3 edge[3]\n#define U4 edge[4]\n#define U5 edge[5]\n#define U6 edge[6]\n#define U7 edge[7]\n#define L0 edge[-2]\n#define L1 edge[-3]\n#define L2 edge[-4]\n#define L3 edge[-5]\n\nstatic void h264e_intra_predict_16x16(pix_t *predict, const pix_t *left, const pix_t *top, int mode)\n{\n    int cloop = 16;\n    uint32_t *d = (uint32_t*)predict;\n    assert(IS_ALIGNED(predict, 4));\n    assert(IS_ALIGNED(top, 4));\n    if (mode != 1)\n    {\n        uint32_t t0, t1, t2, t3;\n        if (mode < 1)\n        {\n            t0 = ((uint32_t*)top)[0];\n            t1 = ((uint32_t*)top)[1];\n            t2 = ((uint32_t*)top)[2];\n            t3 = ((uint32_t*)top)[3];\n        } else //(mode == 2)\n        {\n            t0 = t1 = t2 = t3 = intra_predict_dc(left, top, 4);\n        }\n        do\n        {\n            *d++ = t0;\n            *d++ = t1;\n            *d++ = t2;\n            *d++ = t3;\n        } while (--cloop);\n    } else //if (mode == 1)\n    {\n        do\n        {\n            uint32_t val = *left++ * 0x01010101u;\n            *d++ = val;\n            *d++ = val;\n            *d++ = val;\n            *d++ = val;\n        } while (--cloop);\n    }\n}\n\nstatic void h264e_intra_predict_chroma(pix_t *predict, const pix_t *left, const pix_t *top, int mode)\n{\n    int cloop = 8;\n    
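/* Layout note, inferred from the indexing below: predict is 16 bytes wide with\n       U in columns 0..7 and V in columns 8..15, so each row writes d[0..1] for U and\n       d[2..3] for V; "left" carries 8 U samples followed by 8 V samples. */\n    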
uint32_t *d = (uint32_t*)predict;\n    assert(IS_ALIGNED(predict, 4));\n    assert(IS_ALIGNED(top, 4));\n    if (mode < 1)\n    {\n        uint32_t t0, t1, t2, t3;\n        t0 = ((uint32_t*)top)[0];\n        t1 = ((uint32_t*)top)[1];\n        t2 = ((uint32_t*)top)[2];\n        t3 = ((uint32_t*)top)[3];\n        do\n        {\n            *d++ = t0;\n            *d++ = t1;\n            *d++ = t2;\n            *d++ = t3;\n        } while (--cloop);\n    } else if (mode == 1)\n    {\n        do\n        {\n            uint32_t u = left[0] * 0x01010101u;\n            uint32_t v = left[8] * 0x01010101u;\n            d[0] = u;\n            d[1] = u;\n            d[2] = v;\n            d[3] = v;\n            d += 4;\n            left++;\n        } while(--cloop);\n    } else //if (mode == 2)\n    {\n        int ccloop = 2;\n        cloop = 2;\n        do\n        {\n            d[0] = d[1] = d[16] = intra_predict_dc(left, top, 2);\n            d[17] = intra_predict_dc(left + 4, top + 4, 2);\n            if (!IS_NULL(top))\n            {\n                d[1] = intra_predict_dc(NULL, top + 4, 2);\n            }\n            if (!IS_NULL(left))\n            {\n                d[16] = intra_predict_dc(NULL, left + 4, 2);\n            }\n            d += 2;\n            left += 8;\n            top += 8;\n        } while(--cloop);\n\n        do\n        {\n            cloop = 12;\n            do\n            {\n                *d = d[-4];\n                d++;\n            } while(--cloop);\n            d += 4;\n        } while(--ccloop);\n    }\n}\n\nstatic int pix_sad_4(uint32_t r0, uint32_t r1, uint32_t r2, uint32_t r3,\n                     uint32_t x0, uint32_t x1, uint32_t x2, uint32_t x3)\n{\n#if defined(__arm__)\n    int sad = __usad8(r0, x0);\n    sad = __usada8(r1, x1, sad);\n    sad = __usada8(r2, x2, sad);\n    sad = __usada8(r3, x3, sad);\n    return sad;\n#else\n    int c, sad = 0;\n    for (c = 0; c < 4; c++)\n    {\n        int d = (r0 & 0xff) - (x0 & 0xff); r0 
>>= 8; x0 >>= 8;\n        sad += ABS(d);\n    }\n    for (c = 0; c < 4; c++)\n    {\n        int d = (r1 & 0xff) - (x1 & 0xff); r1 >>= 8; x1 >>= 8;\n        sad += ABS(d);\n    }\n    for (c = 0; c < 4; c++)\n    {\n        int d = (r2 & 0xff) - (x2 & 0xff); r2 >>= 8; x2 >>= 8;\n        sad += ABS(d);\n    }\n    for (c = 0; c < 4; c++)\n    {\n        int d = (r3 & 0xff) - (x3 & 0xff); r3 >>= 8; x3 >>= 8;\n        sad += ABS(d);\n    }\n    return sad;\n#endif\n}\n\nstatic int h264e_intra_choose_4x4(const pix_t *blockin, pix_t *blockpred, int avail, const pix_t *edge, int mpred, int penalty)\n{\n    int sad, best_sad, best_m = 2;\n\n    uint32_t r0, r1, r2, r3;\n    uint32_t x0, x1, x2, x3, x;\n\n    r0 = ((uint32_t *)blockin)[ 0];\n    r1 = ((uint32_t *)blockin)[ 4];\n    r2 = ((uint32_t *)blockin)[ 8];\n    r3 = ((uint32_t *)blockin)[12];\n#undef TEST\n#define TEST(mode) sad = pix_sad_4(r0, r1, r2, r3, x0, x1, x2, x3); \\\n        if (mode != mpred) sad += penalty;    \\\n        if (sad < best_sad)                   \\\n        {                                     \\\n            ((uint32_t *)blockpred)[ 0] = x0; \\\n            ((uint32_t *)blockpred)[ 4] = x1; \\\n            ((uint32_t *)blockpred)[ 8] = x2; \\\n            ((uint32_t *)blockpred)[12] = x3; \\\n            best_sad = sad;                   \\\n            best_m = mode;                    \\\n        }\n\n    // DC\n    x0 = x1 = x2 = x3 = intra_predict_dc((avail & AVAIL_L) ? &L3 : 0, (avail & AVAIL_T) ? 
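/* DC (mode 2) is scored first, outside the TEST macro, to seed best_sad;\n       the final return packs the winner as best_m + (best_sad << 4). */ 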
&U0 : 0, 2);\n    best_sad = pix_sad_4(r0, r1, r2, r3, x0, x1, x2, x3);\n    if (2 != mpred)\n    {\n        best_sad += penalty;\n    }\n    ((uint32_t *)blockpred)[ 0] = x0;\n    ((uint32_t *)blockpred)[ 4] = x1;\n    ((uint32_t *)blockpred)[ 8] = x2;\n    ((uint32_t *)blockpred)[12] = x3;\n\n\n    if (avail & AVAIL_T)\n    {\n        uint32_t save = *(uint32_t*)&U4;\n        if (!(avail & AVAIL_TR))\n        {\n            *(uint32_t*)&U4 = U3*0x01010101u;\n        }\n\n        x0 = x1 = x2 = x3 = *(uint32_t*)&U0;\n        TEST(0)\n\n        x  = ((U6 + 3u*U7      + 2u) >> 2) << 24;\n        x |= ((U5 + 2u*U6 + U7 + 2u) >> 2) << 16;\n        x |= ((U4 + 2u*U5 + U6 + 2u) >> 2) << 8;\n        x |= ((U3 + 2u*U4 + U5 + 2u) >> 2);\n\n        x3 = x;\n        x = (x << 8) | ((U2 + 2u*U3 + U4 + 2u) >> 2);\n        x2 = x;\n        x = (x << 8) | ((T1 + 2u*U2 + U3 + 2u) >> 2);\n        x1 = x;\n        x = (x << 8) | ((U0 + 2u*T1 + U2 + 2u) >> 2);\n        x0 = x;\n        TEST(3)\n\n        x3 = x1;\n        x1 = x0;\n\n        x  = ((U4 + U5 + 1u) >> 1) << 24;\n        x |= ((U3 + U4 + 1u) >> 1) << 16;\n        x |= ((U2 + U3 + 1u) >> 1) << 8;\n        x |= ((T1 + U2 + 1u) >> 1);\n        x2 = x;\n        x = (x << 8) | ((U0 + T1 + 1) >> 1);\n        x0 = x;\n        TEST(7)\n\n        *(uint32_t*)&U4 = save;\n    }\n\n    if (avail & AVAIL_L)\n    {\n        x0 = 0x01010101u * L0;\n        x1 = 0x01010101u * L1;\n        x2 = 0x01010101u * L2;\n        x3 = 0x01010101u * L3;\n        TEST(1)\n\n        x = x3;\n        x <<= 16;\n        x |= ((L2 + 3u*L3 + 2u) >> 2) << 8;\n        x |= ((L2 + L3 + 1u) >> 1);\n        x2 = x;\n        x <<= 16;\n        x |= ((L1 + 2u*L2 + L3 + 2u) >> 2) << 8;\n        x |= ((L1 + L2 + 1u) >> 1);\n        x1 = x;\n        x <<= 16;\n        x |= ((L0 + 2u*L1 + L2 + 2u) >> 2) << 8;\n        x |= ((L0 + L1 + 1u) >> 1);\n        x0 = x;\n        TEST(8)\n    }\n\n    if ((avail & (AVAIL_T | AVAIL_L | AVAIL_TL)) == (AVAIL_T | AVAIL_L | 
AVAIL_TL))\n    {\n        uint32_t line0, line3;\n        x  = ((U3 + 2u*U2 + T1 + 2u) >> 2) << 24;\n        x |= ((U2 + 2u*T1 + U0 + 2u) >> 2) << 16;\n        x |= ((T1 + 2u*U0 + UL + 2u) >> 2) << 8;\n        x |= ((U0 + 2u*UL + L0 + 2u) >> 2);\n        line0 = x;\n        x0 = x;\n        x = (x << 8) | ((UL + 2u*L0 + L1 + 2u) >> 2);\n        x1 = x;\n        x = (x << 8) | ((L0 + 2u*L1 + L2 + 2u) >> 2);\n        x2 = x;\n        x = (x << 8) | ((L1 + 2u*L2 + L3 + 2u) >> 2);\n        x3 = x;\n        line3 = x;\n        TEST(4)\n\n        x = x0 << 8;\n        x |= ((UL + L0 + 1u) >> 1);\n        x0 = x;\n        x <<= 8;\n        x |= (line3 >> 16) & 0xff;\n        x <<= 8;\n        x |= ((L0 + L1 + 1u) >> 1);\n        x1 = x;\n        x <<= 8;\n        x |= (line3 >> 8) & 0xff;\n        x <<= 8;\n        x |= ((L1 + L2 + 1u) >> 1);\n        x2 = x;\n        x <<= 8;\n        x |= line3 & 0xff;\n        x <<= 8;\n        x |= ((L2 + L3 + 1u) >> 1);\n        x3 = x;\n        TEST(6)\n\n        x1 = line0;\n        x3 = (x1 << 8) | ((line3 >> 8) & 0xFF);\n\n        x  = ((U2 + U3 + 1u) >> 1) << 24;\n        x |= ((T1 + U2 + 1u) >> 1) << 16;\n        x |= ((U0 + T1 + 1u) >> 1) << 8;\n        x |= ((UL + U0 + 1u) >> 1);\n        x0 = x;\n        x = (x << 8) | ((line3 >> 16) & 0xFF);\n        x2 = x;\n        TEST(5)\n    }\n    return best_m + (best_sad << 4);\n}\n\nstatic uint8_t byteclip(int x)\n{\n    if (x > 255) x = 255;\n    if (x < 0) x = 0;\n    return (uint8_t)x;\n}\n\nstatic int hpel_lpf(const uint8_t *p, int s)\n{\n    return p[0] - 5*p[s] + 20*p[2*s] + 20*p[3*s] - 5*p[4*s] + p[5*s];\n}\n\nstatic void copy_wh(const uint8_t *src, int src_stride, uint8_t *dst, int w, int h)\n{\n    int x, y;\n    for (y = 0; y < h; y++)\n    {\n        for (x = 0; x < w; x++)\n        {\n            dst [x] = src [x];\n        }\n        dst += 16;\n        src += src_stride;\n    }\n}\n\nstatic void hpel_lpf_diag(const uint8_t *src, int src_stride, uint8_t 
*h264e_restrict dst, int w, int h)\n{\n    ALIGN(16) int16_t scratch[21 * 16] ALIGN2(16);  /* 21 rows by 16 pixels per row */\n\n    /*\n     * Intermediate values are the horizontal half-pel samples,\n     * starting at (0.5, -2) at the top and extending to (0.5, h + 2) at the bottom;\n     * scratch holds a 2D array of size (w) x (h + 5)\n     */\n    int y, x;\n    for (y = 0; y < h + 5; y++)\n    {\n        for (x = 0; x < w; x++)\n        {\n            scratch[y * w + x] = (int16_t)hpel_lpf(src + (y - 2) * src_stride + (x - 2), 1);\n        }\n    }\n\n    /* Vertical interpolate */\n    for (y = 0; y < h; y++)\n    {\n        for (x = 0; x < w; x++)\n        {\n            int pos = y * w + x;\n            int HalfCoeff =\n                scratch [pos] -\n                5 * scratch [pos + 1 * w] +\n                20 * scratch [pos + 2 * w] +\n                20 * scratch [pos + 3 * w] -\n                5 * scratch [pos + 4 * w] +\n                scratch [pos + 5 * w];\n\n            HalfCoeff = byteclip((HalfCoeff + 512) >> 10);\n\n            dst [y * 16 + x] = (uint8_t)HalfCoeff;\n        }\n    }\n}\n\nstatic void hpel_lpf_hor(const uint8_t *src, int src_stride, uint8_t *h264e_restrict dst, int w, int h)\n{\n    int x, y;\n    for (y = 0; y < h; y++)\n    {\n        for (x = 0; x < w; x++)\n        {\n            dst [y * 16 + x] = byteclip((hpel_lpf(src + y * src_stride + (x - 2), 1) + 16) >> 5);\n        }\n    }\n}\n\nstatic void hpel_lpf_ver(const uint8_t *src, int src_stride, uint8_t *h264e_restrict dst, int w, int h)\n{\n    int y, x;\n    for (y = 0; y < h; y++)\n    {\n        for (x = 0; x < w; x++)\n        {\n            dst [y * 16 + x] = byteclip((hpel_lpf(src + (y - 2) * src_stride + x, src_stride) + 16) >> 5);\n        }\n    }\n}\n\nstatic void average_16x16_unalign(uint8_t *dst, const uint8_t *src1, int src1_stride)\n{\n    int x, y;\n    for (y = 0; y < 16; y++)\n    {\n        for (x = 0; x < 16; x++)\n        {\n            dst[y * 16 
+ x] = (uint8_t)(((uint32_t)dst [y * 16 + x] + src1[y*src1_stride + x] + 1) >> 1);\n        }\n    }\n}\n\nstatic void h264e_qpel_average_wh_align(const uint8_t *src0, const uint8_t *src1, uint8_t *dst, point_t wh)\n{\n    int w = wh.s.x;\n    int h = wh.s.y;\n    int x, y;\n    for (y = 0; y < h; y++)\n    {\n        for (x = 0; x < w; x++)\n        {\n            dst[y * 16 + x] = (uint8_t)((src0[y * 16 + x] + src1[y * 16 + x] + 1) >> 1);\n        }\n    }\n}\n\nstatic void h264e_qpel_interpolate_luma(const uint8_t *src, int src_stride, uint8_t *h264e_restrict dst, point_t wh, point_t dxdy)\n{\n    ALIGN(16) uint8_t scratch[16*16] ALIGN2(16);\n    //  src += ((dx + 1) >> 2) + ((dy + 1) >> 2)*src_stride;            // dx == 3 ? next row; dy == 3 ? next line\n    //  dxdy              actions: Horizontal, Vertical, Diagonal, Average\n    //  0 1 2 3 +1        -   ha    h    ha+\n    //  1                 va  hva   hda  hv+a\n    //  2                 v   vda   d    v+da\n    //  3                 va+ h+va h+da  h+v+a\n    //  +stride\n    int32_t pos = 1 << (dxdy.s.x + 4*dxdy.s.y);\n    int dstused = 0;\n\n    if (pos == 1)\n    {\n        copy_wh(src, src_stride, dst, wh.s.x, wh.s.y);\n        return;\n    }\n    if (pos & 0xe0ee)// 1110 0000 1110 1110\n    {\n        hpel_lpf_hor(src + ((pos & 0xe000) ? src_stride : 0), src_stride, dst, wh.s.x, wh.s.y);\n        dstused++;\n    }\n    if (pos & 0xbbb0)// 1011 1011 1011 0000\n    {\n        hpel_lpf_ver(src + ((pos & 0x8880) ? 1 : 0), src_stride, dstused ? scratch : dst, wh.s.x, wh.s.y);\n        dstused++;\n    }\n    if (pos & 0x4e40)// 0100 1110 0100 0000\n    {\n        hpel_lpf_diag(src, src_stride, dstused ? 
scratch : dst, wh.s.x, wh.s.y);\n        dstused++;\n    }\n    if (pos & 0xfafa)// 1111 1010 1111 1010\n    {\n        assert(wh.s.x == 16 && wh.s.y == 16);\n        if (dstused == 2)\n        {\n            point_t p;\n\n            src = scratch;\n            src_stride = 16;\n            p.u32 = 16 + (16<<16);\n\n            h264e_qpel_average_wh_align(src, dst, dst, p);\n            return;\n        } else\n        {\n            src += ((dxdy.s.x + 1) >> 2) + ((dxdy.s.y + 1) >> 2)*src_stride;\n        }\n        average_16x16_unalign(dst, src, src_stride);\n    }\n}\n\nstatic void h264e_qpel_interpolate_chroma(const uint8_t *src, int src_stride, uint8_t *h264e_restrict dst, point_t wh, point_t dxdy)\n{\n    /* if fractional MV is not (0, 0) */\n    if (dxdy.u32)\n    {\n        int a = (8 - dxdy.s.x) * (8 - dxdy.s.y);\n        int b = dxdy.s.x * (8 - dxdy.s.y);\n        int c = (8 - dxdy.s.x) * dxdy.s.y;\n        int d = dxdy.s.x * dxdy.s.y;\n        int h = wh.s.y;\n        do\n        {\n            int x;\n            for (x = 0; x < wh.s.x; x++)\n            {\n                dst[x] = (uint8_t)((\n                   a * src[             x] + b * src[             x + 1] +\n                   c * src[src_stride + x] + d * src[src_stride + x + 1] +\n                   32) >> 6);\n            }\n            dst += 16;\n            src += src_stride;\n        } while (--h);\n    } else\n    {\n        copy_wh(src, src_stride, dst, wh.s.x, wh.s.y);\n    }\n}\n\nstatic int sad_block(const pix_t *a, int a_stride, const pix_t *b, int b_stride, int w, int h)\n{\n    int r, c, sad = 0;\n    for (r = 0; r < h; r++)\n    {\n        for (c = 0; c < w; c++)\n        {\n            int d = a[c] - b[c];\n            sad += ABS(d);\n        }\n        a += a_stride;\n        b += b_stride;\n    }\n    return sad;\n}\n\nstatic int h264e_sad_mb_unlaign_8x8(const pix_t *a, int a_stride, const pix_t *b, int sad[4])\n{\n    sad[0] = sad_block(a,     a_stride, b,     16, 8, 
8);\n    sad[1] = sad_block(a + 8, a_stride, b + 8, 16, 8, 8);\n    a += 8*a_stride;\n    b += 8*16;\n    sad[2] = sad_block(a,     a_stride, b,     16, 8, 8);\n    sad[3] = sad_block(a + 8, a_stride, b + 8, 16, 8, 8);\n    return sad[0] + sad[1] + sad[2] + sad[3];\n}\n\nstatic int h264e_sad_mb_unlaign_wh(const pix_t *a, int a_stride, const pix_t *b, point_t wh)\n{\n    return sad_block(a, a_stride, b, 16, wh.s.x, wh.s.y);\n}\n\nstatic void h264e_copy_8x8(pix_t *d, int d_stride, const pix_t *s)\n{\n    int cloop = 8;\n    assert(IS_ALIGNED(d, 8));\n    assert(IS_ALIGNED(s, 8));\n    do\n    {\n        int a = ((const int*)s)[0];\n        int b = ((const int*)s)[1];\n        ((int*)d)[0] = a;\n        ((int*)d)[1] = b;\n        s += 16;\n        d += d_stride;\n    } while(--cloop);\n}\n\nstatic void h264e_copy_16x16(pix_t *d, int d_stride, const pix_t *s, int s_stride)\n{\n    int cloop = 16;\n    assert(IS_ALIGNED(d, 8));\n    assert(IS_ALIGNED(s, 8));\n    do\n    {\n        int a = ((const int*)s)[0];\n        int b = ((const int*)s)[1];\n        int x = ((const int*)s)[2];\n        int y = ((const int*)s)[3];\n        ((int*)d)[0] = a;\n        ((int*)d)[1] = b;\n        ((int*)d)[2] = x;\n        ((int*)d)[3] = y;\n        s += s_stride;\n        d += d_stride;\n    } while(--cloop);\n}\n#endif /* H264E_ENABLE_PLAIN_C */\n\n#if H264E_ENABLE_PLAIN_C || (H264E_ENABLE_NEON && !defined(MINIH264_ASM))\nstatic void h264e_copy_borders(unsigned char *pic, int w, int h, int guard)\n{\n    int r, rowbytes = w + 2*guard;\n    unsigned char *d = pic - guard;\n    for (r = 0; r < h; r++, d += rowbytes)\n    {\n        memset(d, d[guard], guard);\n        memset(d + rowbytes - guard, d[rowbytes - guard - 1], guard);\n    }\n    d = pic - guard - guard*rowbytes;\n    for (r = 0; r < guard; r++)\n    {\n        memcpy(d, pic - guard, rowbytes);\n        memcpy(d + (guard + h)*rowbytes, pic - guard + (h - 1)*rowbytes, rowbytes);\n        d += rowbytes;\n    }\n}\n#endif /* 
H264E_ENABLE_PLAIN_C || (H264E_ENABLE_NEON && !defined(MINIH264_ASM)) */\n\n#if H264E_ENABLE_PLAIN_C\n#undef TRANSPOSE_BLOCK\n#define TRANSPOSE_BLOCK     1\n#define UNZIGSAG_IN_QUANT   0\n#define SUM_DIF(a, b) { int t = a + b; b = a - b; a = t; }\n\nstatic int clip_byte(int x)\n{\n    if (x > 255)\n    {\n        x = 255;\n    } else if (x < 0)\n    {\n        x = 0;\n    }\n    return x;\n}\n\nstatic void hadamar4_2d(int16_t *x)\n{\n    int s = 1;\n    int sback = 1;\n    int16_t tmp[16];\n    int16_t *out = tmp;\n    int16_t *p = x;\n    do\n    {\n        int cloop = 4;\n        do\n        {\n            int a, b, c, d;\n            a = *p; p += 4;//s;\n            b = *p; p += 4;//s;\n            c = *p; p += 4;//s;\n            d = *p; p -= 11;//sback;\n            SUM_DIF(a, c);\n            SUM_DIF(b, d);\n            SUM_DIF(a, b);\n            SUM_DIF(c, d);\n\n            *out = (int16_t)a; out += s;\n            *out = (int16_t)c; out += s;\n            *out = (int16_t)d; out += s;\n            *out = (int16_t)b; out += sback;\n        } while (--cloop);\n        s = 5 - s;\n        sback = -11;\n        out = x;\n        p = tmp;\n    } while (s != 1);\n}\n\nstatic void dequant_dc(quant_t *q, int16_t *qval, int dequant, int n)\n{\n    do q++->dq[0] = (int16_t)(*qval++ * (int16_t)dequant); while (--n);\n}\n\nstatic void quant_dc(int16_t *qval, int16_t *deq, int16_t quant, int n, int round_q18)\n{\n#if UNZIGSAG_IN_QUANT\n    int r_minus =  (1 << 18) - round_q18;\n    static const uint8_t iscan16[16] = {0, 1, 5, 6, 2, 4, 7, 12, 3, 8, 11, 13, 9, 10, 14, 15};\n    static const uint8_t iscan4[4] = {0, 1, 2, 3};\n    const uint8_t *scan = n == 4 ? iscan4 : iscan16;\n    do\n    {\n        int v = *qval;\n        int r = v < 0 ? r_minus : round_q18;\n        deq[*scan++] = *qval++ = (v * quant + r) >> 18;\n    } while (--n);\n#else\n    int r_minus =  (1<<18) - round_q18;\n    do\n    {\n        int v = *qval;\n        int r = v < 0 ? 
r_minus : round_q18;\n        *deq++ = *qval++ = (v * quant + r) >> 18;\n    } while (--n);\n#endif\n}\n\nstatic void hadamar2_2d(int16_t *x)\n{\n    int a = x[0];\n    int b = x[1];\n    int c = x[2];\n    int d = x[3];\n    x[0] = (int16_t)(a + b + c + d);\n    x[1] = (int16_t)(a - b + c - d);\n    x[2] = (int16_t)(a + b - c - d);\n    x[3] = (int16_t)(a - b - c + d);\n}\n\nstatic void h264e_quant_luma_dc(quant_t *q, int16_t *deq, const uint16_t *qdat)\n{\n    int16_t *tmp = ((int16_t*)q) - 16;\n    hadamar4_2d(tmp);\n    quant_dc(tmp, deq, qdat[0], 16, 0x20000);//0x15555);\n    hadamar4_2d(tmp);\n    assert(!(qdat[1] & 3));\n    // dirty trick here: shift w/o rounding, since it has no effect for qp >= 10 (or, to be precise, for qp >= 9)\n    dequant_dc(q, tmp, qdat[1] >> 2, 16);\n}\n\nstatic int h264e_quant_chroma_dc(quant_t *q, int16_t *deq, const uint16_t *qdat)\n{\n    int16_t *tmp = ((int16_t*)q) - 16;\n    hadamar2_2d(tmp);\n    quant_dc(tmp, deq, (int16_t)(qdat[0] << 1), 4, 0xAAAA);\n    hadamar2_2d(tmp);\n    assert(!(qdat[1] & 1));\n    dequant_dc(q, tmp, qdat[1] >> 1, 4);\n    return !!(tmp[0] | tmp[1] | tmp[2] | tmp[3]);\n}\n\nstatic const uint8_t g_idx2quant[16] =\n{\n    0, 2, 0, 2,\n    2, 4, 2, 4,\n    0, 2, 0, 2,\n    2, 4, 2, 4\n};\n\n#define TRANSFORM(x0, x1, x2, x3, p, s) { \\\n    int t0 = x0 + x3;                     \\\n    int t1 = x0 - x3;                     \\\n    int t2 = x1 + x2;                     \\\n    int t3 = x1 - x2;                     \\\n    (p)[  0] = (int16_t)(t0 + t2);        \\\n    (p)[  s] = (int16_t)(t1*2 + t3);      \\\n    (p)[2*s] = (int16_t)(t0 - t2);        \\\n    (p)[3*s] = (int16_t)(t1 - t3*2);      \\\n}\n\nstatic void FwdTransformResidual4x42(const uint8_t *inp, const uint8_t *pred,\n    uint32_t inp_stride, int16_t *out)\n{\n    int i;\n    int16_t tmp[16];\n\n#if TRANSPOSE_BLOCK\n    // Transform columns\n    for (i = 0; i < 4; i++, pred++, inp++)\n    {\n        int f0 = inp[0] - pred[0];\n        int 
f1 = inp[1*inp_stride] - pred[1*16];\n        int f2 = inp[2*inp_stride] - pred[2*16];\n        int f3 = inp[3*inp_stride] - pred[3*16];\n        TRANSFORM(f0, f1, f2, f3, tmp + i*4, 1);\n    }\n    // Transform rows\n    for (i = 0; i < 4; i++)\n    {\n        int d0 = tmp[i + 0];\n        int d1 = tmp[i + 4];\n        int d2 = tmp[i + 8];\n        int d3 = tmp[i + 12];\n        TRANSFORM(d0, d1, d2, d3, out + i, 4);\n    }\n\n#else\n    /* Transform rows */\n    for (i = 0; i < 16; i += 4)\n    {\n        int d0 = inp[0] - pred[0];\n        int d1 = inp[1] - pred[1];\n        int d2 = inp[2] - pred[2];\n        int d3 = inp[3] - pred[3];\n        TRANSFORM(d0, d1, d2, d3, tmp + i, 1);\n        pred += 16;\n        inp += inp_stride;\n    }\n\n    /* Transform columns */\n    for (i = 0; i < 4; i++)\n    {\n        int f0 = tmp[i + 0];\n        int f1 = tmp[i + 4];\n        int f2 = tmp[i + 8];\n        int f3 = tmp[i + 12];\n        TRANSFORM(f0, f1, f2, f3, out + i, 4);\n    }\n#endif\n}\n\nstatic void TransformResidual4x4(int16_t *pSrc)\n{\n    int i;\n    int16_t tmp[16];\n\n    /* Transform rows */\n    for (i = 0; i < 16; i += 4)\n    {\n#if TRANSPOSE_BLOCK\n        int d0 = pSrc[(i >> 2) + 0];\n        int d1 = pSrc[(i >> 2) + 4];\n        int d2 = pSrc[(i >> 2) + 8];\n        int d3 = pSrc[(i >> 2) + 12];\n#else\n        int d0 = pSrc[i + 0];\n        int d1 = pSrc[i + 1];\n        int d2 = pSrc[i + 2];\n        int d3 = pSrc[i + 3];\n#endif\n        int e0 = d0 + d2;\n        int e1 = d0 - d2;\n        int e2 = (d1 >> 1) - d3;\n        int e3 = d1 + (d3 >> 1);\n        int f0 = e0 + e3;\n        int f1 = e1 + e2;\n        int f2 = e1 - e2;\n        int f3 = e0 - e3;\n        tmp[i + 0] = (int16_t)f0;\n        tmp[i + 1] = (int16_t)f1;\n        tmp[i + 2] = (int16_t)f2;\n        tmp[i + 3] = (int16_t)f3;\n    }\n\n    /* Transform columns */\n    for (i = 0; i < 4; i++)\n    {\n        int f0 = tmp[i + 0];\n        int f1 = tmp[i + 4];\n        int f2 = 
tmp[i + 8];\n        int f3 = tmp[i + 12];\n        int g0 = f0 + f2;\n        int g1 = f0 - f2;\n        int g2 = (f1 >> 1) - f3;\n        int g3 = f1 + (f3 >> 1);\n        int h0 = g0 + g3;\n        int h1 = g1 + g2;\n        int h2 = g1 - g2;\n        int h3 = g0 - g3;\n        pSrc[i + 0] = (int16_t)((h0 + 32) >> 6);\n        pSrc[i + 4] = (int16_t)((h1 + 32) >> 6);\n        pSrc[i + 8] = (int16_t)((h2 + 32) >> 6);\n        pSrc[i + 12] = (int16_t)((h3 + 32) >> 6);\n    }\n}\n\nstatic int is_zero(const int16_t *dat, int i0, const uint16_t *thr)\n{\n    int i;\n    for (i = i0; i < 16; i++)\n    {\n        if ((unsigned)(dat[i] + thr[i & 7]) > (unsigned)2*thr[i & 7])\n        {\n            return 0;\n        }\n    }\n    return 1;\n}\n\nstatic int is_zero4(const quant_t *q, int i0, const uint16_t *thr)\n{\n    return is_zero(q[0].dq, i0, thr) &&\n           is_zero(q[1].dq, i0, thr) &&\n           is_zero(q[4].dq, i0, thr) &&\n           is_zero(q[5].dq, i0, thr);\n}\n\nstatic int zero_smallq(quant_t *q, int mode, const uint16_t *qdat)\n{\n    int zmask = 0;\n    int i, i0 = mode & 1, n = mode >> 1;\n    if (mode == QDQ_MODE_INTER || mode == QDQ_MODE_CHROMA)\n    {\n        for (i = 0; i < n*n; i++)\n        {\n            if (is_zero(q[i].dq, i0, qdat + OFFS_THR_1_OFF))\n            {\n                zmask |= (1 << i); //9.19\n            }\n        }\n        if (mode == QDQ_MODE_INTER)   //8.27\n        {\n            if ((~zmask & 0x0033) && is_zero4(q +  0, i0, qdat + OFFS_THR_2_OFF)) zmask |= 0x33;\n            if ((~zmask & 0x00CC) && is_zero4(q +  2, i0, qdat + OFFS_THR_2_OFF)) zmask |= (0x33 << 2);\n            if ((~zmask & 0x3300) && is_zero4(q +  8, i0, qdat + OFFS_THR_2_OFF)) zmask |= (0x33 << 8);\n            if ((~zmask & 0xCC00) && is_zero4(q + 10, i0, qdat + OFFS_THR_2_OFF)) zmask |= (0x33 << 10);\n        }\n    }\n    return zmask;\n}\n\nstatic int quantize(quant_t *q, int mode, const uint16_t *qdat, int zmask)\n{\n#if 
UNZIGSAG_IN_QUANT\n#if TRANSPOSE_BLOCK\n    // ; Zig-zag scan      Transposed zig-zag\n    // ;    0 1 5 6        0 2 3 9\n    // ;    2 4 7 C        1 4 8 A\n    // ;    3 8 B D        5 7 B E\n    // ;    9 A E F        6 C D F\n    static const unsigned char iscan16[16] = { 0, 2, 3, 9, 1, 4, 8, 10, 5, 7, 11, 14, 6, 12, 13, 15 };\n#else\n    static const unsigned char iscan16[16] = { 0, 1, 5, 6, 2, 4, 7, 12, 3, 8, 11, 13, 9, 10, 14, 15 };\n#endif\n#endif\n    int i, i0 = mode & 1, ccol, crow;\n    int nz_block_mask = 0;\n    ccol = mode >> 1;\n    crow = ccol;\n    do\n    {\n        do\n        {\n            int nz_mask = 0;\n\n            if (zmask & 1)\n            {\n                int32_t *p = (int32_t *)q->qv;\n                *p++ = 0; *p++ = 0; *p++ = 0; *p++ = 0;\n                *p++ = 0; *p++ = 0; *p++ = 0; *p++ = 0;\n            } else\n            {\n                for (i = i0; i < 16; i++)\n                {\n                    int off = g_idx2quant[i];\n                    int v, round = qdat[OFFS_RND_INTER];\n\n                    if (q->dq[i] < 0) round = 0xFFFF - round;\n\n                    v = (q->dq[i]*qdat[off] + round) >> 16;\n#if UNZIGSAG_IN_QUANT\n                    if (v)\n                        nz_mask |= 1 << iscan16[i];\n                    q->qv[iscan16[i]] = (int16_t)v;\n#else\n                    if (v)\n                        nz_mask |= 1 << i;\n                    q->qv[i] = (int16_t)v;\n#endif\n                    q->dq[i] = (int16_t)(v*qdat[off + 1]);\n                }\n            }\n\n            zmask >>= 1;\n            nz_block_mask <<= 1;\n            if (nz_mask)\n                nz_block_mask |= 1;\n            q++;\n        } while (--ccol);\n        ccol = mode >> 1;\n    } while (--crow);\n    return nz_block_mask;\n}\n\nstatic void transform(const pix_t *inp, const pix_t *pred, int inp_stride, int mode, quant_t *q)\n{\n    int crow = mode >> 1;\n    int ccol = crow;\n\n    do\n    {\n        do\n        {\n 
           FwdTransformResidual4x42(inp, pred, inp_stride, q->dq);\n            q++;\n            inp += 4;\n            pred += 4;\n        } while (--ccol);\n        ccol = mode >> 1;\n        inp += 4*(inp_stride - ccol);\n        pred += 4*(16 - ccol);\n    } while (--crow);\n}\n\nstatic int h264e_transform_sub_quant_dequant(const pix_t *inp, const pix_t *pred, int inp_stride, int mode, quant_t *q, const uint16_t *qdat)\n{\n    int zmask;\n    transform(inp, pred, inp_stride, mode, q);\n    if (mode & 1) // QDQ_MODE_INTRA_16 || QDQ_MODE_CHROMA\n    {\n        int cloop = (mode >> 1)*(mode >> 1);\n        short *dc = ((short *)q) - 16;\n        quant_t *pq = q;\n        do\n        {\n            *dc++ = pq->dq[0];\n            pq++;\n        } while (--cloop);\n    }\n    zmask = zero_smallq(q, mode, qdat);\n    return quantize(q, mode, qdat, zmask);\n}\n\nstatic void h264e_transform_add(pix_t *out, int out_stride, const pix_t *pred, quant_t *q, int side, int32_t mask)\n{\n    int crow = side;\n    int ccol = crow;\n\n    assert(IS_ALIGNED(out, 4));\n    assert(IS_ALIGNED(pred, 4));\n    assert(!(out_stride % 4));\n\n    do\n    {\n        do\n        {\n            if (mask >= 0)\n            {\n                // copy 4x4\n                pix_t *dst = out;\n                *(uint32_t*)dst = *(uint32_t*)(pred + 0 * 16); dst += out_stride;\n                *(uint32_t*)dst = *(uint32_t*)(pred + 1 * 16); dst += out_stride;\n                *(uint32_t*)dst = *(uint32_t*)(pred + 2 * 16); dst += out_stride;\n                *(uint32_t*)dst = *(uint32_t*)(pred + 3 * 16);\n            } else\n            {\n                int i, j;\n                TransformResidual4x4(q->dq);\n                for (j = 0; j < 4; j++)\n                {\n                    for (i = 0; i < 4; i++)\n                    {\n                        int Value = q->dq[j * 4 + i] + pred[j * 16 + i];\n                        out[j * out_stride + i] = (pix_t)clip_byte(Value);\n                 
   }\n                }\n            }\n            mask = (uint32_t)mask << 1;\n            q++;\n            out += 4;\n            pred += 4;\n        } while (--ccol);\n        ccol = side;\n        out += 4*(out_stride - ccol);\n        pred += 4*(16 - ccol);\n    } while (--crow);\n}\n#endif /* H264E_ENABLE_PLAIN_C */\n\n#if H264E_ENABLE_PLAIN_C || (H264E_ENABLE_NEON && !defined(MINIH264_ASM))\n\n#define BS_BITS 32\n\nstatic void h264e_bs_put_bits(bs_t *bs, unsigned n, unsigned val)\n{\n    assert(!(val >> n));\n    bs->shift -= n;\n    assert((unsigned)n <= 32);\n    if (bs->shift < 0)\n    {\n        assert(-bs->shift < 32);\n        bs->cache |= val >> -bs->shift;\n        *bs->buf++ = SWAP32(bs->cache);\n        bs->shift = 32 + bs->shift;\n        bs->cache = 0;\n    }\n    bs->cache |= val << bs->shift;\n}\n\nstatic void h264e_bs_flush(bs_t *bs)\n{\n    *bs->buf = SWAP32(bs->cache);\n}\n\nstatic unsigned h264e_bs_get_pos_bits(const bs_t *bs)\n{\n    unsigned pos_bits = (unsigned)((bs->buf - bs->origin)*BS_BITS);\n    pos_bits += BS_BITS - bs->shift;\n    assert((int)pos_bits >= 0);\n    return pos_bits;\n}\n\nstatic unsigned h264e_bs_byte_align(bs_t *bs)\n{\n    int pos = h264e_bs_get_pos_bits(bs);\n    h264e_bs_put_bits(bs, -pos & 7, 0);\n    return pos + (-pos & 7);\n}\n\n/**\n*   Golomb code\n*   0 => 1\n*   1 => 01 0\n*   2 => 01 1\n*   3 => 001 00\n*   4 => 001 01\n*\n*   [0]     => 1\n*   [1..2]  => 01x\n*   [3..6]  => 001xx\n*   [7..14] => 0001xxx\n*\n*/\nstatic void h264e_bs_put_golomb(bs_t *bs, unsigned val)\n{\n#ifdef __arm__\n    int size = 32 - __clz(val + 1);\n#else\n    int size = 0;\n    unsigned t = val + 1;\n    do\n    {\n        size++;\n    } while (t >>= 1);\n#endif\n    h264e_bs_put_bits(bs, 2*size - 1, val + 1);\n}\n\n/**\n*   signed Golomb code.\n*   mapping to unsigned code:\n*       0 => 0\n*       1 => 1\n*      -1 => 2\n*       2 => 3\n*      -2 => 4\n*       3 => 5\n*      -3 => 6\n*/\nstatic void h264e_bs_put_sgolomb(bs_t 
*bs, int val)\n{\n    val = 2*val - 1;\n    val ^= val >> 31;\n    h264e_bs_put_golomb(bs, val);\n}\n\nstatic void h264e_bs_init_bits(bs_t *bs, void *data)\n{\n    bs->origin = data;\n    bs->buf = bs->origin;\n    bs->shift = BS_BITS;\n    bs->cache = 0;\n}\n\nstatic void h264e_vlc_encode(bs_t *bs, int16_t *quant, int maxNumCoeff, uint8_t *nz_ctx)\n{\n    int nnz_context, nlevels, nnz; // nnz = nlevels + trailing_ones\n    int trailing_ones = 0;\n    int trailing_ones_sign = 0;\n    uint8_t runs[16];\n    uint8_t *prun = runs;\n    int16_t *levels;\n    int cloop = maxNumCoeff;\n    BS_OPEN(bs)\n\n#if H264E_ENABLE_SSE2 || (H264E_ENABLE_PLAIN_C && !H264E_ENABLE_NEON)\n    // this branch used with SSE + C configuration\n    int16_t zzquant[16];\n    levels = zzquant + ((maxNumCoeff == 4) ? 4 : 16);\n    if (maxNumCoeff != 4)\n    {\n        int v;\n        if (maxNumCoeff == 16)\n        {\n            v = quant[15]*2; if (v) *--levels = (int16_t)v, *prun++ = 16;\n            v = quant[11]*2; if (v) *--levels = (int16_t)v, *prun++ = 15;\n            v = quant[14]*2; if (v) *--levels = (int16_t)v, *prun++ = 14;\n            v = quant[13]*2; if (v) *--levels = (int16_t)v, *prun++ = 13;\n            v = quant[10]*2; if (v) *--levels = (int16_t)v, *prun++ = 12;\n            v = quant[ 7]*2; if (v) *--levels = (int16_t)v, *prun++ = 11;\n            v = quant[ 3]*2; if (v) *--levels = (int16_t)v, *prun++ = 10;\n            v = quant[ 6]*2; if (v) *--levels = (int16_t)v, *prun++ =  9;\n            v = quant[ 9]*2; if (v) *--levels = (int16_t)v, *prun++ =  8;\n            v = quant[12]*2; if (v) *--levels = (int16_t)v, *prun++ =  7;\n            v = quant[ 8]*2; if (v) *--levels = (int16_t)v, *prun++ =  6;\n            v = quant[ 5]*2; if (v) *--levels = (int16_t)v, *prun++ =  5;\n            v = quant[ 2]*2; if (v) *--levels = (int16_t)v, *prun++ =  4;\n            v = quant[ 1]*2; if (v) *--levels = (int16_t)v, *prun++ =  3;\n            v = quant[ 4]*2; if (v) *--levels 
= (int16_t)v, *prun++ =  2;\n            v = quant[ 0]*2; if (v) *--levels = (int16_t)v, *prun++ =  1;\n        } else\n        {\n            v = quant[15]*2; if (v) *--levels = (int16_t)v, *prun++ = 15;\n            v = quant[11]*2; if (v) *--levels = (int16_t)v, *prun++ = 14;\n            v = quant[14]*2; if (v) *--levels = (int16_t)v, *prun++ = 13;\n            v = quant[13]*2; if (v) *--levels = (int16_t)v, *prun++ = 12;\n            v = quant[10]*2; if (v) *--levels = (int16_t)v, *prun++ = 11;\n            v = quant[ 7]*2; if (v) *--levels = (int16_t)v, *prun++ = 10;\n            v = quant[ 3]*2; if (v) *--levels = (int16_t)v, *prun++ =  9;\n            v = quant[ 6]*2; if (v) *--levels = (int16_t)v, *prun++ =  8;\n            v = quant[ 9]*2; if (v) *--levels = (int16_t)v, *prun++ =  7;\n            v = quant[12]*2; if (v) *--levels = (int16_t)v, *prun++ =  6;\n            v = quant[ 8]*2; if (v) *--levels = (int16_t)v, *prun++ =  5;\n            v = quant[ 5]*2; if (v) *--levels = (int16_t)v, *prun++ =  4;\n            v = quant[ 2]*2; if (v) *--levels = (int16_t)v, *prun++ =  3;\n            v = quant[ 1]*2; if (v) *--levels = (int16_t)v, *prun++ =  2;\n            v = quant[ 4]*2; if (v) *--levels = (int16_t)v, *prun++ =  1;\n        }\n    } else\n    {\n        int v;\n        v = quant[ 3]*2; if (v) *--levels = (int16_t)v, *prun++ = 4;\n        v = quant[ 2]*2; if (v) *--levels = (int16_t)v, *prun++ = 3;\n        v = quant[ 1]*2; if (v) *--levels = (int16_t)v, *prun++ = 2;\n        v = quant[ 0]*2; if (v) *--levels = (int16_t)v, *prun++ = 1;\n    }\n    quant = zzquant + ((maxNumCoeff == 4) ? 4 : 16);\n    nnz = (int)(quant - levels);\n#else\n    quant += (maxNumCoeff == 4) ? 
4 : 16;\n    levels = quant;\n    do\n    {\n        int v = *--quant;\n        if (v)\n        {\n            *--levels = v*2;\n            *prun++ = cloop;\n        }\n    } while (--cloop);\n    quant += maxNumCoeff;\n    nnz = quant - levels;\n#endif\n\n    if (nnz)\n    {\n        cloop = MIN(3, nnz);\n        levels = quant - 1;\n        do\n        {\n            if ((unsigned)(*levels + 2) > 4u)\n            {\n                break;\n            }\n            trailing_ones_sign = (trailing_ones_sign << 1) | (*levels-- < 0);\n            trailing_ones++;\n        } while (--cloop);\n    }\n    nlevels = nnz - trailing_ones;\n\n    nnz_context = nz_ctx[-1] + nz_ctx[1];\n\n    nz_ctx[0] = (uint8_t)nnz;\n    if (nnz_context <= 34)\n    {\n        nnz_context = (nnz_context + 1) >> 1;\n    }\n    nnz_context &= 31;\n\n    // 9.2.1 Parsing process for total number of transform coefficient levels and trailing ones\n    {\n        int off = h264e_g_coeff_token[nnz_context];\n        int n = 6, val = h264e_g_coeff_token[off + trailing_ones + 4*nlevels];\n        if (off != 230)\n        {\n            n = (val & 15) + 1;\n            val >>= 4;\n        }\n        BS_PUT(n, val);\n    }\n\n    if (nnz)\n    {\n        if (trailing_ones)\n        {\n            BS_PUT(trailing_ones, trailing_ones_sign);\n        }\n        if (nlevels)\n        {\n            int vlcnum = 1;\n            int sym_len, prefix_len;\n\n            int sym = *levels-- - 2;\n            if (sym < 0) sym = -3 - sym;\n            if (sym >= 6) vlcnum++;\n            if (trailing_ones < 3)\n            {\n                sym -= 2;\n                if (nnz > 10)\n                {\n                    sym_len = 1;\n                    prefix_len = sym >> 1;\n                    if (prefix_len >= 15)\n                    {\n                        // or vlcnum = 1;  goto escape;\n                        prefix_len = 15;\n                        sym_len = 12;\n                    }\n           
         sym -= prefix_len << 1;\n                    // bypass vlcnum advance due to sym -= 2; above\n                    goto loop_enter;\n                }\n            }\n\n            if (sym < 14)\n            {\n                prefix_len = sym;\n                sym = 0; // to avoid side effect in bitbuf\n                sym_len = 0;\n            } else if (sym < 30)\n            {\n                prefix_len = 14;\n                sym_len = 4;\n                sym -= 14;\n            } else\n            {\n                vlcnum = 1;\n                goto escape;\n            }\n            goto loop_enter;\n\n            for (;;)\n            {\n                sym_len = vlcnum;\n                prefix_len = sym >> vlcnum;\n                if (prefix_len >= 15)\n                {\nescape:\n                    prefix_len = 15;\n                    sym_len = 12;\n                }\n                sym -= prefix_len << vlcnum;\n\n                if (prefix_len >= 3 && vlcnum < 6)\n                    vlcnum++;\nloop_enter:\n                sym |= 1 << sym_len;\n                sym_len += prefix_len + 1;\n                BS_PUT(sym_len, sym);\n                if (!--nlevels) break;\n                sym = *levels-- - 2;\n                if (sym < 0) sym = -3 - sym;\n            }\n        }\n\n        if (nnz < maxNumCoeff)\n        {\n            const uint8_t *vlc = (maxNumCoeff == 4) ? 
h264e_g_total_zeros_cr_2x2 : h264e_g_total_zeros;\n            uint8_t *run = runs;\n            int run_prev = *run++;\n            int nzeros = run_prev - nnz;\n            int zeros_left = 2*nzeros - 1;\n            int ctx = nnz - 1;\n            run[nnz - 1] = (uint8_t)maxNumCoeff; // terminator\n            for (;;)\n            {\n                int t;\n\n                int val = vlc[vlc[ctx] + nzeros];\n                int n = val & 15;\n                val >>= 4;\n                BS_PUT(n, val);\n\n                zeros_left -= nzeros;\n                if (zeros_left < 0)\n                {\n                    break;\n                }\n\n                t = *run++;\n                nzeros = run_prev - t - 1;\n                if (nzeros < 0)\n                {\n                    break;\n                }\n                run_prev = t;\n                assert(zeros_left < 14);\n                vlc = h264e_g_run_before;\n                ctx = zeros_left;\n            }\n        }\n    }\n    BS_CLOSE(bs);\n}\n#endif /* H264E_ENABLE_PLAIN_C || (H264E_ENABLE_NEON && !defined(MINIH264_ASM)) */\n\n#if H264E_SVC_API\nstatic uint32_t udiv32(uint32_t n, uint32_t d)\n{\n    uint32_t q = 0, r = n, N = 16;\n    do\n    {\n        N--;\n        if ((r >> N) >= d)\n        {\n            r -= (d << N);\n            q += (1 << N);\n        }\n    } while (N);\n    return q;\n}\n\nstatic void h264e_copy_8x8_s(pix_t *d, int d_stride, const pix_t *s, int s_stride)\n{\n    int cloop = 8;\n    assert(!((unsigned)(uintptr_t)d & 7));\n    assert(!((unsigned)(uintptr_t)s & 7));\n    do\n    {\n        int a = ((const int*)s)[0];\n        int b = ((const int*)s)[1];\n        ((int*)d)[0] = a;\n        ((int*)d)[1] = b;\n        s += s_stride;\n        d += d_stride;\n    } while(--cloop);\n}\n\nstatic void h264e_frame_downsampling(uint8_t *out, int wo, int ho,\n    const uint8_t *src, int wi, int hi, int wo_Crop, int ho_Crop, int wi_Crop, int hi_Crop)\n{\n#define Q_BILIN 
12\n#define ONE_BILIN (1<<Q_BILIN)\n    int r, c;\n    int scaleh = udiv32(hi_Crop<<Q_BILIN, ho_Crop);\n    int scalew = udiv32(wi_Crop<<Q_BILIN, wo_Crop);\n\n    for (r = 0; r < ho_Crop; r++)\n    {\n        int dy = r*scaleh + (scaleh >> 2);\n        int y = dy >> Q_BILIN;\n        dy = dy & (ONE_BILIN - 1);\n\n        for (c = 0; c < wo_Crop; c++)\n        {\n            int dx = c*scalew + (scalew >> 2);\n            //          int dx = c*scalew;\n            int x = dx >> Q_BILIN;\n            const uint8_t *s0, *s1;\n            uint8_t s00, s01, s10, s11;\n            dx &= (ONE_BILIN - 1);\n\n\n            s1 = s0 = src + x + y*wi;\n            if (y < hi - 1)\n            {\n                s1 = s0 + wi;\n            }\n\n            s00 = s01 = s0[0];\n            s10 = s11 = s1[0];\n            if (x < wi - 1)\n            {\n                s01 = s0[1];\n                s11 = s1[1];\n            }\n\n            *out++ =(uint8_t) ((((s11*dx + s10*(ONE_BILIN - dx)) >> (Q_BILIN - 1))*dy +\n                ((s01*dx + s00*(ONE_BILIN - dx)) >> (Q_BILIN - 1))*(ONE_BILIN - dy) + (1 << (Q_BILIN + 1 - 1))) >> (Q_BILIN + 1));\n        }\n        if (wo > wo_Crop) //copy border\n        {\n            int cloop = wo - wo_Crop;\n            uint8_t border = out[-1];\n            do\n            {\n                *out++ = border;\n            } while(--cloop);\n        }\n    }\n\n    // copy bottom\n    {\n        int cloop = (ho - ho_Crop) * wo;\n        if (cloop > 0)\n        {\n            do\n            {\n                *out = out[-wo];\n                out++;\n            } while(--cloop);\n        }\n    }\n}\n\nstatic int clip(int val, int max)\n{\n    if (val < 0) return 0;\n    if (val > max) return max;\n    return val;\n}\n\nstatic const int8_t g_filter16_luma[16][4] =\n{\n    {  0, 32,  0,  0 },\n    { -1, 32,  2, -1 },\n    { -2, 31,  4, -1 },\n    { -3, 30,  6, -1 },\n    { -3, 28,  8, -1 },\n    { -4, 26, 11, -1 },\n    { -4, 24, 14, -2 },\n    
{ -3, 22, 16, -3 },\n    { -3, 19, 19, -3 },\n    { -3, 16, 22, -3 },\n    { -2, 14, 24, -4 },\n    { -1, 11, 26, -4 },\n    { -1,  8, 28, -3 },\n    { -1,  6, 30, -3 },\n    { -1,  4, 31, -2 },\n    { -1,  2, 32, -1 }\n};\n\nstatic void h264e_intra_upsampling(int srcw, int srch, int dstw, int dsth, int is_chroma,\n    const uint8_t *arg_src, int src_stride, uint8_t *arg_dst, int dst_stride)\n{\n    int i, j;\n    //===== set position calculation parameters =====\n    int shift_x = 16;//(m_iLevelIdc <= 30 ? 16 : 31 - CeilLog2(iBaseW));\n    int shift_y = 16;//(m_iLevelIdc <= 30 ? 16 : 31 - CeilLog2(iBaseH));\n    int step_x  = udiv32(((unsigned int)srcw << shift_x) + (dstw >> 1), dstw);\n    int step_y  = udiv32(((unsigned int)srch << shift_y) + (dsth >> 1), dsth);\n    int start_x = udiv32((srcw << (shift_x - 1 - is_chroma)) + (dstw >> 1), dstw) + (1 << (shift_x - 5));\n    int start_y = udiv32((srch << (shift_y - 1 - is_chroma)) + (dsth >> 1), dsth) + (1 << (shift_y - 5));\n    int16_t *temp16 = (short*)(arg_dst + dst_stride*dsth) + 4;  // malloc(( iBaseH )*sizeof(short)); //ref frame have border =1 mb\n\n    if (is_chroma)\n    {\n        int xpos = start_x - (4 << 12);\n        for (i = 0; i < dstw; i++, xpos += step_x)\n        {\n            const uint8_t* src = arg_src;\n            int xfrac  = (xpos >> 12) & 15;\n            int xint = xpos >> 16;\n            int m0 = clip(xint + 0, srcw - 1);\n            int m1 = clip(xint + 1, srcw - 1);\n            for( j = 0; j < srch ; j++ )\n            {\n                temp16[j] = (int16_t)(src[m1]*xfrac + src[m0]*(16 - xfrac));\n                src += src_stride;\n            }\n            temp16[-1] = temp16[0];\n            temp16[srch] = temp16[srch-1];\n\n            //========== vertical upsampling ===========\n            {\n                int16_t* src16 = temp16;\n                uint8_t* dst = arg_dst + i;\n                int ypos = start_y - (4 << 12);\n                for (j = 0; j < dsth; j++)\n  
              {\n                    int yfrac = (ypos >> 12) & 15;\n                    int yint  = (ypos >> 16);\n                    int acc = yfrac*src16[yint + 1] + (16 - yfrac)*src16[yint + 0];\n                    acc = (acc + 128) >> 8;\n                    *dst = (int8_t)acc;\n                    dst += dst_stride;\n                    ypos += step_y;\n                }\n            }\n        }\n    } else\n    {\n        int xpos = start_x - (8 << 12);\n        for (i = 0; i < dstw; i++, xpos += step_x)\n        {\n            const uint8_t *src = arg_src;\n            int xfrac    = (xpos >> 12) & 15;\n            int xint   = xpos >> 16;\n            int m0 = clip(xint - 1, srcw - 1);\n            int m1 = clip(xint    , srcw - 1);\n            int m2 = clip(xint + 1, srcw - 1);\n            int m3 = clip(xint + 2, srcw - 1);\n            //========== horizontal upsampling ===========\n            for( j = 0; j < srch ; j++ )\n            {\n                int acc = 0;\n                acc += g_filter16_luma[xfrac][0] * src[m0];\n                acc += g_filter16_luma[xfrac][1] * src[m1];\n                acc += g_filter16_luma[xfrac][2] * src[m2];\n                acc += g_filter16_luma[xfrac][3] * src[m3];\n                temp16[j] = (int16_t)acc;\n                src += src_stride;\n            }\n            temp16[-2] = temp16[-1] = temp16[0];\n            temp16[srch + 1] = temp16[srch] = temp16[srch - 1];\n\n            //========== vertical upsampling ===========\n            {\n                int16_t *src16 = temp16;\n                uint8_t *dst = arg_dst + i;\n                int ypos = start_y - (8 << 12);\n\n                for (j = 0; j < dsth; j++)\n                {\n                    int yfrac = (ypos >> 12) & 15;\n                    int yint = ypos >> 16;\n                    int acc = 512;\n                    acc += g_filter16_luma[yfrac][0] * src16[yint + 0 - 1];\n                    acc += g_filter16_luma[yfrac][1] * 
src16[yint + 1 - 1];\n                    acc += g_filter16_luma[yfrac][2] * src16[yint + 2 - 1];\n                    acc += g_filter16_luma[yfrac][3] * src16[yint + 3 - 1];\n                    acc >>= 10;\n                    if (acc < 0)\n                    {\n                        acc = 0;\n                    }\n                    if (acc > 255)\n                    {\n                        acc = 255;\n                    }\n                    *dst = (int8_t)acc;\n                    dst += dst_stride;\n                    ypos += step_y;\n                }\n            }\n        }\n    }\n}\n#endif /* H264E_SVC_API */\n\n// Experimental code branch:\n// Rate-control takes into account that long-term references compress worse than short-term ones\n#define H264E_RATE_CONTROL_GOLDEN_FRAMES 1\n\n/************************************************************************/\n/*      Constants (can't be changed)                                    */\n/************************************************************************/\n\n#define MIN_QP          10   // Minimum QP\n\n#define MVPRED_MEDIAN   1\n#define MVPRED_L        2\n#define MVPRED_U        3\n#define MVPRED_UR       4\n#define MV_NA           0x8000\n#define AVAIL(mv)       ((mv).u32 != MV_NA)\n\n#define SLICE_TYPE_P    0\n#define SLICE_TYPE_I    2\n\n#define NNZ_NA          64\n\n#define MAX_MV_CAND     20\n\n#define STARTCODE_4BYTES 4\n\n#define SCALABLE_BASELINE 83\n\n/************************************************************************/\n/*      Hardcoded params (can be changed at compile time)               */\n/************************************************************************/\n#define ALPHA_OFS       0       // Deblock alpha offset\n#define BETA_OFS        0       // Deblock beta offset\n#define DQP_CHROMA      0       // chroma delta QP\n\n#define MV_RANGE        32      // Motion vector search range, pixels\n#define MV_GUARD        14      // Out-of-frame MV's restriction, 
pixels\n\n/************************************************************************/\n/*      Code shortcuts                                                  */\n/************************************************************************/\n#define U(n,v) h264e_bs_put_bits(enc->bs, n, v)\n#define U1(v)  h264e_bs_put_bits(enc->bs, 1, v)\n#define UE(v)  h264e_bs_put_golomb(enc->bs, v)\n#define SE(v)  h264e_bs_put_sgolomb(enc->bs, v)\n#define SWAP(datatype, a, b) { datatype _ = a; a = b; b = _; }\n#define SQR(x) ((x)*(x))\n#define SQRP(pnt) SQR(pnt.s.x) + SQR(pnt.s.y)\n#define SMOOTH(smth, p) smth.s.x = (63*smth.s.x + p.s.x + 32) >> 6;  smth.s.y = (63*smth.s.y + p.s.y + 32) >> 6;\n#define MUL_LAMBDA(x, lambda) ((x)*(lambda) >> 4)\n\n/************************************************************************/\n/*      Optimized code fallback                                         */\n/************************************************************************/\n\n#if defined(MINIH264_ASM)\n#include \"asm/minih264e_asm.h\"\n#endif\n#if H264E_ENABLE_NEON && defined(MINIH264_ASM)\n#   define h264e_bs_put_bits_neon      h264e_bs_put_bits_arm11\n#   define h264e_bs_flush_neon         h264e_bs_flush_arm11\n#   define h264e_bs_get_pos_bits_neon  h264e_bs_get_pos_bits_arm11\n#   define h264e_bs_byte_align_neon    h264e_bs_byte_align_arm11\n#   define h264e_bs_put_golomb_neon    h264e_bs_put_golomb_arm11\n#   define h264e_bs_put_sgolomb_neon   h264e_bs_put_sgolomb_arm11\n#   define h264e_bs_init_bits_neon     h264e_bs_init_bits_arm11\n#   define h264e_vlc_encode_neon       h264e_vlc_encode_arm11\n#elif H264E_ENABLE_NEON\n#   define h264e_bs_put_bits_neon      h264e_bs_put_bits\n#   define h264e_bs_flush_neon         h264e_bs_flush\n#   define h264e_bs_get_pos_bits_neon  h264e_bs_get_pos_bits\n#   define h264e_bs_byte_align_neon    h264e_bs_byte_align\n#   define h264e_bs_put_golomb_neon    h264e_bs_put_golomb\n#   define h264e_bs_put_sgolomb_neon   h264e_bs_put_sgolomb\n#   define 
h264e_bs_init_bits_neon     h264e_bs_init_bits\n#   define h264e_vlc_encode_neon       h264e_vlc_encode\n#   define h264e_copy_borders_neon     h264e_copy_borders\n#endif\n\n/************************************************************************/\n/*      Declare exported functions for each configuration               */\n/************************************************************************/\n#if !H264E_CONFIGS_COUNT\n#   error no build configuration defined\n#elif H264E_CONFIGS_COUNT == 1\n//  Exactly one configuration: append config suffix to exported names\n#   if H264E_ENABLE_NEON\n#       define MAP_NAME(name) name##_neon\n#   endif\n#   if H264E_ENABLE_SSE2\n#       define MAP_NAME(name) name##_sse2\n#   endif\n#else //if H264E_CONFIGS_COUNT > 1\n//  Several configurations: use Virtual Functions Table (VFT)\ntypedef struct\n{\n#   define  H264E_API(type, name, args) type (*name) args;\n// h264e_qpel\nH264E_API(void, h264e_qpel_interpolate_chroma, (const uint8_t *src,int src_stride, uint8_t *h264e_restrict dst,point_t wh, point_t dxdy))\nH264E_API(void, h264e_qpel_interpolate_luma, (const uint8_t *src,int src_stride, uint8_t *h264e_restrict dst,point_t wh, point_t dxdy))\nH264E_API(void, h264e_qpel_average_wh_align, (const uint8_t *p0, const uint8_t *p1, uint8_t *h264e_restrict d, point_t wh))\n// h264e_deblock\nH264E_API(void, h264e_deblock_chroma, (uint8_t *pSrcDst, int32_t srcdstStep, const deblock_params_t *par))\nH264E_API(void, h264e_deblock_luma, (uint8_t *pSrcDst, int32_t srcdstStep, const deblock_params_t *par))\n// h264e_intra\nH264E_API(void, h264e_intra_predict_chroma,  (pix_t *predict, const pix_t *left, const pix_t *top, int mode))\nH264E_API(void, h264e_intra_predict_16x16, (pix_t *predict, const pix_t *left, const pix_t *top, int mode))\nH264E_API(int,  h264e_intra_choose_4x4, (const pix_t *blockin, pix_t *blockpred, int avail, const pix_t *edge, int mpred, int penalty))\n// h264e_cavlc\nH264E_API(void,     h264e_bs_put_bits, (bs_t *bs, 
unsigned n, unsigned val))\nH264E_API(void,     h264e_bs_flush, (bs_t *bs))\nH264E_API(unsigned, h264e_bs_get_pos_bits, (const bs_t *bs))\nH264E_API(unsigned, h264e_bs_byte_align, (bs_t *bs))\nH264E_API(void,     h264e_bs_put_golomb, (bs_t *bs, unsigned val))\nH264E_API(void,     h264e_bs_put_sgolomb, (bs_t *bs, int val))\nH264E_API(void,     h264e_bs_init_bits, (bs_t *bs, void *data))\nH264E_API(void,     h264e_vlc_encode, (bs_t *bs, int16_t *quant, int maxNumCoeff, uint8_t *nz_ctx))\n// h264e_sad\nH264E_API(int,  h264e_sad_mb_unlaign_8x8, (const pix_t *a, int a_stride, const pix_t *b, int sad[4]))\nH264E_API(int,  h264e_sad_mb_unlaign_wh, (const pix_t *a, int a_stride, const pix_t *b, point_t wh))\nH264E_API(void, h264e_copy_8x8, (pix_t *d, int d_stride, const pix_t *s))\nH264E_API(void, h264e_copy_16x16, (pix_t *d, int d_stride, const pix_t *s, int s_stride))\nH264E_API(void, h264e_copy_borders, (unsigned char *pic, int w, int h, int guard))\n// h264e_transform\nH264E_API(void, h264e_transform_add, (pix_t *out, int out_stride, const pix_t *pred, quant_t *q, int side, int32_t mask))\nH264E_API(int,  h264e_transform_sub_quant_dequant, (const pix_t *inp, const pix_t *pred, int inp_stride, int mode, quant_t *q, const uint16_t *qdat))\nH264E_API(void, h264e_quant_luma_dc, (quant_t *q, int16_t *deq, const uint16_t *qdat))\nH264E_API(int,  h264e_quant_chroma_dc, (quant_t *q, int16_t *deq, const uint16_t *qdat))\n// h264e_denoise\nH264E_API(void, h264e_denoise_run, (unsigned char *frm, unsigned char *frmprev, int w, int h, int stride_frm, int stride_frmprev))\n#   undef H264E_API\n} vft_t;\n\n// non-const VFT, run-time initialized\nstatic const vft_t *g_vft;\n\n// const VFT for each supported build config\n#if H264E_ENABLE_PLAIN_C\nstatic const vft_t g_vft_plain_c =\n{\n#define  H264E_API(type, name, args) name,\n// h264e_qpel\nH264E_API(void, h264e_qpel_interpolate_chroma, (const uint8_t *src,int src_stride, uint8_t *h264e_restrict dst,point_t wh, point_t 
dxdy))\nH264E_API(void, h264e_qpel_interpolate_luma, (const uint8_t *src,int src_stride, uint8_t *h264e_restrict dst,point_t wh, point_t dxdy))\nH264E_API(void, h264e_qpel_average_wh_align, (const uint8_t *p0, const uint8_t *p1, uint8_t *h264e_restrict d, point_t wh))\n// h264e_deblock\nH264E_API(void, h264e_deblock_chroma, (uint8_t *pSrcDst, int32_t srcdstStep, const deblock_params_t *par))\nH264E_API(void, h264e_deblock_luma, (uint8_t *pSrcDst, int32_t srcdstStep, const deblock_params_t *par))\n// h264e_intra\nH264E_API(void, h264e_intra_predict_chroma,  (pix_t *predict, const pix_t *left, const pix_t *top, int mode))\nH264E_API(void, h264e_intra_predict_16x16, (pix_t *predict, const pix_t *left, const pix_t *top, int mode))\nH264E_API(int,  h264e_intra_choose_4x4, (const pix_t *blockin, pix_t *blockpred, int avail, const pix_t *edge, int mpred, int penalty))\n// h264e_cavlc\nH264E_API(void,     h264e_bs_put_bits, (bs_t *bs, unsigned n, unsigned val))\nH264E_API(void,     h264e_bs_flush, (bs_t *bs))\nH264E_API(unsigned, h264e_bs_get_pos_bits, (const bs_t *bs))\nH264E_API(unsigned, h264e_bs_byte_align, (bs_t *bs))\nH264E_API(void,     h264e_bs_put_golomb, (bs_t *bs, unsigned val))\nH264E_API(void,     h264e_bs_put_sgolomb, (bs_t *bs, int val))\nH264E_API(void,     h264e_bs_init_bits, (bs_t *bs, void *data))\nH264E_API(void,     h264e_vlc_encode, (bs_t *bs, int16_t *quant, int maxNumCoeff, uint8_t *nz_ctx))\n// h264e_sad\nH264E_API(int,  h264e_sad_mb_unlaign_8x8, (const pix_t *a, int a_stride, const pix_t *b, int sad[4]))\nH264E_API(int,  h264e_sad_mb_unlaign_wh, (const pix_t *a, int a_stride, const pix_t *b, point_t wh))\nH264E_API(void, h264e_copy_8x8, (pix_t *d, int d_stride, const pix_t *s))\nH264E_API(void, h264e_copy_16x16, (pix_t *d, int d_stride, const pix_t *s, int s_stride))\nH264E_API(void, h264e_copy_borders, (unsigned char *pic, int w, int h, int guard))\n// h264e_transform\nH264E_API(void, h264e_transform_add, (pix_t *out, int out_stride, const pix_t 
*pred, quant_t *q, int side, int32_t mask))\nH264E_API(int,  h264e_transform_sub_quant_dequant, (const pix_t *inp, const pix_t *pred, int inp_stride, int mode, quant_t *q, const uint16_t *qdat))\nH264E_API(void, h264e_quant_luma_dc, (quant_t *q, int16_t *deq, const uint16_t *qdat))\nH264E_API(int,  h264e_quant_chroma_dc, (quant_t *q, int16_t *deq, const uint16_t *qdat))\n// h264e_denoise\nH264E_API(void, h264e_denoise_run, (unsigned char *frm, unsigned char *frmprev, int w, int h, int stride_frm, int stride_frmprev))\n#undef H264E_API\n};\n#endif\n#if H264E_ENABLE_NEON\nstatic const vft_t g_vft_neon =\n{\n#define  H264E_API(type, name, args) name##_neon,\n// h264e_qpel\nH264E_API(void, h264e_qpel_interpolate_chroma, (const uint8_t *src,int src_stride, uint8_t *h264e_restrict dst,point_t wh, point_t dxdy))\nH264E_API(void, h264e_qpel_interpolate_luma, (const uint8_t *src,int src_stride, uint8_t *h264e_restrict dst,point_t wh, point_t dxdy))\nH264E_API(void, h264e_qpel_average_wh_align, (const uint8_t *p0, const uint8_t *p1, uint8_t *h264e_restrict d, point_t wh))\n// h264e_deblock\nH264E_API(void, h264e_deblock_chroma, (uint8_t *pSrcDst, int32_t srcdstStep, const deblock_params_t *par))\nH264E_API(void, h264e_deblock_luma, (uint8_t *pSrcDst, int32_t srcdstStep, const deblock_params_t *par))\n// h264e_intra\nH264E_API(void, h264e_intra_predict_chroma,  (pix_t *predict, const pix_t *left, const pix_t *top, int mode))\nH264E_API(void, h264e_intra_predict_16x16, (pix_t *predict, const pix_t *left, const pix_t *top, int mode))\nH264E_API(int,  h264e_intra_choose_4x4, (const pix_t *blockin, pix_t *blockpred, int avail, const pix_t *edge, int mpred, int penalty))\n// h264e_cavlc\nH264E_API(void,     h264e_bs_put_bits, (bs_t *bs, unsigned n, unsigned val))\nH264E_API(void,     h264e_bs_flush, (bs_t *bs))\nH264E_API(unsigned, h264e_bs_get_pos_bits, (const bs_t *bs))\nH264E_API(unsigned, h264e_bs_byte_align, (bs_t *bs))\nH264E_API(void,     h264e_bs_put_golomb, (bs_t *bs, 
unsigned val))\nH264E_API(void,     h264e_bs_put_sgolomb, (bs_t *bs, int val))\nH264E_API(void,     h264e_bs_init_bits, (bs_t *bs, void *data))\nH264E_API(void,     h264e_vlc_encode, (bs_t *bs, int16_t *quant, int maxNumCoeff, uint8_t *nz_ctx))\n// h264e_sad\nH264E_API(int,  h264e_sad_mb_unlaign_8x8, (const pix_t *a, int a_stride, const pix_t *b, int sad[4]))\nH264E_API(int,  h264e_sad_mb_unlaign_wh, (const pix_t *a, int a_stride, const pix_t *b, point_t wh))\nH264E_API(void, h264e_copy_8x8, (pix_t *d, int d_stride, const pix_t *s))\nH264E_API(void, h264e_copy_16x16, (pix_t *d, int d_stride, const pix_t *s, int s_stride))\nH264E_API(void, h264e_copy_borders, (unsigned char *pic, int w, int h, int guard))\n// h264e_transform\nH264E_API(void, h264e_transform_add, (pix_t *out, int out_stride, const pix_t *pred, quant_t *q, int side, int32_t mask))\nH264E_API(int,  h264e_transform_sub_quant_dequant, (const pix_t *inp, const pix_t *pred, int inp_stride, int mode, quant_t *q, const uint16_t *qdat))\nH264E_API(void, h264e_quant_luma_dc, (quant_t *q, int16_t *deq, const uint16_t *qdat))\nH264E_API(int,  h264e_quant_chroma_dc, (quant_t *q, int16_t *deq, const uint16_t *qdat))\n// h264e_denoise\nH264E_API(void, h264e_denoise_run, (unsigned char *frm, unsigned char *frmprev, int w, int h, int stride_frm, int stride_frmprev))\n#undef H264E_API\n};\n#endif\n#if H264E_ENABLE_SSE2\nstatic const vft_t g_vft_sse2 =\n{\n#define  H264E_API(type, name, args) name##_sse2,\n// h264e_qpel\nH264E_API(void, h264e_qpel_interpolate_chroma, (const uint8_t *src,int src_stride, uint8_t *h264e_restrict dst,point_t wh, point_t dxdy))\nH264E_API(void, h264e_qpel_interpolate_luma, (const uint8_t *src,int src_stride, uint8_t *h264e_restrict dst,point_t wh, point_t dxdy))\nH264E_API(void, h264e_qpel_average_wh_align, (const uint8_t *p0, const uint8_t *p1, uint8_t *h264e_restrict d, point_t wh))\n// h264e_deblock\nH264E_API(void, h264e_deblock_chroma, (uint8_t *pSrcDst, int32_t srcdstStep, const 
deblock_params_t *par))\nH264E_API(void, h264e_deblock_luma, (uint8_t *pSrcDst, int32_t srcdstStep, const deblock_params_t *par))\n// h264e_intra\nH264E_API(void, h264e_intra_predict_chroma,  (pix_t *predict, const pix_t *left, const pix_t *top, int mode))\nH264E_API(void, h264e_intra_predict_16x16, (pix_t *predict, const pix_t *left, const pix_t *top, int mode))\nH264E_API(int,  h264e_intra_choose_4x4, (const pix_t *blockin, pix_t *blockpred, int avail, const pix_t *edge, int mpred, int penalty))\n// h264e_cavlc\nH264E_API(void,     h264e_bs_put_bits, (bs_t *bs, unsigned n, unsigned val))\nH264E_API(void,     h264e_bs_flush, (bs_t *bs))\nH264E_API(unsigned, h264e_bs_get_pos_bits, (const bs_t *bs))\nH264E_API(unsigned, h264e_bs_byte_align, (bs_t *bs))\nH264E_API(void,     h264e_bs_put_golomb, (bs_t *bs, unsigned val))\nH264E_API(void,     h264e_bs_put_sgolomb, (bs_t *bs, int val))\nH264E_API(void,     h264e_bs_init_bits, (bs_t *bs, void *data))\nH264E_API(void,     h264e_vlc_encode, (bs_t *bs, int16_t *quant, int maxNumCoeff, uint8_t *nz_ctx))\n// h264e_sad\nH264E_API(int,  h264e_sad_mb_unlaign_8x8, (const pix_t *a, int a_stride, const pix_t *b, int sad[4]))\nH264E_API(int,  h264e_sad_mb_unlaign_wh, (const pix_t *a, int a_stride, const pix_t *b, point_t wh))\nH264E_API(void, h264e_copy_8x8, (pix_t *d, int d_stride, const pix_t *s))\nH264E_API(void, h264e_copy_16x16, (pix_t *d, int d_stride, const pix_t *s, int s_stride))\nH264E_API(void, h264e_copy_borders, (unsigned char *pic, int w, int h, int guard))\n// h264e_transform\nH264E_API(void, h264e_transform_add, (pix_t *out, int out_stride, const pix_t *pred, quant_t *q, int side, int32_t mask))\nH264E_API(int,  h264e_transform_sub_quant_dequant, (const pix_t *inp, const pix_t *pred, int inp_stride, int mode, quant_t *q, const uint16_t *qdat))\nH264E_API(void, h264e_quant_luma_dc, (quant_t *q, int16_t *deq, const uint16_t *qdat))\nH264E_API(int,  h264e_quant_chroma_dc, (quant_t *q, int16_t *deq, const uint16_t 
*qdat))\n// h264e_denoise\nH264E_API(void, h264e_denoise_run, (unsigned char *frm, unsigned char *frmprev, int w, int h, int stride_frm, int stride_frmprev))\n#undef H264E_API\n};\n#endif\n\n/************************************************************************/\n/*      Code to detect CPU features and init VFT                        */\n/************************************************************************/\n\n#if H264E_ENABLE_SSE2\n#if defined(_MSC_VER)\n#define minih264_cpuid __cpuid\n#else\nstatic __inline__ __attribute__((always_inline)) void minih264_cpuid(int CPUInfo[], const int InfoType)\n{\n#if defined(__PIC__)\n    __asm__ __volatile__(\n#if defined(__x86_64__)\n        \"push %%rbx\\n\"\n        \"cpuid\\n\"\n        \"xchgl %%ebx, %1\\n\"\n        \"pop  %%rbx\\n\"\n#else /* defined(__x86_64__) */\n        \"xchgl %%ebx, %1\\n\"\n        \"cpuid\\n\"\n        \"xchgl %%ebx, %1\\n\"\n#endif /* defined(__x86_64__) */\n        : \"=a\" (CPUInfo[0]), \"=r\" (CPUInfo[1]), \"=c\" (CPUInfo[2]), \"=d\" (CPUInfo[3])\n        : \"a\" (InfoType));\n#else /* defined(__PIC__) */\n    __asm__ __volatile__(\n        \"cpuid\"\n        : \"=a\" (CPUInfo[0]), \"=b\" (CPUInfo[1]), \"=c\" (CPUInfo[2]), \"=d\" (CPUInfo[3])\n        : \"a\" (InfoType));\n#endif /* defined(__PIC__)*/\n}\n#endif /* defined(_MSC_VER) */\n\nstatic int CPU_have_SSE2()\n{\n    int CPUInfo[4];\n    minih264_cpuid(CPUInfo, 0);\n    if (CPUInfo[0] > 0)\n    {\n        minih264_cpuid(CPUInfo, 1);\n        if (CPUInfo[3] & (1 << 26))\n            return 1;\n    }\n    return 0;\n}\n#endif\n\nstatic void init_vft(int enableNEON)\n{\n#if H264E_ENABLE_PLAIN_C\n    g_vft = &g_vft_plain_c;\n#endif\n    (void)enableNEON;\n#if H264E_ENABLE_NEON\n    if (enableNEON)\n        g_vft = &g_vft_neon;\n    else\n        g_vft = &g_vft_plain_c;\n#endif\n#if H264E_ENABLE_SSE2\n    if (CPU_have_SSE2())\n    {\n        g_vft = &g_vft_sse2;\n    }\n#endif\n}\n\n#define MAP_NAME(name) 
g_vft->name\n\n#endif\n\n#ifdef MAP_NAME\n#   define h264e_qpel_interpolate_chroma     MAP_NAME(h264e_qpel_interpolate_chroma)\n#   define h264e_qpel_interpolate_luma       MAP_NAME(h264e_qpel_interpolate_luma)\n#   define h264e_qpel_average_wh_align       MAP_NAME(h264e_qpel_average_wh_align)\n#   define h264e_deblock_chroma              MAP_NAME(h264e_deblock_chroma)\n#   define h264e_deblock_luma                MAP_NAME(h264e_deblock_luma)\n#   define h264e_intra_predict_chroma        MAP_NAME(h264e_intra_predict_chroma)\n#   define h264e_intra_predict_16x16         MAP_NAME(h264e_intra_predict_16x16)\n#   define h264e_intra_choose_4x4            MAP_NAME(h264e_intra_choose_4x4)\n#   define h264e_bs_put_bits                 MAP_NAME(h264e_bs_put_bits)\n#   define h264e_bs_flush                    MAP_NAME(h264e_bs_flush)\n#   define h264e_bs_get_pos_bits             MAP_NAME(h264e_bs_get_pos_bits)\n#   define h264e_bs_byte_align               MAP_NAME(h264e_bs_byte_align)\n#   define h264e_bs_put_golomb               MAP_NAME(h264e_bs_put_golomb)\n#   define h264e_bs_put_sgolomb              MAP_NAME(h264e_bs_put_sgolomb)\n#   define h264e_bs_init_bits                MAP_NAME(h264e_bs_init_bits)\n#   define h264e_vlc_encode                  MAP_NAME(h264e_vlc_encode)\n#   define h264e_sad_mb_unlaign_8x8          MAP_NAME(h264e_sad_mb_unlaign_8x8)\n#   define h264e_sad_mb_unlaign_wh           MAP_NAME(h264e_sad_mb_unlaign_wh)\n#   define h264e_copy_8x8                    MAP_NAME(h264e_copy_8x8)\n#   define h264e_copy_16x16                  MAP_NAME(h264e_copy_16x16)\n#   define h264e_copy_borders                MAP_NAME(h264e_copy_borders)\n#   define h264e_transform_add               MAP_NAME(h264e_transform_add)\n#   define h264e_transform_sub_quant_dequant MAP_NAME(h264e_transform_sub_quant_dequant)\n#   define h264e_quant_luma_dc               MAP_NAME(h264e_quant_luma_dc)\n#   define h264e_quant_chroma_dc             MAP_NAME(h264e_quant_chroma_dc)\n#   
define h264e_denoise_run                 MAP_NAME(h264e_denoise_run)\n#endif\n\n/************************************************************************/\n/*      Arithmetics                                                     */\n/************************************************************************/\n\n#ifndef __arm__\n/**\n*   Count of leading zeroes\n*/\nstatic unsigned __clz(unsigned v)\n{\n#if defined(_MSC_VER)\n    unsigned long nbit;\n    _BitScanReverse(&nbit, v);\n    return 31 - nbit;\n#elif defined(__GNUC__) || defined(__clang__) || defined(__aarch64__)\n    return __builtin_clz(v);\n#else\n    unsigned clz = 32;\n    assert(v);\n    do\n    {\n        clz--;\n    } while (v >>= 1);\n    return clz;\n#endif\n}\n#endif\n\n/**\n*   Size of unsigned Golomb code\n*/\nstatic int bitsize_ue(int v)\n{\n    return 2*(32 - __clz(v + 1)) - 1;\n}\n\n/**\n*   Size of signed Golomb code\n*/\nstatic int bits_se(int v)\n{\n    v = 2*v - 1;\n    v ^= v >> 31;\n    return bitsize_ue(v);\n}\n\n/**\n*   Multiply 32x32 Q16\n*/\nstatic uint32_t mul32x32shr16(uint32_t x, uint32_t y)\n{\n    uint32_t r = (x >> 16) * (y & 0xFFFFu) + x * (y >> 16) + ((y & 0xFFFFu) * (x & 0xFFFFu) >> 16);\n    //assert(r == (uint32_t)((__int64)x*y>>16));\n    return r;\n}\n\n/**\n*   Integer division, producing Q16 output\n*/\nstatic uint32_t div_q16(uint32_t numer, uint32_t denum)\n{\n    unsigned f = 1 << __clz(denum);\n    do\n    {\n        denum = denum * f >> 16;\n        numer = mul32x32shr16(numer, f);\n        f = ((1 << 17) - denum);\n    } while (denum  != 0xffff);\n    return numer;\n}\n\n/************************************************************************/\n/*      Motion Vector arithmetics                                       */\n/************************************************************************/\n\nstatic point_t point(int x, int y)\n{\n    point_t p;\n    p.u32 = ((unsigned)y << 16) | ((unsigned)x & 0xFFFF);    // assumes little-endian\n    return p;\n}\n\nstatic 
int mv_is_zero(point_t p)\n{\n    return !p.u32;\n}\n\nstatic int mv_equal(point_t p0, point_t p1)\n{\n    return (p0.u32 == p1.u32);\n}\n\n/**\n*   check that difference between given MV's components is greater than 3\n*/\nstatic int mv_differs3(point_t p0, point_t p1)\n{\n    return ABS(p0.s.x - p1.s.x) > 3 || ABS(p0.s.y - p1.s.y) > 3;\n}\n\nstatic point_t mv_add(point_t a, point_t b)\n{\n#if defined(__arm__)\n    a.u32 = __sadd16(a.u32, b.u32);\n#elif H264E_ENABLE_SSE2 && (H264E_CONFIGS_COUNT == 1)\n    a.u32 = _mm_cvtsi128_si32(_mm_add_epi16(_mm_cvtsi32_si128(a.u32), _mm_cvtsi32_si128(b.u32)));\n#else\n    a.s.x += b.s.x;\n    a.s.y += b.s.y;\n#endif\n    return a;\n}\n\nstatic point_t mv_sub(point_t a, point_t b)\n{\n#if defined(__arm__)\n    a.u32 = __ssub16(a.u32, b.u32);\n#elif H264E_ENABLE_SSE2 && (H264E_CONFIGS_COUNT == 1)\n    a.u32 = _mm_cvtsi128_si32(_mm_sub_epi16(_mm_cvtsi32_si128(a.u32), _mm_cvtsi32_si128(b.u32)));\n#else\n    a.s.x -= b.s.x;\n    a.s.y -= b.s.y;\n#endif\n    return a;\n}\n\nstatic void mv_clip(point_t *h264e_restrict p, const rectangle_t *range)\n{\n    p->s.x = MAX(p->s.x, range->tl.s.x);\n    p->s.x = MIN(p->s.x, range->br.s.x);\n    p->s.y = MAX(p->s.y, range->tl.s.y);\n    p->s.y = MIN(p->s.y, range->br.s.y);\n}\n\nstatic int mv_in_rect(point_t p, const rectangle_t *r)\n{\n    return (p.s.y >= r->tl.s.y && p.s.y <= r->br.s.y && p.s.x >= r->tl.s.x && p.s.x <= r->br.s.x);\n}\n\nstatic point_t mv_round_qpel(point_t p)\n{\n    return point((p.s.x + 1) & ~3, (p.s.y + 1) & ~3);\n}\n\n/************************************************************************/\n/*      Misc macroblock helper functions                                */\n/************************************************************************/\n/**\n*   @return current macroblock input luma pixels\n*/\nstatic pix_t *mb_input_luma(h264e_enc_t *enc)\n{\n    return enc->inp.yuv[0] + (enc->mb.x + enc->mb.y*enc->inp.stride[0])*16;\n}\n\n/**\n*   @return current macroblock 
input chroma pixels\n*/\nstatic pix_t *mb_input_chroma(h264e_enc_t *enc, int uv)\n{\n    return enc->inp.yuv[uv] + (enc->mb.x + enc->mb.y*enc->inp.stride[uv])*8;\n}\n\n/**\n*   @return absolute MV for current macroblock for given MV\n*/\nstatic point_t mb_abs_mv(h264e_enc_t *enc, point_t mv)\n{\n    return mv_add(mv, point(enc->mb.x*64, enc->mb.y*64));\n}\n\n/************************************************************************/\n/*      Pixel copy functions                                            */\n/************************************************************************/\n/**\n*   Copy incomplete (cropped) macroblock pixels with border extension\n*/\nstatic void pix_copy_cropped_mb(pix_t *d, int d_stride, const pix_t *s, int s_stride, int w, int h)\n{\n    int nbottom = d_stride - h; // assume dst is a square d_stride x d_stride block\n    s_stride -= w;\n    do\n    {\n        int cloop = w;\n        pix_t last = 0;\n        do\n        {\n            last = *s++;\n            *d++ = last;\n        } while (--cloop);\n        cloop = d_stride - w;\n        if (cloop) do\n        {\n            *d++ = last;    // extend row\n        } while (--cloop);\n        s += s_stride;\n    } while (--h);\n    s = d - d_stride;\n    if (nbottom) do\n    {\n        memcpy(d, s, d_stride);  // extend columns\n        d += d_stride;\n    } while (--nbottom);\n}\n\n/**\n*   Copy one image component\n*/\nstatic void pix_copy_pic(pix_t *dst, int dst_stride, pix_t *src, int src_stride, int w, int h)\n{\n    do\n    {\n        memcpy(dst, src, w);\n        dst += dst_stride;\n        src += src_stride;\n    } while (--h);\n}\n\n/**\n*   Copy reconstructed frame to reference buffer, with border extension\n*/\nstatic void pix_copy_recon_pic_to_ref(h264e_enc_t *enc)\n{\n    int c, h = enc->frame.h, w = enc->frame.w, guard = 16;\n    for (c = 0; c < 3; c++)\n    {\n        if (enc->param.const_input_flag)\n        {\n            SWAP(pix_t*, enc->ref.yuv[c], enc->dec.yuv[c]);\n        
} else\n        {\n            pix_copy_pic(enc->ref.yuv[c], w + 2*guard, enc->dec.yuv[c], w, w, h);\n        }\n\n        h264e_copy_borders(enc->ref.yuv[c], w, h, guard);\n        if (!c) guard >>= 1, w >>= 1, h >>= 1;\n    }\n}\n\n/************************************************************************/\n/*      Median MV predictor                                             */\n/************************************************************************/\n\n/**\n*   @return neighbors availability flags for current macroblock\n*/\nstatic int mb_avail_flag(const h264e_enc_t *enc)\n{\n    int nmb = enc->mb.num;\n    int flag = nmb >= enc->slice.start_mb_num + enc->frame.nmbx;\n    if (nmb >= enc->slice.start_mb_num + enc->frame.nmbx - 1 && enc->mb.x != enc->frame.nmbx-1)\n    {\n        flag += AVAIL_TR;\n    }\n    if (nmb != enc->slice.start_mb_num && enc->mb.x)\n    {\n        flag += AVAIL_L;\n    }\n    if (nmb > enc->slice.start_mb_num + enc->frame.nmbx && enc->mb.x)\n    {\n        flag += AVAIL_TL;\n    }\n    return flag;\n}\n\n/**\n*   @return median of 3 given integers\n*/\n#if !(H264E_ENABLE_SSE2 && (H264E_CONFIGS_COUNT == 1))\nstatic int me_median_of_3(int a, int b, int c)\n{\n    return MAX(MIN(MAX(a, b), c), MIN(a, b));\n}\n#endif\n\n/**\n*   @return median of 3 given motion vectors\n*/\nstatic point_t point_median_of_3(point_t a, point_t b, point_t c)\n{\n#if H264E_ENABLE_SSE2 && (H264E_CONFIGS_COUNT == 1)\n    __m128i a2 = _mm_cvtsi32_si128(a.u32);\n    __m128i b2 = _mm_cvtsi32_si128(b.u32);\n    point_t med;\n    med.u32 = _mm_cvtsi128_si32(_mm_max_epi16(_mm_min_epi16(_mm_max_epi16(a2, b2), _mm_cvtsi32_si128(c.u32)), _mm_min_epi16(a2, b2)));\n    return med;\n#else\n    return point(me_median_of_3(a.s.x, b.s.x, c.s.x),\n                 me_median_of_3(a.s.y, b.s.y, c.s.y));\n#endif\n}\n\n/**\n*   Save state of the MV predictor\n*/\nstatic void me_mv_medianpredictor_save_ctx(h264e_enc_t *enc, point_t *ctx)\n{\n    int i;\n    point_t *mvtop = 
enc->mv_pred + 8 + enc->mb.x*4;\n    for (i = 0; i < 4; i++)\n    {\n        *ctx++ = enc->mv_pred[i];\n        *ctx++ = enc->mv_pred[4 + i];\n        *ctx++ = mvtop[i];\n    }\n}\n\n/**\n*   Restore state of the MV predictor\n*/\nstatic void me_mv_medianpredictor_restore_ctx(h264e_enc_t *enc, const point_t *ctx)\n{\n    int i;\n    point_t *mvtop = enc->mv_pred + 8 + enc->mb.x*4;\n    for (i = 0; i < 4; i++)\n    {\n        enc->mv_pred[i] = *ctx++;\n        enc->mv_pred[4 + i] = *ctx++;\n        mvtop[i] = *ctx++;\n    }\n}\n\n/**\n*   Put motion vector to the deblock filter matrix.\n*   x,y,w,h refers to 4x4 blocks within 16x16 macroblock, and should be in the range [0,4]\n*/\nstatic void me_mv_dfmatrix_put(point_t *dfmv, int x, int y, int w, int h, point_t mv)\n{\n    int i;\n    assert(y < 4 && x < 4);\n\n    dfmv += y*5 + x + 5;   // 5x5 matrix without left-top cell\n    do\n    {\n        for (i = 0; i < w; i++)\n        {\n            dfmv[i] = mv;\n        }\n        dfmv += 5;\n    } while (--h);\n}\n\n/**\n*   Use given motion vector for prediction\n*/\nstatic void me_mv_medianpredictor_put(h264e_enc_t *enc, int x, int y, int w, int h, point_t mv)\n{\n    int i;\n    point_t *mvtop = enc->mv_pred + 8 + enc->mb.x*4;\n    assert(y < 4 && x < 4);\n\n    enc->mv_pred[4 + y] = mvtop[x + w-1]; // top-left corner = top-right corner\n    for (i = 1; i < h; i++)\n    {\n        enc->mv_pred[4 + y + i] = mv;     // top-left corner(s) for next row(s) = this\n    }\n    for (i = 0; i < h; i++)\n    {\n        enc->mv_pred[y + i] = mv;         // left = this\n    }\n    for (i = 0; i < w; i++)\n    {\n        mvtop[x + i] = mv;                // top = this\n    }\n}\n\n/**\n*   Motion vector median predictor for non-skip macroblock, as defined in the standard\n*/\nstatic point_t me_mv_medianpredictor_get(const h264e_enc_t *enc, point_t xy, point_t wh)\n{\n    int x = xy.s.x >> 2;\n    int y = xy.s.y >> 2;\n    int w = wh.s.x >> 2;\n    int h = wh.s.y >> 2;\n    int 
mvPredType = MVPRED_MEDIAN;\n    point_t a, b, c, d, ret = point(0, 0);\n    point_t *mvtop = enc->mv_pred + 8 + enc->mb.x*4;\n    int flag = enc->mb.avail;\n\n    assert(y < 4);\n    assert(x < 4);\n    assert(w <= 4);\n    assert(h <= 4);\n\n    a = enc->mv_pred[y];\n    b = mvtop[x];\n    c = mvtop[x + w];\n    d = enc->mv_pred[4 + y];\n\n    if (!x)\n    {\n        if (!(flag & AVAIL_L))\n        {\n            a.u32 = MV_NA;\n        }\n        if (!(flag & AVAIL_TL))\n        {\n            d.u32 = MV_NA;\n        }\n    }\n    if (!y)\n    {\n        if (!(flag & AVAIL_T))\n        {\n            b.u32 = MV_NA;\n            if (x + w < 4)\n            {\n                c.u32 = MV_NA;\n            }\n            if (x > 0)\n            {\n                d.u32 = MV_NA;\n            }\n        }\n        if (!(flag & AVAIL_TL) && !x)\n        {\n            d.u32 = MV_NA;\n        }\n        if (!(flag & AVAIL_TR) && x + w == 4)\n        {\n            c.u32 = MV_NA;\n        }\n    }\n\n    if (x + w == 4 && (!(flag & AVAIL_TR) || y))\n    {\n        c = d;\n    }\n\n    if (AVAIL(a) && !AVAIL(b) && !AVAIL(c))\n    {\n        mvPredType = MVPRED_L;\n    } else if (!AVAIL(a) && AVAIL(b) && !AVAIL(c))\n    {\n        mvPredType = MVPRED_U;\n    } else if (!AVAIL(a) && !AVAIL(b) && AVAIL(c))\n    {\n        mvPredType = MVPRED_UR;\n    }\n\n    // Directional predictions\n    if (w == 2 && h == 4)\n    {\n        if (x == 0)\n        {\n            if (AVAIL(a))\n            {\n                mvPredType = MVPRED_L;\n            }\n        } else\n        {\n            if (AVAIL(c))\n            {\n                mvPredType = MVPRED_UR;\n            }\n        }\n    } else if (w == 4 && h == 2)\n    {\n        if (y == 0)\n        {\n            if (AVAIL(b))\n            {\n                mvPredType = MVPRED_U;\n            }\n        } else\n        {\n            if (AVAIL(a))\n            {\n                mvPredType = MVPRED_L;\n            }\n        
}\n    }\n\n    switch(mvPredType)\n    {\n    default:\n    case MVPRED_MEDIAN:\n        if (!(AVAIL(b) || AVAIL(c)))\n        {\n            if (AVAIL(a))\n            {\n                ret = a;\n            }\n        } else\n        {\n            if (!AVAIL(a))\n            {\n                a = ret;\n            }\n            if (!AVAIL(b))\n            {\n                b = ret;\n            }\n            if (!AVAIL(c))\n            {\n                c = ret;\n            }\n            ret = point_median_of_3(a, b, c);\n        }\n        break;\n    case MVPRED_L:\n        if (AVAIL(a))\n        {\n            ret = a;\n        }\n        break;\n    case MVPRED_U:\n        if (AVAIL(b))\n        {\n            ret = b;\n        }\n        break;\n    case MVPRED_UR:\n        if (AVAIL(c))\n        {\n            ret = c;\n        }\n        break;\n    }\n    return ret;\n}\n\n/**\n*   Motion vector median predictor for skip macroblock\n*/\nstatic point_t me_mv_medianpredictor_get_skip(h264e_enc_t *enc)\n{\n    point_t pred_16x16 = me_mv_medianpredictor_get(enc, point(0, 0),  point(16, 16));\n    enc->mb.mv_skip_pred = point(0, 0);\n    if (!(~enc->mb.avail & (AVAIL_L | AVAIL_T)))\n    {\n        point_t *mvtop = enc->mv_pred + 8 + enc->mb.x*4;\n        if (!mv_is_zero(enc->mv_pred[0]) && !mv_is_zero(mvtop[0]))\n        {\n            enc->mb.mv_skip_pred = pred_16x16;\n        }\n    }\n    return pred_16x16;\n}\n\n/**\n*   Get starting points candidates for MV search\n*/\nstatic int me_mv_medianpredictor_get_cand(const h264e_enc_t *enc, point_t *mv)\n{\n    point_t *mv0 = mv;\n    point_t *mvtop = enc->mv_pred + 8 + enc->mb.x*4;\n    int flag = enc->mb.avail;\n    *mv++ = point(0, 0);\n    if ((flag & AVAIL_L) && AVAIL(enc->mv_pred[0]))\n    {\n        *mv++ = enc->mv_pred[0];\n    }\n    if ((flag & AVAIL_T) && AVAIL(mvtop[0]))\n    {\n        *mv++ = mvtop[0];\n    }\n    if ((flag & AVAIL_TR) && AVAIL(mvtop[4]))\n    {\n        *mv++ = 
mvtop[4];\n    }\n    return (int)(mv - mv0);\n}\n\n\n/************************************************************************/\n/*      NAL encoding                                                    */\n/************************************************************************/\n\n/**\n*   Count ## of escapes, i.e. binary strings 0000 0000  0000 0000  0000 00xx\n*   P(escape) = 2^-22\n*   E(run_between_escapes) = 2^21 ~= 2 MB\n*/\nstatic int nal_count_esc(const uint8_t *s, int n)\n{\n    int i, cnt_esc = 0, cntz = 0;\n    for (i = 0; i < n; i++)\n    {\n        uint8_t byte = *s++;\n        if (cntz == 2 && byte <= 3)\n        {\n            cnt_esc++;\n            cntz = 0;\n        }\n\n        if (byte)\n        {\n            cntz = 0;\n        } else\n        {\n            cntz++;\n        }\n    }\n    return cnt_esc;\n}\n\n/**\n*   Put NAL escape codes to the output bitstream\n*/\nstatic int nal_put_esc(uint8_t *d, const uint8_t *s, int n)\n{\n    int i, j = 0, cntz = 0;\n    for (i = 0; i < n; i++)\n    {\n        uint8_t byte = *s++;\n        if (cntz == 2 && byte <= 3)\n        {\n            d[j++] = 3;\n            cntz = 0;\n        }\n\n        if (byte)\n        {\n            cntz = 0;\n        } else\n        {\n            cntz++;\n        }\n        d[j++] = byte;\n    }\n    assert(d + j <= s);\n    return j;\n}\n\n/**\n*   Init NAL encoding\n*/\nstatic void nal_start(h264e_enc_t *enc, int nal_hdr)\n{\n    uint8_t *d = enc->out + enc->out_pos;\n    d[0] = d[1] = d[2] = 0; d[3] = 1; // start code\n    enc->out_pos += STARTCODE_4BYTES;\n    d += STARTCODE_4BYTES + (-(int)enc->out_pos & 3);   // 4-bytes align for bitbuffer\n    assert(IS_ALIGNED(d, 4));\n    h264e_bs_init_bits(enc->bs, d);\n    U(8, nal_hdr);\n}\n\n/**\n*   Finalize NAL encoding\n*/\nstatic void nal_end(h264e_enc_t *enc)\n{\n    int cnt_esc, bs_bytes;\n    uint8_t *nal = enc->out + enc->out_pos;\n\n    U1(1); // stop bit\n    bs_bytes = h264e_bs_byte_align(enc->bs) >> 3;\n    
h264e_bs_flush(enc->bs);\n\n    // count # of escape bytes to insert\n    cnt_esc = nal_count_esc((unsigned char*)enc->bs->origin, bs_bytes);\n\n    if ((uint8_t *)enc->bs->origin != nal + cnt_esc)\n    {\n        // make free space for escapes and remove align bytes\n        memmove(nal + cnt_esc, enc->bs->origin, bs_bytes);\n    }\n    if (cnt_esc)\n    {\n        // insert escape bytes\n        bs_bytes = nal_put_esc(nal, nal + cnt_esc, bs_bytes);\n    }\n    if (enc->run_param.nalu_callback)\n    {\n        // Call application-supplied callback\n        enc->run_param.nalu_callback(nal, bs_bytes, enc->run_param.nalu_callback_token);\n    }\n    enc->out_pos += bs_bytes;\n}\n\n\n/************************************************************************/\n/*      Top-level syntax elements (SPS,PPS,Slice)                       */\n/************************************************************************/\n\n/**\n*   Encode Sequence Parameter Set (SPS)\n*   ref: [1] 7.3.2.1.1\n*/\n\n//temp global\n#define dependency_id 1\n#define quality_id 0\n#define default_base_mode_flag 0\n#define log2_max_frame_num_minus4 1\n\nstatic void encode_sps(h264e_enc_t *enc, int profile_idc)\n{\n    struct limit_t\n    {\n        uint8_t level;\n        uint8_t constrains;\n        uint16_t max_fs;\n        uint16_t max_vbvdiv5;\n        uint32_t max_dpb;\n    };\n    static const struct limit_t limit [] = {\n        {10, 0xE0, 99,    175/5, 396},\n        {10, 0xF0, 99,    350/5, 396},\n        {11, 0xE0, 396,   500/5, 900},\n        {12, 0xE0, 396,   1000/5, 2376},\n        {13, 0xE0, 396,   2000/5, 2376},\n        {20, 0xE0, 396,   2000/5, 2376},\n        {21, 0xE0, 792,   4000/5, 4752},\n        {22, 0xE0, 1620,  4000/5, 8100},\n        {30, 0xE0, 1620,  10000/5, 8100},\n        {31, 0xE0, 3600,  14000/5, 18000},\n        {32, 0xE0, 5120,  20000/5, 20480},\n        {40, 0xE0, 8192,  25000/5, 32768},\n        {41, 0xE0, 8192,  62500/5, 32768},\n        {42, 0xE0, 8704,  62500/5, 
34816},\n        {50, 0xE0, 22080, 135000/5, 110400},\n        {51, 0xE0, 36864, 240000/5, 184320}\n    };\n    const struct limit_t *plim = limit;\n\n    while (plim->level < 51 && (enc->frame.nmb > plim->max_fs ||\n        enc->param.vbv_size_bytes > plim->max_vbvdiv5*(5*1000/8) ||\n        (unsigned)(enc->frame.nmb*(enc->param.max_long_term_reference_frames + 1)) > plim->max_dpb))\n    {\n        plim++;\n    }\n\n    nal_start(enc, 0x67 | (profile_idc == SCALABLE_BASELINE)*8);\n    U(8, profile_idc);  // profile, 66 = baseline\n    U(8, plim->constrains & ((profile_idc != SCALABLE_BASELINE)*4));     // no constraints\n    U(8, plim->level);\n    //U(5, 0x1B);       // sps_id|log2_max_frame_num_minus4|pic_order_cnt_type\n    //UE(0);  // sps_id 1\n    UE(enc->param.sps_id);\n\n#if H264E_SVC_API\n    if (profile_idc == SCALABLE_BASELINE)\n    {\n        UE(1); //chroma_format_idc\n        UE(0); //bit_depth_luma_minus8\n        UE(0); //bit_depth_chroma_minus8\n        U1(0); //qpprime_y_zero_transform_bypass_flag\n        U1(0); //seq_scaling_matrix_present_flag\n    }\n#endif\n    UE(log2_max_frame_num_minus4);  // log2_max_frame_num_minus4\n    UE(2);  // pic_order_cnt_type         011\n    UE(1 + enc->param.max_long_term_reference_frames);  // num_ref_frames\n    U1(0);                                      // gaps_in_frame_num_value_allowed_flag\n    UE(((enc->param.width + 15) >> 4) - 1);     // pic_width_in_mbs_minus1\n    UE(((enc->param.height + 15) >> 4) - 1);    // pic_height_in_map_units_minus1\n    U(3, 6 + enc->frame.cropping_flag);         // frame_mbs_only_flag|direct_8x8_inference_flag|frame_cropping_flag\n//    U1(1);  // frame_mbs_only_flag\n//    U1(1);  // direct_8x8_inference_flag\n//    U1(frame_cropping_flag);  // frame_cropping_flag\n    if (enc->frame.cropping_flag)\n    {\n        UE(0);                                          // frame_crop_left_offset\n        UE((enc->frame.w - 
enc->param.width) >> 1);     // frame_crop_right_offset\n        UE(0);                                          // frame_crop_top_offset\n        UE((enc->frame.h - enc->param.height) >> 1);    // frame_crop_bottom_offset\n    }\n    U1(0);      // vui_parameters_present_flag\n\n#if H264E_SVC_API\n    if (profile_idc == SCALABLE_BASELINE)\n    {\n        U1(1);  //(inter_layer_deblocking_filter_control_present_flag); //inter_layer_deblocking_filter_control_present_flag\n        U(2,0); //extended_spatial_scalability\n        U1(0);  //chroma_phase_x_plus1_flag\n        U(2,0); //chroma_phase_y_plus1\n\n    /*    if( sps->sps_ext.extended_spatial_scalability == 1 )\n        {\n            //if( ChromaArrayType > 0 )\n            {\n                put_bits( s, 1,0);\n                put_bits( s, 2,0); ///\n            }\n            put_bits_se( s, sps->sps_ext.seq_scaled_ref_layer_left_offset );\n            put_bits_se( s, sps->sps_ext.seq_scaled_ref_layer_top_offset );\n            put_bits_se( s, sps->sps_ext.seq_scaled_ref_layer_right_offset );\n            put_bits_se( s, sps->sps_ext.seq_scaled_ref_layer_bottom_offset );\n        }*/\n        U1(0); //seq_tcoeff_level_prediction_flag\n        U1(1); //slice_header_restriction_flag\n        U1(0); //svc_vui_parameters_present_flag\n        U1(0); //additional_extension2_flag\n    }\n#endif\n    nal_end(enc);\n}\n\n/**\n*   Encode Picture Parameter Set (PPS)\n*   ref: [1] 7.3.2.2\n*/\nstatic void encode_pps(h264e_enc_t *enc, int pps_id)\n{\n    nal_start(enc, 0x68);\n //   U(10, 0x338);       // constant shortcut:\n    UE(enc->param.sps_id*4 + pps_id);  // pic_parameter_set_id         1\n    UE(enc->param.sps_id);  // seq_parameter_set_id         1\n    U1(0);  // entropy_coding_mode_flag     0\n    U1(0);  // pic_order_present_flag       0\n    UE(0);  // num_slice_groups_minus1      1\n    UE(0);  // num_ref_idx_l0_active_minus1 1\n    UE(0);  // num_ref_idx_l1_active_minus1 1\n    U1(0);  // 
weighted_pred_flag           0\n    U(2,0); // weighted_bipred_idc          00\n    SE(enc->sps.pic_init_qp - 26);  // pic_init_qp_minus26\n#if DQP_CHROMA\n    SE(0);  // pic_init_qs_minus26                    1\n    SE(DQP_CHROMA);  // chroma_qp_index_offset        1\n    U1(1);  // deblocking_filter_control_present_flag 1\n    U1(0);  // constrained_intra_pred_flag            0\n    U1(0);  // redundant_pic_cnt_present_flag         0\n#else\n    U(5, 0x1C);         // constant shortcut:\n//     SE(0);  // pic_init_qs_minus26                    1\n//     SE(0);  // chroma_qp_index_offset                 1\n//     U1(1);  // deblocking_filter_control_present_flag 1\n//     U1(0);  // constrained_intra_pred_flag            0\n//     U1(0);  // redundant_pic_cnt_present_flag         0\n#endif\n    nal_end(enc);\n}\n\n/**\n*   Encode Slice Header\n*   ref: [1] 7.3.3\n*/\nstatic void encode_slice_header(h264e_enc_t *enc, int frame_type, int long_term_idx_use, int long_term_idx_update, int pps_id, int enc_type)\n{\n    // slice reset\n    enc->slice.start_mb_num = enc->mb.num;\n    enc->mb.skip_run = 0;\n    memset(enc->i4x4mode, -1, (enc->frame.nmbx + 1)*4);\n    memset(enc->nnz, NNZ_NA, (enc->frame.nmbx + 1)*8);    // DF ignores slice borders, but uses its own NNZs\n\n    if (enc_type == 0)\n    {\n#if H264E_SVC_API\n        if (enc->param.num_layers > 1)\n        {\n            // need prefix NAL so the base layer stays compatible with plain H.264\n            nal_start(enc, 14 | 0x40);\n            //if((nal_unit_type == NAL_UNIT_TYPE_PREFIX_SCALABLE_EXT ) ||nal_unit_type == NAL_UNIT_TYPE_RBSP_SCALABLE_EXT))\n            {\n                //reserved_one_bit = 1    idr_flag                    priority_id\n                U(8, (1 << 7) | ((frame_type == H264E_FRAME_TYPE_KEY) << 6) | 0);\n                U1(1);   //no_inter_layer_pred_flag\n                U(3, 0); //dependency_id\n                U(4, quality_id); //quality_id\n                //reserved_three_2bits = 3!\n        
        U(3, 0); //temporal_id\n                U1(1); //use_ref_base_pic_flag\n                U1(0); //discardable_flag\n                U1(1); //output_flag\n                U(2, 3);\n\n                U1(0); //store_ref_base_pic_flag\n                if (!(frame_type == H264E_FRAME_TYPE_KEY))\n                {\n                    U1(0); //adaptive_ref_base_pic_marking_mode_flag  u(1)\n                }\n\n                U1(0); //prefix_nal_unit_additional_extension_flag 2 u(1)\n\n                //put_bits_rbsp_trailing( s );\n            }\n            nal_end(enc);\n        }\n#endif //#if H264E_SVC_API\n        nal_start(enc, (frame_type == H264E_FRAME_TYPE_KEY ? 5 : 1) | (long_term_idx_update >= 0 ? 0x60 : 0));\n    }\n#if H264E_SVC_API\n    else\n    {\n        nal_start(enc, (20 | (long_term_idx_update >= 0 ? 0x60 : 0)));  //RBSP_SCALABLE_EXT = 20\n        //nal_unit_type 20 or 14\n        {\n            //reserved_one_bit = 1    idr_flag                    priority_id\n            U(8, (1 << 7) | ((frame_type == H264E_FRAME_TYPE_KEY) << 6) | 0);\n            U1(!enc->param.inter_layer_pred_flag); //no_inter_layer_pred_flag\n            U(3, dependency_id); //dependency_id\n            U(4, quality_id);    //quality_id\n            //reserved_three_2bits = 3!!!\n            U(3, 0); //temporal_id\n            U1(0); //use_ref_base_pic_flag\n            U1(1); //discardable_flag\n            U1(1); //output_flag\n            U(2, 3);\n        }\n    }\n#endif\n\n    UE(enc->slice.start_mb_num);        // first_mb_in_slice\n    UE(enc->slice.type);                // slice_type\n    //U(1+4, 16 + (enc->frame.num&15));   // pic_parameter_set_id | frame_num\n    UE(pps_id);                           // pic_parameter_set_id\n    U(4 + log2_max_frame_num_minus4, enc->frame.num & ((1 << (log2_max_frame_num_minus4 + 4)) - 1)); // frame_num U(4, enc->frame.num&15);            // frame_num\n    if (frame_type == H264E_FRAME_TYPE_KEY)\n    {\n        
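// Illustrative note (not from the original source): UE() writes its argument x\n        // as Exp-Golomb ue(v) ([1] 9.1): the binary form of x + 1, preceded by\n        // floor(log2(x + 1)) zero bits, e.g. 0 -> '1', 1 -> '010', 2 -> '011', 3 -> '00100'.\n        // SE() maps a signed k to ue(v) as 2*k - 1 for k > 0, and -2*k otherwise ([1] 9.1.1).\n        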
UE(enc->next_idr_pic_id);       // idr_pic_id\n    }\n    //!!!  if !quality_id && enc->slice.type == SLICE_TYPE_P  put_bit(s, 0); // num_ref_idx_active_override_flag = 0\n    if(!quality_id)\n    {\n        if (((enc_type != 0)) && enc->slice.type == SLICE_TYPE_P)\n        {\n            //U1(0);\n        }\n        if (enc->slice.type == SLICE_TYPE_P)// if( slice_type == P  | |  slice_type ==  SP  | |  slice_type  = =  B )\n        {\n            int ref_pic_list_modification_flag_l0 = long_term_idx_use > 0;\n            //U1(0);                      // num_ref_idx_active_override_flag\n            // ref_pic_list_modification()\n            U(2, ref_pic_list_modification_flag_l0); // num_ref_idx_active_override_flag | ref_pic_list_modification_flag_l0\n            if (ref_pic_list_modification_flag_l0)\n            {\n                // Table 7-7\n                UE(2);      // long_term_pic_num is present and specifies the long-term picture number for a reference picture\n                UE(long_term_idx_use - 1); // long_term_pic_num\n                UE(3);      // End loop\n            }\n        }\n\n        if (long_term_idx_update >= 0)\n        {\n            //dec_ref_pic_marking( )\n            if (frame_type == H264E_FRAME_TYPE_KEY)\n            {\n                //U1(0);                                      // no_output_of_prior_pics_flag\n                //U1(enc->param.enable_golden_frames_flag);   // long_term_reference_flag\n                U(2, enc->param.max_long_term_reference_frames > 0);   // no_output_of_prior_pics_flag | long_term_reference_flag\n            } else\n            {\n                int adaptive_ref_pic_marking_mode_flag = long_term_idx_update > 0;//(frame_type == H264E_FRAME_TYPE_GOLDEN);\n                U1(adaptive_ref_pic_marking_mode_flag);\n                if (adaptive_ref_pic_marking_mode_flag)\n                {\n                    // Table 7-9\n                    if (enc->short_term_used)\n                    {\n   
                     UE(1);  // unmark short\n                        UE(0);  // unmark short\n                    }\n                    if (enc->lt_used[long_term_idx_update - 1])\n                    {\n                        UE(2);  // Mark a long-term reference picture as \"unused for reference\"\n                        UE(long_term_idx_update - 1); // index\n                    } else\n                    {\n                        UE(4);  // Specify the maximum long-term frame index\n                        UE(enc->param.max_long_term_reference_frames);    // [0,max-1]+1\n                    }\n                    UE(6);  // Mark the current picture as \"used for long-term reference\"\n                    UE(long_term_idx_update - 1);   // index\n                    UE(0);  // End loop\n                }\n            }\n        }\n    }\n    SE(enc->rc.prev_qp - enc->sps.pic_init_qp);     // slice_qp_delta\n#if H264E_MAX_THREADS\n    if (enc->param.max_threads > 1)\n    {\n        UE(enc->speed.disable_deblock ? 
1 : 2);\n    } else\n#endif\n    {\n        UE(enc->speed.disable_deblock);             // disable deblock\n    }\n\n    if (enc->speed.disable_deblock != 1)\n    {\n#if ALPHA_OFS || BETA_OFS\n        SE(ALPHA_OFS/2);                            // slice_alpha_c0_offset_div2\n        SE(BETA_OFS/2);                             // slice_beta_offset_div2\n#else\n        U(2, 3);\n#endif\n    }\n\n#if H264E_SVC_API\n    if (enc_type != 0)\n    {\n        enc->adaptive_base_mode_flag = enc->param.inter_layer_pred_flag;\n        if (enc->param.inter_layer_pred_flag && !quality_id)\n        {\n            UE(16*(dependency_id - 1));\n            //if(1)//(inter_layer_deblocking_filter_control_present_flag)\n            {\n                UE(0);//disable_inter_layer_deblocking_filter_idc\n                UE(0);\n                UE(0);\n            }\n            /*if( sh->disable_inter_layer_deblocking_filter_idc != 1 )\n            {\n                put_bits_se(s, sh->slice_alpha_c0_offset_div2);\n                put_bits_se(s, sh->slice_beta_offset_div2);\n            }*/\n            U1(0); // constrained_intra_resampling_flag 2 u(1)\n        }\n        if (enc->param.inter_layer_pred_flag)\n        {\n            U1(0); //slice_skip_flag u(1)\n            {\n                U1(enc->adaptive_base_mode_flag); // 2 u(1)\n                if (!enc->adaptive_base_mode_flag)\n                    U1(default_base_mode_flag); // 2 u(1)\n                if (!default_base_mode_flag)\n                {\n                    U1(0); //adaptive_motion_prediction_flag) // 2 u(1)\n                    U1(0); //sh->default_motion_prediction_flag// 2 u(1)\n                }\n                U1(0); //adaptive_residual_prediction_flag // 2 u(1)\n                U1(0); //default_residual_prediction_flag // 2 u(1)\n            }\n        }\n    }\n#endif // #if H264E_SVC_API\n}\n\n/**\n*   Macroblock transform, quantization and bitstream encoding\n*/\nstatic void mb_write(h264e_enc_t *enc, int 
enc_type, int base_mode)\n{\n    int i, uv, mb_type, cbpc, cbpl, cbp;\n    scratch_t *qv = enc->scratch;\n    //int base_mode = enc_type > 0 ? 1 : 0;\n    int mb_type_svc = base_mode ? -2 : enc->mb.type;\n    int intra16x16_flag = mb_type_svc >= 6;// && !base_mode;\n    uint8_t nz[9];\n    uint8_t *nnz_top = enc->nnz + 8 + enc->mb.x*8;\n    uint8_t *nnz_left = enc->nnz;\n\n    if (enc->mb.type != 5)\n    {\n        enc->i4x4mode[0] = enc->i4x4mode[enc->mb.x + 1] = 0x02020202;\n    }\n\n    enc->df.nzflag = ((enc->df.nzflag >> 4) & 0x84210) | enc->df.df_nzflag[enc->mb.x];\n    for (i = 0; i < 4; i++)\n    {\n        nz[5 + i] = nnz_top[i];\n        nnz_top[i] = 0;\n        nz[3 - i] = nnz_left[i];\n        nnz_left[i] = 0;\n    }\n\nl_skip:\n    if (enc->mb.type == -1)\n    {\n        // encode skip macroblock\n        assert(enc->slice.type != SLICE_TYPE_I);\n\n        // Increment run count\n        enc->mb.skip_run++;\n\n        // Update predictors\n        *(uint32_t*)(nnz_top + 4) = *(uint32_t*)(nnz_left + 4) = 0; // set chroma NNZ to 0\n        me_mv_medianpredictor_put(enc, 0, 0, 4, 4, enc->mb.mv[0]);\n        me_mv_dfmatrix_put(enc->df.df_mv, 0, 0, 4, 4, enc->mb.mv[0]);\n\n        // Update reference with reconstructed pixels\n        h264e_copy_16x16(enc->dec.yuv[0], enc->dec.stride[0], enc->pbest, 16);\n        h264e_copy_8x8(enc->dec.yuv[1], enc->dec.stride[1], enc->ptest);\n        h264e_copy_8x8(enc->dec.yuv[2], enc->dec.stride[2], enc->ptest + 8);\n    } else\n    {\n        if (enc->mb.type != 5)\n        {\n            unsigned nz_mask;\n            nz_mask = h264e_transform_sub_quant_dequant(qv->mb_pix_inp, enc->pbest, 16, intra16x16_flag ? 
QDQ_MODE_INTRA_16 : QDQ_MODE_INTER, qv->qy, enc->rc.qdat[0]);\n            enc->scratch->nz_mask = (uint16_t)nz_mask;\n            if (intra16x16_flag)\n            {\n                h264e_quant_luma_dc(qv->qy, qv->quant_dc, enc->rc.qdat[0]);\n                nz_mask = 0xFFFF;\n            }\n            h264e_transform_add(enc->dec.yuv[0], enc->dec.stride[0], enc->pbest, qv->qy, 4, nz_mask << 16);\n        }\n\n        // Coded Block Pattern for luma\n        cbpl = 0;\n        if (enc->scratch->nz_mask & 0xCC00) cbpl |= 1;\n        if (enc->scratch->nz_mask & 0x3300) cbpl |= 2;\n        if (enc->scratch->nz_mask & 0x00CC) cbpl |= 4;\n        if (enc->scratch->nz_mask & 0x0033) cbpl |= 8;\n\n        // Coded Block Pattern for chroma\n        cbpc = 0;\n        for (uv = 1; uv < 3; uv++)\n        {\n            pix_t *pred = enc->ptest + (uv - 1)*8;\n            pix_t *pix_mb_uv = mb_input_chroma(enc, uv);\n            int dc_flag, inp_stride = enc->inp.stride[uv];\n            unsigned nz_mask;\n            quant_t *pquv = (uv == 1) ? qv->qu : qv->qv;\n\n            if (enc->frame.cropping_flag && ((enc->mb.x + 1)*16  > enc->param.width || (enc->mb.y + 1)*16  > enc->param.height))\n            {\n                pix_copy_cropped_mb(enc->scratch->mb_pix_inp, 8, pix_mb_uv, enc->inp.stride[uv],\n                    MIN(8, enc->param.width/2  - enc->mb.x*8),\n                    MIN(8, enc->param.height/2 - enc->mb.y*8)\n                    );\n                pix_mb_uv = enc->scratch->mb_pix_inp;\n                inp_stride = 8;\n            }\n\n            nz_mask = h264e_transform_sub_quant_dequant(pix_mb_uv, pred, inp_stride, QDQ_MODE_CHROMA, pquv, enc->rc.qdat[1]);\n\n            if (nz_mask)\n            {\n                cbpc = 2;\n            }\n\n            cbpc |= dc_flag = h264e_quant_chroma_dc(pquv, uv == 1 ? 
qv->quant_dc_u : qv->quant_dc_v, enc->rc.qdat[1]);\n\n            if (!(dc_flag | nz_mask))\n            {\n                h264e_copy_8x8(enc->dec.yuv[uv], enc->dec.stride[uv], pred);\n            } else\n            {\n                if (dc_flag)\n                {\n                    for (i = 0; i < 4; i++)\n                    {\n                        if (~nz_mask & (8 >> i))\n                        {\n                            memset(pquv[i].dq + 1, 0, (16 - 1)*sizeof(int16_t));\n                        }\n                    }\n                    nz_mask = 15;\n                }\n                h264e_transform_add(enc->dec.yuv[uv], enc->dec.stride[uv], pred, pquv, 2, nz_mask << 28);\n            }\n        }\n        cbpc = MIN(cbpc, 2);\n\n        // Rollback to skip\n        if (!(enc->mb.type | cbpl | cbpc) && // Inter prediction, all-zero after quantization\n            mv_equal(enc->mb.mv[0], enc->mb.mv_skip_pred)) // MV == MV predictor for skip\n        {\n            enc->mb.type = -1;\n            goto l_skip;\n        }\n\n        mb_type = enc->mb.type;\n        if (mb_type_svc >= 6)   // intra 16x16\n        {\n            if (cbpl)\n            {\n                cbpl = 15;\n            }\n            mb_type += enc->mb.i16.pred_mode_luma + cbpc*4 + (cbpl ? 
12 : 0);\n        }\n        if (mb_type >= 5 && enc->slice.type == SLICE_TYPE_I)    // Intra in I slice\n        {\n            mb_type -= 5;\n        }\n\n        if (enc->slice.type != SLICE_TYPE_I)\n        {\n            UE(enc->mb.skip_run);\n            enc->mb.skip_run = 0;\n        }\n\n        (void)enc_type;\n#if H264E_SVC_API\n        if (enc->adaptive_base_mode_flag && enc_type > 0)\n            U1(base_mode);\n#endif\n\n        if (!base_mode)\n            UE(mb_type);\n\n        if (enc->mb.type == 3) // 8x8\n        {\n            for (i = 0; i < 4; i++)\n            {\n                UE(0);\n            }\n            // 0 = 8x8\n            // 1 = 8x4\n            // 2 = 4x8\n            // 3 = 4x4\n        }\n\n        if (!base_mode)\n        {\n            if (enc->mb.type >= 5)   // intra\n            {\n                int pred_mode_chroma;\n                if (enc->mb.type == 5)  // intra 4x4\n                {\n                    for (i = 0; i < 16; i++)\n                    {\n                        int m = enc->mb.i4x4_mode[decode_block_scan[i]];\n                        int nbits =  4;\n                        if (m < 0)\n                        {\n                            m = nbits = 1;\n                        }\n                        U(nbits, m);\n                    }\n                }\n                pred_mode_chroma = enc->mb.i16.pred_mode_luma;\n                if (!(pred_mode_chroma&1))\n                {\n                    pred_mode_chroma ^= 2;\n                }\n                UE(pred_mode_chroma);\n                me_mv_medianpredictor_put(enc, 0, 0, 4, 4, point(MV_NA,0));\n            } else\n            {\n                int part, x = 0, y = 0;\n                int dx = (enc->mb.type & 2) ? 2 : 4;\n                int dy = (enc->mb.type & 1) ? 
2 : 4;\n                for (part = 0;;part++)\n                {\n                    SE(enc->mb.mvd[part].s.x);\n                    SE(enc->mb.mvd[part].s.y);\n                    me_mv_medianpredictor_put(enc, x, y, dx, dy, enc->mb.mv[part]);\n                    me_mv_dfmatrix_put(enc->df.df_mv, x, y, dx, dy, enc->mb.mv[part]);\n                    x = (x + dx) & 3;\n                    if (!x)\n                    {\n                        y = (y + dy) & 3;\n                        if (!y)\n                        {\n                            break;\n                        }\n                    }\n                }\n            }\n        }\n        cbp = cbpl + (cbpc << 4);\n        /*temp for test up-sample filter*/\n        /*if(base_mode)\n        {\n            cbp = 0;\n            cbpl=0;\n            cbpc = 0;\n        }*/\n        if (mb_type_svc < 6)\n        {\n            // encode cbp 9.1.2 Mapping process for coded block pattern\n            static const uint8_t cbp2code[2][48] = {\n                {3, 29, 30, 17, 31, 18, 37,  8, 32, 38, 19,  9, 20, 10, 11,  2, 16, 33, 34, 21, 35, 22, 39,  4,\n                36, 40, 23,  5, 24,  6,  7,  1, 41, 42, 43, 25, 44, 26, 46, 12, 45, 47, 27, 13, 28, 14, 15,  0},\n                {0,  2,  3,  7,  4,  8, 17, 13,  5, 18,  9, 14, 10, 15, 16, 11,  1, 32, 33, 36, 34, 37, 44, 40,\n                35, 45, 38, 41, 39, 42, 43, 19,  6, 24, 25, 20, 26, 21, 46, 28, 27, 47, 22, 29, 23, 30, 31, 12}\n            };\n            UE(cbp2code[mb_type_svc < 5][cbp]);\n        }\n\n        if (cbp || (mb_type_svc >= 6))\n        {\n            SE(enc->rc.qp - enc->rc.prev_qp);\n            enc->rc.prev_qp = enc->rc.qp;\n        }\n\n        // *** Huffman encoding ***\n\n        // 1. Encode Luma DC (intra 16x16 only)\n        if (intra16x16_flag)\n        {\n            h264e_vlc_encode(enc->bs, qv->quant_dc, 16, nz + 4);\n        }\n\n        // 2. 
Encode luma residual (only if CBP non-zero)\n        if (cbpl)\n        {\n            for (i = 0; i < 16; i++)\n            {\n                int j = decode_block_scan[i];\n                if (cbp & (1 << (i >> 2)))\n                {\n                    uint8_t *pnz = nz + 4 + (j & 3) - (j >> 2);\n                    h264e_vlc_encode(enc->bs, qv->qy[j].qv, 16 - intra16x16_flag, pnz);\n                    if (*pnz)\n                    {\n                        enc->df.nzflag |= 1 << (5 + (j & 3) + 5*(j >> 2));\n                    }\n                } else\n                {\n                    nz[4 + (j & 3) - (j >> 2)] = 0;\n                }\n            }\n            for (i = 0; i < 4; i++)\n            {\n                nnz_top[i] = nz[1 + i];\n                nnz_left[i] = nz[7 - i];\n            }\n        }\n\n        // 3. Encode chroma\n        if (cbpc)\n        {\n            uint8_t nzcdc[3];\n            nzcdc[0] = nzcdc[2] = 17;   // dummy neighbors, indicating chroma DC\n            // 3.1. Encode chroma DC\n            for (uv = 1; uv < 3; uv++)\n            {\n                h264e_vlc_encode(enc->bs, uv == 1 ? qv->quant_dc_u : qv->quant_dc_v, 4, nzcdc + 1);\n            }\n\n            // 3.2. Encode chroma residual\n            if (cbpc > 1)\n            {\n                for (uv = 1; uv < 3; uv++)\n                {\n                    uint8_t nzc[5];\n                    int nnz_off = (uv == 1 ? 4 : 6);\n                    quant_t *pquv = uv == 1 ? 
qv->qu : qv->qv;\n                    for (i = 0; i < 2; i++)\n                    {\n                        nzc[3 + i] = nnz_top[nnz_off + i] ;\n                        nzc[1 - i] = nnz_left[nnz_off + i];\n                    }\n                    for (i = 0; i < 4; i++)\n                    {\n                        int k = 2 + (i & 1) - (i >> 1);\n                        h264e_vlc_encode(enc->bs, pquv[i].qv, 15, nzc + k);\n                    }\n                    for (i = 0; i < 2; i++)\n                    {\n                        nnz_top[nnz_off + i]  = nzc[1 + i];\n                        nnz_left[nnz_off + i] = nzc[3 - i];\n                    }\n                }\n            }\n        }\n        if (cbpc !=2)\n        {\n            *(uint32_t*)(nnz_top+4) = *(uint32_t*)(nnz_left+4) = 0; // set chroma NNZ to 0\n        }\n    }\n\n    // Save top & left lines\n    for (uv = 0; uv < 3; uv++)\n    {\n        int off = 0, n = uv ? 8 : 16;\n        pix_t *top = enc->top_line + 48 + enc->mb.x*32;\n        pix_t *left = enc->top_line;\n        pix_t *mb = enc->dec.yuv[uv];\n\n        if (uv)\n        {\n            off = 8 + uv*8;\n        }\n        top  += off;\n        left += off;\n\n        enc->top_line[32 + uv] = top[n - 1];\n        for (i = 0; i < n; i++)\n        {\n            left[i] = mb[n - 1 + i*enc->dec.stride[uv]];\n            top[i] = mb[(n - 1)*enc->dec.stride[uv] + i];\n        }\n    }\n}\n\n/************************************************************************/\n/*      Intra mode encoding                                             */\n/************************************************************************/\n/**\n*   Estimate cost of 4x4 intra predictor\n*/\nstatic void intra_choose_4x4(h264e_enc_t *enc)\n{\n    int i, n, a, nz_mask = 0, avail = mb_avail_flag(enc);\n    scratch_t *qv = enc->scratch;\n    pix_t *mb_dec = enc->dec.yuv[0];\n    pix_t *dec = enc->ptest;\n    int cost =  g_lambda_i4_q4[enc->rc.qp];// + 
MUL_LAMBDA(16, g_lambda_q4[enc->rc.qp]);    // 4x4 cost: at least 16 bits + penalty\n\n    uint32_t edge_store[(3 + 16 + 1 + 16 + 4)/4 + 2]; // pad for SSE\n    pix_t *edge = ((pix_t*)edge_store) + 3 + 16 + 1;\n    uint32_t *edge32 = (uint32_t *)edge;              // alias\n    const uint32_t *top32 = (const uint32_t*)(enc->top_line + 48 + enc->mb.x*32);\n    pix_t *left = enc->top_line;\n\n    edge[-1] = enc->top_line[32];\n    for (i = 0; i < 16; i++)\n    {\n        edge[-2 - i] = left[i];\n    }\n    for (i = 0; i < 4; i++)\n    {\n        edge32[i] = top32[i];\n    }\n    edge32[4] = top32[8];\n\n    for (n = 0; n < 16; n++)\n    {\n        static const uint8_t block2avail[16] = {\n            0x07, 0x23, 0x23, 0x2b, 0x9b, 0x77, 0xff, 0x77, 0x9b, 0xff, 0xff, 0x77, 0x9b, 0x77, 0xff, 0x77,\n        };\n        pix_t *block;\n        pix_t *blockin;\n        int sad, mpred, mode;\n        int r = n >> 2;\n        int c = n & 3;\n        int8_t *ctx_l = (int8_t *)enc->i4x4mode + r;\n        int8_t *ctx_t = (int8_t *)enc->i4x4mode + 4 + enc->mb.x*4 + c;\n        edge = ((pix_t*)edge_store) + 3 + 16 + 1 + 4*c - 4*r;\n\n        a = avail;\n        a &= block2avail[n];\n        a |= block2avail[n] >> 4;\n\n        if (!(block2avail[n] & AVAIL_TL)) // TL replace\n        {\n            if ((n <= 3 && (avail & AVAIL_T)) ||\n                (n  > 3 && (avail & AVAIL_L)))\n            {\n                a |= AVAIL_TL;\n            }\n        }\n        if (n < 3 && (avail & AVAIL_T))\n        {\n            a |= AVAIL_TR;\n        }\n\n        blockin = enc->scratch->mb_pix_inp + (c + r*16)*4;\n        block = dec + (c + r*16)*4;\n\n        mpred = MIN(*ctx_l, *ctx_t);\n        if (mpred < 0)\n        {\n            mpred = 2;\n        }\n\n        sad = h264e_intra_choose_4x4(blockin, block, a, edge, mpred, MUL_LAMBDA(3, g_lambda_q4[enc->rc.qp]));\n        mode = sad & 15;\n        sad >>= 4;\n\n        *ctx_l = *ctx_t = (int8_t)mode;\n        if (mode == mpred)\n        
{\n            mode = -1;\n        } else if (mode > mpred)\n        {\n            mode--;\n        }\n        enc->mb.i4x4_mode[n] = (int8_t)mode;\n\n        nz_mask <<= 1;\n        if (sad > g_skip_thr_i4x4[enc->rc.qp])\n        {\n            //  Skipping the transform for low-SAD blocks gains only about 2% for\n            //  all-intra coding at QP40; for other QPs the gain is minimal\n            nz_mask |= h264e_transform_sub_quant_dequant(blockin, block, 16, QDQ_MODE_INTRA_4, qv->qy + n, enc->rc.qdat[0]);\n\n            if (nz_mask & 1)\n            {\n                h264e_transform_add(block, 16, block, qv->qy + n, 1, ~0);\n            }\n        } else\n        {\n            memset((qv->qy+n), 0, sizeof(qv->qy[0]));\n        }\n\n        cost += sad;\n\n        edge[2] = block[3];\n        edge[1] = block[3 + 16];\n        edge[0] = block[3 + 16*2];\n        *(uint32_t*)&edge[-4] = *(uint32_t*)&block[16*3];\n    }\n    enc->scratch->nz_mask = (uint16_t)nz_mask;\n\n    if (cost < enc->mb.cost)\n    {\n        enc->mb.cost = cost;\n        enc->mb.type = 5;   // intra 4x4\n        h264e_copy_16x16(mb_dec, enc->dec.stride[0], dec, 16);  // restore reference\n    }\n}\n\n/**\n*   Choose 16x16 prediction mode, most suitable for given gradient\n*/\nstatic int intra_estimate_16x16(pix_t *p, int s, int avail, int qp)\n{\n    static const uint8_t mode_i16x16_valid[8] = { 4, 5, 6, 7, 4, 5, 6, 15 };\n    pix_t p00 = p[0];\n    pix_t p01 = p[15];\n    pix_t p10 = p[15*s + 0];\n    pix_t p11 = p[15*s + 15];\n    int v = mode_i16x16_valid[avail & (AVAIL_T + AVAIL_L + AVAIL_TL)];\n    // better than above on low bitrates\n    int dx = ABS(p00 - p01) + ABS(p10 - p11) + ABS(p[8*s] - p[8*s + 15]);\n    int dy = ABS(p00 - p10) + ABS(p01 - p11) + ABS(p[8] - p[15*s + 8]);\n\n    if ((dx > 30 + 3*dy && dy < (100 + 50 - qp)\n        //|| (/*dx < 50 &&*/ dy <= 12)\n        ) && (v & 1))\n        return 0;\n    else if (dy > 30 + 3*dx && dx < (100 + 50 - qp) && (v & (1 << 
1)))\n        return 1;\n    else\n        return 2;\n}\n\n/**\n*   Estimate cost of 16x16 intra predictor\n*\n*   for foreman@qp10\n*\n*   12928 - [0-3], [0]\n*   12963 - [0-2], [0]\n*   12868 - [0-2], [0-3]\n*   12878 - [0-2], [0-2]\n*   12834 - [0-3], [0-3]\n*sad\n*   13182\n*heuristic\n*   13063\n*\n*/\nstatic void intra_choose_16x16(h264e_enc_t *enc, pix_t *left, pix_t *top, int avail)\n{\n    int sad, sad4[4];\n    // heuristic mode decision\n    enc->mb.i16.pred_mode_luma = intra_estimate_16x16(enc->scratch->mb_pix_inp, 16, avail, enc->rc.qp);\n\n    // run chosen predictor\n    h264e_intra_predict_16x16(enc->ptest, left, top, enc->mb.i16.pred_mode_luma);\n\n    // coding cost\n    sad = h264e_sad_mb_unlaign_8x8(enc->scratch->mb_pix_inp, 16, enc->ptest, sad4)        // SAD\n        + MUL_LAMBDA(bitsize_ue(enc->mb.i16.pred_mode_luma + 1), g_lambda_q4[enc->rc.qp]) // side-info penalty\n        + g_lambda_i16_q4[enc->rc.qp];                                                    // block kind penalty\n\n    if (sad < enc->mb.cost)\n    {\n        enc->mb.cost = sad;\n        enc->mb.type = 6;\n        SWAP(pix_t*, enc->pbest, enc->ptest);\n    }\n}\n\n/************************************************************************/\n/*      Inter mode encoding                                             */\n/************************************************************************/\n\n/**\n*   Sub-pel luma interpolation\n*/\nstatic void interpolate_luma(const pix_t *ref, int stride, point_t mv, point_t wh, pix_t *dst)\n{\n    ref += (mv.s.y >> 2) * stride + (mv.s.x >> 2);\n    mv.u32 &= 0x000030003;\n    h264e_qpel_interpolate_luma(ref, stride, dst, wh, mv);\n}\n\n/**\n*   Sub-pel chroma interpolation\n*/\nstatic void interpolate_chroma(h264e_enc_t *enc, point_t mv)\n{\n    int i;\n    for (i = 1; i < 3; i++)\n    {\n        point_t wh;\n        int part = 0, x = 0, y = 0;\n        wh.s.x = (enc->mb.type & 2) ? 4 : 8;\n        wh.s.y = (enc->mb.type & 1) ? 
4 : 8;\n        if (enc->mb.type == -1) // skip\n        {\n            wh.s.x = wh.s.y = 8;\n        }\n\n        for (;;part++)\n        {\n            pix_t *ref;\n            mv = mb_abs_mv(enc, enc->mb.mv[part]);\n            ref = enc->ref.yuv[i] + ((mv.s.y >> 3) + y)*enc->ref.stride[i] + (mv.s.x >> 3) + x;\n            mv.u32 &= 0x00070007;\n            h264e_qpel_interpolate_chroma(ref, enc->ref.stride[i], enc->ptest + (i - 1)*8 + 16*y + x, wh, mv);\n            x = (x + wh.s.x) & 7;\n            if (!x)\n            {\n                y = (y + wh.s.y) & 7;\n                if (!y)\n                {\n                    break;\n                }\n            }\n        }\n    }\n}\n\n/**\n*   RD cost of given MV\n*/\nstatic int me_mv_cost(point_t mv, point_t mv_pred, int qp)\n{\n    int nb = bits_se(mv.s.x - mv_pred.s.x) + bits_se(mv.s.y - mv_pred.s.y);\n    return MUL_LAMBDA(nb, g_lambda_mv_q4[qp]);\n}\n\n/**\n*   RD cost of given MV candidate (TODO)\n*/\n#define me_mv_cand_cost me_mv_cost\n//static int me_mv_cand_cost(point_t mv, point_t mv_pred, int qp)\n//{\n//    int nb = bits_se(mv.s.x - mv_pred.s.x) + bits_se(mv.s.y - mv_pred.s.y);\n//    return MUL_LAMBDA(nb, g_lambda_mv_q4[qp]);\n//}\n\n\n/**\n*   Modified full-pel motion search with small diamond algorithm\n*   note: diamond implemented with small modifications, trading speed for precision\n*/\nstatic int me_search_diamond(h264e_enc_t *enc, const pix_t *ref, const pix_t *b, int rowbytes, point_t *mv,\n    const rectangle_t *range, int qp, point_t mv_pred, int min_sad, point_t wh, pix_t *scratch, pix_t **ppbest, int store_bytes)\n{\n    // cache map           cache moves\n    //      3              0   x->1\n    //      *              1   x->0\n    //  1 * x * 0          2   x->3\n    //      *              3   x->2\n    //      2                   ^1\n\n    //   cache double moves:\n    //           prev               prev\n    //      x ->   0   ->   3   ==>   3   =>   1\n    //      x ->   0   
->   2   ==>   2   =>   1\n    //      x ->   0   ->   0   ==>   0   =>   1\n    //      x ->   0   ->   1   - impossible\n    //   prev SAD(n) is (n+4)\n    //\n\n    static const point_t dir2mv[] = {{{4, 0}},{{-4, 0}},{{0, 4}},{{0, -4}}};\n    union\n    {\n        uint16_t cache[8];\n        uint32_t cache32[4];\n    } sad;\n\n    int dir, cloop, dir_prev, cost;\n    point_t v;\n\n    assert(mv_in_rect(*mv, range));\n\nrestart:\n    dir = 0;                // start gradient descent with direction dir2mv[0]\n    cloop = 4;              // try 4 directions\n    dir_prev = -1;          // not yet moved\n\n    // reset SAD cache\n    sad.cache32[0] = sad.cache32[1] = sad.cache32[2] = sad.cache32[3] = ~0u;\n\n    // 1. Full-pel ME with small diamond modification:\n    // the center point is moved immediately when a new minimum is found\n    do\n    {\n        assert(dir >= 0 && dir < 4);\n\n        // Try next point. Avoid out-of-range moves\n        v = mv_add(*mv, dir2mv[dir]);\n        //if (mv_in_rect(v, range) && sad.cache[dir] == (uint16_t)~0u)\n        if (mv_in_rect(v, range) && sad.cache[dir] == 0xffffu)\n        {\n            cost = h264e_sad_mb_unlaign_wh(ref + ((v.s.y*rowbytes + v.s.x) >> 2), rowbytes, b, wh);\n            //cost += me_mv_cost(*mv, mv_pred, qp);\n            cost += me_mv_cost(v, mv_pred, qp);\n            sad.cache[dir] = (uint16_t)cost;\n            if (cost < min_sad)\n            {\n                // This point is better than center: move this point to center and continue\n                int corner = ~0;\n                if (dir_prev >= 0)                      // have previous move\n                {                                       // save cache point, which can be used in next iteration\n                    corner = sad.cache[4 + dir];        // see \"cache double moves\" above\n                }\n                sad.cache32[2] = sad.cache32[0];        // save current cache to 'previous'\n                sad.cache32[3] = 
sad.cache32[1];\n                sad.cache32[0] = sad.cache32[1] = ~0u;  // reset current cache\n                if (dir_prev >= 0)                      // but if there was a previous move\n                {                                       // one cache point can be reused from previous iteration\n                    sad.cache[dir_prev^1] = (uint16_t)corner; // see \"cache double moves\" above\n                }\n                sad.cache[dir^1] = (uint16_t)min_sad;   // the previous center becomes a neighbor\n                dir_prev = dir;                         // save this direction\n                dir--;                                  // start next iteration with the same direction\n                cloop = 4 + 1;                          // and try 4 directions (+1 for do-while loop)\n                *mv = v;                                // Save best point found\n                min_sad = cost;                         // and its SAD\n            }\n        }\n        dir = (dir + 1) & 3;                            // cycle search directions\n    } while(--cloop);\n\n    // 2. Optional: Try diagonal step\n    //if (1)\n    {\n        int primary_dir   = sad.cache[3] >= sad.cache[2] ? 2 : 3;\n        int secondary_dir = sad.cache[1] >= sad.cache[0] ? 
0 : 1;\n        if (sad.cache[primary_dir] < sad.cache[secondary_dir])\n        {\n            SWAP(int, secondary_dir, primary_dir);\n        }\n\n        v = mv_add(dir2mv[secondary_dir], dir2mv[primary_dir]);\n        v = mv_add(*mv, v);\n        //cost = (uint16_t)~0u;\n        if (mv_in_rect(v, range))\n        {\n            cost = h264e_sad_mb_unlaign_wh(ref + ((v.s.y*rowbytes + v.s.x) >> 2), rowbytes, b, wh);\n            cost += me_mv_cost(v, mv_pred, qp);\n            if (cost < min_sad)\n            {\n                *mv = v;//mv_add(*mv, v);\n                min_sad = cost;\n                goto restart;\n            }\n        }\n    }\n\n    interpolate_luma(ref, rowbytes, *mv, wh, scratch);    // Plain NxM copy can be used\n    *ppbest = scratch;\n\n    // 3. Fractional pel search\n    if (enc->run_param.encode_speed < 9 && mv_in_rect(*mv, &enc->frame.mv_qpel_limit))\n    {\n        point_t vbest = *mv;\n        pix_t *pbest = scratch;\n        pix_t *hpel  = scratch + store_bytes;\n        pix_t *hpel1 = scratch + ((store_bytes == 8) ? 
256 : 2*store_bytes);\n        pix_t *hpel2 = hpel1 + store_bytes;\n\n        int i, sad_test;\n        point_t primary_qpel, secondary_qpel, vdiag;\n\n        unsigned minsad1 = sad.cache[1];\n        unsigned minsad2 = sad.cache[3];\n        secondary_qpel = point(-1, 0);\n        primary_qpel = point(0, -1);\n        if (sad.cache[3] >= sad.cache[2])\n            primary_qpel = point(0, 1), minsad2 = sad.cache[2];\n        if (sad.cache[1] >= sad.cache[0])\n            secondary_qpel = point(1, 0), minsad1 = sad.cache[0];\n\n        if (minsad2 > minsad1)\n        {\n            SWAP(point_t, secondary_qpel, primary_qpel);\n        }\n\n        //     ============> primary\n        //     |00 01 02\n        //     |10 11 12\n        //     |20    22\n        //     V\n        //     secondary\n        vdiag = mv_add(primary_qpel, secondary_qpel);\n\n        for (i = 0; i < 7; i++)\n        {\n            pix_t *ptest;\n            switch(i)\n            {\n            case 0:\n                // 02 = interpolate primary half-pel\n                v = mv_add(*mv, mv_add(primary_qpel, primary_qpel));\n                interpolate_luma(ref, rowbytes, v, wh, ptest = hpel1);\n                break;\n            case 1:\n                // 01 q-pel = (00 + 02)/2\n                v = mv_add(*mv, primary_qpel);\n                h264e_qpel_average_wh_align(scratch, hpel1, ptest = hpel, wh);\n                break;\n            case 2:\n                // 20 = interpolate secondary half-pel\n                v = mv_add(*mv, mv_add(secondary_qpel, secondary_qpel));\n                interpolate_luma(ref, rowbytes, v, wh, ptest = hpel2);\n                break;\n            case 3:\n                // 10 q-pel = (00 + 20)/2\n                hpel  = scratch + store_bytes; if (pbest == hpel) hpel = scratch;\n                v = mv_add(*mv, secondary_qpel);\n                h264e_qpel_average_wh_align(scratch, hpel2, ptest = hpel, wh);\n                break;\n            case 
4:\n                // 11 q-pel = (02 + 20)/2\n                hpel  = scratch + store_bytes; if (pbest == hpel) hpel = scratch;\n                v = mv_add(*mv, vdiag);\n                h264e_qpel_average_wh_align(hpel1, hpel2, ptest = hpel, wh);\n                break;\n            case 5:\n                // 22 = interpolate center half-pel\n                if (pbest == hpel2) hpel2 = scratch, hpel = scratch + store_bytes;\n                v = mv_add(*mv, mv_add(vdiag, vdiag));\n                interpolate_luma(ref, rowbytes, v, wh, ptest = hpel2);\n                break;\n            case 6:\n            default:\n                // 12 q-pel = (02 + 22)/2\n                hpel  = scratch + store_bytes; if (pbest == hpel) hpel = scratch;\n                v = mv_add(*mv, mv_add(primary_qpel, vdiag));\n                h264e_qpel_average_wh_align(hpel2, hpel1, ptest = hpel, wh);\n                break;\n            }\n\n            sad_test = h264e_sad_mb_unlaign_wh(ptest, 16, b, wh) + me_mv_cost(v, mv_pred, qp);\n            if (sad_test < min_sad)\n            {\n                min_sad = sad_test;\n                vbest = v;\n                pbest = ptest;\n            }\n        }\n\n        *mv = vbest;\n        *ppbest = pbest;\n    }\n    return min_sad;\n}\n\n/**\n*   Set range for MV search\n*/\nstatic void me_mv_set_range(point_t *pnt, rectangle_t *range, const rectangle_t *mv_limit, int mby)\n{\n    // clip start point\n    rectangle_t r = *mv_limit;\n    r.tl.s.y = (int16_t)(MAX(r.tl.s.y, mby - 63*4));\n    r.br.s.y = (int16_t)(MIN(r.br.s.y, mby + 63*4));\n    mv_clip(pnt, &r);\n    range->tl = mv_add(*pnt, point(-MV_RANGE*4, -MV_RANGE*4));\n    range->br = mv_add(*pnt, point(+MV_RANGE*4, +MV_RANGE*4));\n    // clip search range\n    mv_clip(&range->tl, &r);\n    mv_clip(&range->br, &r);\n}\n\n/**\n*   Remove duplicates from MV candidates list\n*/\nstatic int me_mv_refine_cand(point_t *p, int n)\n{\n    int i, j, k;\n    p[0] = mv_round_qpel(p[0]);\n    
for (j = 1, k = 1; j < n; j++)\n    {\n        point_t mv = mv_round_qpel(p[j]);\n        for (i = 0; i < k; i++)\n        {\n            // TODO\n            //if (!mv_differs3(mv, p[i], 3*4))\n            //if (!mv_differs3(mv, p[i], 1*4))\n            //if (!mv_differs3(mv, p[i], 3))\n            if (mv_equal(mv, p[i]))\n                break;\n        }\n        if (i == k)\n            p[k++] = mv;\n    }\n    return k;\n}\n\n/**\n*   Choose candidates for inter MB partitioning (16x8, 8x16 or 8x8),\n*   using SADs of the 8x8 sub-blocks\n*/\nstatic void mb_inter_partition(/*const */int sad[4], int mode[4])\n{\n/*\n    slope\n        |[ 1  1]| _ |[ 1 -1]|\n        |[-1 -1]|   |[ 1 -1]|\n        indicates v/h gradient: big negative = vertical prediction; big positive = horizontal\n\n    skew\n        |[ 1  0]| _ |[ 0 -1]|\n        |[ 0 -1]|   |[ 1  0]|\n        indicates diagonal gradient: big negative = diagonal down right\n*/\n    int p00 = sad[0];\n    int p01 = sad[1];\n    int p10 = sad[2];\n    int p11 = sad[3];\n    int sum = p00 + p01 + p10 + p11;\n    int slope = ABS((p00 - p10) + (p01 - p11)) - ABS((p00 - p01) + (p10 - p11));\n    int skew = ABS(p11 - p00) - ABS(p10 - p01);\n\n    if (slope >  (sum >> 4))\n    {\n        mode[1] = 1;    // try 8x16 partition\n    }\n    if (slope < -(sum >> 4))\n    {\n        mode[2] = 1;    // try 16x8 partition\n    }\n    if (ABS(skew) > (sum >> 4) && ABS(slope) <= (sum >> 4))\n    {\n        mode[3] = 1;    // try 8x8 partition\n    }\n}\n\n/**\n*   Online MV clustering to \"long\" and \"short\" clusters\n*   Estimate mean \"long\" and \"short\" vectors\n*/\nstatic void mv_clusters_update(h264e_enc_t *enc, point_t mv)\n{\n    int mv_norm = SQRP(mv);\n    int n0 = SQRP(enc->mv_clusters[0]);\n    int n1 = SQRP(enc->mv_clusters[1]);\n    if (mv_norm < n1)\n    {\n        // \"short\" is shorter than \"long\"\n        SMOOTH(enc->mv_clusters[0], mv);\n    }\n    if (mv_norm >= n0)\n    {\n        // \"long\" is longer 
than \"short\"\n        SMOOTH(enc->mv_clusters[1], mv);\n    }\n}\n\n/**\n*   Choose inter mode: skip/coded, ME partition, find MV\n*/\nstatic void inter_choose_mode(h264e_enc_t *enc)\n{\n    int prefered_modes[4] = { 1, 0, 0, 0 };\n    point_t mv_skip, mv_skip_a, mv_cand[MAX_MV_CAND];\n    point_t mv_pred_16x16 = me_mv_medianpredictor_get_skip(enc);\n    point_t mv_best = point(MV_NA, 0); // avoid warning\n\n    int sad, sad_skip = 0x7FFFFFFF, sad_best = 0x7FFFFFFF;\n    int off, i, j = 0, ncand = 0;\n    int cand_sad4[MAX_MV_CAND][4];\n    const pix_t *ref_yuv = enc->ref.yuv[0];\n    int ref_stride = enc->ref.stride[0];\n    int mv_cand_cost_best = 0;\n    mv_skip = enc->mb.mv_skip_pred;\n    mv_skip_a = mb_abs_mv(enc, mv_skip);\n\n    for (i = 0; i < 4; i++)\n    {\n        enc->df.df_mv[4 + 5*i].u32 = enc->mv_pred[i].u32;\n        enc->df.df_mv[i].u32       = enc->mv_pred[8 + 4*enc->mb.x + i].u32;\n    }\n\n    // Try skip mode\n    if (mv_in_rect(mv_skip_a, &enc->frame.mv_qpel_limit))\n    {\n        int *sad4 = cand_sad4[0];\n        interpolate_luma(ref_yuv, ref_stride, mv_skip_a, point(16, 16), enc->ptest);\n        sad_skip = h264e_sad_mb_unlaign_8x8(enc->scratch->mb_pix_inp, 16, enc->ptest, sad4);\n\n        if (MAX(MAX(sad4[0], sad4[1]), MAX(sad4[2], sad4[3])) < g_skip_thr_inter[enc->rc.qp])\n        {\n            int uv, sad_uv;\n\n            SWAP(pix_t*, enc->pbest, enc->ptest);\n            enc->mb.type = -1;\n            enc->mb.mv[0] = mv_skip;\n            enc->mb.cost = 0;\n            interpolate_chroma(enc, mv_skip_a);\n\n            // Check that chroma SAD is not too big for the skip\n            for (uv = 1; uv <= 2; uv++)\n            {\n                pix_t *pred = enc->ptest + (uv - 1)*8;\n                pix_t *pix_mb_uv = mb_input_chroma(enc, uv);\n                int inp_stride = enc->inp.stride[uv];\n\n                if (enc->frame.cropping_flag && ((enc->mb.x + 1)*16  > enc->param.width || (enc->mb.y + 1)*16  > 
enc->param.height))\n                {\n                    // Speculative read beyond frame borders: make local copy of the macroblock.\n                    // TODO: same code used in mb_write() and mb_encode()\n                    pix_copy_cropped_mb(enc->scratch->mb_pix_store, 8, pix_mb_uv, enc->inp.stride[uv],\n                        MIN(8, enc->param.width/2  - enc->mb.x*8),\n                        MIN(8, enc->param.height/2 - enc->mb.y*8));\n                    pix_mb_uv = enc->scratch->mb_pix_store;\n                    inp_stride = 8;\n                }\n\n                sad_uv = h264e_sad_mb_unlaign_wh(pix_mb_uv, inp_stride, pred, point(8, 8));\n                if (sad_uv >= g_skip_thr_inter[enc->rc.qp])\n                {\n                    break;\n                }\n            }\n            if (uv == 3)\n            {\n                return;\n            }\n        }\n\n        if (enc->run_param.encode_speed < 1) // enable 8x16, 16x8 and 8x8 partitions\n        {\n            mb_inter_partition(sad4, prefered_modes);\n        }\n\n        //sad_skip += me_mv_cost(mv_skip, mv_pred_16x16, enc->rc.qp);\n\n        // Too big skip SAD. 
Use skip predictor as a diamond start point candidate\n        mv_best = mv_cand[ncand++] = mv_round_qpel(mv_skip);\n        if (!((mv_skip.s.x | mv_skip.s.y) & 3))\n        {\n            sad_best = sad_skip;//+ me_mv_cost(mv_best, mv_pred_16x16, enc->rc.qp)\n            mv_cand_cost_best = me_mv_cand_cost(mv_skip, mv_pred_16x16, enc->rc.qp);\n            //mv_cand_cost_best = me_mv_cand_cost(mv_skip, point(0,0), enc->rc.qp);\n            j = 1;\n        }\n    }\n\n    mv_cand[ncand++] = mv_pred_16x16;\n    ncand += me_mv_medianpredictor_get_cand(enc, mv_cand + ncand);\n\n    if (enc->mb.x <= 0)\n    {\n        mv_cand[ncand++] = point(8*4, 0);\n    }\n    if (enc->mb.y <= 0)\n    {\n        mv_cand[ncand++] = point(0, 8*4);\n    }\n\n    mv_cand[ncand++] = enc->mv_clusters[0];\n    mv_cand[ncand++] = enc->mv_clusters[1];\n\n    assert(ncand <= MAX_MV_CAND);\n    ncand = me_mv_refine_cand(mv_cand, ncand);\n\n    for (/*j = 0*/; j < ncand; j++)\n    {\n        point_t mv = mb_abs_mv(enc, mv_cand[j]);\n        if (mv_in_rect(mv, &enc->frame.mv_limit))\n        {\n            int mv_cand_cost = me_mv_cand_cost(mv_cand[j], mv_pred_16x16, enc->rc.qp);\n\n            int *sad4 = cand_sad4[j];\n            off = ((mv.s.y + 0) >> 2)*ref_stride + ((mv.s.x + 0) >> 2);\n            sad = h264e_sad_mb_unlaign_8x8(ref_yuv + off, ref_stride, enc->scratch->mb_pix_inp, sad4);\n\n            if (enc->run_param.encode_speed < 1) // enable 8x16, 16x8 and 8x8 partitions\n            {\n                mb_inter_partition(sad4, prefered_modes);\n            }\n\n            if (sad + mv_cand_cost < sad_best + mv_cand_cost_best)\n            //if (sad < sad_best)\n            {\n                mv_cand_cost_best = mv_cand_cost;\n                sad_best = sad;\n                mv_best = mv_cand[j];\n            }\n        }\n    }\n\n    sad_best += me_mv_cost(mv_best, mv_pred_16x16, enc->rc.qp);\n\n    {\n        int mb_type;\n        point_t wh, part, mvpred_ctx[12], part_mv[4][16], 
part_mvd[4][16];\n        pix_t *store = enc->scratch->mb_pix_store;\n        pix_t *pred_best = store, *pred_test = store + 256;\n\n#define MAX8X8_MODES 4\n        me_mv_medianpredictor_save_ctx(enc, mvpred_ctx);\n        enc->mb.cost = 0xffffff;\n        for (mb_type = 0; mb_type < MAX8X8_MODES; mb_type++)\n        {\n            static const int nbits[4] = { 1, 4, 4, 12 };\n            int imv = 0;\n            int part_sad = MUL_LAMBDA(nbits[mb_type], g_lambda_q4[enc->rc.qp]);\n\n            if (!prefered_modes[mb_type]) continue;\n\n            wh.s.x = (mb_type & 2) ? 8 : 16;\n            wh.s.y = (mb_type & 1) ? 8 : 16;\n            part = point(0, 0);\n            for (;;)\n            {\n                rectangle_t range;\n                pix_t *diamond_out;\n                point_t mv, mv_pred, mvabs = mb_abs_mv(enc, mv_best);\n                me_mv_set_range(&mvabs, &range, &enc->frame.mv_limit, enc->mb.y*16*4 + part.s.y*4);\n\n                mv_pred = me_mv_medianpredictor_get(enc, part, wh);\n\n                if (mb_type)\n                {\n                    mvabs = mv_round_qpel(mb_abs_mv(enc, mv_pred));\n                    me_mv_set_range(&mvabs, &range, &enc->frame.mv_limit, enc->mb.y*16*4 + part.s.y*4);\n                    off = ((mvabs.s.y >> 2) + part.s.y)*ref_stride + ((mvabs.s.x >> 2) + part.s.x);\n                    sad_best = h264e_sad_mb_unlaign_wh(ref_yuv + off, ref_stride, enc->scratch->mb_pix_inp + part.s.y*16 + part.s.x, wh)\n                        + me_mv_cost(mvabs,\n                        //mv_pred,\n                        mb_abs_mv(enc, mv_pred),\n                        enc->rc.qp);\n                }\n\n                part_sad += me_search_diamond(enc, ref_yuv + part.s.y*ref_stride + part.s.x,\n                    enc->scratch->mb_pix_inp + part.s.y*16 + part.s.x, ref_stride, &mvabs, &range, enc->rc.qp,\n                    mb_abs_mv(enc, mv_pred), sad_best, wh,\n                    store, &diamond_out, mb_type ? 
(mb_type == 2 ? 8 : 128) : 256);\n\n                if (!mb_type)\n                {\n                    pred_test = diamond_out;\n                    if (pred_test < store + 2*256)\n                    {\n                        pred_best = (pred_test == store ? store + 256 : store);\n                        store += 2*256;\n                    } else\n                    {\n                        pred_best = (pred_test == (store + 512) ? store + 512 + 256 : store + 512);\n                    }\n                } else\n                {\n                    h264e_copy_8x8(pred_test + part.s.y*16 + part.s.x, 16, diamond_out);\n                    if (mb_type < 3)\n                    {\n                        int part_off = (wh.s.x >> 4)*8 + (wh.s.y >> 4)*8*16;\n                        h264e_copy_8x8(pred_test + part_off + part.s.y*16 + part.s.x, 16, diamond_out + part_off);\n                    }\n                }\n\n                mv = mv_sub(mvabs, point(enc->mb.x*16*4, enc->mb.y*16*4));\n\n                part_mvd[mb_type][imv] = mv_sub(mv, mv_pred);\n                part_mv[mb_type][imv++] = mv;\n\n                me_mv_medianpredictor_put(enc, part.s.x >> 2, part.s.y >> 2, wh.s.x >> 2, wh.s.y >> 2, mv);\n\n                part.s.x = (part.s.x + wh.s.x) & 15;\n                if (!part.s.x)\n                {\n                    part.s.y = (part.s.y + wh.s.y) & 15;\n                    if (!part.s.y) break;\n                }\n            }\n\n            me_mv_medianpredictor_restore_ctx(enc, mvpred_ctx);\n\n            if (part_sad < enc->mb.cost)\n            {\n                SWAP(pix_t*, pred_best, pred_test);\n                enc->mb.cost = part_sad;\n                enc->mb.type = mb_type;\n            }\n        }\n        enc->pbest = pred_best;\n        enc->ptest = pred_test;\n        memcpy(enc->mb.mv,  part_mv [enc->mb.type], 16*sizeof(point_t));\n        memcpy(enc->mb.mvd, part_mvd[enc->mb.type], 16*sizeof(point_t));\n\n        if 
(enc->mb.cost > sad_skip)\n        {\n            enc->mb.type = 0;\n            enc->mb.cost = sad_skip + me_mv_cand_cost(mv_skip, mv_pred_16x16, enc->rc.qp);\n            enc->mb.mv [0] = mv_skip;\n            enc->mb.mvd[0] = mv_sub(mv_skip, mv_pred_16x16);\n\n            assert(mv_in_rect(mv_skip_a, &enc->frame.mv_qpel_limit));\n            interpolate_luma(ref_yuv, ref_stride, mv_skip_a, point(16, 16), enc->pbest);\n            interpolate_chroma(enc, mv_skip_a);\n        }\n    }\n}\n\n/************************************************************************/\n/*      Deblock filter                                                  */\n/************************************************************************/\n#define MB_FLAG_SVC_INTRA 1\n#define MB_FLAG_SLICE_START_DEBLOCK_2 2\n\n/**\n*   Set deblock filter strength\n*/\nstatic void df_strength(deblock_filter_t *df, int mb_type, int mbx, uint8_t *strength, int IntraBLFlag)\n{\n    uint8_t *sv = strength;\n    uint8_t *sh = strength + 16;\n    int flag = df->nzflag;\n    df->df_nzflag[mbx] = (uint8_t)(flag >> 20);\n    /*\n        nzflag represents the macroblock and its neighbors with 24 bit flags:\n        0 1 2 3\n      4 5 6 7 8\n      A B C D E\n      F G H I J\n      K L M N O\n    */\n    (void)IntraBLFlag;\n#if H264E_SVC_API\n    if (IntraBLFlag & MB_FLAG_SVC_INTRA)\n    {\n        int ccloop = 4;\n        do\n        {\n            int cloop = 4;\n            do\n            {\n                int v = 0;\n                if (flag & 3 << 4)\n                {\n                    v = 1;\n                }\n\n                *sv = (uint8_t)v; sv += 4;\n\n                v = 0;\n                if (flag & 33)\n                {\n                    v = 1;\n                }\n\n                *sh++ = (uint8_t)v;\n\n                flag >>= 1;\n\n            } while(--cloop);\n            flag >>= 1;\n            sv -= 15;\n\n        } while(--ccloop);\n    } else\n#endif\n    {\n        if (mb_type < 5)\n  
      {\n            int ccloop = 4;\n            point_t *mv = df->df_mv;\n            do\n            {\n                int cloop = 4;\n                do\n                {\n                    int v = 0;\n                    if (flag & 3 << 4)\n                    {\n                        v = 2;\n                    } else if (mv_differs3(mv[4], mv[5]))\n                    {\n                        v = 1;\n                    }\n                    *sv = (uint8_t)v; sv += 4;\n\n                    v = 0;\n                    if (flag & 33)\n                    {\n                        v = 2;\n                    } else if (mv_differs3(mv[0], mv[5]))\n                    {\n                        v = 1;\n                    }\n                    *sh++ = (uint8_t)v;\n\n                    flag >>= 1;\n                    mv++;\n                } while(--cloop);\n                flag >>= 1;\n                sv -= 15;\n                mv++;\n            } while(--ccloop);\n        } else\n        {\n            // Deblock mode #3 (intra)\n            ((uint32_t*)(sv))[1] = ((uint32_t*)(sv))[2] = ((uint32_t*)(sv))[3] =             // for inner columns\n            ((uint32_t*)(sh))[1] = ((uint32_t*)(sh))[2] = ((uint32_t*)(sh))[3] = 0x03030303; // for inner rows\n        }\n        if ((mb_type >= 5 || df->mb_type[mbx - 1] >= 5)) // speculative read\n        {\n            ((uint32_t*)(strength))[0] = 0x04040404;    // Deblock mode #4 (strong intra) for left column\n        }\n        if ((mb_type >= 5 || df->mb_type[mbx    ] >= 5))\n        {\n            ((uint32_t*)(strength))[4] = 0x04040404;    // Deblock mode #4 (strong intra) for top row\n        }\n    }\n    df->mb_type[mbx] = (int8_t)mb_type;\n}\n\n/**\n*   Run deblock for current macroblock\n*/\nstatic void mb_deblock(deblock_filter_t *df, int mb_type, int qp_this, int mbx, int mby, H264E_io_yuv_t *mbyuv, int IntraBLFlag)\n{\n    int i, cr, qp, qp_left, qp_top;\n    deblock_params_t par;\n    
uint8_t *alpha = par.alpha; //[2*2];\n    uint8_t *beta  = par.beta;  //[2*2];\n    uint32_t *strength32  = par.strength32; //[4*2]; // == uint8_t strength[16*2];\n    uint8_t *strength = (uint8_t *)strength32;\n    uint8_t *tc0 = par.tc0; //[16*2];\n\n    df_strength(df, mb_type, mbx, strength, IntraBLFlag);\n    if (!mbx || (IntraBLFlag & MB_FLAG_SLICE_START_DEBLOCK_2))\n    {\n        strength32[0] = 0;\n    }\n\n    if (!mby)\n    {\n        strength32[4] = 0;\n    }\n\n    qp_top = df->df_qp[mbx];\n    qp_left = df->df_qp[mbx - 1];\n    df->df_qp[mbx] = (uint8_t)qp_this;\n\n    cr = 0;\n    for (;;)\n    {\n        const uint8_t *lut;\n        if (*((uint32_t*)strength))\n        {\n            qp = (qp_left + qp_this + 1) >> 1;\n            lut = g_a_tc0_b[-10 + qp + ALPHA_OFS];\n            alpha[0] = lut[0];\n            beta[0]  = lut[4 + (BETA_OFS - ALPHA_OFS)*5];\n            for (i = 0; i < 4; i++) tc0[i] = lut[strength[i]];\n        }\n        if (*((uint32_t*)(strength + 16)))\n        {\n            qp = (qp_top + qp_this + 1) >> 1;\n            lut = g_a_tc0_b[-10 + qp + ALPHA_OFS];\n\n            alpha[2]  = lut[0];\n            beta[2] = lut[4 + (BETA_OFS - ALPHA_OFS)*5];\n            for (i = 0; i < 4; i++) tc0[16 + i] = lut[strength[16 + i]];\n        }\n\n        lut = g_a_tc0_b[-10 + qp_this + ALPHA_OFS];\n        alpha[3] = alpha[1] = lut[0];\n        beta[3] = beta[1] = lut[4 + (BETA_OFS - ALPHA_OFS)*5];\n        for (i = 4; i < 16; i++)\n        {\n            tc0[i] = lut[strength[i]];\n            tc0[16 + i] = lut[strength[16 + i]];\n        }\n        if (cr)\n        {\n            int *t = (int *)tc0;\n            t[1] = t[2];         // TODO: need only for OMX\n            t[5] = t[6];\n            i = 2;\n            do\n            {\n                h264e_deblock_chroma(mbyuv->yuv[i], mbyuv->stride[i], &par);\n            } while (--i);\n            break;\n        }\n        h264e_deblock_luma(mbyuv->yuv[0], mbyuv->stride[0], 
&par);\n\n        qp_this = qpy2qpc[qp_this + DQP_CHROMA];\n        qp_left = qpy2qpc[qp_left + DQP_CHROMA];\n        qp_top = qpy2qpc[qp_top + DQP_CHROMA];\n        cr++;\n    }\n}\n\n/************************************************************************/\n/*      Macroblock encoding                                             */\n/************************************************************************/\n/**\n*   Macroblock encoding\n*/\nstatic void mb_encode(h264e_enc_t *enc, int enc_type)\n{\n    pix_t *top = enc->top_line + 48 + enc->mb.x*32;\n    pix_t *left = enc->top_line;\n    int avail = enc->mb.avail = mb_avail_flag(enc);\n    int base_mode = 0;\n\n    if (enc->frame.cropping_flag && ((enc->mb.x + 1)*16 > enc->param.width || (enc->mb.y + 1)*16 > enc->param.height))\n    {\n        pix_copy_cropped_mb(enc->scratch->mb_pix_inp, 16, mb_input_luma(enc), enc->inp.stride[0],\n             MIN(16, enc->param.width  - enc->mb.x*16),\n             MIN(16, enc->param.height - enc->mb.y*16));\n    } else\n    {\n        // cache input macroblock\n        h264e_copy_16x16(enc->scratch->mb_pix_inp, 16, mb_input_luma(enc), enc->inp.stride[0]);\n    }\n\n    if (!(avail & AVAIL_L)) left = NULL;\n    if (!(avail & AVAIL_T)) top  = NULL;\n\n    enc->pbest = enc->scratch->mb_pix_store;\n    enc->ptest = enc->pbest + 256;\n    enc->mb.type = 0;\n    enc->mb.cost = 0x7FFFFFFF;\n\n    if (enc->slice.type == SLICE_TYPE_P)\n    {\n        inter_choose_mode(enc);\n    }\n#if H264E_SVC_API\n    else if (enc_type > 0 && enc->param.inter_layer_pred_flag)\n    {\n        base_mode = 1;\n        enc->mb.type = 6;\n        h264e_copy_16x16(enc->pbest, 16, (enc->ref.yuv[0] + (enc->mb.x + enc->mb.y*enc->ref.stride[0])*16), enc->ref.stride[0]);\n        h264e_copy_8x8_s(enc->ptest, 16, (enc->ref.yuv[1] + (enc->mb.x + enc->mb.y*enc->ref.stride[1])*8), enc->ref.stride[1]);\n        h264e_copy_8x8_s(enc->ptest + 8, 16, (enc->ref.yuv[2] + (enc->mb.x + enc->mb.y*enc->ref.stride[2])*8), 
enc->ref.stride[2]);\n\n        goto _WRITE_MB;\n    }\n#endif\n\n    if (enc->mb.type >= 0)\n    {\n        intra_choose_16x16(enc, left, top, avail);\n        if (enc->run_param.encode_speed < 2 || enc->slice.type != SLICE_TYPE_P) // enable intra4x4 on P slices\n        {\n            intra_choose_4x4(enc);\n        }\n    }\n\n    if (enc->mb.type < 5)\n    {\n        mv_clusters_update(enc, enc->mb.mv[0]);\n    }\n\n    if (enc->mb.type >= 5)\n    {\n        pix_t *pred = enc->ptest;\n        h264e_intra_predict_chroma(pred, left + 16, top + 16, enc->mb.i16.pred_mode_luma);\n    } else\n    {\n        interpolate_chroma(enc, mb_abs_mv(enc, enc->mb.mv[0]));\n    }\n\n#if H264E_SVC_API\n_WRITE_MB:\n#endif\n    mb_write(enc, enc_type, base_mode);\n\n    if (!enc->speed.disable_deblock)\n    {\n        int mbx = enc->mb.x;\n        int mby = enc->mb.y;\n#if H264E_MAX_THREADS\n        if (enc->param.max_threads > 1)\n        {   // Avoid deblock across slice border\n            if (enc->mb.num < enc->slice.start_mb_num + enc->frame.nmbx)\n                mby = 0;\n            if (enc->mb.num == enc->slice.start_mb_num)\n            {\n                base_mode |= MB_FLAG_SLICE_START_DEBLOCK_2;\n            }\n        }\n#endif\n        mb_deblock(&enc->df, enc->mb.type, enc->rc.prev_qp, mbx, mby, &enc->dec, base_mode);\n    }\n}\n\n\n/************************************************************************/\n/*      Rate-control                                                    */\n/************************************************************************/\n\n/**\n*   @return zero threshold for given rounding offset\n*/\nstatic uint16_t rc_rnd2thr(int round, int q)\n{\n    int b, thr = 0;\n    for (b = 0x8000; b; b >>= 1)\n    {\n        int t = (thr | b)*q;\n        if (t <= 0x10000 - round)  // TODO: error: < !!!!!!!\n        {\n            thr |= b;\n        }\n    }\n    return (uint16_t)thr;\n}\n\n/**\n*   Set quantizer constants (deadzone and rounding) for 
given QP\n*/\nstatic void rc_set_qp(h264e_enc_t *enc, int qp)\n{\n    qp = MIN(qp, enc->run_param.qp_max);\n    qp = MAX(qp, enc->run_param.qp_min);\n    qp = MIN(qp, 51);   // avoid VC2010 static analyzer warning\n\n    if (enc->rc.qp != qp)\n    {\n        static const int16_t g_quant_coeff[6*6] =\n        {\n            //    0         2         1\n            13107, 10, 8066, 13, 5243, 16,\n            11916, 11, 7490, 14, 4660, 18,\n            10082, 13, 6554, 16, 4194, 20,\n             9362, 14, 5825, 18, 3647, 23,\n             8192, 16, 5243, 20, 3355, 25,\n             7282, 18, 4559, 23, 2893, 29\n            // 0 2 0 2\n            // 2 1 2 1\n            // 0 2 0 2\n            // 2 1 2 1\n        };\n\n        int cloop = 2;\n        enc->rc.qp = qp;\n\n        do\n        {\n            uint16_t *qdat0 = enc->rc.qdat[2 - cloop];\n            uint16_t *qdat  = enc->rc.qdat[2 - cloop];\n            int qp_div6 = qp*86 >> 9;\n            int qp_mod6 = qp - qp_div6*6;\n            const int16_t *quant_coeff = g_quant_coeff + qp_mod6*6; // TODO: need calculate qp%6*6\n            int i = 3;\n\n            // Quant/dequant multiplier\n            do\n            {\n                *qdat++ = *quant_coeff++ << 1 >> qp_div6;\n                *qdat++ = *quant_coeff++ << qp_div6;\n            } while(--i);\n\n            // quantizer deadzone for P & chroma\n            *qdat++ = enc->slice.type == SLICE_TYPE_P ? 
g_rnd_inter[qp] : g_deadzonei[qp];\n            // quantizer deadzone for I\n            *qdat++ = g_deadzonei[qp];\n\n            *qdat++ = g_thr_inter[qp]  - 0x7fff;\n            *qdat++ = g_thr_inter2[qp] - 0x7fff;\n\n            qdat[0] = qdat[2] = rc_rnd2thr(g_thr_inter[qp] - 0x7fff, qdat0[0]);\n            qdat[1] = qdat[3] =\n            qdat[4] = qdat[6] = rc_rnd2thr(g_thr_inter[qp] - 0x7fff, qdat0[2]);\n            qdat[5] = qdat[7] = rc_rnd2thr(g_thr_inter[qp] - 0x7fff, qdat0[4]);\n            qdat += 8;\n            qdat[0] = qdat[2] = rc_rnd2thr(g_thr_inter2[qp] - 0x7fff, qdat0[0]);\n            qdat[1] = qdat[3] =\n            qdat[4] = qdat[6] = rc_rnd2thr(g_thr_inter2[qp] - 0x7fff, qdat0[2]);\n            qdat[5] = qdat[7] = rc_rnd2thr(g_thr_inter2[qp] - 0x7fff, qdat0[4]);\n            qdat += 8;\n            qdat[0] = qdat[2] = qdat0[0];\n            qdat[1] = qdat[3] =\n            qdat[4] = qdat[6] = qdat0[2];\n            qdat[5] = qdat[7] = qdat0[4];\n            qdat += 8;\n            qdat[0] = qdat[2] = qdat0[1];\n            qdat[1] = qdat[3] =\n            qdat[4] = qdat[6] = qdat0[3];\n            qdat[5] = qdat[7] = qdat0[5];\n\n            qp = qpy2qpc[qp + DQP_CHROMA];\n        } while (--cloop);\n    }\n}\n\n/**\n*   Estimate frame bit budget and QP\n*\n*   How is the bit budget allocated?\n*   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n*   1. Estimate target sizes of I and P macroblocks, assuming the same quality\n*   2. Estimate I peak size\n*   3. 
Estimate desired stationary VBV level\n*\n*/\nstatic int rc_frame_start(h264e_enc_t *enc, int is_intra, int is_refers_to_long_term)\n{\n    unsigned np = MIN(enc->param.gop - 1u, 63u);\n    int nmb = enc->frame.nmb;\n\n    int qp = -1, add_bits, bit_budget = enc->run_param.desired_frame_bytes*8;\n    int nominal_p, gop_bits, stationary_vbv_level;\n    uint32_t peak_factor_q16;\n\n    // Estimate QP\n    do\n    {\n        qp++;\n        gop_bits = bits_per_mb[0][qp]*np + bits_per_mb[1][qp];\n    } while (gop_bits*nmb > (int)(np + 1)*enc->run_param.desired_frame_bytes*8 && qp < 40);\n\n    /*\n    *   desired*gop = i + p*(gop-1);   i/p = alpha;\n    *   p = desired * gop / (gop-1+alpha) and i = p*alpha or i = (desired-p)*gop + p;\n    */\n    peak_factor_q16 = div_q16(bits_per_mb[1][qp] << 16, bits_per_mb[0][qp] << 16);\n    if (np)\n    {\n        uint32_t ratio_q16 = div_q16((np + 1) << 16, (np << 16) + peak_factor_q16);\n        nominal_p = mul32x32shr16(enc->run_param.desired_frame_bytes*8, ratio_q16);\n    } else\n    {\n        nominal_p = 0;\n    }\n\n    stationary_vbv_level = MIN(enc->param.vbv_size_bytes*8 >> 4, enc->run_param.desired_frame_bytes*8);\n\n    if (is_intra)\n    {\n        int nominal_i = mul32x32shr16(nominal_p, peak_factor_q16);\n        add_bits = nominal_i - bit_budget;\n    }\n#if H264E_RATE_CONTROL_GOLDEN_FRAMES\n    else if (is_refers_to_long_term)\n    {\n        int d_qp = enc->rc.max_dqp - enc->rc.dqp_smooth;\n        unsigned peak_factor_golden_q16;\n        int nominal_golden;\n        d_qp = MAX(d_qp, 2);\n        d_qp = MIN(d_qp, 12);\n        d_qp = d_qp * 4 * 85 >> 8;//* 16 / 12;\n\n        peak_factor_golden_q16 = (peak_factor_q16 - (1 << 16)) * d_qp >> 4;\n        nominal_golden = nominal_p + mul32x32shr16(nominal_p, peak_factor_golden_q16);\n        add_bits = nominal_golden - bit_budget;\n    }\n#endif\n    else\n    {\n        add_bits = nominal_p - bit_budget;\n\n        // drift to stationary level\n        if 
(enc->param.vbv_size_bytes)\n        {\n            add_bits += (enc->rc.vbv_target_level - enc->rc.vbv_bits) >> 4;\n        }\n    }\n    if (enc->param.vbv_size_bytes)\n    {\n        add_bits = MIN(add_bits, (enc->param.vbv_size_bytes*8*7 >> 3) - enc->rc.vbv_bits);\n    }\n\n    bit_budget += add_bits;\n    bit_budget = MIN(bit_budget, enc->run_param.desired_frame_bytes*8*16);\n    bit_budget = MAX(bit_budget, enc->run_param.desired_frame_bytes*8 >> 2);\n\n#if H264E_RATE_CONTROL_GOLDEN_FRAMES\n    if (is_intra || is_refers_to_long_term)\n#else\n    if (is_intra)\n#endif\n    {\n        // Increase VBV target level due to I-frame load: this avoids QP adaptation after I-frame\n        enc->rc.vbv_target_level = enc->rc.vbv_bits + bit_budget - enc->run_param.desired_frame_bytes*8;\n    }\n\n    // Slow drift of VBV target to stationary level...\n    enc->rc.vbv_target_level -= enc->run_param.desired_frame_bytes*8 - nominal_p;\n\n    // ...until stationary level reached\n    enc->rc.vbv_target_level = MAX(enc->rc.vbv_target_level, stationary_vbv_level);\n\n    enc->rc.bit_budget = bit_budget;\n\n    if (enc->param.fine_rate_control_flag && enc->frame.num)\n    {\n        qp = enc->rc.qp_smooth >> 8;\n    } else\n    {\n\n#if H264E_RATE_CONTROL_GOLDEN_FRAMES\n        if (is_refers_to_long_term)\n        {\n            for (qp = 0; qp < 42 - 1; qp++)\n            {\n                if (((bits_per_mb[0][qp] + bits_per_mb[1][qp]) >> 1)*nmb < bit_budget)\n                    break;\n            }\n        } else\n#endif\n        {\n            const uint16_t *bits = bits_per_mb[!!is_intra];\n            for (qp = 0; qp < 42 - 1; qp++)\n            {\n                if (bits[qp]*nmb < bit_budget)\n                {\n                    break;\n                }\n            }\n        }\n        qp += MIN_QP;\n\n#if H264E_RATE_CONTROL_GOLDEN_FRAMES\n        if 
(is_refers_to_long_term)\n        {\n            int dqp = MAX(enc->rc.max_dqp, enc->rc.dqp_smooth);\n            dqp = MIN(dqp, enc->rc.dqp_smooth + 6);\n            qp += dqp;\n            qp = MAX(enc->rc.prev_qp, qp);\n        } else\n#endif\n        {\n            qp += enc->rc.dqp_smooth;\n        }\n\n        // If reference frame has high qp, motion compensation is less effective, so qp should be increased\n        if (enc->rc.prev_qp > qp + 1)\n        {\n            qp = (enc->rc.prev_qp + qp + 1)/2;\n        }\n    }\n\n    enc->rc.qp = 0; // force\n    rc_set_qp(enc, qp);\n    qp = enc->rc.qp;\n\n    enc->rc.qp_smooth = qp << 8;\n    enc->rc.prev_qp = qp;\n\n    return (enc->rc.vbv_bits > enc->param.vbv_size_bytes*8);\n}\n\n/**\n*   Update rate-control state after frame encode\n*/\nstatic void rc_frame_end(h264e_enc_t *enc, int intra_flag, int skip_flag, int is_refers_to_long_term)\n{\n    // 1. Update adaptive QP offset\n    if (!skip_flag /*&& !is_refers_to_long_term*/)\n    {\n        int qp, nmb = enc->frame.nmb;\n        // a posteriori QP estimation\n        for (qp = 0; qp != 41 && bits_per_mb[intra_flag][qp]*nmb > (int)enc->out_pos*8 - 32; qp++) {/*no action*/}\n\n        qp += MIN_QP;\n\n        if (!is_refers_to_long_term)\n        {\n            if ((enc->rc.qp_smooth >> 8) - enc->rc.dqp_smooth < qp - 1)\n            {\n                enc->rc.dqp_smooth--;\n            } else if ((enc->rc.qp_smooth >> 8) - enc->rc.dqp_smooth > qp + 1)\n            {\n                enc->rc.dqp_smooth++;\n            }\n        }\n        if (intra_flag || is_refers_to_long_term)\n        {\n            enc->rc.max_dqp = enc->rc.dqp_smooth;\n        } else\n        {\n            enc->rc.max_dqp = MAX(enc->rc.max_dqp, (enc->rc.qp_smooth >> 8) - qp);\n        }\n    }\n\n    // 2. Update VBV model state\n    enc->rc.vbv_bits += enc->out_pos*8 - enc->run_param.desired_frame_bytes*8;\n\n    // 3. 
If the VBV model is used, handle overflow/underflow\n    if (enc->param.vbv_size_bytes)\n    {\n        if (enc->rc.vbv_bits < 0)       // VBV underflow\n        {\n            if (enc->param.vbv_underflow_stuffing_flag)\n            {\n                // put stuffing ('filler data')\n                nal_start(enc, 12); // filler_data_rbsp\n                do\n                {\n                    U(8, 0xFF);\n                    enc->rc.vbv_bits += 8;\n                } while (enc->rc.vbv_bits < 0);\n                nal_end(enc);\n            } else\n            {\n                // ignore underflow\n                enc->rc.vbv_bits = 0;\n            }\n        }\n        if (enc->rc.vbv_bits > enc->param.vbv_size_bytes*8) // VBV overflow\n        {\n            if (!enc->param.vbv_overflow_empty_frame_flag)\n            {\n                // ignore overflow\n                enc->rc.vbv_bits = enc->param.vbv_size_bytes*8;\n            }\n        }\n    } else\n    {\n        enc->rc.vbv_bits = 0;\n    }\n}\n\n/**\n*   Update rate-control state after macroblock encode, set QP for next MB\n*/\nstatic void rc_mb_end(h264e_enc_t *enc)\n{\n    // used / ncoded = budget/total\n    int bits_coded = h264e_bs_get_pos_bits(enc->bs) + enc->out_pos*8 + 1;\n    int mb_coded = enc->mb.num; // after increment: 1, 2....\n    int err = bits_coded*enc->frame.nmb - enc->rc.bit_budget*mb_coded;\n    int d_err = err - enc->rc.prev_err;\n    int qp = enc->rc.qp;\n    assert(enc->mb.num);\n    enc->rc.prev_err = err;\n\n    if (err > 0 && d_err > 0)\n    {   // Increasing risk of overflow\n        if (enc->rc.stable_count < 3)\n        {\n            qp++;                       // State not stable: increase QP\n        }\n        enc->rc.stable_count = 0;       // Set state to \"not stable\"\n    } else if (err < 0 && d_err < 0)\n    {   // Increasing risk of underflow\n        if (enc->rc.stable_count < 3)\n        {\n            qp--;\n        }\n        enc->rc.stable_count = 0;\n    } 
else\n    {   // Stable state\n        enc->rc.stable_count++;\n    }\n    enc->rc.qp_smooth += qp - (enc->rc.qp_smooth >> 8);\n    qp = MIN(qp, enc->rc.prev_qp + 3);\n    qp = MAX(qp, enc->rc.prev_qp - 3);\n    rc_set_qp(enc, qp);\n}\n\n/************************************************************************/\n/*      Top-level API                                                   */\n/************************************************************************/\n\n#define ALIGN_128BIT(p) (void *)((uintptr_t)(((char*)(p)) + 15) & ~(uintptr_t)15)\n#define ALLOC(ptr, size) p = ALIGN_128BIT(p); if (enc) ptr = (void *)p; p += size;\n\n/**\n*   Internal allocator for persistent RAM\n*/\nstatic int enc_alloc(h264e_enc_t *enc, const H264E_create_param_t *par, unsigned char *p, int inp_buf_flag)\n{\n    unsigned char *p0 = p;\n    int nmbx = (par->width  + 15) >> 4;\n    int nmby = (par->height + 15) >> 4;\n    int nref_frames = 1 + par->max_long_term_reference_frames + par->const_input_flag;\n#if H264E_ENABLE_DENOISE\n    nref_frames += !!par->temporal_denoise_flag;\n#endif\n    ALLOC(enc->ref.yuv[0], ((nmbx + 2) * (nmby + 2) * 384) * nref_frames);\n    (void)inp_buf_flag;\n#if H264E_SVC_API\n    if (inp_buf_flag)\n    {\n        ALLOC(enc->inp.yuv[0], ((nmbx)*(nmby)*384)); /* input buffer for base layer */\n    }\n#endif\n    return (int)((p - p0) + 15) & ~15u;\n}\n\n/**\n*   Internal allocator for scratch RAM\n*/\nstatic int enc_alloc_scratch(h264e_enc_t *enc, const H264E_create_param_t *par, unsigned char *p)\n{\n    unsigned char *p0 = p;\n    int nmbx = (par->width  + 15) >> 4;\n    int nmby = (par->height + 15) >> 4;\n    ALLOC(enc->scratch, sizeof(scratch_t));\n    ALLOC(enc->out, nmbx * nmby * (384 + 2 + 10) * 3/2);\n\n    ALLOC(enc->nnz, nmbx*8 + 8);\n    ALLOC(enc->mv_pred, (nmbx*4 + 8)*sizeof(point_t));\n    ALLOC(enc->i4x4mode, nmbx*4 + 4);\n    ALLOC(enc->df.df_qp, nmbx);\n    ALLOC(enc->df.mb_type, nmbx);\n    ALLOC(enc->df.df_nzflag, nmbx);\n    
ALLOC(enc->top_line, nmbx*32 + 32 + 16);\n    return (int)(p - p0);\n}\n\n/**\n*   Setup H264E_io_yuv_t structures\n*/\nstatic pix_t *io_yuv_set_pointers(pix_t *base, H264E_io_yuv_t *frm, int w, int h)\n{\n    int s = w + (16 + 16);    // guards\n    int i, guard = 16;\n    for (i = 0; i < 3; i++)\n    {\n        frm->stride[i] = s;\n        frm->yuv[i] = base + (s + 1)*guard;\n        base += s*(h + 2*guard);\n        if (!i) guard >>= 1, s >>= 1, h >>= 1;\n    }\n    return base;\n}\n\n/**\n*   Verify encoder creation parameters. Return error code, or 0 if parameters are valid\n*/\nstatic int enc_check_create_params(const H264E_create_param_t *par)\n{\n    if (!par)\n    {\n        return H264E_STATUS_BAD_ARGUMENT;   // NULL argument\n    }\n    if ((int)(par->vbv_size_bytes | par->gop) < 0)\n    {\n        return H264E_STATUS_BAD_PARAMETER;  // negative GOP or VBV size\n    }\n    if (par->width <= 0 || par->height <= 0)\n    {\n        return H264E_STATUS_BAD_PARAMETER;  // non-positive frame size\n    }\n    if ((unsigned)(par->const_input_flag | par->fine_rate_control_flag |\n        par->vbv_overflow_empty_frame_flag | par->vbv_underflow_stuffing_flag) > 1)\n    {\n        return H264E_STATUS_BAD_PARAMETER;  // Any flag is not 0 or 1\n    }\n    if ((unsigned)par->max_long_term_reference_frames > MAX_LONG_TERM_FRAMES)\n    {\n        return H264E_STATUS_BAD_PARAMETER;  // Too many long-term reference frames requested\n    }\n    if ((par->width | par->height) & 1)\n    {\n        return H264E_STATUS_SIZE_NOT_MULTIPLE_2; // frame size must be multiple of 2\n    }\n    if (((par->width | par->height) & 15) && !par->const_input_flag)\n    {\n        // if input buffer reused as scratch (par->const_input_flag == 0)\n        // frame size must be multiple of 16\n        return H264E_STATUS_SIZE_NOT_MULTIPLE_16;\n    }\n    return H264E_STATUS_SUCCESS;\n}\n\nstatic int H264E_sizeof_one(const H264E_create_param_t *par, int *sizeof_persist, int *sizeof_scratch, int 
inp_buf_flag)\n{\n    int error = enc_check_create_params(par);\n    if (!sizeof_persist || !sizeof_scratch)\n    {\n        error = H264E_STATUS_BAD_ARGUMENT;\n    }\n    if (error)\n    {\n        return error;\n    }\n\n    *sizeof_persist = enc_alloc(NULL, par, (void*)(uintptr_t)1, inp_buf_flag) + sizeof(h264e_enc_t);\n#if H264E_MAX_THREADS > 1\n    *sizeof_scratch = enc_alloc_scratch(NULL, par, (void*)(uintptr_t)1) * (par->max_threads + 1);\n#else\n    *sizeof_scratch = enc_alloc_scratch(NULL, par, (void*)(uintptr_t)1);\n#endif\n    return error;\n}\n\nstatic int H264E_init_one(h264e_enc_t *enc, const H264E_create_param_t *opt, int inp_buf_flag)\n{\n    pix_t *base;\n#if H264E_CONFIGS_COUNT > 1\n    init_vft(opt->enableNEON);\n#endif\n    memset(enc, 0, sizeof(*enc));\n\n    enc->frame.nmbx = (opt->width  + 15) >> 4;\n    enc->frame.nmby = (opt->height + 15) >> 4;\n    enc->frame.nmb = enc->frame.nmbx*enc->frame.nmby;\n    enc->frame.w = enc->frame.nmbx*16;\n    enc->frame.h = enc->frame.nmby*16;\n    enc->frame.mv_limit.tl = point(-MV_GUARD*4, -MV_GUARD*4);\n    enc->frame.mv_qpel_limit.tl = mv_add(enc->frame.mv_limit.tl, point(4*4, 4*4));\n    enc->frame.mv_limit.br = point((enc->frame.nmbx*16 - (16 - MV_GUARD))*4, (enc->frame.nmby*16 - (16 - MV_GUARD))*4);\n    enc->frame.mv_qpel_limit.br = mv_add(enc->frame.mv_limit.br, point(-4*4, -4*4));\n    enc->frame.cropping_flag = !!((opt->width | opt->height) & 15);\n    enc->param = *opt;\n\n    enc_alloc(enc, opt, (void*)(enc + 1), inp_buf_flag);\n\n#if H264E_SVC_API\n    if (inp_buf_flag)\n    {\n        enc->inp.yuv[1] = enc->inp.yuv[0] + enc->frame.w*enc->frame.h;\n        enc->inp.yuv[2] = enc->inp.yuv[1] + enc->frame.w*enc->frame.h/4;\n        enc->inp.stride[0] = enc->frame.w;\n        enc->inp.stride[1] = enc->frame.w/2;\n        enc->inp.stride[2] = enc->frame.w/2;\n        enc->dec = enc->inp;\n    }\n#endif\n\n    base = io_yuv_set_pointers(enc->ref.yuv[0], &enc->ref, enc->frame.nmbx*16, 
enc->frame.nmby*16);\n#if H264E_ENABLE_DENOISE\n    if (enc->param.temporal_denoise_flag)\n    {\n        pix_t *p = base;\n        base = io_yuv_set_pointers(base, &enc->denoise, enc->frame.nmbx*16, enc->frame.nmby*16);\n        while (p < base) *p++ = 0;\n    }\n#endif\n    if (enc->param.const_input_flag)\n    {\n        base = io_yuv_set_pointers(base, &enc->dec, enc->frame.nmbx*16, enc->frame.nmby*16);\n    }\n    if (enc->param.max_long_term_reference_frames)\n    {\n        H264E_io_yuv_t t;\n        int i;\n        for (i = 0; i < enc->param.max_long_term_reference_frames; i++)\n        {\n            base = io_yuv_set_pointers(base, &t, enc->frame.nmbx*16, enc->frame.nmby*16);\n            enc->lt_yuv[i][0] = t.yuv[0];\n            enc->lt_yuv[i][1] = t.yuv[1];\n            enc->lt_yuv[i][2] = t.yuv[2];\n        }\n    }\n    return H264E_STATUS_SUCCESS;\n}\n\n/**\n*   Encoder initialization\n*   See header file for details.\n*/\nint H264E_init(h264e_enc_t *enc, const H264E_create_param_t *opt)\n{\n    h264e_enc_t *enc_curr = enc;\n    int i, ret;\n    (void)i;\n\n    ret = H264E_init_one(enc_curr, opt, 0);\n\n#if H264E_SVC_API\n    for (i = opt->num_layers; i > 1; i--)\n    {\n        H264E_create_param_t opt_next = enc_curr->param;\n        int sizeof_persist = 0, sizeof_scratch = 0;\n\n        opt_next.const_input_flag = 0;\n        opt_next.temporal_denoise_flag = 0;\n        opt_next.width =  opt_next.width >> 1;\n        opt_next.width += opt_next.width & 1;\n        opt_next.height = opt_next.height >> 1;\n        opt_next.height+= opt_next.height & 1;\n\n        opt_next.vbv_size_bytes <<= 2;\n\n        H264E_sizeof_one(&enc_curr->param, &sizeof_persist, &sizeof_scratch, 1);\n        enc_curr = enc_curr->enc_next = (char *)enc_curr + sizeof_persist;\n\n        ret = H264E_init_one(enc_curr, &opt_next, 1);\n        if (ret)\n            break;\n    }\n#endif\n    return ret;\n}\n\nstatic void encode_slice(h264e_enc_t *enc, int frame_type, int 
long_term_idx_use, int long_term_idx_update, int pps_id, int enc_type)\n{\n    int i, k;\n    encode_slice_header(enc, frame_type, long_term_idx_use, long_term_idx_update, pps_id,enc_type);\n    // encode frame\n    do\n    {   // encode row\n        do\n        {   // encode macroblock\n            if (enc->run_param.desired_nalu_bytes &&\n                h264e_bs_get_pos_bits(enc->bs) > enc->run_param.desired_nalu_bytes*8u)\n            {\n                // start new slice\n                nal_end(enc);\n                encode_slice_header(enc, frame_type, long_term_idx_use, long_term_idx_update, pps_id, enc_type);\n            }\n\n            mb_encode(enc, enc_type);\n\n            enc->dec.yuv[0] += 16;\n            enc->dec.yuv[1] += 8;\n            enc->dec.yuv[2] += 8;\n\n            enc->mb.num++;  // before rc_mb_end\n            if (enc->param.fine_rate_control_flag)\n            {\n                rc_mb_end(enc);\n            }\n        } while (++enc->mb.x < enc->frame.nmbx);\n\n        for (i = 0, k = 16; i < 3; i++, k = 8)\n        {\n            enc->dec.yuv[i] += k*(enc->dec.stride[i] - enc->frame.nmbx);\n        }\n\n        // start new row\n        enc->mb.x = 0;\n        *((uint32_t*)(enc->nnz)) = *((uint32_t*)(enc->nnz + 4)) = 0x01010101 * NNZ_NA; // left edge of NNZ predictor\n        enc->i4x4mode[0] = -1;\n\n    } while (++enc->mb.y < enc->frame.nmby);\n\n    if (enc->mb.skip_run)\n    {\n        UE(enc->mb.skip_run);\n    }\n\n    nal_end(enc);\n    for (i = 0, k = 16; i < 3; i++, k = 8)\n    {\n        enc->dec.yuv[i] -= k*enc->dec.stride[i]*enc->frame.nmby;\n    }\n}\n\n#if H264E_MAX_THREADS\ntypedef struct\n{\n    H264E_persist_t *enc;\n    int frame_type, long_term_idx_use, long_term_idx_update, pps_id, enc_type;\n} h264_enc_slice_thread_params_t;\n\nstatic void encode_slice_thread_simple(void *arg)\n{\n    h264_enc_slice_thread_params_t *h = (h264_enc_slice_thread_params_t*)arg;\n    encode_slice(h->enc, h->frame_type, 
h->long_term_idx_use, h->long_term_idx_update, h->pps_id, h->enc_type);\n}\n#endif\n\nstatic int H264E_encode_one(H264E_persist_t *enc, const H264E_run_param_t *opt,\n    int long_term_idx_use, int is_refers_to_long_term, int long_term_idx_update,\n    int frame_type, int pps_id, int enc_type)\n{\n    int i, k;\n    // slice reset\n    enc->slice.type = (long_term_idx_use < 0 ? SLICE_TYPE_I : SLICE_TYPE_P);\n    rc_frame_start(enc, (long_term_idx_use < 0) ? 1 : 0, is_refers_to_long_term);\n\n    enc->mb.x = enc->mb.y = enc->mb.num = 0;\n\n    if (long_term_idx_use > 0)\n    {\n        // Activate long-term reference buffer\n        for (i = 0; i < 3; i++)\n        {\n            SWAP(pix_t*, enc->ref.yuv[i], enc->lt_yuv[long_term_idx_use - 1][i]);\n        }\n    }\n\n    if (enc->param.vbv_size_bytes && !long_term_idx_use && long_term_idx_update <= 0 &&\n        enc->rc.vbv_bits - enc->run_param.desired_frame_bytes*8 > enc->param.vbv_size_bytes*8)\n    {\n        // encode transparent frame on VBV overflow\n        encode_slice_header(enc, frame_type, long_term_idx_use, long_term_idx_update, pps_id,enc_type);\n        enc->mb.skip_run = enc->frame.nmb;\n        UE(enc->mb.skip_run);\n        nal_end(enc);\n        for (i = 0, k = 16; i < 3; i++, k = 8)\n        {\n            pix_copy_pic(enc->dec.yuv[i], enc->dec.stride[i], enc->ref.yuv[i], enc->ref.stride[i], enc->frame.nmbx*k, enc->frame.nmby*k);\n        }\n    } else\n    {\n#if H264E_MAX_THREADS\n        if (enc->param.max_threads > 1)\n        {\n            H264E_persist_t enc_thr[H264E_MAX_THREADS];\n            int sizeof_scratch = enc_alloc_scratch(NULL, &enc->param, (void*)(uintptr_t)1);\n            unsigned char *scratch_base = ((unsigned char*)enc->scratch) + sizeof_scratch;\n            int mby = 0;\n            int ithr;\n            int nmby = enc->frame.nmby;\n            void *savep[3];\n            for (i = 0; i < 3; i++)\n            {\n                savep[i] = enc->dec.yuv[i];\n            
}\n\n            for (ithr = 0; ithr < enc->param.max_threads; ithr++)\n            {\n                enc_thr[ithr] = *enc;\n                enc_thr[ithr].mb.y = mby;\n                enc_thr[ithr].mb.num = mby*enc->frame.nmbx;\n                mby += (enc->frame.nmby - mby) / (enc->param.max_threads - ithr);\n                enc_thr[ithr].frame.nmby = mby;\n                enc_thr[ithr].rc.bit_budget /= enc->param.max_threads;\n                enc_thr[ithr].frame.nmb = enc_thr[ithr].frame.nmbx * enc_thr[ithr].frame.nmby;\n\n                for (i = 0, k = 16; i < 3; i++, k = 8)\n                {\n                    enc_thr[ithr].dec.yuv[i] += k*enc->dec.stride[i]*enc_thr[ithr].mb.y;\n                }\n\n                //enc_alloc_scratch(enc_thr + ithr, &enc->param, (unsigned char*)(scratch_thr[ithr]));\n                scratch_base += enc_alloc_scratch(enc_thr + ithr, &enc->param, scratch_base);\n                enc_thr[ithr].out_pos = 0;\n                h264e_bs_init_bits(enc_thr[ithr].bs, enc_thr[ithr].out);\n            }\n\n            {\n                h264_enc_slice_thread_params_t thread_par[H264E_MAX_THREADS];\n                void *args[H264E_MAX_THREADS];\n                for (i = 0; i < enc->param.max_threads; i++)\n                {\n                    thread_par[i].enc = enc_thr + i;\n                    thread_par[i].frame_type = frame_type;\n                    thread_par[i].long_term_idx_use = long_term_idx_use;\n                    thread_par[i].long_term_idx_update = long_term_idx_update;\n                    thread_par[i].pps_id = pps_id;\n                    thread_par[i].enc_type = enc_type;\n                    args[i] = thread_par + i;\n                }\n                enc->param.run_func_in_thread(enc->param.token, encode_slice_thread_simple, args, enc->param.max_threads);\n            }\n\n            for (i = 0; i < enc->param.max_threads; i++)\n            {\n                memcpy(enc->out + enc->out_pos, enc_thr[i].out, 
enc_thr[i].out_pos);\n                enc->out_pos += enc_thr[i].out_pos;\n            }\n            enc->frame.nmby = nmby;\n            for (i = 0; i < 3; i++)\n            {\n                enc->dec.yuv[i] = savep[i];\n            }\n        } else\n#endif\n        {\n            encode_slice(enc, frame_type, long_term_idx_use, long_term_idx_update, pps_id, enc_type);\n        }\n    }\n\n    // Set flags for AMM state machine for standard compliance\n    if (frame_type == H264E_FRAME_TYPE_KEY)\n    {\n        // Reset long-term reference frames\n        memset(enc->lt_used, 0, sizeof(enc->lt_used));\n        // Assume that this frame is not short-term (has effect only if AMM is used)\n        enc->short_term_used = 0;\n    }\n    if (long_term_idx_update > 0)\n    {\n        enc->lt_used[long_term_idx_update - 1] = 1;\n    } else if (long_term_idx_update == 0)\n    {\n        enc->short_term_used = 1;\n    }\n\n    rc_frame_end(enc, long_term_idx_use == -1, enc->mb.skip_run == enc->frame.nmb, is_refers_to_long_term);\n\n    if (long_term_idx_use > 0)\n    {\n        // deactivate long-term reference\n        for (i = 0; i < 3; i++)\n        {\n            SWAP(pix_t*, enc->ref.yuv[i], enc->lt_yuv[long_term_idx_use - 1][i]);\n        }\n    }\n\n    if (long_term_idx_update != -1)\n    {\n        pix_copy_recon_pic_to_ref(enc);\n\n        if (++enc->frame.num >= enc->param.gop && enc->param.gop && (opt->frame_type == H264E_FRAME_TYPE_DEFAULT))\n        {\n            enc->frame.num = 0;     // trigger to encode IDR on next call\n        }\n\n        if (long_term_idx_update > 0)\n        {\n            for (i = 0; i < 3; i++)\n            {\n                SWAP(pix_t*, enc->ref.yuv[i], enc->lt_yuv[long_term_idx_update - 1][i]);\n            }\n        }\n    }\n\n    return H264E_STATUS_SUCCESS;\n}\n\nstatic int check_parameters_align(const H264E_create_param_t *opt, const H264E_io_yuv_t *in)\n{\n    int i;\n    int min_align = 0;\n#if H264E_ENABLE_NEON || 
H264E_ENABLE_SSE2\n    min_align = 7;\n#endif\n    if (opt->const_input_flag && opt->temporal_denoise_flag)\n    {\n        min_align = 0;\n    }\n    for (i = 0; i < 3; i++)\n    {\n        if (((uintptr_t)in->yuv[i]) & min_align)\n        {\n            return i ? H264E_STATUS_BAD_CHROMA_ALIGN : H264E_STATUS_BAD_LUMA_ALIGN;\n        }\n        if (in->stride[i] & min_align)\n        {\n            return i ? H264E_STATUS_BAD_CHROMA_STRIDE : H264E_STATUS_BAD_LUMA_STRIDE;\n        }\n    }\n    return H264E_STATUS_SUCCESS;\n}\n\n/**\n*   Top-level encode function\n*   See header file for details.\n*/\nint H264E_encode(H264E_persist_t *enc, H264E_scratch_t *scratch, const H264E_run_param_t *opt,\n    H264E_io_yuv_t *in, unsigned char **coded_data, int *sizeof_coded_data)\n{\n    int i;\n    int frame_type;\n    int long_term_idx_use;\n    int long_term_idx_update;\n    int is_refers_to_long_term;\n    int error;\n\n    error = check_parameters_align(&enc->param, in);\n    if (error)\n    {\n        return error;\n    }\n    (void)i;\n    i = enc_alloc_scratch(enc, &enc->param, (unsigned char*)scratch);\n#if H264E_SVC_API\n    {\n        H264E_persist_t *e = enc->enc_next;\n        while (e)\n        {\n            i += enc_alloc_scratch(e, &enc->param, ((unsigned char*)scratch) + i);\n            e = e->enc_next;\n        }\n    }\n#endif\n\n    enc->inp = *in;\n\n#if H264E_ENABLE_DENOISE\n    // 1. 
Run optional denoise filter\n    if (enc->param.temporal_denoise_flag && opt->encode_speed < 2)\n    {\n        int sh = 0;\n        for (i = 0; i < 3; i++)\n        {\n            h264e_denoise_run(in->yuv[i], enc->denoise.yuv[i],  enc->param.width >> sh, enc->param.height >> sh, in->stride[i], enc->denoise.stride[i]);\n            enc->inp.yuv[i] = enc->denoise.yuv[i];\n            enc->inp.stride[i] = enc->denoise.stride[i];\n            sh = 1;\n        }\n    }\n#endif\n\n    enc->out_pos = 0;   // reset output bitbuffer position\n\n    if (opt)\n    {\n        enc->run_param = *opt;  // local copy of run-time parameters\n    }\n    opt = &enc->run_param;      // refer to local copy\n\n    // silently fix invalid QP without warning\n    if (!enc->run_param.qp_max || enc->run_param.qp_max > 51)\n    {\n        enc->run_param.qp_max = 51;\n    }\n    if (!enc->run_param.qp_min || enc->run_param.qp_min < MIN_QP)\n    {\n        enc->run_param.qp_min = MIN_QP;\n    }\n\n    enc->speed.disable_deblock = (opt->encode_speed == 8 || opt->encode_speed == 10);\n\n    if (!enc->param.const_input_flag)\n    {\n        // if input frame can be re-used as a scratch, set reconstructed frame to the input\n        enc->dec = *in;\n    }\n\n    // Set default frame type\n    frame_type = opt->frame_type;\n    if (frame_type == H264E_FRAME_TYPE_DEFAULT)\n    {\n        frame_type = enc->frame.num ? 
H264E_FRAME_TYPE_P : H264E_FRAME_TYPE_KEY;\n    }\n    // Estimate long-term indexes from frame type\n    // index 0 means \"short-term\" reference\n    // index -1 means \"not used\"\n    switch (frame_type)\n    {\n    default:\n    case H264E_FRAME_TYPE_I:        long_term_idx_use = -1; long_term_idx_update = 0; break;\n    case H264E_FRAME_TYPE_KEY:      long_term_idx_use = -1; long_term_idx_update = enc->param.max_long_term_reference_frames > 0; break;\n    case H264E_FRAME_TYPE_GOLDEN:   long_term_idx_use =  1; long_term_idx_update = 1; break;\n    case H264E_FRAME_TYPE_RECOVERY: long_term_idx_use =  1; long_term_idx_update = 0; break;\n    case H264E_FRAME_TYPE_P:        long_term_idx_use =  enc->most_recent_ref_frame_idx; long_term_idx_update =  0; break;\n    case H264E_FRAME_TYPE_DROPPABLE:long_term_idx_use =  enc->most_recent_ref_frame_idx; long_term_idx_update = -1; break;\n    case H264E_FRAME_TYPE_CUSTOM:   long_term_idx_use =  opt->long_term_idx_use; long_term_idx_update = opt->long_term_idx_update;\n        if (!long_term_idx_use)\n        {\n            long_term_idx_use = enc->most_recent_ref_frame_idx;\n        }\n        if (long_term_idx_use < 0)\n        {\n            // hack: redefine frame type, always encode IDR\n            frame_type = H264E_FRAME_TYPE_KEY;\n        }\n        break;\n    }\n\n#if H264E_RATE_CONTROL_GOLDEN_FRAMES\n    is_refers_to_long_term = (long_term_idx_use != enc->most_recent_ref_frame_idx && long_term_idx_use >= 0);\n#else\n    is_refers_to_long_term = 0;\n#endif\n\n    if (long_term_idx_update >= 0)\n    {\n        enc->most_recent_ref_frame_idx = long_term_idx_update;\n    }\n    if (frame_type == H264E_FRAME_TYPE_KEY)\n    {\n        int pic_init_qp = 30;\n        pic_init_qp = MIN(pic_init_qp, enc->run_param.qp_max);\n        pic_init_qp = MAX(pic_init_qp, enc->run_param.qp_min);\n\n        //temp only two layers!\n        enc->sps.pic_init_qp = pic_init_qp;\n        enc->next_idr_pic_id ^= 1;\n        
enc->frame.num = 0;\n\n#if H264E_SVC_API\n        if (enc->param.num_layers > 1)\n        {\n            H264E_persist_t *enc_base = enc->enc_next;\n            enc_base->sps.pic_init_qp = pic_init_qp;\n            enc_base->next_idr_pic_id ^= 1;\n            enc_base->frame.num = 0;\n\n            enc_base->out = enc->out;\n            enc_base->out_pos = 0;\n            encode_sps(enc_base, 66);\n            encode_pps(enc_base, 0);\n\n            enc->out_pos += enc_base->out_pos;\n            encode_sps(enc, 83);\n            encode_pps(enc, 1);\n        } else\n#endif\n        {\n            encode_sps(enc, 66);\n            encode_pps(enc, 0);\n        }\n    } else\n    {\n        if (!enc->sps.pic_init_qp)\n        {\n            return H264E_STATUS_BAD_FRAME_TYPE;\n        }\n        if (long_term_idx_use > enc->param.max_long_term_reference_frames ||\n            long_term_idx_update > enc->param.max_long_term_reference_frames ||\n            long_term_idx_use > MAX_LONG_TERM_FRAMES)\n        {\n            return H264E_STATUS_BAD_FRAME_TYPE;\n        }\n    }\n\n#if H264E_SVC_API\n    if (enc->param.num_layers > 1)\n    {\n        H264E_persist_t *enc_base = enc->enc_next;\n        int sh = 0;\n\n        enc_base->run_param = enc->run_param;\n        enc_base->run_param.desired_frame_bytes = enc->run_param.desired_frame_bytes >> 2;\n\n        for (i = 0; i < 3; i++)\n        {\n            h264e_frame_downsampling(enc_base->inp.yuv[i], enc_base->inp.stride[i], enc_base->frame.h >> sh,\n                in->yuv[i], in->stride[i], enc->param.height >> sh, enc_base->param.width >> sh,\n                enc_base->param.height >> sh, enc->param.width >> sh, enc->param.height >> sh);\n            sh = 1;\n        }\n\n        enc_base->scratch = enc->scratch;\n        enc_base->out = enc->out + enc->out_pos;\n        enc_base->out_pos = 0;\n\n        H264E_encode_one(enc_base, &enc_base->run_param, long_term_idx_use, is_refers_to_long_term, 
long_term_idx_update,\n            frame_type, enc->param.sps_id*4 + 0, 0);\n\n        enc->out_pos += enc_base->out_pos;\n\n        if ((frame_type == H264E_FRAME_TYPE_I || frame_type == H264E_FRAME_TYPE_KEY) && enc->param.inter_layer_pred_flag)\n        {\n            for (i = 0, sh = 0; i < 3; i++, sh = 1)\n            {\n                h264e_intra_upsampling(enc_base->frame.w >> sh, enc_base->frame.h >> sh, enc->frame.w >> sh, enc->frame.h >> sh,\n                    sh, enc_base->dec.yuv[i], enc_base->dec.stride[i], enc->ref.yuv[i], enc->ref.stride[i]);\n            }\n        }\n\n        memset(enc->df.df_nzflag, 0, enc->frame.nmbx);\n        H264E_encode_one(enc, opt, long_term_idx_use, is_refers_to_long_term, long_term_idx_update,\n            frame_type, enc->param.sps_id*4 + 1, 20);\n    } else\n#endif // H264E_SVC_API\n    {\n        H264E_encode_one(enc, opt, long_term_idx_use, is_refers_to_long_term, long_term_idx_update,\n            frame_type, enc->param.sps_id*4 + 0, 0);\n    }\n\n    *sizeof_coded_data = enc->out_pos;\n    *coded_data = enc->out;\n    return H264E_STATUS_SUCCESS;\n}\n\n/**\n*   Return persistent and scratch memory requirements\n*   for given encoding options.\n*   See header file for details.\n*/\nint H264E_sizeof(const H264E_create_param_t *par, int *sizeof_persist, int *sizeof_scratch)\n{\n    int i;\n    int error = H264E_sizeof_one(par, sizeof_persist, sizeof_scratch, 0);\n    (void)i;\n#if H264E_SVC_API\n    for (i = par->num_layers; i > 1; i--)\n    {\n        H264E_create_param_t opt_next = *par;\n        opt_next.const_input_flag = 1;\n        opt_next.temporal_denoise_flag = 0;\n        opt_next.width   = opt_next.width >> 1;\n        opt_next.width  += opt_next.width & 1;\n        opt_next.height  = opt_next.height >> 1;\n        opt_next.height += opt_next.height & 1;\n        *sizeof_persist += enc_alloc(NULL, par, (void*)(uintptr_t)1, 1) + sizeof(h264e_enc_t);\n#if H264E_MAX_THREADS > 1\n        *sizeof_scratch += 
enc_alloc_scratch(NULL, par, (void*)(uintptr_t)1) * (H264E_MAX_THREADS + 1);\n#else\n        *sizeof_scratch += enc_alloc_scratch(NULL, par, (void*)(uintptr_t)1);\n#endif\n    }\n#endif\n    return error;\n}\n\n/**\n*   Set VBV size and fullness\n*   See header file for details.\n*/\nvoid H264E_set_vbv_state(\n    H264E_persist_t *enc,\n    int vbv_size_bytes,     //< New VBV size\n    int vbv_fullness_bytes  //< New VBV fullness, -1 = no change\n)\n{\n    if (enc)\n    {\n        enc->param.vbv_size_bytes = vbv_size_bytes;\n        if (vbv_fullness_bytes >= 0)\n        {\n            enc->rc.vbv_bits = vbv_fullness_bytes*8;\n            enc->rc.vbv_target_level = enc->rc.vbv_bits;\n        }\n    }\n}\n#endif\n"
  },
  {
    "path": "minih264e_test.c",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include <assert.h>\n#include <string.h>\n#include <math.h>\n#define MINIH264_IMPLEMENTATION\n//#define MINIH264_ONLY_SIMD\n#include \"minih264e.h\"\n\n#define DEFAULT_GOP 20\n#define DEFAULT_QP 33\n#define DEFAULT_DENOISE 0\n\n#define ENABLE_TEMPORAL_SCALABILITY 0\n#define MAX_LONG_TERM_FRAMES        8 // used only if ENABLE_TEMPORAL_SCALABILITY==1\n\n#define DEFAULT_MAX_FRAMES  99999\n\nH264E_create_param_t create_param;\nH264E_run_param_t run_param;\nH264E_io_yuv_t yuv;\nuint8_t *buf_in, *buf_save;\nuint8_t *coded_data;\nFILE *fin, *fout;\nint sizeof_coded_data, frame_size, g_w, g_h, _qp;\n\n#ifdef _WIN32\n// only vs2017 have aligned_alloc\n#define ALIGNED_ALLOC(n, size) malloc(size)\n#else\n#define ALIGNED_ALLOC(n, size) aligned_alloc(n, (size + n - 1)/n*n)\n#endif\n\n#if H264E_MAX_THREADS\n#include \"system.h\"\ntypedef struct\n{\n    void *event_start;\n    void *event_done;\n    void (*callback)(void*);\n    void *job;\n    void *thread;\n    int terminated;\n} h264e_thread_t;\n\nstatic THREAD_RET THRAPI minih264_thread_func(void *arg)\n{\n    h264e_thread_t *t = (h264e_thread_t *)arg;\n    thread_name(\"h264\");\n    for (;;)\n    {\n        event_wait(t->event_start, INFINITE);\n        if (t->terminated)\n            break;\n        t->callback(t->job);\n        event_set(t->event_done);\n    }\n    return 0;\n}\n\nvoid *h264e_thread_pool_init(int max_threads)\n{\n    int i;\n    h264e_thread_t *threads = (h264e_thread_t *)calloc(sizeof(h264e_thread_t), max_threads);\n    if (!threads)\n        return 0;\n    for (i = 0; i < max_threads; i++)\n    {\n        h264e_thread_t *t = threads + i;\n        t->event_start = event_create(0, 0);\n        t->event_done  = event_create(0, 0);\n        t->thread = thread_create(minih264_thread_func, t);\n    }\n    return threads;\n}\n\nvoid h264e_thread_pool_close(void *pool, int max_threads)\n{\n    int i;\n    h264e_thread_t *threads = (h264e_thread_t 
*)pool;\n    for (i = 0; i < max_threads; i++)\n    {\n        h264e_thread_t *t = threads + i;\n        t->terminated = 1;\n        event_set(t->event_start);\n        thread_wait(t->thread);\n        thread_close(t->thread);\n        event_destroy(t->event_start);\n        event_destroy(t->event_done);\n    }\n    free(pool);\n}\n\nvoid h264e_thread_pool_run(void *pool, void (*callback)(void*), void *callback_job[], int njobs)\n{\n    h264e_thread_t *threads = (h264e_thread_t*)pool;\n    int i;\n    for (i = 0; i < njobs; i++)\n    {\n        h264e_thread_t *t = threads + i;\n        t->callback = (void (*)(void *))callback;\n        t->job = callback_job[i];\n        event_set(t->event_start);\n    }\n    for (i = 0; i < njobs; i++)\n    {\n        h264e_thread_t *t = threads + i;\n        event_wait(t->event_done, INFINITE);\n    }\n}\n#endif\n\nstruct\n{\n    const char *input_file;\n    const char *output_file;\n    int gen, gop, qp, kbps, max_frames, threads, speed, denoise, stats, psnr;\n} cmdline[1];\n\nstatic int str_equal(const char *pattern, char **p)\n{\n    if (!strncmp(pattern, *p, strlen(pattern)))\n    {\n        *p += strlen(pattern);\n        return 1;\n    } else\n    {\n        return 0;\n    }\n}\n\nstatic int read_cmdline_options(int argc, char *argv[])\n{\n    int i;\n    memset(cmdline, 0, sizeof(*cmdline));\n    cmdline->gop = DEFAULT_GOP;\n    cmdline->qp = DEFAULT_QP;\n    cmdline->max_frames = DEFAULT_MAX_FRAMES;\n    cmdline->kbps = 0;\n    //cmdline->kbps = 2048;\n    cmdline->denoise = DEFAULT_DENOISE;\n    for (i = 1; i < argc; i++)\n    {\n        char *p = argv[i];\n        if (*p == '-')\n        {\n            p++;\n            if (str_equal((\"gen\"), &p))\n            {\n                cmdline->gen = 1;\n            } else if (str_equal((\"gop\"), &p))\n            {\n                cmdline->gop = atoi(p);\n            } else if (str_equal((\"qp\"), &p))\n            {\n                cmdline->qp = atoi(p);\n            } 
else if (str_equal((\"kbps\"), &p))\n            {\n                cmdline->kbps = atoi(p);\n            } else if (str_equal((\"maxframes\"), &p))\n            {\n                cmdline->max_frames = atoi(p);\n            } else if (str_equal((\"threads\"), &p))\n            {\n                cmdline->threads = atoi(p);\n            } else if (str_equal((\"speed\"), &p))\n            {\n                cmdline->speed = atoi(p);\n            } else if (str_equal((\"denoise\"), &p))\n            {\n                cmdline->denoise = 1;\n            } else if (str_equal((\"stats\"), &p))\n            {\n                cmdline->stats = 1;\n            } else if (str_equal((\"psnr\"), &p))\n            {\n                cmdline->psnr = 1;\n            } else\n            {\n                printf(\"ERROR: Unknown option %s\\n\", p - 1);\n                return 0;\n            }\n        } else if (!cmdline->input_file && !cmdline->gen)\n        {\n            cmdline->input_file = p;\n        } else if (!cmdline->output_file)\n        {\n            cmdline->output_file = p;\n        } else\n        {\n            printf(\"ERROR: Unknown option %s\\n\", p);\n            return 0;\n        }\n    }\n    if (!cmdline->input_file && !cmdline->gen)\n    {\n        printf(\"Usage:\\n\"\n               \"    h264e_test [options] <input[frame_size].yuv> <output.264>\\n\"\n               \"Frame size can be: WxH sqcif qvga svga 4vga sxga xga vga qcif 4cif\\n\"\n               \"    4sif cif sif pal ntsc d1 16cif 16sif 720p 4SVGA 4XGA 16VGA 16VGA\\n\"\n               \"Options:\\n\"\n               \"    -gen            - generate input instead of passing <input.yuv>\\n\"\n               \"    -qop<n>         - key frame period >= 0\\n\"\n               \"    -qp<n>          - set QP [10..51]\\n\"\n               \"    -kbps<n>        - set bitrate (fps=30 assumed)\\n\"\n               \"    -maxframes<n>   - encode no more than given number of frames\\n\"\n               
\"    -threads<n>     - use <n> threads for encode\\n\"\n               \"    -speed<n>       - speed [0..10], 0 means best quality\\n\"\n               \"    -denoise        - use temporal noise supression\\n\"\n               \"    -stats          - print frame statistics\\n\"\n               \"    -psnr           - print psnr statistics\\n\");\n        return 0;\n    }\n    return 1;\n}\n\ntypedef struct\n{\n    const char *size_name;\n    int g_w;\n    int h;\n} frame_size_descriptor_t;\n\nstatic const frame_size_descriptor_t g_frame_size_descriptor[] =\n{\n    {\"sqcif\",  128,   96},\n    { \"qvga\",  320,  240},\n    { \"svga\",  800,  600},\n    { \"4vga\", 1280,  960},\n    { \"sxga\", 1280, 1024},\n    {  \"xga\", 1024,  768},\n    {  \"vga\",  640,  480},\n    { \"qcif\",  176,  144},\n    { \"4cif\",  704,  576},\n    { \"4sif\",  704,  480},\n    {  \"cif\",  352,  288},\n    {  \"sif\",  352,  240},\n    {  \"pal\",  720,  576},\n    { \"ntsc\",  720,  480},\n    {   \"d1\",  720,  480},\n    {\"16cif\", 1408, 1152},\n    {\"16sif\", 1408,  960},\n    { \"720p\", 1280,  720},\n    {\"4SVGA\", 1600, 1200},\n    { \"4XGA\", 2048, 1536},\n    {\"16VGA\", 2560, 1920},\n    {\"16VGA\", 2560, 1920},\n    {NULL, 0, 0},\n};\n\n/**\n*   Guess image size specification from ASCII string.\n*   If string have several specs, only last one taken.\n*   Spec may look like \"352x288\" or \"qcif\", \"cif\", etc.\n*/\nstatic int guess_format_from_name(const char *file_name, int *w, int *h)\n{\n    int i = (int)strlen(file_name);\n    int found = 0;\n    while(--i >= 0)\n    {\n        const frame_size_descriptor_t *fmt = g_frame_size_descriptor;\n        const char *p = file_name + i;\n        int prev_found = found;\n        found = 0;\n        if (*p >= '0' && *p <= '9')\n        {\n            char * end;\n            int width = strtoul(p, &end, 10);\n            if (width && (*end == 'x' || *end == 'X') && (end[1] >= '1' && end[1] <= '9'))\n            {\n           
     int height = strtoul(end + 1, &end, 10);\n                if (height)\n                {\n                    *w = width;\n                    *h = height;\n                    found = 1;\n                }\n            }\n        }\n        do\n        {\n            if (!strncmp(file_name + i, fmt->size_name, strlen(fmt->size_name)))\n            {\n                *w = fmt->g_w;\n                *h = fmt->h;\n                found = 1;\n            }\n        } while((++fmt)->size_name);\n\n        if (!found && prev_found)\n        {\n            return prev_found;\n        }\n    }\n    return found;\n}\n\n// PSNR estimation results\ntypedef struct\n{\n    double psnr[4];             // PSNR, db\n    double kpbs_30fps;          // bitrate, kbps, assuming 30 fps\n    double psnr_to_logkbps_ratio;  // cumulative quality metric\n    double psnr_to_kbps_ratio;  // another variant of cumulative quality metric\n} rd_t;\n\n\nstatic struct\n{\n    // Y,U,V,Y+U+V\n    double noise[4];\n    double count[4];\n    double bytes;\n    int frames;\n} g_psnr;\n\nstatic void psnr_init()\n{\n    memset(&g_psnr, 0, sizeof(g_psnr));\n}\n\nstatic void psnr_add(unsigned char *p0, unsigned char *p1, int w, int h, int bytes)\n{\n    int i, k;\n    for (k = 0; k < 3; k++)\n    {\n        double s = 0;\n        for (i = 0; i < w*h; i++)\n        {\n            int d = *p0++ - *p1++;\n            s += d*d;\n        }\n        g_psnr.count[k] += w*h;\n        g_psnr.noise[k] += s;\n        if (!k) w >>= 1, h >>= 1;\n    }\n    g_psnr.count[3] = g_psnr.count[0] + g_psnr.count[1] + g_psnr.count[2];\n    g_psnr.noise[3] = g_psnr.noise[0] + g_psnr.noise[1] + g_psnr.noise[2];\n    g_psnr.frames++;\n    g_psnr.bytes += bytes;\n}\n\nstatic rd_t psnr_get()\n{\n    int i;\n    rd_t rd;\n    double fps = 30;\n    double realkbps = g_psnr.bytes*8./((double)g_psnr.frames/(fps))/1000;\n    double db = 10*log10(255.*255/(g_psnr.noise[0]/g_psnr.count[0]));\n    for (i = 0; i < 4; i++)\n    {\n     
   rd.psnr[i] = 10*log10(255.*255/(g_psnr.noise[i]/g_psnr.count[i]));\n    }\n    rd.psnr_to_kbps_ratio = 10*log10((double)g_psnr.count[0]*g_psnr.count[0]*3/2 * 255*255/(g_psnr.noise[0] * g_psnr.bytes));\n    rd.psnr_to_logkbps_ratio = db / log10(realkbps);\n    rd.kpbs_30fps = realkbps;\n    return rd;\n}\n\nstatic void psnr_print(rd_t rd)\n{\n    int i;\n    printf(\"%5.0f kbps@30fps  \", rd.kpbs_30fps);\n    for (i = 0; i < 3; i++)\n    {\n        //printf(\"  %.2f db \", rd.psnr[i]);\n        printf(\" %s=%.2f db \", i ? (i == 1 ? \"UPSNR\" : \"VPSNR\") : \"YPSNR\", rd.psnr[i]);\n    }\n    printf(\"  %6.2f db/rate \", rd.psnr_to_kbps_ratio);\n    printf(\"  %6.3f db/lgrate \", rd.psnr_to_logkbps_ratio);\n    printf(\"  \\n\");\n}\n\nstatic int pixel_of_chessboard(double x, double y)\n{\n#if 0\n    int mid = (fabs(x) < 4 && fabs(y) < 4);\n    int i = (int)(x);\n    int j = (int)(y);\n    int cx, cy;\n    cx = (i & 16) ? 255 : 0;\n    cy = (j & 16) ? 255 : 0;\n    if ((i & 15) == 0) cx *= (x - i);\n    if ((j & 15) == 0) cx *= (y - j);\n    return (cx + cy + 1) >> 1;\n#else\n    int mid = (fabs(x ) < 4 && fabs(y) < 4);\n    int i = (int)(x);\n    int j = (int)(y);\n    int black = (mid) ? 128 : i/16;\n    int white = (mid) ? 128 : 255 - j/16;\n    int c00 = (((i >> 4) + (j >> 4)) & 1) ? white : black;\n    int c01 = ((((i + 1)>> 4) + (j >> 4)) & 1) ? white : black;\n    int c10 = (((i >> 4) + ((j + 1) >> 4)) & 1) ? white : black;\n    int c11 = ((((i + 1) >> 4) + ((j + 1) >> 4)) & 1) ? white : black;\n    int s    = (int)((c00 * (1 - (x - i)) + c01*(x - i))*(1 - (y - j)) +\n                     (c10 * (1 - (x - i)) + c11*(x - i))*((y - j)) + 0.5);\n    return s < 0 ? 0 : s > 255 ? 
255 : s;\n#endif\n}\n\nstatic void gen_chessboard_rot(unsigned char *p, int w, int h, int frm)\n{\n    int r, c;\n    double x, y;\n    double co = cos(.01*frm);\n    double si = sin(.01*frm);\n    int hw = w >> 1;\n    int hh = h >> 1;\n    for (r = 0; r < h; r++)\n    {\n        for (c = 0; c < w; c++)\n        {\n            x =  co*(c - hw) + si*(r - hh);\n            y = -si*(c - hw) + co*(r - hh);\n            p[r*w + c] = pixel_of_chessboard(x, y);\n        }\n    }\n}\n\nint main(int argc, char *argv[])\n{\n    int i, frames = 0;\n    const char *fnin, *fnout;\n\n    if (!read_cmdline_options(argc, argv))\n        return 1;\n    fnin  = cmdline->input_file;\n    fnout = cmdline->output_file;\n\n    if (!cmdline->gen)\n    {\n        g_w = 352;\n        g_h = 288;\n        guess_format_from_name(fnin, &g_w, &g_h);\n        fin = fopen(fnin, \"rb\");\n        if (!fin)\n        {\n            printf(\"ERROR: can't open input file %s\\n\", fnin);\n            return 1;\n        }\n    } else\n    {\n        g_w = 1024;\n        g_h = 768;\n    }\n\n    if (!fnout)\n        fnout = \"out.264\";\n    fout = fopen(fnout, \"wb\");\n    if (!fout)\n    {\n        printf(\"ERROR: can't open output file %s\\n\", fnout);\n        return 1;\n    }\n\n    create_param.enableNEON = 1;\n#if H264E_SVC_API\n    create_param.num_layers = 1;\n    create_param.inter_layer_pred_flag = 0;\n#endif\n    create_param.gop = cmdline->gop;\n    create_param.height = g_h;\n    create_param.width  = g_w;\n    create_param.max_long_term_reference_frames = 0;\n#if ENABLE_TEMPORAL_SCALABILITY\n    create_param.max_long_term_reference_frames = MAX_LONG_TERM_FRAMES;\n#endif\n    create_param.fine_rate_control_flag = 0;\n    create_param.const_input_flag = cmdline->psnr ? 
0 : 1;\n    //create_param.vbv_overflow_empty_frame_flag = 1;\n    //create_param.vbv_underflow_stuffing_flag = 1;\n    create_param.vbv_size_bytes = cmdline->kbps*1000/8*2; // 2 seconds vbv buffer for quality, so rate control can allocate more bits for intra frame\n    create_param.temporal_denoise_flag = cmdline->denoise;\n\n#if H264E_MAX_THREADS\n    void *thread_pool = NULL;\n    create_param.max_threads = cmdline->threads;\n    if (cmdline->threads)\n    {\n        thread_pool = h264e_thread_pool_init(cmdline->threads);\n        create_param.token = thread_pool;\n        create_param.run_func_in_thread = h264e_thread_pool_run;\n    }\n#endif\n\n    frame_size = g_w*g_h*3/2;\n    buf_in   = (uint8_t*)ALIGNED_ALLOC(64, frame_size);\n    buf_save = (uint8_t*)ALIGNED_ALLOC(64, frame_size);\n\n    if (!buf_in || !buf_save)\n    {\n        printf(\"ERROR: not enough memory\\n\");\n        return 1;\n    }\n    //for (cmdline->qp = 10; cmdline->qp <= 51; cmdline->qp += 10)\n    //for (cmdline->qp = 40; cmdline->qp <= 51; cmdline->qp += 10)\n    //for (cmdline->qp = 50; cmdline->qp <= 51; cmdline->qp += 2)\n    //printf(\"encoding %s to %s with qp = %d\\n\", fnin, fnout, cmdline->qp);\n    {\n        int sum_bytes = 0;\n        int max_bytes = 0;\n        int min_bytes = 10000000;\n        int sizeof_persist = 0, sizeof_scratch = 0, error;\n        H264E_persist_t *enc = NULL;\n        H264E_scratch_t *scratch = NULL;\n        if (cmdline->psnr)\n            psnr_init();\n\n        error = H264E_sizeof(&create_param, &sizeof_persist, &sizeof_scratch);\n        if (error)\n        {\n            printf(\"H264E_init error = %d\\n\", error);\n            return 0;\n        }\n        printf(\"sizeof_persist = %d sizeof_scratch = %d\\n\", sizeof_persist, sizeof_scratch);\n        enc     = (H264E_persist_t *)ALIGNED_ALLOC(64, sizeof_persist);\n        scratch = (H264E_scratch_t *)ALIGNED_ALLOC(64, sizeof_scratch);\n        error = H264E_init(enc, &create_param);\n\n       
 if (fin)\n            fseek(fin, 0, SEEK_SET);\n\n        for (i = 0; i < cmdline->max_frames; i++)\n        {\n            if (!fin)\n            {\n                if (i > 300) break;\n                memset(buf_in + g_w*g_h, 128, g_w*g_h/2);\n                gen_chessboard_rot(buf_in, g_w, g_h, i);\n            } else\n                if (!fread(buf_in, frame_size, 1, fin)) break;\n            if (cmdline->psnr)\n                memcpy(buf_save, buf_in, frame_size);\n\n            yuv.yuv[0] = buf_in; yuv.stride[0] = g_w;\n            yuv.yuv[1] = buf_in + g_w*g_h; yuv.stride[1] = g_w/2;\n            yuv.yuv[2] = buf_in + g_w*g_h*5/4; yuv.stride[2] = g_w/2;\n\n            run_param.frame_type = 0;\n            run_param.encode_speed = cmdline->speed;\n            //run_param.desired_nalu_bytes = 100;\n\n            if (cmdline->kbps)\n            {\n                run_param.desired_frame_bytes = cmdline->kbps*1000/8/30;\n                run_param.qp_min = 10;\n                run_param.qp_max = 50;\n            } else\n            {\n                run_param.qp_min = run_param.qp_max = cmdline->qp;\n            }\n\n#if ENABLE_TEMPORAL_SCALABILITY\n            {\n            int level, logmod = 1;\n            int j, mod = 1 << logmod;\n            static int fresh[200] = {-1,-1,-1,-1};\n\n            run_param.frame_type = H264E_FRAME_TYPE_CUSTOM;\n\n            for (level = logmod; level && (~i & (mod >> level)); level--){}\n\n            run_param.long_term_idx_update = level + 1;\n            if (level == logmod && logmod > 0)\n                run_param.long_term_idx_update = -1;\n            if (level == logmod - 1 && logmod > 1)\n                run_param.long_term_idx_update = 0;\n\n            //if (run_param.long_term_idx_update > logmod) run_param.long_term_idx_update -= logmod+1;\n            //run_param.long_term_idx_update = logmod - 0 - level;\n            //if (run_param.long_term_idx_update > 0)\n            //{\n            //    
run_param.long_term_idx_update = logmod - run_param.long_term_idx_update;\n            //}\n            run_param.long_term_idx_use    = fresh[level];\n            for (j = level; j <= logmod; j++)\n            {\n                fresh[j] = run_param.long_term_idx_update;\n            }\n            if (!i)\n            {\n                run_param.long_term_idx_use = -1;\n            }\n            }\n#endif\n            error = H264E_encode(enc, scratch, &run_param, &yuv, &coded_data, &sizeof_coded_data);\n            assert(!error);\n\n            if (i)\n            {\n                sum_bytes += sizeof_coded_data - 4;\n                if (min_bytes > sizeof_coded_data - 4) min_bytes = sizeof_coded_data - 4;\n                if (max_bytes < sizeof_coded_data - 4) max_bytes = sizeof_coded_data - 4;\n            }\n\n            if (cmdline->stats)\n                printf(\"frame=%d, bytes=%d\\n\", frames++, sizeof_coded_data);\n\n            if (fout)\n            {\n                if (!fwrite(coded_data, sizeof_coded_data, 1, fout))\n                {\n                    printf(\"ERROR writing output file\\n\");\n                    break;\n                }\n            }\n            if (cmdline->psnr)\n                psnr_add(buf_save, buf_in, g_w, g_h, sizeof_coded_data);\n        }\n        //fprintf(stderr, \"%d avr = %6d  [%6d %6d]\\n\", qp, sum_bytes/299, min_bytes, max_bytes);\n\n        if (cmdline->psnr)\n            psnr_print(psnr_get());\n\n        if (enc)\n            free(enc);\n        if (scratch)\n            free(scratch);\n    }\n    free(buf_in);\n    free(buf_save);\n\n    if (fin)\n        fclose(fin);\n    if (fout)\n        fclose(fout);\n#if H264E_MAX_THREADS\n    if (thread_pool)\n    {\n        h264e_thread_pool_close(thread_pool, cmdline->threads);\n    }\n#endif\n    return 0;\n}\n"
  },
  {
    "path": "scripts/build_arm.sh",
    "content": "_FILENAME=${0##*/}\nCUR_DIR=${0/${_FILENAME}}\nCUR_DIR=$(cd $(dirname ${CUR_DIR}); pwd)/$(basename ${CUR_DIR})/\n\npushd $CUR_DIR/..\n\narm-linux-gnueabihf-gcc -static -flto -O3 -std=gnu11 -mcpu=cortex-a8 -mfpu=neon -mfloat-abi=hard -marm \\\n-Wall -Wextra \\\n-ffast-math -fno-stack-protector -fomit-frame-pointer -ffunction-sections -fdata-sections -Wl,--gc-sections -ftree-vectorize \\\n-DH264E_MAX_THREADS=4 -DH264E_SVC_API=1 -DNDEBUG -U_FORTIFY_SOURCE \\\n-o h264enc_arm_gcc minih264e_test.c system.c -lm -lpthread\n\narm-linux-gnueabihf-gcc -static -flto -O3 -std=gnu11 -mcpu=cortex-a8 -mfpu=neon -mfloat-abi=hard -marm \\\n-Wall -Wextra \\\n-ffast-math -fno-stack-protector -fomit-frame-pointer -ffunction-sections -fdata-sections -Wl,--gc-sections -ftree-vectorize \\\n-DH264E_MAX_THREADS=4 -DH264E_SVC_API=1 -DNDEBUG -DMINIH264_ASM -U_FORTIFY_SOURCE \\\n-o h264enc_arm_gcc_asm minih264e_test.c system.c asm/neon/*.s -lm -lpthread\n\naarch64-linux-gnu-gcc -static -flto -O3 -std=gnu11 \\\n-Wall -Wextra \\\n-ffast-math -fno-stack-protector -fomit-frame-pointer -ffunction-sections -fdata-sections -Wl,--gc-sections -ftree-vectorize \\\n-DH264E_MAX_THREADS=4 -DH264E_SVC_API=1 -DNDEBUG -U_FORTIFY_SOURCE \\\n-o h264enc_arm64_gcc minih264e_test.c system.c -lm -lpthread\n"
  },
  {
    "path": "scripts/build_arm_clang.sh",
    "content": "_FILENAME=${0##*/}\nCUR_DIR=${0/${_FILENAME}}\nCUR_DIR=$(cd $(dirname ${CUR_DIR}); pwd)/$(basename ${CUR_DIR})/\n\npushd $CUR_DIR/..\n\n# why pthreads broken?\nclang -static -O3 -std=gnu11 -target arm-linux-gnueabihf -mcpu=cortex-a8 -mfpu=neon -mfloat-abi=hard -marm \\\n-Wall -Wextra \\\n-ffast-math -fno-stack-protector -fomit-frame-pointer -ffunction-sections -fdata-sections -Wl,--gc-sections -ftree-vectorize \\\n-DH264E_MAX_THREADS=0 -DH264E_SVC_API=0 -DNDEBUG -D__NO_MATH_INLINES -U_FORTIFY_SOURCE \\\n-o h264enc_arm_clang minih264e_test.c -lm\n\narm-linux-gnueabihf-gcc -mcpu=cortex-a8 -mfpu=neon -mfloat-abi=hard -c asm/neon/*.s\n\nclang -static -O3 -std=gnu11 -target arm-linux-gnueabihf -mcpu=cortex-a8 -mfpu=neon -mfloat-abi=hard -marm \\\n-Wall -Wextra \\\n-ffast-math -fno-stack-protector -fomit-frame-pointer -ffunction-sections -fdata-sections -Wl,--gc-sections -ftree-vectorize \\\n-DH264E_MAX_THREADS=0 -DH264E_SVC_API=0 -DNDEBUG -D__NO_MATH_INLINES -DMINIH264_ASM -U_FORTIFY_SOURCE \\\n-o h264enc_arm_clang_asm minih264e_test.c *.o -lm\nrm *.o\n\nclang -static -O3 -std=gnu11 -target aarch64-linux-gnu -mfpu=neon -mfloat-abi=hard \\\n-Wall -Wextra \\\n-ffast-math -fno-stack-protector -fomit-frame-pointer -ffunction-sections -fdata-sections -Wl,--gc-sections -ftree-vectorize \\\n-DH264E_MAX_THREADS=0 -DH264E_SVC_API=0 -DNDEBUG -D__NO_MATH_INLINES -U_FORTIFY_SOURCE \\\n-o h264enc_arm64_clang minih264e_test.c -lm\n"
  },
  {
    "path": "scripts/build_x86.sh",
    "content": "_FILENAME=${0##*/}\nCUR_DIR=${0/${_FILENAME}}\nCUR_DIR=$(cd $(dirname ${CUR_DIR}); pwd)/$(basename ${CUR_DIR})/\n\npushd $CUR_DIR/..\n\ngcc -flto -O3 -m32 -std=gnu11 -DH264E_MAX_THREADS=4 -DH264E_SVC_API=1 -DNDEBUG -U_FORTIFY_SOURCE \\\n-Wall -Wextra \\\n-ffast-math -fno-stack-protector -fomit-frame-pointer -ffunction-sections -fdata-sections -Wl,--gc-sections -mpreferred-stack-boundary=4 \\\n-o h264enc_x86 minih264e_test.c system.c -lm -lpthread\n\ngcc -flto -O3 -m32 -msse2 -std=gnu11 -DH264E_MAX_THREADS=4 -DH264E_SVC_API=1 -DNDEBUG -U_FORTIFY_SOURCE \\\n-Wall -Wextra \\\n-ffast-math -fno-stack-protector -fomit-frame-pointer -ffunction-sections -fdata-sections -Wl,--gc-sections -mpreferred-stack-boundary=4 \\\n-o h264enc_x86_sse2 minih264e_test.c system.c -lm -lpthread\n\ngcc -flto -O3 -std=gnu11 -DH264E_MAX_THREADS=4 -DH264E_SVC_API=1 -DNDEBUG -U_FORTIFY_SOURCE \\\n-Wall -Wextra \\\n-ffast-math -fno-stack-protector -fomit-frame-pointer -ffunction-sections -fdata-sections -Wl,--gc-sections -mpreferred-stack-boundary=4 \\\n-o h264enc_x64 minih264e_test.c system.c -lm -lpthread"
  },
  {
    "path": "scripts/build_x86_clang.sh",
    "content": "_FILENAME=${0##*/}\nCUR_DIR=${0/${_FILENAME}}\nCUR_DIR=$(cd $(dirname ${CUR_DIR}); pwd)/$(basename ${CUR_DIR})/\n\npushd $CUR_DIR/..\n\nclang -flto -O3 -std=gnu11 -DH264E_MAX_THREADS=4 -DH264E_SVC_API=1 -DNDEBUG -U_FORTIFY_SOURCE \\\n-Wall -Wextra \\\n-ffast-math -fno-stack-protector -fomit-frame-pointer -ffunction-sections -fdata-sections -Wl,--gc-sections \\\n-o h264enc_x64_clang minih264e_test.c system.c -lm -lpthread"
  },
  {
    "path": "scripts/profile.sh",
    "content": "_FILENAME=${0##*/}\nCUR_DIR=${0/${_FILENAME}}\nCUR_DIR=$(cd $(dirname ${CUR_DIR}); pwd)/$(basename ${CUR_DIR})/\n\npushd $CUR_DIR/..\n\nqemu-arm -d in_asm,exec,nochain ./h264enc_arm_clang vectors/foreman.cif 2>&1 | ./qemu-prof"
  },
  {
    "path": "scripts/test.sh",
    "content": "_FILENAME=${0##*/}\nCUR_DIR=${0/${_FILENAME}}\nCUR_DIR=$(cd $(dirname ${CUR_DIR}); pwd)/$(basename ${CUR_DIR})/\n\npushd $CUR_DIR/..\n\n./h264enc_x86 vectors/foreman.cif\nif ! cmp ./out.264 vectors/out_ref.264 >/dev/null 2>&1\nthen\n    echo test failed\n    exit 1\nfi\nrm out.264\n\n./h264enc_x86_sse2 vectors/foreman.cif\nif ! cmp ./out.264 vectors/out_ref.264 >/dev/null 2>&1\nthen\n    echo test failed\n    exit 1\nfi\nrm out.264\n\n./h264enc_x64 vectors/foreman.cif\nif ! cmp ./out.264 vectors/out_ref.264 >/dev/null 2>&1\nthen\n    echo test failed\n    exit 1\nfi\nrm out.264\n\nqemu-arm ./h264enc_arm_gcc vectors/foreman.cif\nif ! cmp ./out.264 vectors/out_ref.264 >/dev/null 2>&1\nthen\n    echo test failed\n    exit 1\nfi\nrm out.264\n\nqemu-arm ./h264enc_arm_gcc_asm vectors/foreman.cif\nif ! cmp ./out.264 vectors/out_ref.264 >/dev/null 2>&1\nthen\n    echo test failed\n    exit 1\nfi\nrm out.264\n\nqemu-aarch64 ./h264enc_arm64_gcc vectors/foreman.cif\nif ! cmp ./out.264 vectors/out_ref.264 >/dev/null 2>&1\nthen\n    echo test failed\n    exit 1\nfi\nrm out.264\n\necho test passed\n"
  },
  {
    "path": "system.c",
    "content": "#include \"system.h\"\n\n#ifndef _WIN32\n\n#include <stdlib.h>\n#include <time.h>\n#include <errno.h>\n#include <unistd.h>\n#if defined(__linux) || defined(__linux__)\n#include <sys/prctl.h>\n#endif\n\n#define nullptr 0\n\ntypedef struct Event Event;\n\ntypedef struct Event\n{\n    Event * volatile pMultipleCond;\n    pthread_mutex_t mutex;\n    pthread_cond_t cond;\n    volatile bool signaled;\n    bool manual_reset;\n} Event;\n\nstatic bool InitEvent(Event *e)\n{\n#if (defined(ANDROID) && !defined(__LP64__)) || defined(__APPLE__)\n    if (pthread_cond_init(&e->cond, NULL))\n        return FALSE;\n#else\n    pthread_condattr_t attr;\n    if (pthread_condattr_init(&attr))\n        return FALSE;\n    if (pthread_condattr_setclock(&attr, CLOCK_MONOTONIC))\n    {\n        pthread_condattr_destroy(&attr);\n        return FALSE;\n    }\n    if (pthread_cond_init(&e->cond, &attr))\n    {\n        pthread_condattr_destroy(&attr);\n        return FALSE;\n    }\n    pthread_condattr_destroy(&attr);\n#endif\n    if (pthread_mutex_init(&e->mutex, NULL))\n    {\n        pthread_cond_destroy(&e->cond);\n        return FALSE;\n    }\n    e->pMultipleCond = NULL;\n    return TRUE;\n}\n\n#ifdef __APPLE__\n#include <mach/mach_time.h>\nstatic inline uint64_t GetAbsTimeInNanoseconds()\n{\n    static mach_timebase_info_data_t g_timebase_info;\n    if (g_timebase_info.denom == 0)\n        mach_timebase_info(&g_timebase_info);\n    return mach_absolute_time()*g_timebase_info.numer/g_timebase_info.denom;\n}\n#endif\n\nstatic inline void GetAbsTime(struct timespec *ts, uint32_t timeout)\n{\n#if defined(__APPLE__)\n    uint64_t cur_time = GetAbsTimeInNanoseconds();\n    ts->tv_sec  = cur_time/1000000000u + timeout/1000u;\n    ts->tv_nsec = (cur_time % 1000000000u) + (timeout % 1000u)*1000000u;\n#else\n    clock_gettime(CLOCK_MONOTONIC, ts);\n    ts->tv_sec  += timeout/1000u;\n    ts->tv_nsec += (timeout % 1000u)*1000000u;\n#endif\n    if (ts->tv_nsec >= 1000000000)\n    {\n 
       ts->tv_nsec -= 1000000000;\n        ts->tv_sec++;\n    }\n}\n\nstatic inline int CondTimedWait(pthread_cond_t *cond, pthread_mutex_t *mutex, const struct timespec *abstime)\n{\n#if defined(ANDROID) && !defined(__LP64__)\n    return pthread_cond_timedwait_monotonic_np(cond, mutex, abstime);\n#elif defined(__APPLE__)\n    struct timespec reltime;\n    uint64_t cur_time = GetAbsTimeInNanoseconds();\n    reltime.tv_sec  = abstime->tv_sec  - cur_time/1000000000u;\n    reltime.tv_nsec = abstime->tv_nsec - (cur_time % 1000000000u);\n    if (reltime.tv_nsec < 0)\n    {\n        reltime.tv_nsec += 1000000000;\n        reltime.tv_sec--;\n    }\n    if ((reltime.tv_sec < 0) || ((reltime.tv_sec == 0) && (reltime.tv_nsec == 0)))\n        return ETIMEDOUT;\n    return pthread_cond_timedwait_relative_np(cond, mutex, &reltime);\n#else\n    return pthread_cond_timedwait(cond, mutex, abstime);\n#endif\n}\n\nstatic bool WaitForEvent(Event *e, uint32_t timeout, bool *signaled)\n{\n    if (pthread_mutex_lock(&e->mutex))\n        return FALSE;\n\n    if (timeout == INFINITE)\n    {\n        while (!e->signaled)\n            pthread_cond_wait(&e->cond, &e->mutex);\n    } else if (timeout != 0)\n    {\n        struct timespec t;\n        GetAbsTime(&t, timeout);\n        while (!e->signaled)\n        {\n            if (CondTimedWait(&e->cond, &e->mutex, &t))\n                break;\n        }\n    }\n    *signaled = e->signaled;\n    if (!e->manual_reset)\n        e->signaled = FALSE;\n\n    if (pthread_mutex_unlock(&e->mutex))\n        return FALSE;\n    return TRUE;\n}\n\nstatic bool WaitForMultipleEvents(Event **e, uint32_t count, uint32_t timeout, bool waitAll, int *signaled_num)\n{\n    uint32_t i;\n#define PTHR(func, num) for (i = num; i < count; i++)\\\n        if (func(&e[i]->mutex))\\\n            return FALSE;\n    PTHR(pthread_mutex_lock, 0);\n\n    int sig_num = -1;\n    if (timeout == 0)\n    {\n#define CHECK_SIGNALED \\\n        if (waitAll)\\\n        {\\\n           
 for (i = 0; i < count; i++)\\\n                if (!e[i]->signaled)\\\n                    break;\\\n            if (i == count)\\\n                for (i = 0; i < count; i++)\\\n                {\\\n                    if (sig_num < 0 && e[i]->signaled)\\\n                        sig_num = (int)i;\\\n                    if (!e[i]->manual_reset)\\\n                        e[i]->signaled = FALSE;\\\n                }\\\n        } else\\\n        {\\\n            for (i = 0; i < count; i++)\\\n                if (e[i]->signaled)\\\n                {\\\n                    sig_num = (int)i;\\\n                    if (!e[i]->manual_reset)\\\n                        e[i]->signaled = FALSE;\\\n                    break;\\\n                }\\\n        }\n        CHECK_SIGNALED;\n    } else\n    if (timeout == INFINITE)\n    {\n#define SET_MULTIPLE(val) for (i = 1; i < count; i++)\\\n            e[i]->pMultipleCond = val;\n        SET_MULTIPLE(e[0]);\n        for (;;)\n        {\n            CHECK_SIGNALED;\n            if (sig_num >= 0)\n                break;\n            PTHR(pthread_mutex_unlock, 1);\n            pthread_cond_wait(&e[0]->cond, &e[0]->mutex);\n            PTHR(pthread_mutex_lock, 1);\n        }\n        SET_MULTIPLE(0);\n    } else\n    {\n        SET_MULTIPLE(e[0]);\n        struct timespec t;\n        GetAbsTime(&t, timeout);\n        for (;;)\n        {\n            CHECK_SIGNALED;\n            if (sig_num >= 0)\n                break;\n            PTHR(pthread_mutex_unlock, 1);\n            int res = CondTimedWait(&e[0]->cond, &e[0]->mutex, &t);\n            PTHR(pthread_mutex_lock, 1);\n            if (res)\n                break;\n        }\n        SET_MULTIPLE(0);\n    }\n    PTHR(pthread_mutex_unlock, 0);\n    *signaled_num = sig_num;\n    return TRUE;\n}\n\nHANDLE event_create(bool manualReset, bool initialState)\n{\n    Event *e = (Event *)malloc(sizeof(*e));\n    if (!e)\n        return NULL;\n    if (!InitEvent(e))\n    {\n        
free(e);\n        return NULL;\n    }\n    e->manual_reset = manualReset;\n    e->signaled     = initialState;\n    return (HANDLE)e;\n}\n\nbool event_destroy(HANDLE event)\n{\n    Event *e = (Event *)event;\n    if (!e)\n        return FALSE;\n    if (pthread_cond_destroy(&e->cond))\n        return FALSE;\n    if (pthread_mutex_destroy(&e->mutex))\n        return FALSE;\n    free((void *)e);\n    return TRUE;\n}\n\nbool event_set(HANDLE event)\n{\n    Event *e = (Event *)event;\n    if (pthread_mutex_lock(&e->mutex))\n        return FALSE;\n\n    Event *pMultipleCond = e->pMultipleCond;\n    e->signaled = TRUE;\n    if (pthread_cond_signal(&e->cond))\n    {\n        pthread_mutex_unlock(&e->mutex); // don't leave the mutex held on the error path\n        return FALSE;\n    }\n\n    if (pthread_mutex_unlock(&e->mutex))\n        return FALSE;\n\n    if (pMultipleCond && pMultipleCond != e)\n    {\n        if (pthread_mutex_lock(&pMultipleCond->mutex))\n            return FALSE;\n        if (pthread_cond_signal(&pMultipleCond->cond))\n        {\n            pthread_mutex_unlock(&pMultipleCond->mutex);\n            return FALSE;\n        }\n        if (pthread_mutex_unlock(&pMultipleCond->mutex))\n            return FALSE;\n    }\n    return TRUE;\n}\n\nbool event_reset(HANDLE event)\n{\n    Event *e = (Event *)event;\n    if (pthread_mutex_lock(&e->mutex))\n        return FALSE;\n    e->signaled = FALSE;\n    if (pthread_mutex_unlock(&e->mutex))\n        return FALSE;\n    return TRUE;\n}\n\nint event_wait(HANDLE event, uint32_t milliseconds)\n{\n    bool signaled;\n    if (!WaitForEvent((Event *)event, milliseconds, &signaled))\n        return WAIT_FAILED;\n    return signaled ? WAIT_OBJECT : WAIT_TIMEOUT;\n}\n\nint event_wait_multiple(uint32_t count, const HANDLE *events, bool waitAll, uint32_t milliseconds)\n{\n    if (count == 1)\n        return event_wait(events[0], milliseconds);\n    int signaled_num = -1;\n    if (!WaitForMultipleEvents((Event **)events, count, milliseconds, waitAll, &signaled_num))\n        return WAIT_FAILED;\n    return (signaled_num < 0) ? 
WAIT_TIMEOUT : (WAIT_OBJECT_0 + signaled_num);\n}\n\nbool InitializeCriticalSection(LPCRITICAL_SECTION lpCriticalSection)\n{\n    pthread_mutexattr_t ma;\n    if (pthread_mutexattr_init(&ma))\n        return FALSE;\n    if (pthread_mutexattr_settype(&ma, PTHREAD_MUTEX_RECURSIVE))\n    {\n        pthread_mutexattr_destroy(&ma);\n        return FALSE;\n    }\n    if (pthread_mutex_init((pthread_mutex_t *)lpCriticalSection, &ma))\n    {\n        pthread_mutexattr_destroy(&ma);\n        return FALSE;\n    }\n    if (pthread_mutexattr_destroy(&ma))\n        return FALSE;\n    return TRUE;\n}\n\nbool DeleteCriticalSection(LPCRITICAL_SECTION lpCriticalSection)\n{\n    if (pthread_mutex_destroy((pthread_mutex_t *)lpCriticalSection))\n        return FALSE;\n    return TRUE;\n}\n\nbool EnterCriticalSection(LPCRITICAL_SECTION lpCriticalSection)\n{\n    if (pthread_mutex_lock((pthread_mutex_t *)lpCriticalSection))\n        return FALSE;\n    return TRUE;\n}\n\nbool LeaveCriticalSection(LPCRITICAL_SECTION lpCriticalSection)\n{\n    if (pthread_mutex_unlock((pthread_mutex_t *)lpCriticalSection))\n        return FALSE;\n    return TRUE;\n}\n\nHANDLE thread_create(LPTHREAD_START_ROUTINE lpStartAddress, void *lpParameter)\n{\n    pthread_t *t = (pthread_t *)malloc(sizeof(*t));\n    if (!t)\n        return NULL;\n    if (pthread_create(t, 0, lpStartAddress, lpParameter))\n    {\n        free(t);\n        return NULL;\n    }\n    //if (lpThreadId)\n    //    *lpThreadId = (uint32_t)*t;\n    return (HANDLE)t;\n}\n\nbool thread_close(HANDLE thread)\n{\n    if (!thread)\n        return FALSE;\n    pthread_t *t = (pthread_t *)thread;\n    if (*t)\n        pthread_detach(*t);\n    free(t);\n    return TRUE;\n}\n\nvoid *thread_wait(HANDLE thread)\n{\n    if (!thread)\n        return (void*)-1;\n    void *ret = 0;\n    pthread_t *t = (pthread_t *)thread;\n    if (!*t)\n        return ret;\n    int res = pthread_join(*t, &ret);\n    if (res)\n        return (void*)-1;\n#if 0\n    if 
(timeout == 0)\n    {\n        int res = pthread_tryjoin_np(*t, &ret);\n        if (res)\n            return FALSE;\n    } else\n    if (timeout == INFINITE)\n    {\n        int res = pthread_join(*t, &ret);\n        if (res)\n            return FALSE;\n    } else\n    {\n        struct timespec ts;\n        GetAbsTime(&ts, timeout);\n        int res = pthread_timedjoin_np(*t, &ret, &ts);\n        if (res)\n            return FALSE;\n    }\n#endif\n    *t = 0; // thread joined - no need to detach\n    return ret;\n}\n\n#else  //_WIN32\n\nHANDLE thread_create(LPTHREAD_START_ROUTINE lpStartAddress, void *lpParameter)\n{\n    DWORD tid;\n    return CreateThread(0, 0, lpStartAddress, lpParameter, 0, &tid);\n}\n\nHANDLE event_create(bool manualReset, bool initialState)\n{\n    return CreateEvent(0, manualReset, initialState, 0);\n}\n\nbool event_destroy(HANDLE event)\n{\n    CloseHandle(event);\n    return TRUE;\n}\n\nbool thread_close(HANDLE thread)\n{\n    CloseHandle(thread);\n    return TRUE;\n}\n\nvoid *thread_wait(HANDLE thread)\n{\n    if (WaitForSingleObject(thread, INFINITE) == WAIT_OBJECT_0)\n    {\n        DWORD ExitCode;\n        GetExitCodeThread(thread, &ExitCode);\n        return (void *)(intptr_t)ExitCode;\n    }\n    return (void *)(intptr_t)-1;\n}\n\n#endif //_WIN32\n\nbool thread_name(const char *name)\n{\n#ifdef _WIN32\n#ifdef _MSC_VER\n    struct tagTHREADNAME_INFO\n    {\n        DWORD dwType;\n        LPCSTR szName;\n        DWORD dwThreadID;\n        DWORD dwFlags;\n    } info = { 0x1000, name, (DWORD)-1, 0 };\n    __try\n    {\n        RaiseException(0x406D1388, 0, sizeof(info)/sizeof(ULONG_PTR), (ULONG_PTR*)&info);\n    }\n    __except(EXCEPTION_EXECUTE_HANDLER)\n    {\n    }\n#endif\n    return TRUE;\n#elif defined(__linux) || defined(__linux__)\n    return (0 == prctl(PR_SET_NAME, name, 0, 0, 0));\n    //return (0 == pthread_setname_np(pthread_self(), name));\n#else // macos, ios\n    return (0 == pthread_setname_np(name));\n#endif\n}\n\nvoid 
thread_sleep(uint32_t milliseconds)\n{\n#ifdef _WIN32\n    Sleep(milliseconds);\n#else\n    usleep((useconds_t)milliseconds*1000);\n#endif\n}\n\nuint64_t GetTime()\n{\n    uint64_t time;\n#ifdef _WIN32\n    GetSystemTimeAsFileTime((FILETIME*)&time);\n    time = time/10 - 11644473600000000;\n#elif defined(__APPLE__)\n    time = GetAbsTimeInNanoseconds() / 1000u;\n#else\n    struct timespec ts;\n    // CLOCK_PROCESS_CPUTIME_ID CLOCK_THREAD_CPUTIME_ID\n    clock_gettime(CLOCK_MONOTONIC, &ts);\n    time = (uint64_t)ts.tv_sec * 1000000u + ts.tv_nsec / 1000u;\n#endif\n    return time;\n}\n"
  },
  {
    "path": "system.h",
    "content": "#pragma once\n#ifndef __LGE_SYSTEM_H__\n#define __LGE_SYSTEM_H__\n\n#ifdef _WIN32\n\n#include <windows.h>\ntypedef DWORD THREAD_RET;\n#define THRAPI __stdcall\n#include <stdint.h>\n\n#else  //_WIN32\n\n#include <stdint.h>\n#include <pthread.h>\n\ntypedef void * THREAD_RET;\ntypedef THREAD_RET (*PTHREAD_START_ROUTINE)(void *lpThreadParameter);\ntypedef PTHREAD_START_ROUTINE LPTHREAD_START_ROUTINE;\n\ntypedef pthread_mutex_t CRITICAL_SECTION, *PCRITICAL_SECTION, *LPCRITICAL_SECTION;\n\n#define THRAPI\n\n#ifndef FALSE\n#define FALSE 0\n#endif\n\n#ifndef TRUE\n#define TRUE 1\n#endif\n\ntypedef void * HANDLE;\n#define MAXIMUM_WAIT_OBJECTS 64\n#define INFINITE       (uint32_t)(-1)\n#define WAIT_FAILED    (-1)\n#define WAIT_TIMEOUT   0x102\n#define WAIT_OBJECT    0\n#define WAIT_OBJECT_0  0\n#define WAIT_ABANDONED   128\n#define WAIT_ABANDONED_0 128\n\n#endif //_WIN32\n\n#ifdef __cplusplus\nextern \"C\" {\n#else\n#ifndef bool\n#define bool int\n#endif\n#endif\n\nHANDLE event_create(bool manualReset, bool initialState);\nbool event_destroy(HANDLE event);\n\n#ifndef _WIN32\n#define SetEvent event_set\n#define ResetEvent event_reset\n#define WaitForSingleObject event_wait\n#define WaitForMultipleObjects event_wait_multiple\nbool event_set(HANDLE event);\nbool event_reset(HANDLE event);\nint event_wait(HANDLE event, uint32_t milliseconds);\nint event_wait_multiple(uint32_t count, const HANDLE *events, bool waitAll, uint32_t milliseconds);\nbool InitializeCriticalSection(LPCRITICAL_SECTION lpCriticalSection);\nbool DeleteCriticalSection(LPCRITICAL_SECTION lpCriticalSection);\nbool EnterCriticalSection(LPCRITICAL_SECTION lpCriticalSection);\nbool LeaveCriticalSection(LPCRITICAL_SECTION lpCriticalSection);\n#else\n#define event_set SetEvent\n#define event_reset ResetEvent\n#define event_wait WaitForSingleObject\n#define event_wait_multiple WaitForMultipleObjects\n#endif\n\nHANDLE thread_create(LPTHREAD_START_ROUTINE lpStartAddress, void *lpParameter);\nbool 
thread_close(HANDLE thread);\nvoid *thread_wait(HANDLE thread);\nbool thread_name(const char *name);\nvoid thread_sleep(uint32_t milliseconds);\n\nuint64_t GetTime();\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif //__LGE_SYSTEM_H__\n"
  }
]