Commit message | Author | Date | Files | Lines
...
* aes-ppc: add ECB bulk acceleration for benchmarking purposes (Jussi Kivilinna, 2023-02-26; 4 files, -0/+269)

  * cipher/rijndael-ppc-functions.h (ECB_CRYPT_FUNC): New.
  * cipher/rijndael-ppc.c (_gcry_aes_ppc8_ecb_crypt): New.
  * cipher/rijndael-ppc9le.c (_gcry_aes_ppc9le_ecb_crypt): New.
  * cipher/rijndael.c (_gcry_aes_ppc8_ecb_crypt)
  (_gcry_aes_ppc9le_ecb_crypt): New.
  (do_setkey): Set up _gcry_aes_ppc8_ecb_crypt for POWER8 and
  _gcry_aes_ppc9le_ecb_crypt for POWER9.
  --
  Benchmark on POWER9:

  Before:
   AES     | nanosecs/byte  mebibytes/sec  cycles/byte
   ECB enc |  0.875 ns/B    1090 MiB/s     2.01 c/B
   ECB dec |   1.06 ns/B    899.8 MiB/s    2.44 c/B

  After:
   AES     | nanosecs/byte  mebibytes/sec  cycles/byte
   ECB enc |  0.305 ns/B    3126 MiB/s     0.702 c/B
   ECB dec |  0.305 ns/B    3126 MiB/s     0.702 c/B

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
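The (do_setkey) change above installs a CPU-specific bulk ECB handler into a function pointer. The following is a minimal, hypothetical C sketch of that dispatch pattern — the function names, the context type, and the feature-bit definition are illustrative, not libgcrypt's exact definitions:

```c
#include <stddef.h>
#include <string.h>

/* Sketch of the bulk-ops dispatch pattern: key setup checks the
   detected hardware features and installs the fastest ECB handler
   into a function pointer that generic code calls later.  All names
   here are illustrative stand-ins for libgcrypt's internals. */

typedef void (*ecb_crypt_fn) (void *ctx, void *out, const void *in,
                              size_t nblocks, int encrypt);

static void ecb_crypt_generic (void *ctx, void *out, const void *in,
                               size_t nblocks, int encrypt)
{
  (void)ctx; (void)encrypt;
  memcpy (out, in, nblocks * 16);  /* stands in for a per-block cipher loop */
}

static void ecb_crypt_power9 (void *ctx, void *out, const void *in,
                              size_t nblocks, int encrypt)
{
  /* the real hook would be the POWER9 assembly implementation */
  ecb_crypt_generic (ctx, out, in, nblocks, encrypt);
}

#define HWF_PPC_ARCH_3_00 (1u << 0)  /* "POWER9 ISA available" bit, illustrative */

static ecb_crypt_fn select_ecb_bulk (unsigned int hwfeatures)
{
  if (hwfeatures & HWF_PPC_ARCH_3_00)
    return ecb_crypt_power9;      /* fastest available implementation */
  return ecb_crypt_generic;       /* portable fallback */
}
```

The benefit of this design is that mode code (ECB, CTR, ...) stays generic; only key setup needs to know which accelerated backends exist.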
* sha2-ppc: better optimization for POWER9 (Jussi Kivilinna, 2023-02-26; 3 files, -1325/+940)

  * cipher/sha256-ppc.c: Change to use vector registers, generate POWER8
  and POWER9 from same code with help of 'target' and 'optimize'
  attributes.
  * cipher/sha512-ppc.c: Likewise.
  * configure.ac (gcry_cv_gcc_attribute_optimize)
  (gcry_cv_gcc_attribute_ppc_target): New.
  --
  Benchmark on POWER9:

  Before:
          | nanosecs/byte  mebibytes/sec  cycles/byte
   SHA256 |  5.22 ns/B     182.8 MiB/s    12.00 c/B
   SHA512 |  3.53 ns/B     269.9 MiB/s     8.13 c/B

  After (sha256 ~12% faster, sha512 ~19% faster):
          | nanosecs/byte  mebibytes/sec  cycles/byte
   SHA256 |  4.65 ns/B     204.9 MiB/s    10.71 c/B
   SHA512 |  2.97 ns/B     321.1 MiB/s     6.83 c/B

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* camellia-aesni-avx: speed up for round key broadcasting (Jussi Kivilinna, 2023-02-22; 1 file, -42/+47)

  * cipher/camellia-aesni-avx2-amd64.h (roundsm16, fls16): Broadcast
  round key bytes directly with 'vpshufb'.
  --
  Benchmark on AMD Ryzen 9 7900X (turbo-freq off):

  Before:
   CAMELLIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc     |  0.837 ns/B    1139 MiB/s     3.94 c/B     4700
   ECB dec     |  0.839 ns/B    1137 MiB/s     3.94 c/B     4700

  After (~3% faster):
   CAMELLIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc     |  0.808 ns/B    1180 MiB/s     3.80 c/B     4700
   ECB dec     |  0.810 ns/B    1177 MiB/s     3.81 c/B     4700

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* camellia-avx2: speed up for round key broadcasting (Jussi Kivilinna, 2023-02-22; 2 files, -89/+55)

  * cipher/camellia-aesni-avx2-amd64.h (roundsm32, fls32): Use
  'vpbroadcastb' for loading round key.
  * cipher/camellia-glue.c (camellia_encrypt_blk1_32)
  (camellia_decrypt_blk1_32): Adjust num_blks thresholds for AVX2
  implementations, 2 blks for GFNI, 4 blks for VAES and 5 blks for
  AESNI.
  --
  Benchmark on AMD Ryzen 9 7900X (turbo-freq off):

  Before:
   CAMELLIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc     |  0.213 ns/B    4469 MiB/s     1.00 c/B     4700
   ECB dec     |  0.215 ns/B    4440 MiB/s     1.01 c/B     4700

  After (~10% faster):
   CAMELLIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc     |  0.194 ns/B    4919 MiB/s     0.911 c/B    4700
   ECB dec     |  0.195 ns/B    4896 MiB/s     0.916 c/B    4700

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* camellia-gfni-avx512: speed up for round key broadcasting (Jussi Kivilinna, 2023-02-22; 1 file, -57/+31)

  * cipher/camellia-gfni-avx512-amd64.S (roundsm64, fls64): Use
  'vpbroadcastb' for loading round key.
  --
  Benchmark on AMD Ryzen 9 7900X (turbo-freq off):

  Before:
   CAMELLIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc     |  0.173 ns/B    5514 MiB/s     0.813 c/B    4700
   ECB dec     |  0.176 ns/B    5432 MiB/s     0.825 c/B    4700

  After (~13% faster):
   CAMELLIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc     |  0.152 ns/B    6267 MiB/s     0.715 c/B    4700
   ECB dec     |  0.155 ns/B    6170 MiB/s     0.726 c/B    4700

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
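The three "round key broadcasting" commits above all replace scattered loads/shuffles with a single byte broadcast ('vpbroadcastb' or 'vpshufb'): one round-key byte is replicated across every lane of the vector, then XORed into the state. A scalar C model of that operation (lane count and names are illustrative; 32 lanes models one YMM register):

```c
#include <stdint.h>
#include <string.h>

#define VEC_LANES 32  /* models one 256-bit YMM register, one byte per lane */

/* Replicate one round-key byte into every lane -- the scalar
   equivalent of 'vpbroadcastb'. */
static void broadcast_key_byte (uint8_t vec[VEC_LANES], uint8_t key_byte)
{
  for (int i = 0; i < VEC_LANES; i++)
    vec[i] = key_byte;
}

/* XOR the broadcast round-key byte into a vector of state bytes, as
   the round functions do after broadcasting. */
static void xor_round_key (uint8_t state[VEC_LANES], uint8_t key_byte)
{
  uint8_t rk[VEC_LANES];
  broadcast_key_byte (rk, key_byte);
  for (int i = 0; i < VEC_LANES; i++)
    state[i] ^= rk[i];
}
```

In the vector code this collapses to one broadcast plus one XOR per round-key byte, which is why removing the extra shuffle work yields the 3-13% gains reported above.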
* camellia-avx2: add fast path for full 32 block ECB input (Jussi Kivilinna, 2023-02-22; 1 file, -8/+33)

  * cipher/camellia-aesni-avx2-amd64.h (enc_blk1_32, dec_blk1_32): Add
  fast path for 32 block input.
  --
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* camellia: add CTR-mode byte addition for AVX/AVX2/AVX512 impl. (Jussi Kivilinna, 2023-02-22; 4 files, -15/+257)

  * cipher/camellia-aesni-avx-amd64.S
  (_gcry_camellia_aesni_avx_ctr_enc): Add byte addition fast-path.
  * cipher/camellia-aesni-avx2-amd64.h (ctr_enc): Likewise.
  * cipher/camellia-gfni-avx512-amd64.S
  (_gcry_camellia_gfni_avx512_ctr_enc): Likewise.
  * cipher/camellia-glue.c (CAMELLIA_context): Add 'use_avx2'.
  (camellia_setkey, _gcry_camellia_ctr_enc, _gcry_camellia_cbc_dec)
  (_gcry_camellia_cfb_dec, _gcry_camellia_ocb_crypt)
  (_gcry_camellia_ocb_auth) [USE_AESNI_AVX2]: Use 'use_avx2' to check
  if any of the AVX2 implementations is enabled.
  --
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
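The byte-addition fast path mentioned above exploits a simple observation: CTR mode derives per-block counters by incrementing a 128-bit big-endian counter, and when those increments cannot carry out of the lowest byte, all block counters can be built with cheap SIMD byte additions instead of full big-endian 128-bit adds. A hedged scalar sketch of the guard condition (one plausible form of the check; the real test lives in the assembly):

```c
#include <stdint.h>

/* Return nonzero when adding 'nblks' to the counter's last byte cannot
   carry into the higher bytes, i.e. when the SIMD byte-addition fast
   path is safe.  Illustrative scalar form of the assembly check. */
static int ctr_byte_add_ok (uint8_t ctr_last_byte, unsigned int nblks)
{
  /* Safe iff last byte + nblks stays within one byte (0..255). */
  return (unsigned int)ctr_last_byte + nblks <= 255;
}
```

On the fast path, counter block i is simply the base counter with i added to its last byte; only when the check fails does the code fall back to propagating carries through the whole 128-bit value.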
* camellia-aesni-avx: add acceleration for ECB/XTS/CTR32LE modes (Jussi Kivilinna, 2023-02-22; 2 files, -18/+133)

  * cipher/camellia-aesni-avx-amd64.S (_gcry_camellia_aesni_avx_ecb_enc)
  (_gcry_camellia_aesni_avx_ecb_dec): New.
  * cipher/camellia-glue.c (_gcry_camellia_aesni_avx_ecb_enc)
  (_gcry_camellia_aesni_avx_ecb_dec): New.
  (camellia_setkey): Always enable XTS/ECB/CTR32LE bulk functions.
  (camellia_encrypt_blk1_32, camellia_decrypt_blk1_32) [USE_AESNI_AVX]:
  Add AESNI/AVX code-path.
  --
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* sm4: add CTR-mode byte addition for AVX/AVX2/AVX512 implementations (Jussi Kivilinna, 2023-02-22; 4 files, -6/+295)

  * cipher/sm4-aesni-avx-amd64.S (_gcry_sm4_aesni_avx_ctr_enc): Add
  byte addition fast-path.
  * cipher/sm4-aesni-avx2-amd64.S (_gcry_sm4_aesni_avx2_ctr_enc):
  Likewise.
  * cipher/sm4-gfni-avx2-amd64.S (_gcry_sm4_gfni_avx2_ctr_enc):
  Likewise.
  * cipher/sm4-gfni-avx512-amd64.S (_gcry_sm4_gfni_avx512_ctr_enc)
  (_gcry_sm4_gfni_avx512_ctr_enc_blk32): Likewise.
  --
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* aes-vaes-avx2: improve case when only CTR needs carry handling (Jussi Kivilinna, 2023-02-22; 1 file, -35/+41)

  * cipher/rijndael-vaes-avx2-amd64.S (_gcry_vaes_avx2_ctr_enc_amd64):
  Add handling for the case when only the main counter needs carry
  handling but the generated vector counters do not.
  --
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* aria-avx2: add VAES accelerated implementation (Jussi Kivilinna, 2023-02-22; 2 files, -9/+409)

  * cipher/aria-aesni-avx2-amd64.S (CONFIG_AS_VAES): New.
  [CONFIG_AS_VAES]: Add VAES accelerated assembly macros and functions.
  * cipher/aria.c (USE_VAES_AVX2): New.
  (ARIA_context): Add 'use_vaes_avx2'.
  (_gcry_aria_vaes_avx2_ecb_crypt_blk32)
  (_gcry_aria_vaes_avx2_ctr_crypt_blk32)
  (aria_avx2_ecb_crypt_blk32, aria_avx2_ctr_crypt_blk32): Add
  VAES/AVX2 code paths.
  (aria_setkey): Enable VAES/AVX2 implementation based on HW features.
  --
  This patch adds a VAES/AVX2 accelerated ARIA block cipher
  implementation. The VAES instruction set extends the AESNI
  instructions to work on all 128-bit lanes of 256-bit YMM and 512-bit
  ZMM vector registers, so AES operations can be executed directly on
  YMM registers without manually splitting each YMM register into two
  XMM halves for AESNI instructions. This improves performance on CPUs
  that support VAES but not GFNI, like AMD Zen3.

  Benchmark on Ryzen 7 5800X (zen3, turbo-freq off):

  Before (AESNI/AVX2):
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.559 ns/B    1707 MiB/s     2.12 c/B     3800
   ECB dec |  0.560 ns/B    1703 MiB/s     2.13 c/B     3800
   CTR enc |  0.570 ns/B    1672 MiB/s     2.17 c/B     3800
   CTR dec |  0.568 ns/B    1679 MiB/s     2.16 c/B     3800

  After (VAES/AVX2, ~33% faster):
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.435 ns/B    2193 MiB/s     1.65 c/B     3800
   ECB dec |  0.434 ns/B    2197 MiB/s     1.65 c/B     3800
   CTR enc |  0.413 ns/B    2306 MiB/s     1.57 c/B     3800
   CTR dec |  0.411 ns/B    2318 MiB/s     1.56 c/B     3800

  Cc: Taehee Yoo <ap420073@gmail.com>
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* aria-avx512: small optimization for aria_diff_m (Jussi Kivilinna, 2023-02-22; 1 file, -10/+6)

  * cipher/aria-gfni-avx512-amd64.S (aria_diff_m): Use 'vpternlogq' for
  3-way XOR operation.
  --
  Using 'vpternlogq' gives a small performance improvement on AMD Zen4;
  on Intel tiger-lake, speed is the same as before.

  Benchmark on AMD Ryzen 9 7900X (zen4, turbo-freq off):

  Before:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.203 ns/B    4703 MiB/s     0.953 c/B    4700
   ECB dec |  0.204 ns/B    4675 MiB/s     0.959 c/B    4700
   CTR enc |  0.207 ns/B    4609 MiB/s     0.973 c/B    4700
   CTR dec |  0.207 ns/B    4608 MiB/s     0.973 c/B    4700

  After (~3% faster):
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.197 ns/B    4847 MiB/s     0.925 c/B    4700
   ECB dec |  0.197 ns/B    4852 MiB/s     0.924 c/B    4700
   CTR enc |  0.200 ns/B    4759 MiB/s     0.942 c/B    4700
   CTR dec |  0.200 ns/B    4772 MiB/s     0.939 c/B    4700

  Cc: Taehee Yoo <ap420073@gmail.com>
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
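'vpternlogq' computes an arbitrary 3-input boolean function of its operands, selected by an 8-bit immediate truth table; for a 3-way XOR (a ^ b ^ c) the immediate is 0x96, which lets one instruction replace two 'vpxor's. A scalar C model showing why 0x96 encodes 3-way XOR:

```c
#include <stdint.h>

/* Scalar model of vpternlogq: for each bit position, form the 3-bit
   index (a<<2 | b<<1 | c) from the three inputs and look up that bit
   of the 8-bit truth table 'imm'.  With imm = 0x96 (0b10010110) the
   looked-up bit equals a ^ b ^ c. */
static uint64_t ternlog (uint64_t a, uint64_t b, uint64_t c, uint8_t imm)
{
  uint64_t r = 0;
  for (int bit = 0; bit < 64; bit++)
    {
      unsigned int idx = (unsigned int)(((a >> bit) & 1) << 2)
                       | (unsigned int)(((b >> bit) & 1) << 1)
                       | (unsigned int)((c >> bit) & 1);
      r |= (uint64_t)((imm >> idx) & 1) << bit;
    }
  return r;
}
```

The same mechanism yields any of the 256 possible 3-input functions (e.g. 0xE8 is majority), which is why a single vpternlogq frequently replaces a two-instruction boolean sequence in AVX512 code.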
* aria-avx: small optimization for aria_ark_8way (Jussi Kivilinna, 2023-02-22; 1 file, -14/+15)

  * cipher/aria-aesni-avx-amd64.S (aria_ark_8way): Use 'vmovd' for
  loading key material and 'vpshufb' for broadcasting from byte
  locations 3, 2, 1 and 0.
  --
  Benchmark on AMD Ryzen 9 7900X (zen4, turbo-freq off):

  Before (GFNI/AVX):
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.516 ns/B    1847 MiB/s     2.43 c/B     4700
   ECB dec |  0.519 ns/B    1839 MiB/s     2.44 c/B     4700
   CTR enc |  0.517 ns/B    1846 MiB/s     2.43 c/B     4700
   CTR dec |  0.518 ns/B    1843 MiB/s     2.43 c/B     4700

  After (GFNI/AVX, ~5% faster):
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.490 ns/B    1947 MiB/s     2.30 c/B     4700
   ECB dec |  0.490 ns/B    1946 MiB/s     2.30 c/B     4700
   CTR enc |  0.493 ns/B    1935 MiB/s     2.32 c/B     4700
   CTR dec |  0.493 ns/B    1934 MiB/s     2.32 c/B     4700
  ===
  Benchmark on Intel Core i3-1115G4 (tiger-lake, turbo-freq off):

  Before (GFNI/AVX):
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.967 ns/B    986.6 MiB/s    2.89 c/B     2992
   ECB dec |  0.966 ns/B    987.1 MiB/s    2.89 c/B     2992
   CTR enc |  0.972 ns/B    980.8 MiB/s    2.91 c/B     2993
   CTR dec |  0.971 ns/B    982.5 MiB/s    2.90 c/B     2993

  After (GFNI/AVX, ~6% faster):
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.908 ns/B    1050 MiB/s     2.72 c/B     2992
   ECB dec |  0.903 ns/B    1056 MiB/s     2.70 c/B     2992
   CTR enc |  0.913 ns/B    1045 MiB/s     2.73 c/B     2992
   CTR dec |  0.910 ns/B    1048 MiB/s     2.72 c/B     2992
  ===
  Benchmark on AMD Ryzen 7 5800X (zen3, turbo-freq off):

  Before (AESNI/AVX):
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.921 ns/B    1035 MiB/s     3.50 c/B     3800
   ECB dec |  0.922 ns/B    1034 MiB/s     3.50 c/B     3800
   CTR enc |  0.923 ns/B    1033 MiB/s     3.51 c/B     3800
   CTR dec |  0.923 ns/B    1033 MiB/s     3.51 c/B     3800

  After (AESNI/AVX, ~6% faster):
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.862 ns/B    1106 MiB/s     3.28 c/B     3800
   ECB dec |  0.862 ns/B    1106 MiB/s     3.28 c/B     3800
   CTR enc |  0.865 ns/B    1102 MiB/s     3.29 c/B     3800
   CTR dec |  0.865 ns/B    1103 MiB/s     3.29 c/B     3800
  ===
  Benchmark on AMD EPYC 7642 (zen2):

  Before (AESNI/AVX):
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  1.22 ns/B     784.5 MiB/s    4.01 c/B     3298
   ECB dec |  1.22 ns/B     784.8 MiB/s    4.00 c/B     3292
   CTR enc |  1.22 ns/B     780.1 MiB/s    4.03 c/B     3299
   CTR dec |  1.22 ns/B     779.1 MiB/s    4.04 c/B     3299

  After (AESNI/AVX, ~13% faster):
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  1.07 ns/B     888.3 MiB/s    3.54 c/B     3299
   ECB dec |  1.08 ns/B     885.3 MiB/s    3.55 c/B     3299
   CTR enc |  1.07 ns/B     888.7 MiB/s    3.54 c/B     3298
   CTR dec |  1.07 ns/B     887.4 MiB/s    3.55 c/B     3299
  ===
  Benchmark on Intel Core i5-6500 (skylake):

  Before (AESNI/AVX):
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  1.24 ns/B     766.6 MiB/s    4.48 c/B     3598
   ECB dec |  1.25 ns/B     764.9 MiB/s    4.49 c/B     3598
   CTR enc |  1.25 ns/B     761.7 MiB/s    4.50 c/B     3598
   CTR dec |  1.25 ns/B     761.6 MiB/s    4.51 c/B     3598

  After (AESNI/AVX, ~2% faster):
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  1.22 ns/B     780.0 MiB/s    4.40 c/B     3598
   ECB dec |  1.22 ns/B     779.6 MiB/s    4.40 c/B     3598
   CTR enc |  1.23 ns/B     776.6 MiB/s    4.42 c/B     3598
   CTR dec |  1.23 ns/B     776.6 MiB/s    4.42 c/B     3598
  ===
  Benchmark on Intel Core i5-2450M (sandy-bridge, turbo-freq off):

  Before (AESNI/AVX):
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  2.11 ns/B     452.7 MiB/s    5.25 c/B     2494
   ECB dec |  2.10 ns/B     454.5 MiB/s    5.23 c/B     2494
   CTR enc |  2.10 ns/B     453.2 MiB/s    5.25 c/B     2494
   CTR dec |  2.10 ns/B     453.2 MiB/s    5.25 c/B     2494

  After (AESNI/AVX, ~4% faster):
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  2.00 ns/B     475.8 MiB/s    5.00 c/B     2494
   ECB dec |  2.00 ns/B     476.4 MiB/s    4.99 c/B     2494
   CTR enc |  2.01 ns/B     474.7 MiB/s    5.01 c/B     2494
   CTR dec |  2.01 ns/B     473.9 MiB/s    5.02 c/B     2494

  Cc: Taehee Yoo <ap420073@gmail.com>
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* aria: add x86_64 GFNI/AVX512 accelerated implementation (Jussi Kivilinna, 2023-02-22; 4 files, -2/+1100)

  * cipher/Makefile.am: Add 'aria-gfni-avx512-amd64.S'.
  * cipher/aria-gfni-avx512-amd64.S: New.
  * cipher/aria.c (USE_GFNI_AVX512): New.
  [USE_GFNI_AVX512] (MAX_PARALLEL_BLKS): New.
  (ARIA_context): Add 'use_gfni_avx512'.
  (_gcry_aria_gfni_avx512_ecb_crypt_blk64)
  (_gcry_aria_gfni_avx512_ctr_crypt_blk64)
  (aria_gfni_avx512_ecb_crypt_blk64)
  (aria_gfni_avx512_ctr_crypt_blk64): New.
  (aria_crypt_blocks) [USE_GFNI_AVX512]: Add 64 parallel block
  AVX512/GFNI processing.
  (_gcry_aria_ctr_enc) [USE_GFNI_AVX512]: Add 64 parallel block
  AVX512/GFNI processing.
  (aria_setkey): Enable GFNI/AVX512 based on HW features.
  * configure.ac: Add 'aria-gfni-avx512-amd64.lo'.
  --
  This patch adds an AVX512/GFNI accelerated ARIA block cipher
  implementation for libgcrypt. This implementation is based on work
  by Taehee Yoo, with the following notable changes:
  - Integration into libgcrypt, use of 'aes-common-amd64.h'.
  - Use round loop instead of unrolling for smaller code size and
    increased performance.
  - Use stack for temporary storage instead of external buffers.
  - Add byte-addition fast path for CTR.
  ===
  Benchmark on AMD Ryzen 9 7900X (zen4, turbo-freq off):

  GFNI/AVX512:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.203 ns/B    4703 MiB/s     0.953 c/B    4700
   ECB dec |  0.204 ns/B    4675 MiB/s     0.959 c/B    4700
   CTR enc |  0.207 ns/B    4609 MiB/s     0.973 c/B    4700
   CTR dec |  0.207 ns/B    4608 MiB/s     0.973 c/B    4700
  ===
  Benchmark on Intel Core i3-1115G4 (tiger-lake, turbo-freq off):

  GFNI/AVX512:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.362 ns/B    2635 MiB/s     1.08 c/B     2992
   ECB dec |  0.361 ns/B    2639 MiB/s     1.08 c/B     2992
   CTR enc |  0.362 ns/B    2633 MiB/s     1.08 c/B     2992
   CTR dec |  0.362 ns/B    2633 MiB/s     1.08 c/B     2992

  [v2]:
  - Add byte-addition fast path for CTR.

  Cc: Taehee Yoo <ap420073@gmail.com>
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* aria: add x86_64 AESNI/GFNI/AVX/AVX2 accelerated implementations (Jussi Kivilinna, 2023-02-22; 5 files, -26/+3186)

  * cipher/Makefile.am: Add 'aria-aesni-avx-amd64.S' and
  'aria-aesni-avx2-amd64.S'.
  * cipher/aria-aesni-avx-amd64.S: New.
  * cipher/aria-aesni-avx2-amd64.S: New.
  * cipher/aria.c (USE_AESNI_AVX, USE_GFNI_AVX, USE_AESNI_AVX2)
  (USE_GFNI_AVX2, MAX_PARALLEL_BLKS, ASM_FUNC_ABI, ASM_EXTRA_STACK):
  New.
  (ARIA_context): Add 'use_aesni_avx', 'use_gfni_avx',
  'use_aesni_avx2' and 'use_gfni_avx2'.
  (_gcry_aria_aesni_avx_ecb_crypt_blk1_16)
  (_gcry_aria_aesni_avx_ctr_crypt_blk16)
  (_gcry_aria_gfni_avx_ecb_crypt_blk1_16)
  (_gcry_aria_gfni_avx_ctr_crypt_blk16)
  (aria_avx_ecb_crypt_blk1_16, aria_avx_ctr_crypt_blk16)
  (_gcry_aria_aesni_avx2_ecb_crypt_blk32)
  (_gcry_aria_aesni_avx2_ctr_crypt_blk32)
  (_gcry_aria_gfni_avx2_ecb_crypt_blk32)
  (_gcry_aria_gfni_avx2_ctr_crypt_blk32)
  (aria_avx2_ecb_crypt_blk32, aria_avx2_ctr_crypt_blk32): New.
  (aria_crypt_blocks) [USE_AESNI_AVX2]: Add 32 parallel block
  AVX2/AESNI/GFNI processing.
  (aria_crypt_blocks) [USE_AESNI_AVX]: Add 3 to 16 parallel block
  AVX/AESNI/GFNI processing.
  (_gcry_aria_ctr_enc) [USE_AESNI_AVX2]: Add 32 parallel block
  AVX2/AESNI/GFNI processing.
  (_gcry_aria_ctr_enc) [USE_AESNI_AVX]: Add 16 parallel block
  AVX/AESNI/GFNI processing.
  (_gcry_aria_ctr_enc, _gcry_aria_cbc_dec, _gcry_aria_cfb_enc)
  (_gcry_aria_ecb_crypt, _gcry_aria_xts_crypt, _gcry_aria_ctr32le_enc)
  (_gcry_aria_ocb_crypt, _gcry_aria_ocb_auth): Use MAX_PARALLEL_BLKS
  for parallel processing width.
  (aria_setkey): Enable AESNI/AVX, GFNI/AVX, AESNI/AVX2, GFNI/AVX2
  based on HW features.
  * configure.ac: Add 'aria-aesni-avx-amd64.lo' and
  'aria-aesni-avx2-amd64.lo'.
  --
  This patch adds AVX/AVX2/AESNI/GFNI accelerated ARIA block cipher
  implementations for libgcrypt. This implementation is based on work
  by Taehee Yoo, with the following notable changes:
  - Integration into libgcrypt, use of 'aes-common-amd64.h'.
  - Use 'vmovddup' for loading GFNI constants.
  - Use round loop instead of unrolling for smaller code size and
    increased performance.
  - Use stack for temporary storage instead of external buffers.
  - Merge ECB encryption/decryption into a single function.
  - Add 1 to 15 blocks support for AVX ECB functions.
  - Add byte-addition fast path for CTR.
  ===
  Benchmark on AMD Ryzen 9 7900X (zen4, turbo-freq off):

  AESNI/AVX:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.715 ns/B    1333 MiB/s     3.36 c/B     4700
   ECB dec |  0.712 ns/B    1339 MiB/s     3.35 c/B     4700
   CTR enc |  0.714 ns/B    1336 MiB/s     3.36 c/B     4700
   CTR dec |  0.714 ns/B    1335 MiB/s     3.36 c/B     4700
  GFNI/AVX:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.516 ns/B    1847 MiB/s     2.43 c/B     4700
   ECB dec |  0.519 ns/B    1839 MiB/s     2.44 c/B     4700
   CTR enc |  0.517 ns/B    1846 MiB/s     2.43 c/B     4700
   CTR dec |  0.518 ns/B    1843 MiB/s     2.43 c/B     4700
  AESNI/AVX2:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.416 ns/B    2292 MiB/s     1.96 c/B     4700
   ECB dec |  0.421 ns/B    2266 MiB/s     1.98 c/B     4700
   CTR enc |  0.415 ns/B    2298 MiB/s     1.95 c/B     4700
   CTR dec |  0.415 ns/B    2300 MiB/s     1.95 c/B     4700
  GFNI/AVX2:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.235 ns/B    4056 MiB/s     1.11 c/B     4700
   ECB dec |  0.234 ns/B    4079 MiB/s     1.10 c/B     4700
   CTR enc |  0.232 ns/B    4104 MiB/s     1.09 c/B     4700
   CTR dec |  0.233 ns/B    4094 MiB/s     1.10 c/B     4700
  ===
  Benchmark on Intel Core i3-1115G4 (tiger-lake, turbo-freq off):

  AESNI/AVX:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  1.26 ns/B     757.6 MiB/s    3.77 c/B     2993
   ECB dec |  1.27 ns/B     753.1 MiB/s    3.79 c/B     2992
   CTR enc |  1.25 ns/B     760.3 MiB/s    3.75 c/B     2992
   CTR dec |  1.26 ns/B     759.1 MiB/s    3.76 c/B     2992
  GFNI/AVX:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.967 ns/B    986.6 MiB/s    2.89 c/B     2992
   ECB dec |  0.966 ns/B    987.1 MiB/s    2.89 c/B     2992
   CTR enc |  0.972 ns/B    980.8 MiB/s    2.91 c/B     2993
   CTR dec |  0.971 ns/B    982.5 MiB/s    2.90 c/B     2993
  AESNI/AVX2:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.817 ns/B    1167 MiB/s     2.44 c/B     2992
   ECB dec |  0.819 ns/B    1164 MiB/s     2.45 c/B     2992
   CTR enc |  0.819 ns/B    1164 MiB/s     2.45 c/B     2992
   CTR dec |  0.819 ns/B    1164 MiB/s     2.45 c/B     2992
  GFNI/AVX2:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.506 ns/B    1886 MiB/s     1.51 c/B     2992
   ECB dec |  0.505 ns/B    1887 MiB/s     1.51 c/B     2992
   CTR enc |  0.564 ns/B    1691 MiB/s     1.69 c/B     2992
   CTR dec |  0.565 ns/B    1689 MiB/s     1.69 c/B     2992
  ===
  Benchmark on AMD Ryzen 7 5800X (zen3, turbo-freq off):

  AESNI/AVX:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.921 ns/B    1035 MiB/s     3.50 c/B     3800
   ECB dec |  0.922 ns/B    1034 MiB/s     3.50 c/B     3800
   CTR enc |  0.923 ns/B    1033 MiB/s     3.51 c/B     3800
   CTR dec |  0.923 ns/B    1033 MiB/s     3.51 c/B     3800
  AESNI/AVX2:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.559 ns/B    1707 MiB/s     2.12 c/B     3800
   ECB dec |  0.560 ns/B    1703 MiB/s     2.13 c/B     3800
   CTR enc |  0.570 ns/B    1672 MiB/s     2.17 c/B     3800
   CTR dec |  0.568 ns/B    1679 MiB/s     2.16 c/B     3800
  ===
  Benchmark on AMD EPYC 7642 (zen2):

  AESNI/AVX:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  1.22 ns/B     784.5 MiB/s    4.01 c/B     3298
   ECB dec |  1.22 ns/B     784.8 MiB/s    4.00 c/B     3292
   CTR enc |  1.22 ns/B     780.1 MiB/s    4.03 c/B     3299
   CTR dec |  1.22 ns/B     779.1 MiB/s    4.04 c/B     3299
  AESNI/AVX2:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.735 ns/B    1298 MiB/s     2.42 c/B     3299
   ECB dec |  0.738 ns/B    1292 MiB/s     2.44 c/B     3299
   CTR enc |  0.732 ns/B    1303 MiB/s     2.41 c/B     3299
   CTR dec |  0.732 ns/B    1303 MiB/s     2.41 c/B     3299
  ===
  Benchmark on Intel Core i5-6500 (skylake):

  AESNI/AVX:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  1.24 ns/B     766.6 MiB/s    4.48 c/B     3598
   ECB dec |  1.25 ns/B     764.9 MiB/s    4.49 c/B     3598
   CTR enc |  1.25 ns/B     761.7 MiB/s    4.50 c/B     3598
   CTR dec |  1.25 ns/B     761.6 MiB/s    4.51 c/B     3598
  AESNI/AVX2:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  0.829 ns/B    1150 MiB/s     2.98 c/B     3599
   ECB dec |  0.831 ns/B    1147 MiB/s     2.99 c/B     3598
   CTR enc |  0.829 ns/B    1150 MiB/s     2.98 c/B     3598
   CTR dec |  0.828 ns/B    1152 MiB/s     2.98 c/B     3598
  ===
  Benchmark on Intel Core i5-2450M (sandy-bridge, turbo-freq off):

  AESNI/AVX:
   ARIA128 | nanosecs/byte  mebibytes/sec  cycles/byte  auto Mhz
   ECB enc |  2.11 ns/B     452.7 MiB/s    5.25 c/B     2494
   ECB dec |  2.10 ns/B     454.5 MiB/s    5.23 c/B     2494
   CTR enc |  2.10 ns/B     453.2 MiB/s    5.25 c/B     2494
   CTR dec |  2.10 ns/B     453.2 MiB/s    5.25 c/B     2494

  [v2]
  - Optimization for CTR mode: use the CTR byte-addition path when
    counter carry-overflow happens only in the ctr variable but not in
    the generated counter vector registers.

  Cc: Taehee Yoo <ap420073@gmail.com>
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* asm-common-aarch64: fix read-only section for Windows target (Jussi Kivilinna, 2023-01-21; 1 file, -1/+5)

  * cipher/asm-common-aarch64.h (SECTION_RODATA): Use .rdata for
  _WIN32.
  --
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
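The fix above is needed because the read-only data section has different names per object format: PE/COFF (Windows) uses .rdata while ELF uses .rodata. A sketch of the selection logic, modeled here with strings for illustration (the real SECTION_RODATA macro emits an assembler .section directive, not a C string):

```c
#include <string.h>

/* Illustrative model of the SECTION_RODATA platform selection:
   Windows PE/COFF names its read-only data section ".rdata",
   while ELF targets use ".rodata". */
#ifdef _WIN32
# define SECTION_RODATA ".rdata"
#else
# define SECTION_RODATA ".rodata"
#endif

/* Helper so code can inspect which section name was chosen. */
static const char *rodata_section_name (void)
{
  return SECTION_RODATA;
}
```

Emitting constants into the wrong section name on Windows would either fail to assemble or place read-only data in a writable section, which is why the per-target macro matters.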
* aarch64-asm: align functions to 16 bytes (Jussi Kivilinna, 2023-01-19; 20 files, -51/+62)

  * cipher/camellia-aarch64.S: Align functions to 16 bytes.
  * cipher/chacha20-aarch64.S: Likewise.
  * cipher/cipher-gcm-armv8-aarch64-ce.S: Likewise.
  * cipher/crc-armv8-aarch64-ce.S: Likewise.
  * cipher/rijndael-aarch64.S: Likewise.
  * cipher/rijndael-armv8-aarch64-ce.S: Likewise.
  * cipher/sha1-armv8-aarch64-ce.S: Likewise.
  * cipher/sha256-armv8-aarch64-ce.S: Likewise.
  * cipher/sha512-armv8-aarch64-ce.S: Likewise.
  * cipher/sm3-aarch64.S: Likewise.
  * cipher/sm3-armv8-aarch64-ce.S: Likewise.
  * cipher/sm4-aarch64.S: Likewise.
  * cipher/sm4-armv8-aarch64-ce.S: Likewise.
  * cipher/sm4-armv9-aarch64-sve-ce.S: Likewise.
  * cipher/twofish-aarch64.S: Likewise.
  * mpi/aarch64/mpih-add1.S: Likewise.
  * mpi/aarch64/mpih-mul1.S: Likewise.
  * mpi/aarch64/mpih-mul2.S: Likewise.
  * mpi/aarch64/mpih-mul3.S: Likewise.
  * mpi/aarch64/mpih-sub1.S: Likewise.
  --
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* aarch64-asm: move constant data to read-only section (Jussi Kivilinna, 2023-01-19; 13 files, -44/+69)

  * cipher/asm-common-aarch64.h (SECTION_RODATA)
  (GET_DATA_POINTER): New.
  (GET_LOCAL_POINTER): Remove.
  * cipher/camellia-aarch64.S: Move constant data to read-only data
  section; remove unneeded '.ltorg'.
  * cipher/chacha20-aarch64.S: Likewise.
  * cipher/cipher-gcm-armv8-aarch64-ce.S: Likewise.
  * cipher/crc-armv8-aarch64-ce.S: Likewise.
  * cipher/rijndael-aarch64.S: Likewise.
  * cipher/sha1-armv8-aarch64-ce.S: Likewise.
  * cipher/sha256-armv8-aarch64-ce.S: Likewise.
  * cipher/sm3-aarch64.S: Likewise.
  * cipher/sm3-armv8-aarch64-ce.S: Likewise.
  * cipher/sm4-aarch64.S: Likewise.
  * cipher/sm4-armv9-aarch64-sve-ce.S: Likewise.
  * cipher/twofish-aarch64.S: Likewise.
  --
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* s390x-asm: move constant data to read-only section (Jussi Kivilinna, 2023-01-19; 2 files, -6/+11)

  * cipher/chacha20-s390x.S: Move constant data to read-only section;
  align functions to 16 bytes.
  * cipher/poly1305-s390x.S: Likewise.
  --
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* powerpc-asm: move constant data to read-only section (Jussi Kivilinna, 2023-01-19; 1 file, -1/+1)

  * cipher/chacha20-p10le-8x.s: Move constant data to read-only
  section.
  --
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* mpi/amd64: align functions and inner loops to 16 bytes (Jussi Kivilinna, 2023-01-19; 7 files, -8/+14)

  * mpi/amd64/mpih-add1.S: Align function and inner loop to 16 bytes.
  * mpi/amd64/mpih-lshift.S: Likewise.
  * mpi/amd64/mpih-mul1.S: Likewise.
  * mpi/amd64/mpih-mul2.S: Likewise.
  * mpi/amd64/mpih-mul3.S: Likewise.
  * mpi/amd64/mpih-rshift.S: Likewise.
  * mpi/amd64/mpih-sub1.S: Likewise.
  --
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* amd64-asm: move constant data to read-only section for cipher algos (Jussi Kivilinna, 2023-01-19; 15 files, -18/+74)

  * cipher/camellia-aesni-avx-amd64.S: Move constant data to read-only
  section.
  * cipher/camellia-aesni-avx2-amd64.h: Likewise.
  * cipher/camellia-gfni-avx512-amd64.S: Likewise.
  * cipher/chacha20-amd64-avx2.S: Likewise.
  * cipher/chacha20-amd64-avx512.S: Likewise.
  * cipher/chacha20-amd64-ssse3.S: Likewise.
  * cipher/des-amd64.s: Likewise.
  * cipher/rijndael-ssse3-amd64-asm.S: Likewise.
  * cipher/rijndael-vaes-avx2-amd64.S: Likewise.
  * cipher/serpent-avx2-amd64.S: Likewise.
  * cipher/sm4-aesni-avx-amd64.S: Likewise.
  * cipher/sm4-aesni-avx2-amd64.S: Likewise.
  * cipher/sm4-gfni-avx2-amd64.S: Likewise.
  * cipher/sm4-gfni-avx512-amd64.S: Likewise.
  * cipher/twofish-avx2-amd64.S: Likewise.
  --
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* amd64-asm: align functions to 16 bytes for cipher algos (Jussi Kivilinna, 2023-01-19; 18 files, -130/+132)

  * cipher/blowfish-amd64.S: Align functions to 16 bytes.
  * cipher/camellia-aesni-avx-amd64.S: Likewise.
  * cipher/camellia-aesni-avx2-amd64.h: Likewise.
  * cipher/camellia-gfni-avx512-amd64.S: Likewise.
  * cipher/cast5-amd64.S: Likewise.
  * cipher/chacha20-amd64-avx2.S: Likewise.
  * cipher/chacha20-amd64-ssse3.S: Likewise.
  * cipher/des-amd64.s: Likewise.
  * cipher/rijndael-amd64.S: Likewise.
  * cipher/rijndael-ssse3-amd64-asm.S: Likewise.
  * cipher/salsa20-amd64.S: Likewise.
  * cipher/serpent-avx2-amd64.S: Likewise.
  * cipher/serpent-sse2-amd64.S: Likewise.
  * cipher/sm4-aesni-avx-amd64.S: Likewise.
  * cipher/sm4-aesni-avx2-amd64.S: Likewise.
  * cipher/sm4-gfni-avx2-amd64.S: Likewise.
  * cipher/twofish-amd64.S: Likewise.
  * cipher/twofish-avx2-amd64.S: Likewise.
  --
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* amd64-asm: move constant data to read-only section for hash/mac algos (Jussi Kivilinna, 2023-01-19; 18 files, -20/+90)

  * cipher/asm-common-amd64.h (SECTION_RODATA): New.
  * cipher/blake2b-amd64-avx2.S: Use read-only section for constant
  data.
  * cipher/blake2b-amd64-avx512.S: Likewise.
  * cipher/blake2s-amd64-avx.S: Likewise.
  * cipher/blake2s-amd64-avx512.S: Likewise.
  * cipher/poly1305-amd64-avx512.S: Likewise.
  * cipher/sha1-avx-amd64.S: Likewise.
  * cipher/sha1-avx-bmi2-amd64.S: Likewise.
  * cipher/sha1-avx2-bmi2-amd64.S: Likewise.
  * cipher/sha1-ssse3-amd64.S: Likewise.
  * cipher/sha256-avx-amd64.S: Likewise.
  * cipher/sha256-avx2-bmi2-amd64.S: Likewise.
  * cipher/sha256-ssse3-amd64.S: Likewise.
  * cipher/sha512-avx-amd64.S: Likewise.
  * cipher/sha512-avx2-bmi2-amd64.S: Likewise.
  * cipher/sha512-avx512-amd64.S: Likewise.
  * cipher/sha512-ssse3-amd64.S: Likewise.
  * cipher/sha3-avx-bmi2-amd64.S: Likewise.
  --
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* tests/bench-slope: skip CPU warm-up in regression tests (Jussi Kivilinna, 2023-01-17; 1 file, -0/+3)

  * tests/bench-slope.c (warm_up_cpu): Skip in regression tests.
  --
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* tests/basic: perform x86 vector cluttering only when __SSE2__ is set (Jussi Kivilinna, 2023-01-17; 1 file, -12/+8)

  * tests/basic.c (CLUTTER_VECTOR_REGISTER_AMD64)
  (CLUTTER_VECTOR_REGISTER_I386): Set only if __SSE2__ defined.
  (clutter_vector_registers) [CLUTTER_VECTOR_REGISTER_AMD64]: Remove
  __SSE2__ check for "xmm" clobbers.
  (clutter_vector_registers) [CLUTTER_VECTOR_REGISTER_I386]: Likewise.
  --
  Force the __SSE2__ check, as a buggy compiler might not define
  __SSE2__ but still attempt to use XMM registers.

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* tests/basic: fix clutter vector register asm for amd64 and i386 (Jussi Kivilinna, 2023-01-17; 1 file, -48/+26)

  * tests/basic.c (clutter_vector_registers): Pass data pointers
  through a single register for CLUTTER_VECTOR_REGISTER_AMD64 and
  CLUTTER_VECTOR_REGISTER_I386, as the compiler might attempt to
  allocate a separate pointer register for each "m" operand.
  --
  Reported-by: Julian Kirsch <mail@kirschju.re>
  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* avx512: tweak zmm16-zmm31 register clearing (Jussi Kivilinna, 2023-01-17; 7 files, -37/+39)

  * cipher/asm-common-amd64.h (spec_stop_avx512): Clear ymm16 before
  and after vpopcntb.
  * cipher/camellia-gfni-avx512-amd64.S (clear_zmm16_zmm31): Clear
  YMM16-YMM31 registers instead of XMM16-XMM31.
  * cipher/chacha20-amd64-avx512.S (clear_zmm16_zmm31): Likewise.
  * cipher/keccak-amd64-avx512.S (clear_regs): Likewise.
  (clear_avx512_4regs): Clear all 4 registers with XOR.
  * cipher/cipher-gcm-intel-pclmul.c (_gcry_ghash_intel_pclmul)
  (_gcry_polyval_intel_pclmul): Clear YMM16-YMM19 registers instead of
  ZMM16-ZMM19.
  * cipher/poly1305-amd64-avx512.S (POLY1305_BLOCKS): Clear
  YMM16-YMM31 registers after vector processing instead of
  XMM16-XMM31.
  * cipher/sha512-avx512-amd64.S
  (_gcry_sha512_transform_amd64_avx512): Likewise.
  --
  Clear the zmm16-zmm31 registers with a 256-bit XOR instead of a
  128-bit one, as this is better for AMD Zen4. Also clear the xmm16
  register after vpopcntb in the AVX512 spec-stop, so we do not leave
  any ZMM register state which might end up unnecessarily using CPU
  resources.

  Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* aria: add generic 2-way bulk processingJussi Kivilinna2023-01-061-2/+477
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
* cipher/aria.c (ARIA_context): Add 'bulk_prefetch_ready'.
(aria_crypt_2blks, aria_crypt_blocks, aria_enc_blocks, aria_dec_blocks)
(_gcry_aria_ctr_enc, _gcry_aria_cbc_enc, _gcry_aria_cbc_dec)
(_gcry_aria_cfb_enc, _gcry_aria_cfb_dec, _gcry_aria_ecb_crypt)
(_gcry_aria_xts_crypt, _gcry_aria_ctr32le_enc, _gcry_aria_ocb_crypt)
(_gcry_aria_ocb_auth): New.
(aria_setkey): Setup 'bulk_ops' function pointers.
--
Patch adds 2-way parallel generic ARIA implementation for modest
performance increase.

Benchmark on AMD Ryzen 9 7900X (x86-64) shows ~40% performance
improvement for parallelizable modes:

ARIA128 | nanosecs/byte mebibytes/sec cycles/byte auto Mhz
ECB enc | 2.62 ns/B 364.0 MiB/s 14.74 c/B 5625
ECB dec | 2.61 ns/B 365.2 MiB/s 14.69 c/B 5625
CBC enc | 3.62 ns/B 263.7 MiB/s 20.34 c/B 5625
CBC dec | 2.63 ns/B 363.0 MiB/s 14.78 c/B 5625
CFB enc | 3.59 ns/B 265.3 MiB/s 20.22 c/B 5625
CFB dec | 2.63 ns/B 362.0 MiB/s 14.82 c/B 5625
OFB enc | 3.98 ns/B 239.7 MiB/s 22.38 c/B 5625
OFB dec | 4.00 ns/B 238.2 MiB/s 22.52 c/B 5625
CTR enc | 2.64 ns/B 360.6 MiB/s 14.87 c/B 5624
CTR dec | 2.65 ns/B 360.0 MiB/s 14.90 c/B 5625
XTS enc | 2.68 ns/B 355.8 MiB/s 15.08 c/B 5625
XTS dec | 2.67 ns/B 356.9 MiB/s 15.03 c/B 5625
CCM enc | 6.24 ns/B 152.7 MiB/s 35.12 c/B 5625
CCM dec | 6.25 ns/B 152.5 MiB/s 35.18 c/B 5625
CCM auth | 3.59 ns/B 265.4 MiB/s 20.21 c/B 5625
EAX enc | 6.23 ns/B 153.0 MiB/s 35.06 c/B 5625
EAX dec | 6.23 ns/B 153.1 MiB/s 35.05 c/B 5625
EAX auth | 3.59 ns/B 265.4 MiB/s 20.22 c/B 5625
GCM enc | 2.68 ns/B 355.8 MiB/s 15.08 c/B 5625
GCM dec | 2.69 ns/B 354.7 MiB/s 15.12 c/B 5625
GCM auth | 0.031 ns/B 30832 MiB/s 0.174 c/B 5625
OCB enc | 2.71 ns/B 351.4 MiB/s 15.27 c/B 5625
OCB dec | 2.74 ns/B 347.6 MiB/s 15.43 c/B 5625
OCB auth | 2.64 ns/B 360.8 MiB/s 14.87 c/B 5625
SIV enc | 6.24 ns/B 152.9 MiB/s 35.08 c/B 5625
SIV dec | 6.24 ns/B 152.8 MiB/s 35.10 c/B 5625
SIV auth | 3.59 ns/B 266.0 MiB/s 20.17 c/B 5625
GCM-SIV enc | 2.67 ns/B 356.7 MiB/s 15.04 c/B 5625
GCM-SIV dec | 2.68 ns/B 355.7 MiB/s 15.08 c/B 5625
GCM-SIV auth | 0.034 ns/B 28303 MiB/s 0.190 c/B 5625

Cc: Taehee Yoo <ap420073@gmail.com>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
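The 2-way bulk pattern described in the commit message above can be sketched as follows. This is an illustrative C sketch, not libgcrypt's actual code: a hypothetical dispatcher prefers a 2-block interleaved routine (so two independent blocks can overlap their table-lookup latencies) and falls back to the 1-block routine for an odd tail. The toy routines just add a marker constant so the dispatch is observable.

```c
#include <assert.h>
#include <stddef.h>

#define BLKSIZE 16  /* ARIA block size in bytes */

/* Toy stand-ins for a cipher's 1-block and interleaved 2-block
   routines.  They add a marker constant so the dispatch is checkable. */
static void
toy_crypt1 (void *ctx, unsigned char *out, const unsigned char *in)
{
  (void)ctx;
  for (int i = 0; i < BLKSIZE; i++)
    out[i] = in[i] + 1;
}

static void
toy_crypt2 (void *ctx, unsigned char *out, const unsigned char *in)
{
  (void)ctx;
  for (int i = 0; i < 2 * BLKSIZE; i++)
    out[i] = in[i] + 2;
}

/* Dispatch: consume pairs with the 2-way routine, odd tail with 1-way. */
static size_t
bulk_crypt (void *ctx, unsigned char *out, const unsigned char *in,
            size_t nblocks)
{
  size_t done = 0;
  while (nblocks >= 2)
    {
      toy_crypt2 (ctx, out, in);
      in += 2 * BLKSIZE;
      out += 2 * BLKSIZE;
      nblocks -= 2;
      done += 2;
    }
  if (nblocks)
    {
      toy_crypt1 (ctx, out, in);
      done += 1;
    }
  return done;
}
```

The point of the interleaving is that two independent block computations keep more execution units busy between dependent table lookups, which is where the ~40% gain in parallelizable modes comes from.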
* Add ARIA block cipherJussi Kivilinna2023-01-0615-8/+1495
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
* cipher/Makefile.am: Add 'aria.c'.
* cipher/aria.c: New.
* cipher/cipher.c (cipher_list, cipher_list_algo301): Add ARIA cipher
specs.
* cipher/mac-cmac.c (map_mac_algo_to_cipher): Add GCRY_MAC_CMAC_ARIA.
(_gcry_mac_type_spec_cmac_aria): New.
* cipher/mac-gmac.c (map_mac_algo_to_cipher): Add GCRY_MAC_GMAC_ARIA.
(_gcry_mac_type_spec_gmac_aria): New.
* cipher/mac-internal.h (_gcry_mac_type_spec_cmac_aria)
(_gcry_mac_type_spec_gmac_aria)
(_gcry_mac_type_spec_poly1305mac_aria): New.
* cipher/mac-poly1305.c (poly1305mac_open): Add GCRY_MAC_POLY1305_ARIA.
(_gcry_mac_type_spec_poly1305mac_aria): New.
* cipher/mac.c (mac_list, mac_list_algo201, mac_list_algo401)
(mac_list_algo501): Add ARIA MAC specs.
* configure.ac (available_ciphers): Add 'aria'.
(GCRYPT_CIPHERS): Add 'aria.lo'.
(USE_ARIA): New.
* doc/gcrypt.texi: Add GCRY_CIPHER_ARIA128, GCRY_CIPHER_ARIA192,
GCRY_CIPHER_ARIA256, GCRY_MAC_CMAC_ARIA, GCRY_MAC_GMAC_ARIA and
GCRY_MAC_POLY1305_ARIA.
* src/cipher.h (_gcry_cipher_spec_aria128, _gcry_cipher_spec_aria192)
(_gcry_cipher_spec_aria256): New.
* src/gcrypt.h.in (gcry_cipher_algos): Add GCRY_CIPHER_ARIA128,
GCRY_CIPHER_ARIA192 and GCRY_CIPHER_ARIA256.
(gcry_mac_algos): Add GCRY_MAC_CMAC_ARIA, GCRY_MAC_GMAC_ARIA and
GCRY_MAC_POLY1305_ARIA.
* tests/basic.c (check_ecb_cipher, check_ctr_cipher)
(check_cfb_cipher, check_ocb_cipher) [USE_ARIA]: Add ARIA test vectors.
(check_ciphers) [USE_ARIA]: Add GCRY_CIPHER_ARIA128,
GCRY_CIPHER_ARIA192 and GCRY_CIPHER_ARIA256.
(main): Also run 'check_bulk_cipher_modes' for 'cipher_modes_only'
mode.
* tests/bench-slope.c (bench_mac_init): Add GCRY_MAC_POLY1305_ARIA
setiv handling.
* tests/benchmark.c (mac_bench): Likewise.
--
This patch adds the ARIA block cipher to libgcrypt.

The implementation is based on work by Taehee Yoo, with the following
notable changes:
- Integration into libgcrypt; use of bithelp.h and bufhelp.h helper
  functions where possible.
- Added lookup table prefetching as is done in the AES, GCM and SM4
  implementations.
- Changed `get_u8` to return `u32`, as returning `byte` caused
  sub-optimal code generation with gcc-12/x86-64 (zero extension from
  8-bit to 32-bit register, followed by an extraneous sign extension
  from 32-bit to 64-bit register).
- Changed the 'aria_crypt' loop structure a bit for a tiny performance
  increase (~1% seen with gcc-12/x86-64/zen4).

Benchmark on AMD Ryzen 9 7900X (x86-64):
ARIA128 | nanosecs/byte mebibytes/sec cycles/byte auto Mhz
ECB enc | 3.99 ns/B 239.1 MiB/s 22.43 c/B 5625
ECB dec | 4.00 ns/B 238.4 MiB/s 22.50 c/B 5625

Benchmark on AMD Ryzen 9 7900X (win32):
ARIA128 | nanosecs/byte mebibytes/sec cycles/byte auto Mhz
ECB enc | 4.57 ns/B 208.7 MiB/s 25.31 c/B 5538
ECB dec | 4.66 ns/B 204.8 MiB/s 25.39 c/B 5453

Benchmark on ARM Cortex-A53 (aarch64):
ARIA128 | nanosecs/byte mebibytes/sec cycles/byte auto Mhz
ECB enc | 74.69 ns/B 12.77 MiB/s 48.40 c/B 647.9
ECB dec | 74.99 ns/B 12.72 MiB/s 48.58 c/B 647.9

Cc: Taehee Yoo <ap420073@gmail.com>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
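The `get_u8` return-type change noted above can be illustrated with a small sketch. The function name, the big-endian byte numbering, and the exact shift expression are assumptions for illustration, not copied from aria.c; the point is only that returning a full 32-bit value lets the compiler keep the extracted byte in a 32-bit register, avoiding the extra zero-extend/sign-extend pair before it is used as a table index.

```c
#include <assert.h>
#include <stdint.h>

/* Extract byte 'y' (0 = most significant) of a 32-bit word.  Returning
   uint32_t rather than a narrow byte type avoids the extraneous
   extension instructions gcc-12/x86-64 emitted for an 8-bit return. */
static inline uint32_t
get_u8_w (uint32_t x, unsigned int y)
{
  return (x >> ((3 - y) * 8)) & 0xff;
}
```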
* sm4: add missing OCB 16-way GFNI-AVX512 pathJussi Kivilinna2023-01-041-0/+20
| | | | | | | |
* cipher/sm4.c (_gcry_sm4_ocb_crypt) [USE_GFNI_AVX512]: Add 16-way
GFNI-AVX512 handling.
--
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* bulkhelp: change bulk function definition to allow modifying contextJussi Kivilinna2023-01-045-61/+59
| | | | | | | | | | | | | | | | | | | | | | | | |
* cipher/bulkhelp.h (bulk_crypt_fn_t): Make 'ctx' non-constant and
change 'num_blks' from 'unsigned int' to 'size_t'.
* cipher/camellia-glue.c (camellia_encrypt_blk1_32)
(camellia_encrypt_blk1_64, camellia_decrypt_blk1_32)
(camellia_decrypt_blk1_64): Adjust to match 'bulk_crypt_fn_t'.
* cipher/serpent.c (serpent_crypt_blk1_16, serpent_encrypt_blk1_16)
(serpent_decrypt_blk1_16): Likewise.
* cipher/sm4.c (crypt_blk1_16_fn_t, _gcry_sm4_aesni_avx_crypt_blk1_8)
(sm4_aesni_avx_crypt_blk1_16, _gcry_sm4_aesni_avx2_crypt_blk1_16)
(sm4_aesni_avx2_crypt_blk1_16, _gcry_sm4_gfni_avx2_crypt_blk1_16)
(sm4_gfni_avx2_crypt_blk1_16, _gcry_sm4_gfni_avx512_crypt_blk1_16)
(_gcry_sm4_gfni_avx512_crypt_blk32, sm4_gfni_avx512_crypt_blk1_16)
(_gcry_sm4_aarch64_crypt_blk1_8, sm4_aarch64_crypt_blk1_16)
(_gcry_sm4_armv8_ce_crypt_blk1_8, sm4_armv8_ce_crypt_blk1_16)
(_gcry_sm4_armv9_sve_ce_crypt, sm4_armv9_sve_ce_crypt_blk1_16)
(sm4_crypt_blocks, sm4_crypt_blk1_32, sm4_encrypt_blk1_32)
(sm4_decrypt_blk1_32): Likewise.
* cipher/twofish.c (twofish_crypt_blk1_16, twofish_encrypt_blk1_16)
(twofish_decrypt_blk1_16): Likewise.
--
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
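The prototype change above can be sketched in isolation. This is an illustrative reconstruction, not the exact bulkhelp.h declaration: the two points the commit makes are that `ctx` is no longer `const` (so an implementation may update per-context state) and that `num_blks` is widened to `size_t`. The toy context, the 16-byte block size, and the burn-stack-depth return convention are assumptions.

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned char byte;

/* Sketch of the adjusted bulk function type: non-const 'ctx' and
   'num_blks' as size_t.  The return value models a burn-stack depth. */
typedef unsigned int (*bulk_crypt_fn_t) (void *ctx, byte *out,
                                         const byte *in, size_t num_blks);

/* A trivial conforming implementation: XOR 16-byte blocks with a key
   byte held in the context, counting calls through the now-writable
   context pointer. */
struct toy_ctx { byte key; size_t calls; };

static unsigned int
toy_crypt_blocks (void *vctx, byte *out, const byte *in, size_t num_blks)
{
  struct toy_ctx *ctx = vctx;
  ctx->calls++;  /* mutation permitted by the non-const prototype */
  for (size_t i = 0; i < num_blks * 16; i++)
    out[i] = in[i] ^ ctx->key;
  return 0;
}
```

Making `ctx` writable is what allows patterns like the ARIA `bulk_prefetch_ready` flag, where the bulk routine records state in the cipher context between calls.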
* Add GMAC-SM4 and Poly1305-SM4Jussi Kivilinna2023-01-0410-12/+58
| | | | | | | | | | | | | | | | | | | | | | | |
* cipher/cipher.c (cipher_list_algo301): Remove comma at the end of
last entry.
* cipher/mac-gmac.c (map_mac_algo_to_cipher): Add SM4.
(_gcry_mac_type_spec_gmac_sm4): New.
* cipher/mac-internal.h (_gcry_mac_type_spec_gmac_sm4)
(_gcry_mac_type_spec_poly1305mac_sm4): New.
* cipher/mac-poly1305.c (poly1305mac_open): Add SM4.
(_gcry_mac_type_spec_poly1305mac_sm4): New.
* cipher/mac.c (mac_list, mac_list_algo401, mac_list_algo501): Add
GMAC-SM4 and Poly1305-SM4.
(mac_list_algo101): Remove comma at the end of last entry.
* cipher/md.c (digest_list_algo301): Remove comma at the end of last
entry.
* doc/gcrypt.texi: Add GCRY_MAC_GMAC_SM4 and GCRY_MAC_POLY1305_SM4.
* src/gcrypt.h.in (GCRY_MAC_GMAC_SM4, GCRY_MAC_POLY1305_SM4): New.
* tests/bench-slope.c (bench_mac_init): Setup IV for
GCRY_MAC_POLY1305_SM4.
* tests/benchmark.c (mac_bench): Likewise.
--
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* Fix compiler warnings seen with clang-powerpc64le targetJussi Kivilinna2023-01-043-9/+12
| | | | | | | | | | | |
* cipher/rijndael-ppc-common.h (asm_sbox_be): New.
* cipher/rijndael-ppc.c (_gcry_aes_sbox4_ppc8): Use 'asm_sbox_be'
instead of 'vec_sbox_be', since this intrinsic has a different
prototype definition on GCC and Clang ('vector uchar' vs
'vector ulong long').
* cipher/sha256-ppc.c (vec_ror_u32): Remove unused function.
--
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* Add clang support for ARM 32-bit assemblyJussi Kivilinna2022-12-1415-682/+682
| | | | | | | | | | | | | | | | | | | | | | | |
* configure.ac (gcry_cv_gcc_arm_platform_as_ok)
(gcry_cv_gcc_inline_asm_neon): Remove % prefix from register names.
* cipher/cipher-gcm-armv7-neon.S (vmull_p64): Prefix constant values
with # character instead of $.
* cipher/blowfish-arm.S: Remove % prefix from all register names.
* cipher/camellia-arm.S: Likewise.
* cipher/cast5-arm.S: Likewise.
* cipher/rijndael-arm.S: Likewise.
* cipher/rijndael-armv8-aarch32-ce.S: Likewise.
* cipher/sha512-arm.S: Likewise.
* cipher/sha512-armv7-neon.S: Likewise.
* cipher/twofish-arm.S: Likewise.
* mpi/arm/mpih-add1.S: Likewise.
* mpi/arm/mpih-mul1.S: Likewise.
* mpi/arm/mpih-mul2.S: Likewise.
* mpi/arm/mpih-mul3.S: Likewise.
* mpi/arm/mpih-sub1.S: Likewise.
--
Reported-by: Dmytro Kovalov <dmytro.a.kovalov@globallogic.com>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* rijndael-ppc: fix wrong inline assembly constraintJussi Kivilinna2022-12-141-1/+1
| | | | | | | | | |
* cipher/rijndael-ppc-functions.h (CBC_ENC_FUNC): Fix outiv constraint.
--
Noticed when trying to compile with powerpc64le clang. GCC accepted
the buggy constraint without complaints.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* Fix building AVX512 Intel-syntax assembly with x86-64 clangJussi Kivilinna2022-12-143-2/+6
| | | | | | | | | | |
* cipher/asm-common-amd64.h (spec_stop_avx512_intel_syntax): New.
* cipher/poly1305-amd64-avx512.S: Use spec_stop_avx512_intel_syntax
instead of spec_stop_avx512.
* cipher/sha512-avx512-amd64.S: Likewise.
--
Reported-by: Clemens Lang <cllang@redhat.com>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* build: Fix m4 macros for strict C compiler.NIIBE Yutaka2022-12-142-2/+2
| | | | | | | | |
* m4/ax_cc_for_build.m4: Fix for no arg.
* m4/noexecstack.m4: Likewise.
--
Signed-off-by: NIIBE Yutaka <gniibe@fsij.org>
* build: Fix configure.ac for strict C99.NIIBE Yutaka2022-12-141-0/+3
| | | | | | | |
* configure.ac: More fixes for other architectures.
--
Signed-off-by: NIIBE Yutaka <gniibe@fsij.org>
* build: Fix configure.ac for strict C99.NIIBE Yutaka2022-12-131-29/+43
| | | | | | | | |
* configure.ac: Add function declarations for asm functions.
--
Suggested-by: Florian Weimer <fweimer@redhat.com>
Signed-off-by: NIIBE Yutaka <gniibe@fsij.org>
* avx512: tweak AVX512 spec stop, use common macro in assemblyJussi Kivilinna2022-12-1210-20/+44
| | | | | | | | | | | | | | | | |
* cipher/cipher-gcm-intel-pclmul.c: Use xmm registers for AVX512
spec stop.
* cipher/asm-common-amd64.h (spec_stop_avx512): New.
* cipher/blake2b-amd64-avx512.S: Use spec_stop_avx512.
* cipher/blake2s-amd64-avx512.S: Likewise.
* cipher/camellia-gfni-avx512-amd64.S: Likewise.
* cipher/chacha20-avx512-amd64.S: Likewise.
* cipher/keccak-amd64-avx512.S: Likewise.
* cipher/poly1305-amd64-avx512.S: Likewise.
* cipher/sha512-avx512-amd64.S: Likewise.
* cipher/sm4-gfni-avx512-amd64.S: Likewise.
--
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
* chacha20-avx512: add handling for any input block count and tweak 16 block ↵Jussi Kivilinna2022-12-122-55/+496
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
code a bit

* cipher/chacha20-amd64-avx512.S: Add tail handling for 8/4/2/1
blocks; Rename `_gcry_chacha20_amd64_avx512_blocks16` to
`_gcry_chacha20_amd64_avx512_blocks`; Tweak 16 parallel block
processing for small speed improvement.
* cipher/chacha20.c (_gcry_chacha20_amd64_avx512_blocks16): Rename
to ...
(_gcry_chacha20_amd64_avx512_blocks): ... this.
(chacha20_blocks) [USE_AVX512]: Add AVX512 code-path.
(do_chacha20_encrypt_stream_tail) [USE_AVX512]: Change to handle any
number of full input blocks instead of multiples of 16.
--
Patch improves performance of ChaCha20-AVX512 implementation on small
input buffer sizes (less than 64*16B = 1024B).

Following benchmarks show improvement in 16 parallel blocks processing
performance.

=== Benchmark on AMD Ryzen 9 7900X:

Before:
CHACHA20 | nanosecs/byte mebibytes/sec cycles/byte auto Mhz
STREAM enc | 0.130 ns/B 7330 MiB/s 0.716 c/B 5500
STREAM dec | 0.128 ns/B 7426 MiB/s 0.713 c/B 5555
POLY1305 enc | 0.175 ns/B 5444 MiB/s 0.964 c/B 5500
POLY1305 dec | 0.175 ns/B 5455 MiB/s 0.962 c/B 5500

After:
CHACHA20 | nanosecs/byte mebibytes/sec cycles/byte auto Mhz
STREAM enc | 0.123 ns/B 7767 MiB/s 0.691 c/B 5625
STREAM dec | 0.123 ns/B 7736 MiB/s 0.693 c/B 5625
POLY1305 enc | 0.168 ns/B 5679 MiB/s 0.945 c/B 5625
POLY1305 dec | 0.167 ns/B 5708 MiB/s 0.940 c/B 5625

=== Benchmark on Intel Core i3-1115G4:

Before:
CHACHA20 | nanosecs/byte mebibytes/sec cycles/byte auto Mhz
STREAM enc | 0.161 ns/B 5934 MiB/s 0.658 c/B 4097±3
STREAM dec | 0.160 ns/B 5951 MiB/s 0.656 c/B 4097±4
POLY1305 enc | 0.220 ns/B 4333 MiB/s 0.902 c/B 4096±3
POLY1305 dec | 0.220 ns/B 4325 MiB/s 0.903 c/B 4096±3

After:
CHACHA20 | nanosecs/byte mebibytes/sec cycles/byte auto Mhz
STREAM enc | 0.152 ns/B 6267 MiB/s 0.623 c/B 4097±3
STREAM dec | 0.152 ns/B 6287 MiB/s 0.621 c/B 4097±3
POLY1305 enc | 0.215 ns/B 4443 MiB/s 0.879 c/B 4096±3
POLY1305 dec | 0.214 ns/B 4452 MiB/s 0.878 c/B 4096±3

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
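The 8/4/2/1 tail handling described above amounts to a width dispatch over the block count. The following sketch is illustrative only (the function and its interface are hypothetical, not chacha20.c's actual code): full 16-block batches are consumed first, then the remainder falls through progressively narrower widths so any count of full blocks can be handled without a per-block scalar loop.

```c
#include <assert.h>
#include <stddef.h>

/* Split 'nblks' into the per-pass processing widths: as many 16-block
   batches as fit, then an 8/4/2/1 tail.  Records each pass width in
   'widths_out' (up to 'max_out' entries) and returns the pass count. */
static size_t
split_blocks (size_t nblks, size_t *widths_out, size_t max_out)
{
  static const size_t widths[] = { 16, 8, 4, 2, 1 };
  size_t n = 0;

  for (size_t i = 0; i < sizeof (widths) / sizeof (widths[0]); i++)
    while (nblks >= widths[i] && n < max_out)
      {
        widths_out[n++] = widths[i];
        nblks -= widths[i];
      }
  return n;
}
```

Because each tail width below 16 can occur at most once, a buffer shorter than 1024 B (16 blocks of 64 B) needs only a handful of passes, which is where the small-buffer speedup comes from.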
* doc: Minor fix up.NIIBE Yutaka2022-12-061-3/+3
| | | | | |
--
Signed-off-by: NIIBE Yutaka <gniibe@fsij.org>
* fips,rsa: Prevent usage of X9.31 keygen in FIPS mode.Jakub Jelen2022-12-063-7/+54
| | | | | | | | | | | | |
* cipher/rsa.c (rsa_generate): Do not accept use-x931 or derive-parms
in FIPS mode.
* tests/pubkey.c (get_keys_x931_new): Expect failure in FIPS mode.
(check_run): Skip checking X9.31 keys in FIPS mode.
* doc/gcrypt.texi: Document "test-parms" and clarify some cases around
the X9.31 keygen.
--
Signed-off-by: Jakub Jelen <jjelen@redhat.com>
* rsa: Prevent usage of long salt in FIPS modeJakub Jelen2022-11-303-2/+33
| | | | | | | | |
* cipher/rsa-common.c (_gcry_rsa_pss_encode): Prevent usage of large
salt lengths.
(_gcry_rsa_pss_verify): Ditto.
* tests/basic.c (check_pubkey_sign): Check that a longer salt length
fails in FIPS mode.
* tests/t-rsa-pss.c (one_test_sexp): Fix function name in error
message.
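The salt-length restriction above can be sketched as a simple predicate. One assumption to flag: the commit message only says "large salt lengths", so the bound shown here (salt no longer than the hash output, following FIPS 186-4's sLen <= hLen requirement for RSASSA-PSS) and the function name are illustrative, not copied from rsa-common.c.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative PSS salt-length policy check.  In FIPS mode, reject
   salts longer than the hash output length (assumed bound, per
   FIPS 186-4); outside FIPS mode, accept any length the PSS encoding
   itself can accommodate. */
static bool
pss_salt_len_ok (size_t salt_len, size_t hash_len, bool fips_mode)
{
  if (fips_mode && salt_len > hash_len)
    return false;
  return true;
}
```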
* random:w32: Don't emit message for diskperf when it's not useful.NIIBE Yutaka2022-11-211-2/+9
| | | | | | | |
* random/rndw32.c (slow_gatherer): Suppress emitting by log_info.
--
Signed-off-by: NIIBE Yutaka <gniibe@fsij.org>
* fips: Mark AES key wrapping as approved.Jakub Jelen2022-11-181-0/+1
| | | | | | | | | |
* src/fips.c (_gcry_fips_indicator_cipher): Add key wrapping mode as
approved.
--
GnuPG-bug-id: 5512
Signed-off-by: Jakub Jelen <jjelen@redhat.com>
* pkdf2: Add checks for FIPS.Jakub Jelen2022-11-181-0/+12
| | | | | | | | | |
* cipher/kdf.c (_gcry_kdf_pkdf2): Require a passphrase of at least
8 characters for FIPS. Set bounds for salt length and iteration count
in FIPS mode.
--
GnuPG-bug-id: 6039
Signed-off-by: Jakub Jelen <jjelen@redhat.com>
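The PBKDF2 FIPS-mode input checks above can be sketched as one predicate. The 8-character passphrase minimum comes from the commit message; the salt and iteration bounds shown (16-byte salt, 1000 iterations, following NIST SP 800-132 guidance) are assumptions, since the commit message does not state the exact values, and the function name is illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative PBKDF2 parameter checks for FIPS mode. */
static bool
pbkdf2_fips_params_ok (size_t passphrase_len, size_t salt_len,
                       unsigned long iterations)
{
  if (passphrase_len < 8)   /* from the commit: >= 8 characters */
    return false;
  if (salt_len < 16)        /* assumed bound: >= 128-bit salt */
    return false;
  if (iterations < 1000)    /* assumed bound: >= 1000 iterations */
    return false;
  return true;
}
```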
* doc: Update document for pkg-config and libgcrypt.m4.NIIBE Yutaka2022-11-151-28/+18
| | | | | |
--
Signed-off-by: NIIBE Yutaka <gniibe@fsij.org>
* build: Prefer gpgrt-config when available.NIIBE Yutaka2022-11-011-2/+2
| | | | | | | | | | | | |
* src/libgcrypt.m4: Use 'gpgrt-config libgcrypt' when gpgrt-config is
available, overriding the decision by --with-libgcrypt-prefix.
--
This may offer better migration.

GnuPG-bug-id: 5034
Signed-off-by: NIIBE Yutaka <gniibe@fsij.org>