path: root/aria.cpp
Commit message | Author | Age | Files | Lines
* Breakout and cleanup macros. Add CRYPTOPP_ENABLE_ARIA_SSE2_INTRINSICS, CRYPTOPP_ENABLE_ARIA_SSSE3_INTRINSICS and CRYPTOPP_ENABLE_ARIA_NEON_INTRINSICS. | Jeffrey Walton | 2017-04-13 | 1 | -57/+138
      Tune the CRYPTOPP_ENABLE_ARIA_SSE2_INTRINSICS and CRYPTOPP_ENABLE_ARIA_SSSE3_INTRINSICS macros for older GCC and Clang. Clang needs some more tuning on Aarch64 because performance is off by about 15%. Add additional NEON code paths. Remove keyBits from the Aarch64 code paths.
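      A minimal sketch of the guard-macro pattern this commit describes. The macro names come from the commit; the compiler and feature tests shown are illustrative assumptions, not the library's actual cutoffs:

```cpp
// Hedged sketch: the version/feature tests are illustrative, not Crypto++'s
// actual ones. The point is that each ISA path gets its own enable macro.
#if defined(_MSC_VER) || defined(__SSE2__)
# define CRYPTOPP_ENABLE_ARIA_SSE2_INTRINSICS 1
#endif

// Older GCC and Clang may define __SSSE3__ yet still need extra tuning,
// hence a stricter gate for the SSSE3 path.
#if defined(_MSC_VER) || (defined(__SSSE3__) && \
    ((defined(__GNUC__) && __GNUC__ >= 5) || defined(__clang__)))
# define CRYPTOPP_ENABLE_ARIA_SSSE3_INTRINSICS 1
#endif

#if defined(__ARM_NEON) || defined(__aarch64__)
# define CRYPTOPP_ENABLE_ARIA_NEON_INTRINSICS 1
#endif

// Code paths then select an implementation at compile time:
//   #if CRYPTOPP_ENABLE_ARIA_SSSE3_INTRINSICS ... SSSE3 path
//   #elif CRYPTOPP_ENABLE_ARIA_SSE2_INTRINSICS ... SSE2 path
//   #else ... portable C++ path
//   #endif
```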
* Improve x86 and x64 ARIA performance | Jeffrey Walton | 2017-04-13 | 1 | -48/+89
      The changes were meant to improve Windows, but GCC benefited more. Windows gained 0.3 cpb, while GCC gained 1.2 cpb.
* Fix unaligned pointer crash on Win32 due to _mm_load_si128 | Jeffrey Walton | 2017-04-13 | 1 | -38/+70
      The SSSE3 intrinsics were performing aligned loads with _mm_load_si128 on user-supplied pointers. The pointers are byte pointers, so their alignment can drop to 1 or 2 bytes. Switching to _mm_loadu_si128 sidesteps the potential problems. The crash surfaced under Win32 testing.
      Switch to memcpy when performing the bulk assignment x[0]=y[0] ... x[3]=y[3]. I believe Yun used the pattern to promote vectorization. Some compilers appear to be braindead and issue integer moves one word at a time. Non-braindead compilers will still take the optimization when advantageous, and slower compilers will benefit from the bulk move. We also cherry-picked vectorization opportunities, like in ARIA_GSRK_NEON.
      Remove the keyBits variable. We now use UncheckedSetKey's keylen throughout. Also fix a typo in CRYPTOPP_BOOL_SSSE3_INTRINSICS_AVAILABLE: __SSSE3__ was listed twice.
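      A short sketch of the two fixes, with illustrative function names (not the library's actual helpers):

```cpp
#include <emmintrin.h>   // SSE2: _mm_load_si128, _mm_loadu_si128
#include <cstdint>
#include <cstring>

// Sketch of the load fix. The user's data arrives as a byte pointer, so it
// may be aligned to only 1 or 2 bytes. _mm_load_si128 requires 16-byte
// alignment and can fault on a misaligned pointer (the Win32 crash);
// _mm_loadu_si128 has no alignment requirement.
inline __m128i LoadBlock(const uint8_t* userData)
{
    // Before: _mm_load_si128(reinterpret_cast<const __m128i*>(userData));
    return _mm_loadu_si128(reinterpret_cast<const __m128i*>(userData));
}

// Bulk assignment via memcpy rather than x[0]=y[0] ... x[3]=y[3].
// A vectorizing compiler turns this into one 16-byte move; a weaker one
// still emits a bulk copy instead of four word-at-a-time integer moves.
inline void CopyWords4(uint32_t x[4], const uint32_t y[4])
{
    std::memcpy(x, y, 4 * sizeof(uint32_t));
}
```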
* Add Intel and ARM intrinsics | Jeffrey Walton | 2017-04-12 | 1 | -77/+187
      Win32 and Win64 benefited from the Intel intrinsics. A32 and Aarch64 benefited from the ARM intrinsics. The intrinsics shaved 150 to 350 cycles from key setup. The intrinsics slowed modern GCC down a bit, and did not appear to affect old GCC. As such, Intel intrinsics were only enabled for Microsoft compilers.
      We were not able to improve encryption and decryption. In fact, some of the attempted macro conversions and intrinsics slowed things down considerably. For example, GCC 5.4 on x86_64 went from 120 MB/s to about 70 MB/s when we tried to improve the code around the Key XOR Layer (ARIA_KXL).
* Add NEON intrinsics for ARIA_GSRK_NEON | Jeffrey Walton | 2017-04-12 | 1 | -32/+87
      Update documentation.
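      A hedged sketch of the rotate-and-XOR that a GSRK helper expresses; this is not the library's actual ARIA_GSRK_NEON. It assumes lane 0 holds the most significant 32-bit word of the 128-bit value and that N % 32 is nonzero, which keeps both shift immediates in range:

```cpp
#include <arm_neon.h>

// Sketch only, not Crypto++'s actual code. ARIA's GSRK step computes
// X XOR (Y >>> N), where >>> rotates the whole 128-bit value. Assumptions:
// lane 0 is the most significant word, and R = N % 32 is in 1..31.
template <unsigned int N>
inline uint32x4_t GSRK(const uint32x4_t X, const uint32x4_t Y)
{
    constexpr unsigned int Q = (N / 32) % 4;  // whole-word part of the rotation
    constexpr unsigned int R = N % 32;        // bit part, assumed 1..31

    // Rotate lanes right by Q words: lane i takes Y[(i + 4 - Q) % 4].
    const uint32x4_t y = vextq_u32(Y, Y, (4 - Q) % 4);
    // prev[i] = y[(i + 3) % 4], the word whose low bits shift into lane i.
    const uint32x4_t prev = vextq_u32(y, y, 3);

    return veorq_u32(X, vorrq_u32(vshrq_n_u32(y, R),
                                  vshlq_n_u32(prev, 32 - R)));
}
```

      Because ARIA's key schedule uses fixed rotation amounts (19 and 31 among them), N can be a template parameter, so every shift count above is a compile-time immediate.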
* Rework ARIA_GSRK to have MSVC generate "rotate imm" rather than "rot reg" | Jeffrey Walton | 2017-04-11 | 1 | -48/+64
      The immediate version of rotate can be 4 to 6 times faster than the register version.
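      A minimal sketch of the idea, with illustrative names: a compile-time rotation count lets MSVC encode the count directly in the instruction ("rotate imm"), while a runtime count must be materialized in a register ("rot reg"):

```cpp
#include <cstdint>

// Compile-time count: the count is an immediate in the instruction stream,
// so MSVC can emit "ror/rol imm".
template <unsigned int R>
inline uint32_t RotateRightConst(uint32_t x)
{
    static_assert(R > 0 && R < 32, "rotate count out of range");
    return (x >> R) | (x << (32 - R));
}

// Runtime count: the count lives in a register, so the compiler emits
// "ror/rol reg", which the commit measured at 4 to 6 times slower.
inline uint32_t RotateRightVar(uint32_t x, unsigned int r)
{
    // Caller must keep 0 < r < 32; the shifts are undefined otherwise.
    return (x >> r) | (x << (32 - r));
}
```

      An ARIA_GSRK-style macro can then pass its constant rotation amount through the template parameter so every expansion rotates by an immediate.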
* Additional library integration for ARIA | Jeffrey Walton | 2017-04-11 | 1 | -79/+80
* Switch to code based on 32-bit implementation | Jeffrey Walton | 2017-04-11 | 1 | -205/+383
      The 32-bit code is based on Aaram Yun's code. Combined with a few library-specific tweaks, it improves performance to roughly that of Camellia.
* Add ARIA block cipher | Jeffrey Walton | 2017-04-10 | 1 | -0/+270
      This is the reference implementation, test data and test vectors from the ARIA.zip package on the KISA website. The website is located at http://seed.kisa.or.kr/iwt/ko/bbs/EgovReferenceList.do?bbsId=BBSMSTR_000000000002.
      We have optimized routines that improve Key Setup and Bulk Encryption performance, but they are not being checked in at the moment. The ARIA team is updating its implementation for contemporary hardware, and we would like to use it as a starting point before we wander too far away from the KISA implementation.