Change stride to cache line size divided by word size, based on Yun's 32-bit word implementation.
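
As a sketch of the arithmetic, assuming a 64-byte cache line and 32-bit words (both illustrative values; the library can query the actual line size at runtime, e.g. GetCacheLineSize in cpu.h):

```cpp
#include <cstddef>

typedef unsigned int word32;  // 32-bit word, as in Yun's implementation

// Assumed cache line size for illustration; real code would query the CPU.
const std::size_t cacheLineSize = 64;

// One table entry per cache line: 64 / 4 = 16, so a loop that steps by
// 'stride' touches every cache line of a word32 table exactly once.
const std::size_t stride = cacheLineSize / sizeof(word32);
```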

The ARIA S-boxes could leak timing information. This commit applies the countermeasures present in Rijndael and Camellia to ARIA. We take a penalty of about 0.05 to 0.1 cpb, which equates to about 0 MiB/s on an ARM device and about 2 MiB/s on a modern Skylake.
We recently gained some performance through the use of SSE and NEON in ProcessAndXorBlock, so the net result is an improvement.
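
A sketch of the shape of this countermeasure, assuming it preloads the S-box tables into cache before any key-dependent lookup; the table names S1, S2, X1, X2 and the zero-filled contents are placeholders, not the library's actual data:

```cpp
#include <cstddef>

typedef unsigned int word32;

// Zero-filled placeholders standing in for the ARIA S-box tables.
static const word32 S1[256] = {0}, S2[256] = {0};
static const word32 X1[256] = {0}, X2[256] = {0};

// Touch one word per cache line of each table before any key-dependent
// lookup. Every line is then resident, so a lookup cannot reveal its
// index through a cache miss. The volatile sink keeps the compiler
// from optimizing the loop away.
inline void PreloadSBoxes(std::size_t stride)
{
    volatile word32 sink = 0;
    for (std::size_t i = 0; i < 256; i += stride)
        sink = sink + S1[i] + S2[i] + X1[i] + X2[i];
}
```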

Add CRYPTOPP_ENABLE_ARIA_SSSE3_INTRINSICS and CRYPTOPP_ENABLE_ARIA_NEON_INTRINSICS.
Tune the CRYPTOPP_ENABLE_ARIA_SSE2_INTRINSICS and CRYPTOPP_ENABLE_ARIA_SSSE3_INTRINSICS macros for older GCC and Clang. Clang needs some more tuning on Aarch64 because performance is off by about 15%.
Add additional NEON code paths.
Remove keyBits from Aarch64 code paths.
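
A hedged sketch of how enable-macros like these are typically guarded; the compiler-version cutoffs below are illustrative assumptions, not the library's actual tuning:

```cpp
// Illustrative guards only; the real config tunes the cutoffs per
// compiler and platform, and Clang on Aarch64 still needs work.
#if defined(__SSSE3__) || (defined(_MSC_VER) && _MSC_VER >= 1500)
# define CRYPTOPP_ENABLE_ARIA_SSSE3_INTRINSICS 1
#endif

#if defined(__ARM_NEON) || defined(__aarch64__)
# define CRYPTOPP_ENABLE_ARIA_NEON_INTRINSICS 1
#endif
```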

The changes were meant to improve Windows, but GCC benefited more: Windows gained 0.3 cpb, while GCC gained 1.2 cpb.

The SSSE3 intrinsics were performing aligned loads with _mm_load_si128 on user-supplied pointers. The pointers are only byte pointers, so their alignment can drop to 1 or 2. Switching to _mm_loadu_si128 sidesteps the potential problems. The crash surfaced under Win32 testing.
Switch to memcpy when performing the bulk assignment x[0]=y[0] ... x[3]=y[3]. I believe Yun used the pattern to promote vectorization. Some compilers appear to be braindead and issue integer moves one word at a time. Non-braindead compilers will still take the optimization when advantageous, and slower compilers will benefit from the bulk move. We also cherry-picked vectorization opportunities, like in ARIA_GSRK_NEON.
Remove the keyBits variable. We now use UncheckedSetKey's keylen throughout.
Also fix a typo in CRYPTOPP_BOOL_SSSE3_INTRINSICS_AVAILABLE: __SSSE3__ was listed twice.
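
A sketch of both fixes in one hypothetical function; Example, userKey, x, and y are illustrative names, not the library's:

```cpp
#include <emmintrin.h>  // SSE2: _mm_loadu_si128
#include <cstring>      // std::memcpy

typedef unsigned int word32;

void Example(const unsigned char* userKey, word32 x[4], const word32 y[4])
{
    // A user-supplied byte pointer may be aligned to only 1 or 2 bytes, so
    // _mm_load_si128 (which requires 16-byte alignment) can fault, as seen
    // under Win32 testing. _mm_loadu_si128 tolerates any alignment.
    __m128i k = _mm_loadu_si128(reinterpret_cast<const __m128i*>(userKey));
    (void)k;  // the real code would go on to use the loaded key material

    // Instead of x[0]=y[0]; x[1]=y[1]; x[2]=y[2]; x[3]=y[3];, which some
    // compilers lower to four scalar moves, a single memcpy lets a good
    // compiler vectorize the copy and gives weaker ones one bulk move.
    std::memcpy(x, y, 4 * sizeof(word32));
}
```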

Win32 and Win64 benefited from the Intel intrinsics; A32 and Aarch64 benefited from the ARM intrinsics. The intrinsics shaved 150 to 350 cycles from key setup.
The intrinsics slowed modern GCC down slightly and did not appear to affect older GCC. As such, the Intel intrinsics were only enabled for Microsoft compilers.
We were not able to improve encryption and decryption. In fact, some of the attempted macro conversions and intrinsics slowed things down considerably. For example, GCC 5.4 on x86_64 dropped from 120 MB/s to about 70 MB/s when we tried to improve the code around the Key XOR Layer (ARIA_KXL).
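
For context, a scalar sketch of what a key XOR layer like ARIA_KXL computes; the function and parameter names are placeholders, not the macro's actual form:

```cpp
typedef unsigned int word32;

// The key XOR layer folds the 128-bit round key into the state, one
// 32-bit word at a time. On GCC 5.4/x86_64, attempts to replace this
// plain form with intrinsics made throughput worse, not better.
inline void KeyXorLayer(word32 state[4], const word32 rk[4])
{
    state[0] ^= rk[0];
    state[1] ^= rk[1];
    state[2] ^= rk[2];
    state[3] ^= rk[3];
}
```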

Update documentation.

The immediate version of rotate can be 4 to 6 times faster than the register version.
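
A sketch of the distinction, in the style of the library's fixed-rotate templates; the names RotlConstant and RotlVariable are illustrative:

```cpp
typedef unsigned int word32;

// Compile-time rotation: the constant count lets the compiler emit a
// single rotate-immediate instruction (e.g. ROL r32, imm8 on x86).
template <unsigned int R>
inline word32 RotlConstant(word32 x)
{
    static_assert(R > 0 && R < 32, "rotation amount out of range");
    return word32((x << R) | (x >> (32 - R)));
}

// Run-time rotation: the count must travel through a register (CL on
// x86), which is why this form can be 4 to 6 times slower.
inline word32 RotlVariable(word32 x, unsigned int r)
{
    r &= 31;  // keep the shifts in range; also avoids UB when r == 0
    return word32((x << r) | (x >> ((32 - r) & 31)));
}
```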

The 32-bit code is based on Aaram Yun's code. Yun's code, combined with a few library-specific tweaks, improves performance to roughly that of Camellia.

This is the reference implementation, test data, and test vectors from the ARIA.zip package on the KISA website. The website is located at http://seed.kisa.or.kr/iwt/ko/bbs/EgovReferenceList.do?bbsId=BBSMSTR_000000000002.
We have optimized routines that improve key setup and bulk encryption performance, but they are not being checked in at the moment. The ARIA team is updating its implementation for contemporary hardware, and we would like to use it as a starting point before we wander too far from the KISA implementation.