author    NoodleDoodleNoodleDoodleNoodleDoodleNoo <xeven77@outlook.com>  2015-07-10 13:09:32 -0700
committer NoodleDoodleNoodleDoodleNoodleDoodleNoo <xeven77@outlook.com>  2015-07-15 23:20:16 -0700
commit    e5d2680094ee15889934fe28901e4e133cda56f2 (patch)
tree      c96ac8800d3a17a9c7b50fbe0b0ef2ced8c7ff0b /src/crypto
parent    Update blockchain.cpp (diff)
download  monero-e5d2680094ee15889934fe28901e4e133cda56f2.tar.xz
** CHANGES ARE EXPERIMENTAL (FOR TESTING ONLY)
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks (a minimal threading sketch follows after this message).
2. Optim: Cache verified txs and return the result from the cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk-querying the database, and multi-thread when possible.
4. Optim: Disable the double-spend check on block verification; a double spend is already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on functions called from check_tx_inputs, which causes slowdowns (only seems to happen on ubuntu/VMs??? Reason: TBD).
7. Optim: Removed looped full-tx hash computation when retrieving transactions from the pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block are needed when a new block arrives (instead of 1470 reads).

Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc.).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient-locks error when running a full sync.
5. Patch: Incorrect db stats when returning from an immediate exit of the "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified the output_keys table to store public_key+unlock_time+height for a single lookup per transaction (vs 3).
8. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see the --db-sync-mode option for details).
11. Mod: Added a checkpoint thread and an auto-remove-logs option.
12. *Now usable on 32-bit systems like the RPI2.

LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up; TBD: measure the actual effect).
2. Optim: Modified the output_keys table to store public_key+unlock_time+height for a single lookup per transaction (vs 3).
3. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
4. Optim: Added support for sync/writemap options for improved performance (*see the --db-sync-mode option for details).
5. Mod: Auto-resize by +1GB instead of the x1.5 multiplier.

ETC:
1. Minor optimizations of slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing the next difficulty on large blocks.

[PENDING ISSUES]
1. Berkeley db has a very slow "pop-block" operation. This is very noticeable on the RPI2, where it sometimes takes > 10 MINUTES to pop a block during reorganization. It does not happen very often, however; most reorgs seem to take a few seconds, but it possibly depends on the number of outputs present. TBD.
2. Berkeley db: possible bug "unable to allocate memory". TBD.

[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
   a. 0 = Compute the long hash per block (may take a while depending on CPU).
   b. 1 = Skip the long hash and verify blocks based on embedded known-good block hashes (faster, minimal CPU dependence).
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
   a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but the safest option to protect against power-out/crash conditions.
   b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery-operated devices or STABLE systems with a UPS and/or systems with a battery-backed write cache/solid-state cache.
      Fast - Write meta-data but defer the data flush.
      Fastest - Defer both meta-data and data flush.
      Sync - Flush data after nblocks_per_sync and wait.
      Async - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or the system max threads, whichever is lower)
   Max number of threads to use when computing long hashes in groups.
4. --show-time-stats arg=[0:1] (default: 1)
   Show benchmark-related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
   For berkeley-db only. Auto-remove logs if enabled.

**Note: lmdb and berkeley-db have changes to the tables and are not compatible with the official git head version. At the moment a full resync is needed to use this optimized version.

[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K + SSD (with full PoW computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with the in-memory db can process blocks up to 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time and usually takes 36 hours to sync the full chain.

Average processing times (with full PoW computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437fd41a5498ee5e74e2422ea6177aa3e):
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block

**Note: The following data denotes processing times only (it does not include p2p download time).

lmdb-optimized processing times (with full PoW computation):
1. Desktop, quad-core / 8-thread 2600K (8MB cache) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, dual-core / 4-thread U4200 (3MB cache) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, quad-core / 4-thread Z3735F (2x1MB cache) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).

lmdb-optimized processing times (with per-block checkpoint):
1. Desktop, quad-core / 8-thread 2600K (8MB cache) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with full PoW computation):
1. Desktop, quad-core / 8-thread 2600K (8MB cache) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2: improved from an estimated 3 months(???) to 2.5 days (*needs a 2A supply + 1GHz clock + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with per-block checkpoint):
1. RPI2: 12-15 hours (*needs a 2A supply + 1GHz clock + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
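As an aside, here is a minimal sketch of how a group of blocks could be long-hashed in parallel, in the spirit of the --prep-blocks-threads option above. The block_blob_t/hash_job_t types, the hash_block_group helper, and the strided work split are illustrative assumptions for this note, not code from this patch:

/* Illustrative only: parallel long-hash of a group of blocks using pthreads.
 * Assumes the existing cn_slow_hash() and a 32-byte hash output. */
#include <pthread.h>
#include <stddef.h>

extern void cn_slow_hash(const void *data, size_t length, char *hash);

typedef struct {
    const void *data;   /* serialized block blob (hypothetical container) */
    size_t len;
    char hash[32];      /* long hash written here */
} block_blob_t;

typedef struct {
    block_blob_t *blocks;
    size_t count;
    size_t stride;      /* total number of workers */
    size_t offset;      /* this worker's starting index */
} hash_job_t;

static void *hash_worker(void *arg)
{
    hash_job_t *job = (hash_job_t *)arg;
    /* Each worker strides through the group so the work is spread evenly. */
    for (size_t i = job->offset; i < job->count; i += job->stride)
        cn_slow_hash(job->blocks[i].data, job->blocks[i].len, job->blocks[i].hash);
    return NULL;
}

/* Hash a group of blocks with up to `threads` workers (capped at 16 here). */
static void hash_block_group(block_blob_t *blocks, size_t count, size_t threads)
{
    pthread_t tids[16];
    hash_job_t jobs[16];
    if (threads > 16)
        threads = 16;
    for (size_t t = 0; t < threads; t++) {
        jobs[t] = (hash_job_t){ blocks, count, threads, t };
        pthread_create(&tids[t], NULL, hash_worker, &jobs[t]);
    }
    for (size_t t = 0; t < threads; t++)
        pthread_join(&tids[t], NULL);
}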
Diffstat (limited to 'src/crypto')
-rw-r--r--  src/crypto/aesb.c        12
-rw-r--r--  src/crypto/slow-hash.c  190
2 files changed, 200 insertions, 2 deletions
diff --git a/src/crypto/aesb.c b/src/crypto/aesb.c
index ba48313da..128c523ea 100644
--- a/src/crypto/aesb.c
+++ b/src/crypto/aesb.c
@@ -140,7 +140,15 @@ extern "C"
d_4(uint32_t, t_dec(f,n), sb_data, u0, u1, u2, u3);
-void aesb_single_round(const uint8_t *in, uint8_t *out, uint8_t *expandedKey)
+#if !defined(STATIC)
+#define STATIC
+#endif
+
+#if !defined(INLINE)
+#define INLINE
+#endif
+
+STATIC INLINE void aesb_single_round(const uint8_t *in, uint8_t *out, uint8_t *expandedKey)
{
uint32_t b0[4], b1[4];
const uint32_t *kp = (uint32_t *) expandedKey;
@@ -151,7 +159,7 @@ void aesb_single_round(const uint8_t *in, uint8_t *out, uint8_t *expandedKey)
state_out(out, b1);
}
-void aesb_pseudo_round(const uint8_t *in, uint8_t *out, uint8_t *expandedKey)
+STATIC INLINE void aesb_pseudo_round(const uint8_t *in, uint8_t *out, uint8_t *expandedKey)
{
uint32_t b0[4], b1[4];
const uint32_t *kp = (uint32_t *) expandedKey;
diff --git a/src/crypto/slow-hash.c b/src/crypto/slow-hash.c
index 425737984..60699d4e6 100644
--- a/src/crypto/slow-hash.c
+++ b/src/crypto/slow-hash.c
@@ -624,6 +624,196 @@ void cn_slow_hash(const void *data, size_t length, char *hash)
extra_hashes[state.hs.b[0] & 3](&state, 200, hash);
}
+#elif defined(__arm__)
+// ND: Some minor optimizations for ARMv7 (Raspberry Pi 2); the effect seems to be ~40-50% faster.
+// Needs more work.
+void slow_hash_allocate_state(void)
+{
+ // Do nothing, this is just to maintain compatibility with the upgraded slow-hash.c
+ return;
+}
+
+void slow_hash_free_state(void)
+{
+ // As above
+ return;
+}
+
+static void (*const extra_hashes[4])(const void *, size_t, char *) = {
+ hash_extra_blake, hash_extra_groestl, hash_extra_jh, hash_extra_skein
+};
+
+#define MEMORY (1 << 21) /* 2 MiB */
+#define ITER (1 << 20)
+#define AES_BLOCK_SIZE 16
+#define AES_KEY_SIZE 32 /*16*/
+#define INIT_SIZE_BLK 8
+#define INIT_SIZE_BYTE (INIT_SIZE_BLK * AES_BLOCK_SIZE)
+
+#if defined(__GNUC__)
+#define RDATA_ALIGN16 __attribute__ ((aligned(16)))
+#define STATIC static
+#define INLINE inline
+#else
+#define RDATA_ALIGN16
+#define STATIC static
+#define INLINE
+#endif
+
+#define U64(x) ((uint64_t *) (x))
+
+#include "aesb.c"
+
+STATIC INLINE void ___mul128(uint32_t *a, uint32_t *b, uint32_t *h, uint32_t *l)
+{
+ // ND: 64x64 multiplication for ARM7
+ __asm__ __volatile__
+ (
+ // lo hi
+ "umull %[r0], %[r1], %[b], %[d]\n\t" // bd [r0 = bd.lo]
+ "umull %[r2], %[r3], %[b], %[c]\n\t" // bc
+ "umull %[b], %[c], %[a], %[c]\n\t" // ac
+ "adds %[r1], %[r1], %[r2]\n\t" // r1 = bd.hi + bc.lo
+ "adcs %[r2], %[r3], %[b]\n\t" // r2 = ac.lo + bc.hi + carry
+ "adc %[r3], %[c], #0\n\t" // r3 = ac.hi + carry
+ "umull %[b], %[a], %[a], %[d]\n\t" // ad
+ "adds %[r1], %[r1], %[b]\n\t" // r1 = bd.hi + bc.lo + ad.lo
+ "adcs %[r2], %[r2], %[a]\n\t" // r2 = ac.lo + bc.hi + ad.hi + carry
+ "adc %[r3], %[r3], #0\n\t" // r3 = ac.hi + carry
+ : [r0]"=&r"(l[0]), [r1]"=&r"(l[1]), [r2]"=&r"(h[0]), [r3]"=&r"(h[1])
+ : [a]"r"(a[1]), [b]"r"(a[0]), [c]"r"(b[1]), [d]"r"(b[0])
+ : "cc"
+ );
+}
+
+STATIC INLINE void mul(const uint8_t* a, const uint8_t* b, uint8_t* res)
+{
+ ___mul128((uint32_t *) a, (uint32_t *) b, (uint32_t *) (res + 0), (uint32_t *) (res + 8));
+}
+
+STATIC INLINE void sum_half_blocks(uint8_t* a, const uint8_t* b)
+{
+ uint64_t a0, a1, b0, b1;
+ a0 = U64(a)[0];
+ a1 = U64(a)[1];
+ b0 = U64(b)[0];
+ b1 = U64(b)[1];
+ a0 += b0;
+ a1 += b1;
+ U64(a)[0] = a0;
+ U64(a)[1] = a1;
+}
+
+STATIC INLINE void swap_blocks(uint8_t *a, uint8_t *b)
+{
+ uint64_t t[2];
+ U64(t)[0] = U64(a)[0];
+ U64(t)[1] = U64(a)[1];
+ U64(a)[0] = U64(b)[0];
+ U64(a)[1] = U64(b)[1];
+ U64(b)[0] = U64(t)[0];
+ U64(b)[1] = U64(t)[1];
+}
+
+STATIC INLINE void xor_blocks(uint8_t* a, const uint8_t* b)
+{
+ U64(a)[0] ^= U64(b)[0];
+ U64(a)[1] ^= U64(b)[1];
+}
+
+#pragma pack(push, 1)
+union cn_slow_hash_state
+{
+ union hash_state hs;
+ struct
+ {
+ uint8_t k[64];
+ uint8_t init[INIT_SIZE_BYTE];
+ };
+};
+#pragma pack(pop)
+
+void cn_slow_hash(const void *data, size_t length, char *hash)
+{
+ uint8_t long_state[MEMORY];
+ uint8_t text[INIT_SIZE_BYTE];
+ uint8_t a[AES_BLOCK_SIZE];
+ uint8_t b[AES_BLOCK_SIZE];
+ uint8_t d[AES_BLOCK_SIZE];
+ uint8_t aes_key[AES_KEY_SIZE];
+ RDATA_ALIGN16 uint8_t expandedKey[256];
+
+ union cn_slow_hash_state state;
+
+ size_t i, j;
+ uint8_t *p = NULL;
+ oaes_ctx *aes_ctx;
+ static void (*const extra_hashes[4])(const void *, size_t, char *) =
+ {
+ hash_extra_blake, hash_extra_groestl, hash_extra_jh, hash_extra_skein
+ };
+
+ hash_process(&state.hs, data, length);
+ memcpy(text, state.init, INIT_SIZE_BYTE);
+
+ aes_ctx = (oaes_ctx *) oaes_alloc();
+ oaes_key_import_data(aes_ctx, state.hs.b, AES_KEY_SIZE);
+
+ // use aligned data
+ memcpy(expandedKey, aes_ctx->key->exp_data, aes_ctx->key->exp_data_len);
+ for(i = 0; i < MEMORY / INIT_SIZE_BYTE; i++)
+ {
+ for(j = 0; j < INIT_SIZE_BLK; j++)
+ aesb_pseudo_round(&text[AES_BLOCK_SIZE * j], &text[AES_BLOCK_SIZE * j], expandedKey);
+ memcpy(&long_state[i * INIT_SIZE_BYTE], text, INIT_SIZE_BYTE);
+ }
+
+ U64(a)[0] = U64(&state.k[0])[0] ^ U64(&state.k[32])[0];
+ U64(a)[1] = U64(&state.k[0])[1] ^ U64(&state.k[32])[1];
+ U64(b)[0] = U64(&state.k[16])[0] ^ U64(&state.k[48])[0];
+ U64(b)[1] = U64(&state.k[16])[1] ^ U64(&state.k[48])[1];
+
+ for(i = 0; i < ITER / 2; i++)
+ {
+ #define MASK ((uint32_t)(((MEMORY / AES_BLOCK_SIZE) - 1) << 4))
+ #define state_index(x) ((*(uint32_t *) x) & MASK)
+
+ // Iteration 1
+ p = &long_state[state_index(a)];
+ aesb_single_round(p, p, a);
+
+ xor_blocks(b, p);
+ swap_blocks(b, p);
+ swap_blocks(a, b);
+
+ // Iteration 2
+ p = &long_state[state_index(a)];
+
+ mul(a, p, d);
+ sum_half_blocks(b, d);
+ swap_blocks(b, p);
+ xor_blocks(b, p);
+ swap_blocks(a, b);
+ }
+
+ memcpy(text, state.init, INIT_SIZE_BYTE);
+ oaes_key_import_data(aes_ctx, &state.hs.b[32], AES_KEY_SIZE);
+ memcpy(expandedKey, aes_ctx->key->exp_data, aes_ctx->key->exp_data_len);
+ for(i = 0; i < MEMORY / INIT_SIZE_BYTE; i++)
+ {
+ for(j = 0; j < INIT_SIZE_BLK; j++)
+ {
+ xor_blocks(&text[j * AES_BLOCK_SIZE], &long_state[i * INIT_SIZE_BYTE + j * AES_BLOCK_SIZE]);
+ aesb_pseudo_round(&text[AES_BLOCK_SIZE * j], &text[AES_BLOCK_SIZE * j], expandedKey);
+ }
+ }
+
+ oaes_free((OAES_CTX **) &aes_ctx);
+ memcpy(state.init, text, INIT_SIZE_BYTE);
+ hash_permutation(&state.hs);
+ extra_hashes[state.hs.b[0] & 3](&state, 200, hash);
+}
+
#else
// Portable implementation as a fallback
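For reference, a minimal portable C sketch of the same 64x64 -> 128-bit multiplication that the umull-based ___mul128 routine above performs. The function name and the uint64_t-based interface are illustrative assumptions, not part of the patch:

/* Illustrative only: portable equivalent of ___mul128 built from 32-bit
 * partial products, so no native 128-bit type is required. */
#include <stdint.h>

static void mul128_portable(uint64_t a, uint64_t b, uint64_t *lo, uint64_t *hi)
{
    uint64_t a_lo = (uint32_t)a, a_hi = a >> 32;
    uint64_t b_lo = (uint32_t)b, b_hi = b >> 32;

    uint64_t p0 = a_lo * b_lo;  /* bits   0..63  */
    uint64_t p1 = a_lo * b_hi;  /* bits  32..95  */
    uint64_t p2 = a_hi * b_lo;  /* bits  32..95  */
    uint64_t p3 = a_hi * b_hi;  /* bits  64..127 */

    /* Sum the middle partial products together with the carry out of p0. */
    uint64_t mid = (p0 >> 32) + (uint32_t)p1 + (uint32_t)p2;

    *lo = (mid << 32) | (uint32_t)p0;
    *hi = p3 + (p1 >> 32) + (p2 >> 32) + (mid >> 32);
}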