path: root/src/cryptonote_core/blockchain.h
author    NoodleDoodleNoodleDoodleNoodleDoodleNoo <xeven77@outlook.com>  2015-07-10 13:09:32 -0700
committer NoodleDoodleNoodleDoodleNoodleDoodleNoo <xeven77@outlook.com>  2015-07-15 23:20:16 -0700
commit    e5d2680094ee15889934fe28901e4e133cda56f2 (patch)
tree      c96ac8800d3a17a9c7b50fbe0b0ef2ced8c7ff0b /src/cryptonote_core/blockchain.h
parent    Update blockchain.cpp (diff)
download  monero-e5d2680094ee15889934fe28901e4e133cda56f2.tar.xz
** CHANGES ARE EXPERIMENTAL (FOR TESTING ONLY)

Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
2. Optim: Cache verified txs and return the result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk querying the database, and multi-thread when possible.
4. Optim: Disable double-spend check on block verification; double spends are already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on functions called from check_tx_inputs, which causes slowdowns (only seems to happen on Ubuntu/VMs??? Reason: TBD).
7. Optim: Removed looped full-tx hash computation when retrieving transactions from the pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block are needed when a new block arrives (instead of 1470 reads).

Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient-locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from the "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height for a single transaction lookup (vs 3).
8. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details).
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like the RPI2.

LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect).
2. Optim: Modified output_keys table to store public_key+unlock_time+height for a single transaction lookup (vs 3).
3. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details).
5. Mod: Auto resize by +1GB instead of multiplier x1.5.

ETC:
1. Minor optimizations of slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.

[PENDING ISSUES]
1. Berkeley db has a very slow "pop-block" operation. This is very noticeable on the RPI2, as it sometimes takes > 10 MINUTES to pop a block during reorganization. This does not happen very often, however; most reorgs seem to take a few seconds, but it possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.

[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
   a. 0 = Compute long hash per block (may take a while depending on CPU).
   b. 1 = Skip long-hash and verify blocks based on embedded known-good block hashes (faster, minimal CPU dependence).
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
   a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but the safest option to protect against power-out/crash conditions.
   b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery-operated devices or STABLE systems with a UPS and/or systems with battery-backed write cache/solid-state cache.
      Fast - Write meta-data but defer data flush.
      Fastest - Defer meta-data and data flush.
      Sync - Flush data after nblocks_per_sync and wait.
      Async - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
   Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
   Show benchmark-related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
   For berkeley-db only. Auto remove logs if enabled.

**Note: lmdb and berkeley-db have changes to the tables and are not compatible with the official git head version. At the moment, you need a full resync to use this optimized version.

[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K+SSD (with full pow computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with memory can process blocks up to 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time and usually takes 36 hours to sync the full chain.

Average processing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437fd41a5498ee5e74e2422ea6177aa3e):
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block

**Note: The following data denotes processing times only (does not include p2p download time).

lmdb-optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, Dual-core / 4-threads U4200 (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).

lmdb-optimized processing times (with per-block-checkpoint):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2. Improved from an estimated 3 months(???) to 2.5 days (*Need 2A supply + Clock:1GHz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with per-block-checkpoint):
1. RPI2. 12-15 hours (*Need 2A supply + Clock:1GHz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
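
The multi-thread long-hash optimization (Blockchain item 1, backed by the block_longhash_worker() declaration in the diff below) amounts to splitting a group of incoming blocks across worker threads, each filling its own slice of a shared result buffer so no locking is needed. A minimal sketch follows; compute_longhash() and the block/hash stand-in types are hypothetical placeholders, not the actual Monero code.

    #include <algorithm>
    #include <cstddef>
    #include <functional>
    #include <string>
    #include <thread>
    #include <vector>

    // Hypothetical stand-ins for cryptonote::block / crypto::hash.
    struct block { std::string blob; };
    using hash = std::string;

    // Hypothetical PoW function standing in for the real long-hash computation.
    static hash compute_longhash(const block& b) { return "pow:" + b.blob; }

    // Each worker writes only into its own disjoint range of the pre-sized output vector.
    static void longhash_worker(const std::vector<block>& blocks,
                                size_t begin, size_t end,
                                std::vector<hash>& out)
    {
      for (size_t i = begin; i < end; ++i)
        out[i] = compute_longhash(blocks[i]);
    }

    std::vector<hash> compute_batch_longhashes(const std::vector<block>& blocks, size_t max_threads)
    {
      std::vector<hash> results(blocks.size());
      const size_t nthreads = std::min(max_threads, blocks.size() ? blocks.size() : 1);
      const size_t chunk = (blocks.size() + nthreads - 1) / nthreads;

      std::vector<std::thread> pool;
      for (size_t t = 0; t < nthreads; ++t)
      {
        const size_t begin = t * chunk;
        const size_t end   = std::min(begin + chunk, blocks.size());
        if (begin >= end) break;
        pool.emplace_back(longhash_worker, std::cref(blocks), begin, end, std::ref(results));
      }
      for (auto& th : pool)
        th.join();                 // all hashes are available once every worker has joined
      return results;
    }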
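
Blockchain item 2 (and the m_check_tx_inputs_table member added below) caches the outcome of expensive input checks by transaction hash so a tx seen again is not re-verified. A minimal sketch of that idea, assuming a hypothetical tx_check_cache wrapper rather than the real member layout:

    #include <cstdint>
    #include <string>
    #include <unordered_map>
    #include <utility>

    using tx_hash = std::string;                     // stand-in for crypto::hash
    using check_result = std::pair<bool, uint64_t>;  // (valid?, max used block height)

    class tx_check_cache
    {
    public:
      // Returns the cached verdict, running the expensive checker only on a miss.
      template <typename Checker>
      check_result check(const tx_hash& id, Checker&& expensive_check)
      {
        auto it = m_table.find(id);
        if (it != m_table.end())
          return it->second;                 // cache hit: skip re-verification
        check_result r = expensive_check();  // cache miss: verify and remember
        m_table.emplace(id, r);
        return r;
      }

    private:
      std::unordered_map<tx_hash, check_result> m_table;
    };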
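
Blockchain item 3 preloads output keys for a whole group of blocks by sorting the referenced (amount, global index) pairs before bulk-querying the database; output_scan_worker() in the header below then services per-thread chunks of those lookups. A sketch of the sort-and-group step, with hypothetical names:

    #include <algorithm>
    #include <cstdint>
    #include <map>
    #include <utility>
    #include <vector>

    using output_ref = std::pair<uint64_t, uint64_t>;   // (amount, global output index)

    // Groups sorted, de-duplicated global indices by amount, ready for one bulk query per amount.
    std::map<uint64_t, std::vector<uint64_t>> group_for_bulk_query(std::vector<output_ref> refs)
    {
      std::sort(refs.begin(), refs.end());              // sort by amount, then by global index
      refs.erase(std::unique(refs.begin(), refs.end()), refs.end());

      std::map<uint64_t, std::vector<uint64_t>> grouped;
      for (const auto& r : refs)
        grouped[r.first].push_back(r.second);
      return grouped;
    }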
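
Blockchain item 8 keeps a rolling window of the last 735 timestamps and cumulative difficulties in memory, so only the newest block's two values have to be read from the database per arrival instead of re-reading the whole window. A sketch of such a rolling cache, assuming a hypothetical difficulty_cache type (the real code stores the window in the m_timestamps / m_difficulties members added below):

    #include <cstddef>
    #include <cstdint>
    #include <deque>

    struct difficulty_cache
    {
      static constexpr size_t WINDOW = 735;   // matches the figure quoted in the commit message

      std::deque<uint64_t> timestamps;
      std::deque<uint64_t> difficulties;      // cumulative difficulties in the real code

      // Called once per new block with the two values read from the database.
      void push(uint64_t timestamp, uint64_t cumulative_difficulty)
      {
        timestamps.push_back(timestamp);
        difficulties.push_back(cumulative_difficulty);
        if (timestamps.size() > WINDOW)       // keep only the most recent WINDOW entries
        {
          timestamps.pop_front();
          difficulties.pop_front();
        }
      }
    };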
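
The --db-sync-mode option packs three fields into one colon-separated argument. A sketch of how a value such as "fastest:async:1000" could be split; the db_sync_options struct and parse_db_sync_mode() are illustrative only and do not mirror the daemon's actual option handling:

    #include <cstdint>
    #include <sstream>
    #include <stdexcept>
    #include <string>

    struct db_sync_options
    {
      std::string durability;       // safe | fast | fastest
      std::string flush_behaviour;  // sync | async
      uint64_t nblocks_per_sync;    // how many blocks between flushes
    };

    db_sync_options parse_db_sync_mode(const std::string& arg)
    {
      std::istringstream in(arg);
      db_sync_options opt{"fastest", "async", 1000};   // defaults quoted in the commit message
      std::string field;

      if (std::getline(in, field, ':') && !field.empty()) opt.durability = field;
      if (std::getline(in, field, ':') && !field.empty()) opt.flush_behaviour = field;
      if (std::getline(in, field, ':') && !field.empty()) opt.nblocks_per_sync = std::stoull(field);

      if (opt.durability != "safe" && opt.durability != "fast" && opt.durability != "fastest")
        throw std::invalid_argument("unknown durability mode: " + opt.durability);
      return opt;
    }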
Diffstat (limited to 'src/cryptonote_core/blockchain.h')
-rw-r--r--  src/cryptonote_core/blockchain.h  64
1 files changed, 56 insertions, 8 deletions
diff --git a/src/cryptonote_core/blockchain.h b/src/cryptonote_core/blockchain.h
index 869b0b4b6..c3d01c65a 100644
--- a/src/cryptonote_core/blockchain.h
+++ b/src/cryptonote_core/blockchain.h
@@ -56,6 +56,13 @@ namespace cryptonote
{
class tx_memory_pool;
+ enum blockchain_db_sync_mode
+ {
+ db_sync,
+ db_async,
+ db_nosync
+ };
+
/************************************************************************/
/* */
/************************************************************************/
@@ -94,6 +101,8 @@ namespace cryptonote
crypto::hash get_block_id_by_height(uint64_t height) const;
bool get_block_by_hash(const crypto::hash &h, block &blk) const;
void get_all_known_block_ids(std::list<crypto::hash> &main, std::list<crypto::hash> &alt, std::list<crypto::hash> &invalid) const;
+ bool prepare_handle_incoming_blocks(const std::list<block_complete_entry> &blocks);
+ bool cleanup_handle_incoming_blocks(bool force_sync = false);
template<class archive_t>
void serialize(archive_t & ar, const unsigned int version);
@@ -102,16 +111,13 @@ namespace cryptonote
bool have_tx_keyimges_as_spent(const transaction &tx) const;
bool have_tx_keyimg_as_spent(const crypto::key_image &key_im) const;
- template<class visitor_t>
- bool scan_outputkeys_for_indexes(const txin_to_key& tx_in_to_key, visitor_t& vis, uint64_t* pmax_related_block_height = NULL) const;
-
uint64_t get_current_blockchain_height() const;
crypto::hash get_tail_id() const;
crypto::hash get_tail_id(uint64_t& height) const;
- difficulty_type get_difficulty_for_next_block() const;
+ difficulty_type get_difficulty_for_next_block();
bool add_new_block(const block& bl_, block_verification_context& bvc);
bool reset_and_set_genesis_block(const block& b);
- bool create_block_template(block& b, const account_public_address& miner_address, difficulty_type& di, uint64_t& height, const blobdata& ex_nonce) const;
+ bool create_block_template(block& b, const account_public_address& miner_address, difficulty_type& di, uint64_t& height, const blobdata& ex_nonce);
bool have_block(const crypto::hash& id) const;
size_t get_total_transactions() const;
bool get_short_chain_history(std::list<crypto::hash>& ids) const;
@@ -123,9 +129,8 @@ namespace cryptonote
bool get_random_outs_for_amounts(const COMMAND_RPC_GET_RANDOM_OUTPUTS_FOR_AMOUNTS::request& req, COMMAND_RPC_GET_RANDOM_OUTPUTS_FOR_AMOUNTS::response& res) const;
bool get_tx_outputs_gindexs(const crypto::hash& tx_id, std::vector<uint64_t>& indexs) const;
bool store_blockchain();
- bool check_tx_input(const txin_to_key& txin, const crypto::hash& tx_prefix_hash, const std::vector<crypto::signature>& sig, uint64_t* pmax_related_block_height = NULL) const;
- bool check_tx_inputs(const transaction& tx, uint64_t* pmax_used_block_height = NULL) const;
- bool check_tx_inputs(const transaction& tx, uint64_t& pmax_used_block_height, crypto::hash& max_used_block_id) const;
+
+ bool check_tx_inputs(const transaction& tx, uint64_t& pmax_used_block_height, crypto::hash& max_used_block_id, bool kept_by_block = false);
uint64_t get_current_cumulative_blocksize_limit() const;
bool is_storing_blockchain()const{return m_is_blockchain_storing;}
uint64_t block_difficulty(uint64_t i) const;
@@ -145,11 +150,23 @@ namespace cryptonote
void set_enforce_dns_checkpoints(bool enforce);
bool update_checkpoints(const std::string& file_path, bool check_dns);
+ // user options, must be called before calling init()
+ void set_user_options(uint64_t block_threads, uint64_t blocks_per_sync,
+ blockchain_db_sync_mode sync_mode, bool fast_sync);
+
+ void set_show_time_stats(bool stats) { m_show_time_stats = stats; }
+
BlockchainDB& get_db()
{
return *m_db;
}
+ void output_scan_worker(const uint64_t amount,const std::vector<uint64_t> &offsets,
+ std::vector<output_data_t> &outputs, std::unordered_map<crypto::hash,
+ cryptonote::transaction> &txs) const;
+
+ void block_longhash_worker(const uint64_t height, const std::vector<block> &blocks,
+ std::unordered_map<crypto::hash, crypto::hash> &map) const;
private:
typedef std::unordered_map<crypto::hash, size_t> blocks_by_id_index;
typedef std::unordered_map<crypto::hash, transaction_chain_entry> transactions_container;
@@ -171,6 +188,29 @@ namespace cryptonote
key_images_container m_spent_keys;
size_t m_current_block_cumul_sz_limit;
+ std::unordered_map<crypto::hash, std::unordered_map<crypto::key_image, std::vector<output_data_t>>> m_scan_table;
+ std::unordered_map<crypto::hash, std::pair<bool, uint64_t>> m_check_tx_inputs_table;
+ std::unordered_map<crypto::hash, crypto::hash> m_blocks_longhash_table;
+
+ // SHA-3 hashes for each block and for fast pow checking
+ std::vector<crypto::hash> m_blocks_hash_check;
+ std::vector<crypto::hash> m_blocks_txs_check;
+
+ blockchain_db_sync_mode m_db_sync_mode;
+ bool m_fast_sync;
+ bool m_show_time_stats;
+ uint64_t m_db_blocks_per_sync;
+ uint64_t m_max_prepare_blocks_threads;
+ uint64_t m_fake_pow_calc_time;
+ uint64_t m_fake_scan_time;
+ uint64_t m_sync_counter;
+ std::vector<uint64_t> m_timestamps;
+ std::vector<difficulty_type> m_difficulties;
+ uint64_t m_timestamps_and_difficulties_height;
+
+ boost::asio::io_service m_async_service;
+ boost::thread_group m_async_pool;
+ std::unique_ptr<boost::asio::io_service::work> m_async_work_idle;
// all alternative chains
blocks_ext_by_hash m_alternative_chains; // crypto::hash -> block_extended_info
@@ -185,6 +225,11 @@ namespace cryptonote
std::atomic<bool> m_is_blockchain_storing;
bool m_enforce_dns_checkpoints;
+ template<class visitor_t>
+ inline bool scan_outputkeys_for_indexes(const txin_to_key& tx_in_to_key, visitor_t &vis, const crypto::hash &tx_prefix_hash, uint64_t* pmax_related_block_height = NULL) const;
+ bool check_tx_input(const txin_to_key& txin, const crypto::hash& tx_prefix_hash, const std::vector<crypto::signature>& sig, std::vector<crypto::public_key> &output_keys, uint64_t* pmax_related_block_height);
+ bool check_tx_inputs(const transaction& tx, uint64_t* pmax_used_block_height = NULL);
+
bool switch_to_alternative_blockchain(std::list<blocks_ext_by_hash::iterator>& alt_chain, bool discard_disconnected_chain);
block pop_block_from_blockchain();
bool purge_transaction_from_blockchain(const crypto::hash& tx_id);
@@ -213,6 +258,9 @@ namespace cryptonote
bool update_next_cumulative_size_limit();
bool check_for_double_spend(const transaction& tx, key_images_container& keys_this_block) const;
+ void get_timestamp_and_difficulty(uint64_t &timestamp, difficulty_type &difficulty, const int offset) const;
+ void check_ring_signature(const crypto::hash &tx_prefix_hash, const crypto::key_image &key_image,
+ const std::vector<crypto::public_key> &pubkeys, const std::vector<crypto::signature> &sig, uint64_t &result);
};
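
As a rough usage picture (an assumption about the calling side, which is not part of this header): prepare_handle_incoming_blocks() runs the batch pre-pass (long hashes, output-key preload), the blocks are then added one by one, and cleanup_handle_incoming_blocks() clears the batch tables and syncs the database according to --db-sync-mode. The core_stub type below is a stand-in so the sketch is self-contained; the real caller lives outside this header.

    #include <list>
    #include <string>

    struct block_complete_entry { std::string block_blob; };   // stand-in

    struct core_stub
    {
      bool prepare_handle_incoming_blocks(const std::list<block_complete_entry>&) { return true; }
      bool handle_single_block(const block_complete_entry&) { return true; }   // stand-in for per-block add/verify
      bool cleanup_handle_incoming_blocks(bool force_sync = false) { (void)force_sync; return true; }
    };

    bool handle_block_batch(core_stub& core, const std::list<block_complete_entry>& blocks)
    {
      if (!core.prepare_handle_incoming_blocks(blocks))   // batch pre-pass: long hashes, output-key preload
        return false;

      bool ok = true;
      for (const auto& entry : blocks)
        ok = core.handle_single_block(entry) && ok;       // per-block verification and DB add

      core.cleanup_handle_incoming_blocks();              // clear batch tables, sync per --db-sync-mode
      return ok;
    }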