path: root/src/blockchain_db/lmdb/db_lmdb.cpp
author    NoodleDoodleNoodleDoodleNoodleDoodleNoo <xeven77@outlook.com>    2015-07-10 13:09:32 -0700
committer NoodleDoodleNoodleDoodleNoodleDoodleNoo <xeven77@outlook.com>    2015-07-15 23:20:16 -0700
commit    e5d2680094ee15889934fe28901e4e133cda56f2 (patch)
tree      c96ac8800d3a17a9c7b50fbe0b0ef2ced8c7ff0b /src/blockchain_db/lmdb/db_lmdb.cpp
parent    Update blockchain.cpp (diff)
download  monero-e5d2680094ee15889934fe28901e4e133cda56f2.tar.xz
** CHANGES ARE EXPERIMENTAL (FOR TESTING ONLY)

Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
2. Optim: Cache verified txs and return the cached result instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk-querying the database, and multi-thread when possible.
4. Optim: Disable the double-spend check on block verification; double spends are already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on functions called from check_tx_inputs, which causes slowdowns (only seems to happen on ubuntu/VMs??? Reason: TBD).
7. Optim: Removed looped full-tx hash computation when retrieving transactions from the pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block are needed when a new block arrives (instead of 1470 reads).

Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc.).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: "Insufficient locks" error when running a full sync.
5. Patch: Incorrect db stats when returning from an immediate exit of the "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified the output_keys table to store public_key+unlock_time+height so an output can be fetched with a single lookup (vs 3).
8. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see the --db-sync-mode option for details).
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like the RPI2.

LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up; TBD: measure the actual effect).
2. Optim: Modified the output_keys table to store public_key+unlock_time+height so an output can be fetched with a single lookup (vs 3).
3. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
4. Optim: Added support for sync/writemap options for improved performance (*see the --db-sync-mode option for details).
5. Mod: Auto-resize by +1GB instead of a x1.5 multiplier.

ETC:
1. Minor optimizations of slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing the next difficulty on large blocks.

[PENDING ISSUES]
1. Berkeley db has a very slow "pop-block" operation. This is very noticeable on the RPI2, where it sometimes takes > 10 MINUTES to pop a block during reorganization. It does not happen very often, however; most reorgs seem to take a few seconds, but this possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.

[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
   a. 0 = Compute the long hash per block (may take a while depending on CPU).
   b. 1 = Skip the long hash and verify blocks based on embedded known-good block hashes (faster, minimal CPU dependence).
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000); a minimal parsing sketch of this format follows the commit message.
   a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but the safest option to protect against power-out/crash conditions.
   b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery-operated devices or STABLE systems with a UPS and/or systems with a battery-backed write cache / solid-state cache.
      Fast - Write meta-data but defer the data flush.
      Fastest - Defer both meta-data and data flushes.
      Sync - Flush data after nblocks_per_sync and wait.
      Async - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
   Max number of threads to use when computing long hashes in groups.
4. --show-time-stats arg=[0:1] (default: 1)
   Show benchmark-related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
   For berkeley-db only. Auto-remove logs if enabled.

**Note: lmdb and berkeley-db have changes to the tables and are not compatible with the official git head version. At the moment, you need a full resync to use this optimized version.

[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K + SSD (with full pow computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with the in-memory db can process blocks up to 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time and usually takes 36 hours to sync the full chain.

Average processing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437fd41a5498ee5e74e2422ea6177aa3e):
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block

**Note: The following data denotes processing times only (it does not include p2p download time).

lmdb-optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, Dual-core / 4-threads U4200 (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).

lmdb-optimized processing times (with per-block checkpoint):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2: improved from an estimated 3 months (???) to 2.5 days (*needs a 2A supply + 1GHz clock + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with per-block checkpoint):
1. RPI2: 12-15 hours (*needs a 2A supply + 1GHz clock + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
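To make the documented --db-sync-mode format concrete, here is a minimal, hypothetical parsing sketch. It is not part of this patch: the struct and function names are invented for illustration, and only the [safe|fast|fastest]:[sync|async]:[nblocks_per_sync] layout described above is assumed.

    #include <cstdint>
    #include <sstream>
    #include <string>
    #include <vector>

    // Hypothetical holder for the three documented --db-sync-mode fields.
    struct db_sync_options
    {
      std::string durability = "fastest";  // safe | fast | fastest
      std::string wait_mode  = "async";    // sync | async
      uint64_t nblocks_per_sync = 1000;    // blocks between flushes
    };

    // Split e.g. "fastest:async:1000" on ':' into the fields above.
    // Missing trailing fields keep the documented defaults.
    db_sync_options parse_db_sync_mode(const std::string& arg)
    {
      db_sync_options opt;
      std::vector<std::string> parts;
      std::stringstream ss(arg);
      std::string part;
      while (std::getline(ss, part, ':'))
        parts.push_back(part);
      if (parts.size() > 0) opt.durability = parts[0];
      if (parts.size() > 1) opt.wait_mode = parts[1];
      if (parts.size() > 2) opt.nblocks_per_sync = std::stoull(parts[2]);
      return opt;
    }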
Diffstat (limited to 'src/blockchain_db/lmdb/db_lmdb.cpp')
-rw-r--r--  src/blockchain_db/lmdb/db_lmdb.cpp  466 (lines changed)
1 file changed, 350 insertions(+), 116 deletions(-)
diff --git a/src/blockchain_db/lmdb/db_lmdb.cpp b/src/blockchain_db/lmdb/db_lmdb.cpp
index 0d0b8ba87..22f7dcd49 100644
--- a/src/blockchain_db/lmdb/db_lmdb.cpp
+++ b/src/blockchain_db/lmdb/db_lmdb.cpp
@@ -65,10 +65,19 @@ struct lmdb_cur
done = false;
}
- ~lmdb_cur() { close(); }
-
- operator MDB_cursor*() { return m_cur; }
- operator MDB_cursor**() { return &m_cur; }
+ ~lmdb_cur()
+ {
+ close();
+ }
+
+ operator MDB_cursor*()
+ {
+ return m_cur;
+ }
+ operator MDB_cursor**()
+ {
+ return &m_cur;
+ }
void close()
{
@@ -87,7 +96,8 @@ private:
template<typename T>
struct MDB_val_copy: public MDB_val
{
- MDB_val_copy(const T &t): t_copy(t)
+ MDB_val_copy(const T &t) :
+ t_copy(t)
{
mv_size = sizeof (T);
mv_data = &t_copy;
@@ -99,7 +109,8 @@ private:
template<>
struct MDB_val_copy<cryptonote::blobdata>: public MDB_val
{
- MDB_val_copy(const cryptonote::blobdata &bd): data(new char[bd.size()])
+ MDB_val_copy(const cryptonote::blobdata &bd) :
+ data(new char[bd.size()])
{
memcpy(data.get(), bd.data(), bd.size());
mv_size = bd.size();
@@ -109,7 +120,8 @@ private:
std::unique_ptr<char[]> data;
};
-auto compare_uint64 = [](const MDB_val *a, const MDB_val *b) {
+auto compare_uint64 = [](const MDB_val *a, const MDB_val *b)
+{
const uint64_t va = *(const uint64_t*)a->mv_data;
const uint64_t vb = *(const uint64_t*)b->mv_data;
if (va < vb) return -1;
@@ -117,6 +129,20 @@ auto compare_uint64 = [](const MDB_val *a, const MDB_val *b) {
else return 1;
};
+int compare_hash32(const MDB_val *a, const MDB_val *b)
+{
+ uint32_t *va = (uint32_t*) a->mv_data;
+ uint32_t *vb = (uint32_t*) b->mv_data;
+ for (int n = 7; n >= 0; n--)
+ {
+ if (va[n] == vb[n])
+ continue;
+ return va[n] < vb[n] ? -1 : 1;
+ }
+
+ return 0;
+}
+
const char* const LMDB_BLOCKS = "blocks";
const char* const LMDB_BLOCK_TIMESTAMPS = "block_timestamps";
const char* const LMDB_BLOCK_HEIGHTS = "block_heights";
@@ -235,6 +261,26 @@ void mdb_txn_safe::allow_new_txns()
void BlockchainLMDB::do_resize(uint64_t increase_size)
{
+ CRITICAL_REGION_LOCAL(m_synchronization_lock);
+ const uint64_t add_size = 1LL << 30;
+
+ // check disk capacity
+ try
+ {
+ boost::filesystem::path path(m_folder);
+ boost::filesystem::space_info si = boost::filesystem::space(path);
+ if(si.available < add_size)
+ {
+ LOG_PRINT_RED_L0("!! WARNING: Insufficient free space to extend database !!: " << si.available / 1LL << 20L);
+ return;
+ }
+ }
+ catch(...)
+ {
+ // print something but proceed.
+ LOG_PRINT_YELLOW("Unable to query free disk space.", LOG_LEVEL_0);
+ }
+
MDB_envinfo mei;
mdb_env_info(m_env, &mei);
@@ -250,6 +296,9 @@ void BlockchainLMDB::do_resize(uint64_t increase_size)
if (increase_size > 0)
new_mapsize = mei.me_mapsize + increase_size;
+ // add 1Gb per resize, instead of doing a percentage increase
+ // uint64_t new_mapsize = (double) mei.me_mapsize + add_size;
+
new_mapsize += (new_mapsize % mst.ms_psize);
mdb_txn_safe::prevent_new_txns();
@@ -270,9 +319,7 @@ void BlockchainLMDB::do_resize(uint64_t increase_size)
mdb_env_set_mapsize(m_env, new_mapsize);
- LOG_PRINT_L0("LMDB Mapsize increased."
- << " Old: " << mei.me_mapsize / (1024 * 1024) << "MiB"
- << ", New: " << new_mapsize / (1024 * 1024) << "MiB");
+ LOG_PRINT_GREEN("LMDB Mapsize increased." << " Old: " << mei.me_mapsize / (1024 * 1024) << "MiB" << ", New: " << new_mapsize / (1024 * 1024) << "MiB", LOG_LEVEL_0);
mdb_txn_safe::allow_new_txns();
}
@@ -280,6 +327,7 @@ void BlockchainLMDB::do_resize(uint64_t increase_size)
// threshold_size is used for batch transactions
bool BlockchainLMDB::need_resize(uint64_t threshold_size) const
{
+#if defined(ENABLE_AUTO_RESIZE)
MDB_envinfo mei;
mdb_env_info(m_env, &mei);
@@ -311,12 +359,19 @@ bool BlockchainLMDB::need_resize(uint64_t threshold_size) const
return false;
}
- if ((double)size_used / mei.me_mapsize > RESIZE_PERCENT)
+ std::mt19937 engine(std::random_device{}());
+ std::uniform_real_distribution<double> fdis(0.6, 0.9);
+ double resize_percent = fdis(engine);
+
+ if ((double)size_used / mei.me_mapsize > resize_percent)
{
LOG_PRINT_L1("Threshold met (percent-based)");
return true;
}
return false;
+#else
+ return false;
+#endif
}
void BlockchainLMDB::check_and_resize_for_batch(uint64_t batch_num_blocks)
@@ -389,12 +444,8 @@ uint64_t BlockchainLMDB::get_estimated_batch_size(uint64_t batch_num_blocks) con
return threshold_size;
}
-void BlockchainLMDB::add_block( const block& blk
- , const size_t& block_size
- , const difficulty_type& cumulative_difficulty
- , const uint64_t& coins_generated
- , const crypto::hash& blk_hash
- )
+void BlockchainLMDB::add_block(const block& blk, const size_t& block_size, const difficulty_type& cumulative_difficulty, const uint64_t& coins_generated,
+ const crypto::hash& blk_hash)
{
LOG_PRINT_L3("BlockchainLMDB::" << __func__);
check_open();
@@ -545,7 +596,7 @@ void BlockchainLMDB::remove_transaction_data(const crypto::hash& tx_hash, const
}
-void BlockchainLMDB::add_output(const crypto::hash& tx_hash, const tx_out& tx_output, const uint64_t& local_index)
+void BlockchainLMDB::add_output(const crypto::hash& tx_hash, const tx_out& tx_output, const uint64_t& local_index, const uint64_t unlock_time)
{
LOG_PRINT_L3("BlockchainLMDB::" << __func__);
check_open();
@@ -574,10 +625,15 @@ void BlockchainLMDB::add_output(const crypto::hash& tx_hash, const tx_out& tx_ou
if (tx_output.target.type() == typeid(txout_to_key))
{
- MDB_val_copy<crypto::public_key> val_pubkey(boost::get<txout_to_key>(tx_output.target).key);
- result = mdb_put(*m_write_txn, m_output_keys, &k, &val_pubkey, 0);
- if (result)
- throw0(DB_ERROR(std::string("Failed to add output pubkey to db transaction: ").append(mdb_strerror(result)).c_str()));
+ output_data_t od;
+ od.pubkey = boost::get < txout_to_key > (tx_output.target).key;
+ od.unlock_time = unlock_time;
+ od.height = m_height;
+
+ MDB_val_copy<output_data_t> data(od);
+ //MDB_val_copy<crypto::public_key> val_pubkey(boost::get<txout_to_key>(tx_output.target).key);
+ if (mdb_put(*m_write_txn, m_output_keys, &k, &data, 0))
+ throw0(DB_ERROR("Failed to add output pubkey to db transaction"));
}
@@ -808,47 +864,17 @@ tx_out BlockchainLMDB::output_from_blob(const blobdata& blob) const
return o;
}
-uint64_t BlockchainLMDB::get_output_global_index(const uint64_t& amount, const uint64_t& index) const
+uint64_t BlockchainLMDB::get_output_global_index(const uint64_t& amount, const uint64_t& index)
{
LOG_PRINT_L3("BlockchainLMDB::" << __func__);
- check_open();
-
- mdb_txn_safe txn;
- if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
- throw0(DB_ERROR("Failed to create a transaction for the db"));
-
- lmdb_cur cur(txn, m_output_amounts);
-
- MDB_val_copy<uint64_t> k(amount);
- MDB_val v;
-
- auto result = mdb_cursor_get(cur, &k, &v, MDB_SET);
- if (result == MDB_NOTFOUND)
+ std::vector <uint64_t> offsets;
+ std::vector <uint64_t> global_indices;
+ offsets.push_back(index);
+ get_output_global_indices(amount, offsets, global_indices);
+ if (!global_indices.size())
throw1(OUTPUT_DNE("Attempting to get an output index by amount and amount index, but amount not found"));
- else if (result)
- throw0(DB_ERROR("DB error attempting to get an output"));
-
- size_t num_elems = 0;
- mdb_cursor_count(cur, &num_elems);
- if (num_elems <= index)
- throw1(OUTPUT_DNE("Attempting to get an output index by amount and amount index, but output not found"));
-
- mdb_cursor_get(cur, &k, &v, MDB_FIRST_DUP);
-
- for (uint64_t i = 0; i < index; ++i)
- {
- mdb_cursor_get(cur, &k, &v, MDB_NEXT_DUP);
- }
-
- mdb_cursor_get(cur, &k, &v, MDB_GET_CURRENT);
-
- uint64_t glob_index = *(const uint64_t*)v.mv_data;
-
- cur.close();
- txn.commit();
-
- return glob_index;
+ return global_indices[0];
}
void BlockchainLMDB::check_open() const
@@ -903,8 +929,7 @@ void BlockchainLMDB::open(const std::string& filename, const int mdb_flags)
// check for existing LMDB files in base directory
boost::filesystem::path old_files = direc.parent_path();
- if (boost::filesystem::exists(old_files / "data.mdb") ||
- boost::filesystem::exists(old_files / "lock.mdb"))
+ if (boost::filesystem::exists(old_files / "data.mdb") || boost::filesystem::exists(old_files / "lock.mdb"))
{
LOG_PRINT_L0("Found existing LMDB files in " << old_files.string());
LOG_PRINT_L0("Move data.mdb and/or lock.mdb to " << filename << ", or delete them, and then restart");
@@ -970,7 +995,7 @@ void BlockchainLMDB::open(const std::string& filename, const int mdb_flags)
lmdb_db_open(txn, LMDB_OUTPUT_TXS, MDB_INTEGERKEY | MDB_CREATE, m_output_txs, "Failed to open db handle for m_output_txs");
lmdb_db_open(txn, LMDB_OUTPUT_INDICES, MDB_INTEGERKEY | MDB_CREATE, m_output_indices, "Failed to open db handle for m_output_indices");
- lmdb_db_open(txn, LMDB_OUTPUT_AMOUNTS, MDB_INTEGERKEY | MDB_DUPSORT | MDB_CREATE, m_output_amounts, "Failed to open db handle for m_output_amounts");
+ lmdb_db_open(txn, LMDB_OUTPUT_AMOUNTS, MDB_INTEGERKEY | MDB_DUPSORT | MDB_DUPFIXED | MDB_CREATE, m_output_amounts, "Failed to open db handle for m_output_amounts");
lmdb_db_open(txn, LMDB_OUTPUT_KEYS, MDB_INTEGERKEY | MDB_CREATE, m_output_keys, "Failed to open db handle for m_output_keys");
/*************** not used, but kept for posterity
@@ -982,6 +1007,11 @@ void BlockchainLMDB::open(const std::string& filename, const int mdb_flags)
mdb_set_dupsort(txn, m_output_amounts, compare_uint64);
mdb_set_dupsort(txn, m_tx_outputs, compare_uint64);
+ mdb_set_compare(txn, m_spent_keys, compare_hash32);
+ mdb_set_compare(txn, m_block_heights, compare_hash32);
+ mdb_set_compare(txn, m_txs, compare_hash32);
+ mdb_set_compare(txn, m_tx_unlocks, compare_hash32);
+ mdb_set_compare(txn, m_tx_heights, compare_hash32);
// get and keep current height
MDB_stat db_stats;
@@ -995,6 +1025,34 @@ void BlockchainLMDB::open(const std::string& filename, const int mdb_flags)
throw0(DB_ERROR("Failed to query m_output_indices"));
m_num_outputs = db_stats.ms_entries;
+ // ND: This "new" version of the lmdb database is incompatible with
+ // the previous version. Ensure that the output_keys database is
+ // sizeof(output_data_t) in length. Otherwise, inform user and
+ // terminate.
+ if(m_height > 0)
+ {
+ MDB_val_copy<uint64_t> k(0);
+ MDB_val v;
+ auto get_result = mdb_get(txn, m_output_keys, &k, &v);
+ if(get_result != MDB_SUCCESS)
+ {
+ txn.abort();
+ m_open = false;
+ return;
+ }
+
+ // LOG_PRINT_L0("Output keys size: " << v.mv_size);
+ if(v.mv_size != sizeof(output_data_t))
+ {
+ txn.abort();
+ mdb_env_close(m_env);
+ m_open = false;
+ LOG_PRINT_RED_L0("Existing lmdb database is incompatible with this version.");
+ LOG_PRINT_RED_L0("Please delete the existing database and resync.");
+ return;
+ }
+ }
+
// commit the transaction
txn.commit();
@@ -1604,26 +1662,33 @@ uint64_t BlockchainLMDB::get_num_outputs(const uint64_t& amount) const
return num_elems;
}
-crypto::public_key BlockchainLMDB::get_output_key(const uint64_t& amount, const uint64_t& index) const
+output_data_t BlockchainLMDB::get_output_key(const uint64_t &global_index) const
{
LOG_PRINT_L3("BlockchainLMDB::" << __func__);
check_open();
- uint64_t glob_index = get_output_global_index(amount, index);
-
mdb_txn_safe txn;
if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
throw0(DB_ERROR("Failed to create a transaction for the db"));
- MDB_val_copy<uint64_t> k(glob_index);
+ MDB_val_copy<uint64_t> k(global_index);
MDB_val v;
auto get_result = mdb_get(txn, m_output_keys, &k, &v);
if (get_result == MDB_NOTFOUND)
throw0(DB_ERROR("Attempting to get output pubkey by global index, but key does not exist"));
else if (get_result)
throw0(DB_ERROR("Error attempting to retrieve an output pubkey from the db"));
+ txn.commit();
+ return *(output_data_t *) v.mv_data;
+}
+
+output_data_t BlockchainLMDB::get_output_key(const uint64_t& amount, const uint64_t& index)
+{
+ LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+ check_open();
- return *(crypto::public_key*)v.mv_data;
+ uint64_t glob_index = get_output_global_index(amount, index);
+ return get_output_key(glob_index);
}
tx_out BlockchainLMDB::get_output(const crypto::hash& h, const uint64_t& index) const
@@ -1731,53 +1796,17 @@ tx_out_index BlockchainLMDB::get_output_tx_and_index_from_global(const uint64_t&
return tx_out_index(tx_hash, *(const uint64_t *)v.mv_data);
}
-tx_out_index BlockchainLMDB::get_output_tx_and_index(const uint64_t& amount, const uint64_t& index) const
+tx_out_index BlockchainLMDB::get_output_tx_and_index(const uint64_t& amount, const uint64_t& index)
{
LOG_PRINT_L3("BlockchainLMDB::" << __func__);
- check_open();
-
- mdb_txn_safe txn;
- mdb_txn_safe* txn_ptr = &txn;
- if (m_batch_active)
- txn_ptr = m_write_txn;
- else
- {
- if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
- throw0(DB_ERROR("Failed to create a transaction for the db"));
- }
- lmdb_cur cur(*txn_ptr, m_output_amounts);
-
- MDB_val_copy<uint64_t> k(amount);
- MDB_val v;
-
- auto result = mdb_cursor_get(cur, &k, &v, MDB_SET);
- if (result == MDB_NOTFOUND)
+ std::vector < uint64_t > offsets;
+ std::vector<tx_out_index> indices;
+ offsets.push_back(index);
+ get_output_tx_and_index(amount, offsets, indices);
+ if (!indices.size())
throw1(OUTPUT_DNE("Attempting to get an output index by amount and amount index, but amount not found"));
- else if (result)
- throw0(DB_ERROR("DB error attempting to get an output"));
-
- size_t num_elems = 0;
- mdb_cursor_count(cur, &num_elems);
- if (num_elems <= index)
- throw1(OUTPUT_DNE("Attempting to get an output index by amount and amount index, but output not found"));
- mdb_cursor_get(cur, &k, &v, MDB_FIRST_DUP);
-
- for (uint64_t i = 0; i < index; ++i)
- {
- mdb_cursor_get(cur, &k, &v, MDB_NEXT_DUP);
- }
-
- mdb_cursor_get(cur, &k, &v, MDB_GET_CURRENT);
-
- uint64_t glob_index = *(const uint64_t*)v.mv_data;
-
- cur.close();
-
- if (! m_batch_active)
- txn.commit();
-
- return get_output_tx_and_index_from_global(glob_index);
+ return indices[0];
}
std::vector<uint64_t> BlockchainLMDB::get_tx_output_indices(const crypto::hash& h) const
@@ -2016,12 +2045,8 @@ void BlockchainLMDB::set_batch_transactions(bool batch_transactions)
LOG_PRINT_L3("batch transactions " << (m_batch_transactions ? "enabled" : "disabled"));
}
-uint64_t BlockchainLMDB::add_block( const block& blk
- , const size_t& block_size
- , const difficulty_type& cumulative_difficulty
- , const uint64_t& coins_generated
- , const std::vector<transaction>& txs
- )
+uint64_t BlockchainLMDB::add_block(const block& blk, const size_t& block_size, const difficulty_type& cumulative_difficulty, const uint64_t& coins_generated,
+ const std::vector<transaction>& txs)
{
LOG_PRINT_L3("BlockchainLMDB::" << __func__);
check_open();
@@ -2103,4 +2128,213 @@ void BlockchainLMDB::pop_block(block& blk, std::vector<transaction>& txs)
--m_height;
}
+void BlockchainLMDB::get_output_tx_and_index_from_global(const std::vector<uint64_t> &global_indices,
+ std::vector<tx_out_index> &tx_out_indices) const
+{
+ LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+ check_open();
+ tx_out_indices.clear();
+
+ mdb_txn_safe txn;
+ mdb_txn_safe* txn_ptr = &txn;
+ if (m_batch_active)
+ txn_ptr = m_write_txn;
+ else
+ {
+ if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+ throw0(DB_ERROR("Failed to create a transaction for the db"));
+ }
+
+ for (const uint64_t &index : global_indices)
+ {
+ MDB_val_copy<uint64_t> k(index);
+ MDB_val v;
+
+ auto get_result = mdb_get(*txn_ptr, m_output_txs, &k, &v);
+ if (get_result == MDB_NOTFOUND)
+ throw1(OUTPUT_DNE("output with given index not in db"));
+ else if (get_result)
+ throw0(DB_ERROR("DB error attempting to fetch output tx hash"));
+
+ crypto::hash tx_hash = *(crypto::hash*) v.mv_data;
+
+ get_result = mdb_get(*txn_ptr, m_output_indices, &k, &v);
+ if (get_result == MDB_NOTFOUND)
+ throw1(OUTPUT_DNE("output with given index not in db"));
+ else if (get_result)
+ throw0(DB_ERROR("DB error attempting to fetch output tx index"));
+
+ auto result = tx_out_index(tx_hash, *(const uint64_t *) v.mv_data);
+ tx_out_indices.push_back(result);
+ }
+
+ if (!m_batch_active)
+ txn.commit();
+}
+
+void BlockchainLMDB::get_output_global_indices(const uint64_t& amount, const std::vector<uint64_t> &offsets,
+ std::vector<uint64_t> &global_indices)
+{
+ LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+ TIME_MEASURE_START(txx);
+ check_open();
+ global_indices.clear();
+
+ uint64_t max = 0;
+ for (const uint64_t &index : offsets)
+ {
+ if (index > max)
+ max = index;
+ }
+
+ mdb_txn_safe txn;
+ mdb_txn_safe* txn_ptr = &txn;
+ if(m_batch_active)
+ txn_ptr = m_write_txn;
+ else
+ {
+ if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+ throw0(DB_ERROR("Failed to create a transaction for the db"));
+ }
+
+ lmdb_cur cur(*txn_ptr, m_output_amounts);
+
+ MDB_val_copy<uint64_t> k(amount);
+ MDB_val v;
+ auto result = mdb_cursor_get(cur, &k, &v, MDB_SET);
+ if (result == MDB_NOTFOUND)
+ throw1(OUTPUT_DNE("Attempting to get an output index by amount and amount index, but amount not found"));
+ else if (result)
+ throw0(DB_ERROR("DB error attempting to get an output"));
+
+ size_t num_elems = 0;
+ mdb_cursor_count(cur, &num_elems);
+ if (max <= 1 && num_elems <= max)
+ throw1(OUTPUT_DNE("Attempting to get an output index by amount and amount index, but output not found"));
+
+ uint64_t t_dbmul = 0;
+ uint64_t t_dbscan = 0;
+ if (max <= 1)
+ {
+ for (const uint64_t& index : offsets)
+ {
+ mdb_cursor_get(cur, &k, &v, MDB_FIRST_DUP);
+ for (uint64_t i = 0; i < index; ++i)
+ {
+ mdb_cursor_get(cur, &k, &v, MDB_NEXT_DUP);
+ }
+
+ mdb_cursor_get(cur, &k, &v, MDB_GET_CURRENT);
+ uint64_t glob_index = *(const uint64_t*) v.mv_data;
+ LOG_PRINT_L3("Amount: " << amount << " M0->v: " << glob_index);
+ global_indices.push_back(glob_index);
+ }
+ }
+ else
+ {
+ uint32_t curcount = 0;
+ uint32_t blockstart = 0;
+ for (const uint64_t& index : offsets)
+ {
+ if (index >= num_elems)
+ {
+ LOG_PRINT_L1("Index: " << index << " Elems: " << num_elems << " partial results found for get_output_tx_and_index");
+ break;
+ }
+ while (index >= curcount)
+ {
+ TIME_MEASURE_START(db1);
+ if (mdb_cursor_get(cur, &k, &v, curcount == 0 ? MDB_GET_MULTIPLE : MDB_NEXT_MULTIPLE) != 0)
+ {
+ // allow partial results
+ result = false;
+ break;
+ }
+
+ int count = v.mv_size / sizeof(uint64_t);
+
+ blockstart = curcount;
+ curcount += count;
+ TIME_MEASURE_FINISH(db1);
+ t_dbmul += db1;
+ }
+
+ LOG_PRINT_L3("Records returned: " << curcount << " Index: " << index);
+ TIME_MEASURE_START(db2);
+ uint64_t actual_index = index - blockstart;
+ uint64_t glob_index = ((const uint64_t*) v.mv_data)[actual_index];
+
+ LOG_PRINT_L3("Amount: " << amount << " M1->v: " << glob_index);
+ global_indices.push_back(glob_index);
+
+ TIME_MEASURE_FINISH(db2);
+ t_dbscan += db2;
+
+ }
+ }
+
+ cur.close();
+ if(!m_batch_active)
+ txn.commit();
+
+ TIME_MEASURE_FINISH(txx);
+ LOG_PRINT_L3("txx: " << txx << " db1: " << t_dbmul << " db2: " << t_dbscan);
+}
+
+void BlockchainLMDB::get_output_key(const uint64_t &amount, const std::vector<uint64_t> &offsets, std::vector<output_data_t> &outputs)
+{
+ LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+ TIME_MEASURE_START(db3);
+ check_open();
+ outputs.clear();
+
+ std::vector <uint64_t> global_indices;
+ get_output_global_indices(amount, offsets, global_indices);
+
+ if (global_indices.size() > 0)
+ {
+ mdb_txn_safe txn;
+ if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+ throw0(DB_ERROR("Failed to create a transaction for the db"));
+
+ for (const uint64_t &index : global_indices)
+ {
+ MDB_val_copy<uint64_t> k(index);
+ MDB_val v;
+
+ auto get_result = mdb_get(txn, m_output_keys, &k, &v);
+ if (get_result == MDB_NOTFOUND)
+ throw0(DB_ERROR("Attempting to get output pubkey by global index, but key does not exist"));
+ else if (get_result)
+ throw0(DB_ERROR("Error attempting to retrieve an output pubkey from the db"));
+
+ output_data_t data = *(output_data_t *) v.mv_data;
+ outputs.push_back(data);
+ }
+
+ txn.commit();
+ }
+
+ TIME_MEASURE_FINISH(db3);
+ LOG_PRINT_L3("db3: " << db3);
+}
+
+void BlockchainLMDB::get_output_tx_and_index(const uint64_t& amount, const std::vector<uint64_t> &offsets, std::vector<tx_out_index> &indices)
+{
+ LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+ check_open();
+ indices.clear();
+
+ std::vector <uint64_t> global_indices;
+ get_output_global_indices(amount, offsets, global_indices);
+
+ TIME_MEASURE_START(db3);
+ if(global_indices.size() > 0)
+ {
+ get_output_tx_and_index_from_global(global_indices, indices);
+ }
+ TIME_MEASURE_FINISH(db3);
+ LOG_PRINT_L3("db3: " << db3);
+}
+
} // namespace cryptonote
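
As a usage illustration only (not part of this patch), the sketch below shows how a caller might exercise the bulk lookup interfaces added in this diff. The include path, the public accessibility of these members, and the example values are assumptions.

    #include <cstdint>
    #include <vector>

    #include "blockchain_db/lmdb/db_lmdb.h"  // assumed include path for BlockchainLMDB

    // Hypothetical caller: fetch several outputs of one amount with bulk calls
    // instead of one db round-trip per output.
    void example_bulk_output_lookup(cryptonote::BlockchainLMDB& db)
    {
      const uint64_t amount = 1000000000000;             // placeholder amount
      const std::vector<uint64_t> offsets = {0, 5, 42};  // per-amount output indices

      // Returns pubkey + unlock_time + height for each offset, read from the
      // widened output_keys records introduced above.
      std::vector<cryptonote::output_data_t> outputs;
      db.get_output_key(amount, offsets, outputs);

      // The matching (tx hash, local output index) pairs via the same bulk path.
      std::vector<cryptonote::tx_out_index> indices;
      db.get_output_tx_and_index(amount, offsets, indices);
    }

As the diff shows, the single-output overloads (get_output_key(amount, index) and get_output_tx_and_index(amount, index)) now simply wrap these bulk variants with a one-element offset vector.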