path: root/external/db_drivers
Commit history for this path, newest first. Each entry shows the date, commit subject, author, number of files changed, and lines removed/added (-/+).
2017-11-19  Add mdb_drop tool  (Howard Chu, 4 files changed, -2/+183)
2017-10-10  ITS#8339 Solaris 10/11 robust mutex fixes  (Howard Chu, 1 file changed, -1/+9)
Check for PTHREAD_MUTEX_ROBUST_NP definition (this doesn't work on Linux/glibc because they used an enum). Zero out mutex before initing.
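As a rough illustration of the pattern described above (a sketch only, not the actual liblmdb change; init_shared_mutex is an invented name), the robust-mutex probe and the pre-init zeroing could look like this in C:

    /* Sketch, assuming POSIX threads; not liblmdb's implementation. */
    #include <pthread.h>
    #include <string.h>

    int init_shared_mutex(pthread_mutex_t *mu)
    {
        pthread_mutexattr_t attr;
        int rc;

        memset(mu, 0, sizeof(*mu));            /* zero out mutex before initing */
        if ((rc = pthread_mutexattr_init(&attr)) != 0)
            return rc;
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    #ifdef PTHREAD_MUTEX_ROBUST_NP
        /* On Solaris 10/11 this constant is a real #define, so the probe
         * fires; on Linux/glibc it is an enum value and stays invisible
         * to the preprocessor, which is why this check fails there. */
        pthread_mutexattr_setrobust_np(&attr, PTHREAD_MUTEX_ROBUST_NP);
    #endif
        rc = pthread_mutex_init(mu, &attr);
        pthread_mutexattr_destroy(&attr);
        return rc;
    }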
2017-09-20  Add -a append option to mdb_load  (Howard Chu, 2 files changed, -6/+47)
To allow reloading of custom-sorted DBs from mdb_dump
2017-09-09  ITS#8728 fix MDB_VL32 freeing overflow page  (Howard Chu, 1 file changed, -0/+4)
Fix #2420
2017-08-12  ITS#8704 add MDB_PREVSNAPSHOT flag to mdb_env_open  (Howard Chu, 8 files changed, -15/+68)
used to open the previous snapshot, in case the latest one is corrupted
2017-04-12  Fix Android recognition  (hyc, 1 file changed, -3/+3)
The official macro is __ANDROID__; ANDROID may or may not be defined.
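For illustration only (the macro test, not the patched file itself; RUNNING_ON_ANDROID is a placeholder name), detection should key off the compiler-provided __ANDROID__ macro, since ANDROID is only present when a particular build system chooses to define it:

    /* Prefer the compiler-defined __ANDROID__; ANDROID is merely a
     * build-system convention and may or may not be defined. */
    #if defined(__ANDROID__)
    #  define RUNNING_ON_ANDROID 1
    #else
    #  define RUNNING_ON_ANDROID 0
    #endif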
2017-02-21  update copyright year, fix occasional lack of newline at line end  (Riccardo Spagni, 2 files changed, -2/+2)
2017-02-07  ITS#8582 keep mutex at end of struct  (Howard Chu, 1 file changed, -10/+10)
since it's variable size on Linux/glibc
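A hypothetical layout sketch of the idea (struct and field names invented, not LMDB's actual header): because pthread_mutex_t is variable size on Linux/glibc, placing it last keeps the offsets of every other member stable:

    #include <pthread.h>
    #include <stdint.h>

    typedef struct shared_header {
        uint32_t magic;            /* fixed-offset fields first            */
        uint32_t format_version;
        uint32_t num_readers;
        pthread_mutex_t lock;      /* variable-size member kept at the end */
    } shared_header;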
2017-01-31  Workaround VL32 cursor refcounting miscount  (Howard Chu, 1 file changed, -7/+9)
Don't try to deref cursor page if txn's pagelist is empty
2017-01-05  Build wallet with Android NDK  (MoroccanMalinois, 1 file changed, -0/+5)
2016-09-18  cmake: transitive deps and remove deprecated LINK_*  (redfish, 1 file changed, -1/+1)
Keep the immediate direct deps at the library that depends on them; declare deps as PUBLIC so that targets that link against that library get the library's deps as transitive deps. Break the dep cycle between blockchain_db <-> cryptonote_core. No code refactoring, just hide the cycle from CMake so that it doesn't complain (cycles are allowed only between static libs, not shared libs). This is in preparation for supporting the BUILD_SHARED_LIBS CMake built-in option for building internal libs as shared.
2016-08-11  More for Issue #855  (Howard Chu, 1 file changed, -6/+12)
Plug rpage leak in cursor_set
2016-06-07  Fix Issue #855  (Howard Chu, 1 file changed, -4/+0)
Use the same size dirty list for both 64 and 32 bit.
2016-04-09  mdb_drop optimization  (Howard Chu, 1 file changed, -1/+10)
If we know there are no sub-DBs and no overflow pages, skip leaf scan.
2016-04-05  More outputs consolidation  (Howard Chu, 1 file changed, -1/+1)
Also bumped DB VERSION to 1. Another significant speedup and space savings: get rid of global_output_indices and remove the indirection from output to keys. This is the change warptangent described on IRC but never got to finish.
2016-02-17  MDB_VL32 - increase max write txn size  (Howard Chu, 1 file changed, -1/+1)
2016-02-16  Resync with master  (Howard Chu, 2 files changed, -23/+76)
2016-01-28  MDB_VL32 change overflow page scan  (Howard Chu, 1 file changed, -31/+10)
Just check the requested page, don't worry about any other pages
2016-01-28  MDB_VL32 Fix off-by-one in mdb_midl_shrink  (Howard Chu, 1 file changed, -1/+1)
2016-01-27  MDB_VL32 Fix another 32bit overflow  (Howard Chu, 1 file changed, -1/+1)
2016-01-27  Tweak mdb_strerror msg buffer  (Howard Chu, 1 file changed, -3/+4)
2016-01-27  MDB_VL32 Fix d2a5f72f73e0e4030b521086b13b8c8efaf9ca9e  (Howard Chu, 1 file changed, -1/+1)
VirtualAlloc is not for MDB_VL32
2016-01-20  WIN64 needs off_t redefined too  (Howard Chu, 1 file changed, -1/+1)
2016-01-16  Fix --db-sync-mode on Windows64  (Howard Chu, 1 file changed, -1/+1)
Only the "fastest" mode was working; the others would SEGV.
2015-12-31  updated copyright year  (Riccardo Spagni, 2 files changed, -2/+2)
2015-12-28  MDB_VL32 - resync with master  (Howard Chu, 2 files changed, -75/+75)
WIN32 - close file mapping handle in env_close
cursor_unref - ignore cursor with empty stack
2015-12-25  Update liblmdb, unify 32/64 sources  (Howard Chu, 58 files changed, -16992/+2464)
2015-07-16  hyc accidentally typo'd...we shall never speak of this again  (Riccardo Spagni, 1 file changed, -1/+1)
2015-07-16  updated vl32 to current  (Riccardo Spagni, 24 files changed, -188/+390)
2015-07-16  updated in-source lmdb  (Riccardo Spagni, 10 files changed, -95/+171)
2015-07-16  open() flag O_DSYNC isn't on BSD, use O_SYNC  (Thomas Winget, 2 files changed, -0/+8)
If the detected OS is FreeBSD, tell LMDB to compile with MDB_DSYNC=O_SYNC instead of the default O_DSYNC, as BSD does not implement this flag.
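The pattern can be sketched in C as follows (illustrative only, not the actual mdb.c source; open_datafile is an invented helper): the build system defines MDB_DSYNC=O_SYNC on FreeBSD, and the code falls back to O_DSYNC elsewhere:

    #include <fcntl.h>

    #ifndef MDB_DSYNC
    # define MDB_DSYNC O_DSYNC     /* default where the platform provides it */
    #endif

    int open_datafile(const char *path)
    {
        /* With -DMDB_DSYNC=O_SYNC (the FreeBSD workaround), this opens the
         * file with O_SYNC instead of the unsupported O_DSYNC. */
        return open(path, O_RDWR | O_CREAT | MDB_DSYNC, 0644);
    }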
2015-07-15  ** CHANGES ARE EXPERIMENTAL (FOR TESTING ONLY)  (NoodleDoodleNoodleDoodleNoodleDoodleNoo, 1 file changed, -20/+0)
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
2. Optim: Cache verified txs and return the result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk-querying the database, and multi-thread when possible.
4. Optim: Disable double-spend check on block verification; double spends are already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on functions called from check_tx_inputs, which causes slowdowns (only seems to happen on Ubuntu/VMs??? Reason: TBD).
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block are needed when a new block arrives (instead of 1470 reads).

Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc.).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from the "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3).
8. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details).
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.

LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect).
2. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3).
3. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details).
5. Mod: Auto resize by +1GB instead of multiplier x1.5.

ETC:
1. Minor optimizations of slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.

[PENDING ISSUES]
1. Berkeley db has a very slow "pop-block" operation. This is very noticeable on the RPI2, as it sometimes takes > 10 MINUTES to pop a block during reorganization. This does not happen very often, however; most reorgs seem to take a few seconds, but it possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.

[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
   a. 0 = Compute long hash per block (may take a while depending on CPU).
   b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence).
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
   a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but the safest option to protect against power-out/crash conditions.
   b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery-operated devices or STABLE systems with UPS and/or systems with battery-backed write cache/solid-state cache.
      Fast - Write meta-data but defer data flush.
      Fastest - Defer meta-data and data flush.
      Sync - Flush data after nblocks_per_sync and wait.
      Async - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
   Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
   Show benchmark-related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
   For berkeley-db only. Auto-remove logs if enabled.

**Note: lmdb and berkeley-db have changes to the tables and are not compatible with the official git head version. At the moment, you need a full resync to use this optimized version.

[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K + SSD (with full pow computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with memory can process blocks up to 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time and usually takes 36 hours to sync the full chain.

Average processing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437fd41a5498ee5e74e2422ea6177aa3e):
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block

**Note: The following data denotes processing times only (does not include p2p download time).
lmdb-optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, Dual-core / 4-threads U4200 (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).
lmdb-optimized processing times (with per-block-checkpoint):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2. Improved from an estimated 3 months (???) to 2.5 days (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with per-block-checkpoint):
1. RPI2. 12-15 hours (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
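To make the safe-versus-batched distinction above concrete, here is a minimal, hypothetical C sketch (not daemon code; store_block, the file-descriptor interface, and NBLOCKS_PER_SYNC are invented for illustration) of "flush per stored block" versus "flush every nblocks_per_sync blocks":

    #include <stddef.h>
    #include <unistd.h>

    #define NBLOCKS_PER_SYNC 1000          /* mirrors the example default above */

    static void store_block(int fd, const void *blk, size_t len)
    {
        (void)write(fd, blk, len);         /* error handling omitted for brevity */
    }

    /* "safe": fdatasync after every stored block. */
    void append_blocks_safe(int fd, const void *blks[], const size_t lens[], int n)
    {
        for (int i = 0; i < n; i++) {
            store_block(fd, blks[i], lens[i]);
            fdatasync(fd);
        }
    }

    /* batched: flush once per NBLOCKS_PER_SYNC blocks ("fast"/"fastest"). */
    void append_blocks_batched(int fd, const void *blks[], const size_t lens[], int n)
    {
        for (int i = 0; i < n; i++) {
            store_block(fd, blks[i], lens[i]);
            if ((i + 1) % NBLOCKS_PER_SYNC == 0)
                fdatasync(fd);
        }
    }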
2015-04-22  Require BerkeleyDB to be installed (for now) if building non-static  (Thomas Winget, 1 file changed, -7/+2)
2015-04-14  update lmdb64  (Riccardo Spagni, 22 files changed, -365/+201)
2015-04-07  Only compile BerkeleyDB as an option in non-static  (Thomas Winget, 1 file changed, -16/+18)
2015-03-17  Pull blockchain changes into berkeleydb branch  (Thomas Winget, 57 files changed, -27/+16065)
2015-03-17  Move db_drivers/ to external/  (Thomas Winget, 57 files changed, -0/+32257)
Also change LMDB CMake variables to CACHE rather than propagating them up through several parent scopes.
2015-03-17  Revert "Moved db_drivers/ into external/ for consistency"  (Thomas Winget, 29 files changed, -16219/+0)
This reverts commit b21335642e75b35d3b178a754f4cdb2314989cd1.
2015-03-16  BerkeleyDB Blockchain building, not working yet  (Thomas Winget, 1 file changed, -5/+5)
Everything except actually *using* BlockchainBDB is wired up, but the db itself is not yet working. Some error about user mem not large enough. I think I know what this error means, but I can't determine the cause. Notes: BerkeleyDB does not allow 0-indexing in its recno type databases, so block numbers *in the database* will be 1-indexed. Modifications to indexing have been made as needed.
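A tiny hypothetical helper pair (not code from BlockchainBDB; the names are invented) shows what the 1-indexing note means in practice: the recno key for block height h is h + 1, and the reverse mapping subtracts 1:

    #include <stdint.h>

    typedef uint32_t recno_key;            /* stand-in for BDB's db_recno_t */

    static recno_key height_to_recno(uint64_t height)
    {
        return (recno_key)(height + 1);    /* recno keys start at 1, not 0 */
    }

    static uint64_t recno_to_height(recno_key key)
    {
        return (uint64_t)key - 1;          /* back to the 0-indexed block height */
    }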
2015-03-16  CMake wiring, minor cleanup, minor test addition  (Thomas Winget, 1 file changed, -1/+23)
Make CMake aware of BerkeleyDB and BlockchainBDB. Make the BlockchainDB unit tests aware of BlockchainBDB.
2015-03-09  Moved db_drivers/ into external/ for consistency  (Thomas Winget, 29 files changed, -0/+16219)