67ade8005 Add randomized delay when forwarding txes from i2p/tor -> ipv4/6 (Lee Clagett)
|
|
7bd66b01b daemon: guard against rare 'difficulty drift' bug with checkpoints and recalculation (stoffu)
|
|
On startup, it checks against the difficulty checkpoints, and if any mismatch is found, recalculates all the blocks with wrong difficulties. Additionally, once a week it recalculates difficulties of blocks after the last difficulty checkpoint.
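A minimal sketch of that startup check, assuming an interface like the one below — the names (get_cumulative_difficulty, recalculate_difficulties) are illustrative stand-ins, not the daemon's actual API:

    #include <cstdint>
    #include <map>

    using difficulty_type = uint64_t;

    // Illustrative stand-in for the daemon's blockchain interface.
    struct Blockchain {
      virtual uint64_t height() const = 0;
      virtual difficulty_type get_cumulative_difficulty(uint64_t height) const = 0;
      virtual void recalculate_difficulties(uint64_t from_height) = 0;
      virtual ~Blockchain() = default;
    };

    // On startup: compare stored cumulative difficulties against the hardcoded
    // checkpoints; on the first mismatch, recalculate from that height onward.
    void check_difficulty_checkpoints(
        Blockchain &bc,
        const std::map<uint64_t, difficulty_type> &checkpoints) {
      for (const auto &cp : checkpoints) {
        if (cp.first >= bc.height())
          break;
        if (bc.get_cumulative_difficulty(cp.first) != cp.second) {
          bc.recalculate_difficulties(cp.first);
          break;
        }
      }
    }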
|
|
|
|
Update copyright year to 2020
|
|
- New flag in NOTIFY_NEW_TRANSACTION to indicate stem mode
- Stem loops detected in tx_pool.cpp
- Embargo timeout for a blackhole attack during stem phase
|
|
Useful for wallet refresh or node sync
|
|
08635a08 blockchain: speedup fetching pruned contiguous tx blobs (moneromooo-monero)
|
|
About twice as fast, very roughly
|
|
8330e77 monerod can now sync from pruned blocks (moneromooo-monero)
|
|
If the peer (whether pruned or not itself) supports sending pruned blocks
to syncing nodes, the pruned version will be sent along with the hash
of the pruned data and the block weight. The original tx hashes can be
reconstructed from the pruned txes and their prunable data hash. Those
hashes and the block weights are hashed and checked against the set of
precompiled hashes, ensuring the data we received is the original data.
It is currently not possible to use this system when not using the set
of precompiled hashes, since block weights can not otherwise be checked
for validity.
This is off by default for now, and is enabled by --sync-pruned-blocks
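A rough sketch of the reconstruction step, assuming the usual three-part Monero tx hash (the hash of the prefix, base, and prunable hashes); cn_fast_hash is assumed, and the helper name is illustrative:

    #include <cstddef>
    #include <cstring>

    struct hash256 { unsigned char data[32]; };

    // Assumed hashing primitive over a byte buffer (Keccak in Monero).
    hash256 cn_fast_hash(const void *data, std::size_t len);

    // The pruned tx lets us compute the prefix and base hashes locally; the
    // peer supplies the prunable data hash, and hashing the three together
    // should reproduce the original tx hash.
    hash256 reconstruct_tx_hash(const hash256 &prefix_hash,
                                const hash256 &base_hash,
                                const hash256 &prunable_hash) {
      unsigned char buf[3 * 32];
      std::memcpy(buf +  0, prefix_hash.data,   32);
      std::memcpy(buf + 32, base_hash.data,     32);
      std::memcpy(buf + 64, prunable_hash.data, 32);
      return cn_fast_hash(buf, sizeof buf);
    }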
|
|
|
|
06b8f29 blockchain: keep alternative blocks in LMDB (moneromooo-monero)
|
|
|
|
Alternative blocks are cleared on startup unless --keep-alt-blocks
is passed on the command line
|
|
Ending the db txn in add_block caused the entire overarching
batch txn to stop.
Also add a new guard class so a db txn can be stopped in the
face of exceptions.
Also use a read only db txn in init when the db itself is
read only, and do not save the max tx size in that case.
|
|
Use the actual block weight limit, assuming that weight is always
greater than or equal to size
|
|
4b21d38d blockchain: speed up getting N blocks weights/long term weights (moneromooo-monero)
|
|
This curbs runaway growth while still allowing substantial
spikes in block weight
Original specification from ArticMine:
here is the scaling proposal
Define: LongTermBlockWeight
Before fork:
LongTermBlockWeight = BlockWeight
At or after fork:
LongTermBlockWeight = min(BlockWeight, 1.4*LongTermEffectiveMedianBlockWeight)
Note: To avoid possible consensus issues over rounding, the LongTermBlockWeight for a given block should be calculated to the nearest byte, and stored as an integer in the block itself. The stored LongTermBlockWeight is then used for future calculations of the LongTermEffectiveMedianBlockWeight and not recalculated each time.
Define: LongTermEffectiveMedianBlockWeight
LongTermEffectiveMedianBlockWeight = max(300000, MedianOverPrevious100000Blocks(LongTermBlockWeight))
Change Definition of EffectiveMedianBlockWeight
From (current definition)
EffectiveMedianBlockWeight = max(300000, MedianOverPrevious100Blocks(BlockWeight))
To (proposed definition)
EffectiveMedianBlockWeight = min(max(300000, MedianOverPrevious100Blocks(BlockWeight)), 50*LongTermEffectiveMedianBlockWeight)
Notes:
1) There are no other changes to the existing penalty formula, median calculation, fees etc.
2) There is the requirement to store the LongTermBlockWeight of a block unencrypted in the block itself. This is to avoid possible consensus issues over rounding and also to prevent the calculations from becoming unwieldy as we move away from the fork.
3) When the EffectiveMedianBlockWeight cap is reached it is still possible to mine blocks up to 2x the EffectiveMedianBlockWeight by paying the corresponding penalty.
Note: the long term block weight is stored in the database, but not in the actual block itself,
since it requires recalculating anyway for verification.
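The spec above translates almost mechanically into code. A sketch, with median() assumed, and the 1.4x factor computed in integer arithmetic (7/5) purely as an illustrative rounding choice:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Assumed helper: median of a list of weights.
    uint64_t median(std::vector<uint64_t> v);

    // LongTermBlockWeight = min(BlockWeight, 1.4 * LongTermEffectiveMedianBlockWeight)
    uint64_t long_term_block_weight(uint64_t block_weight, uint64_t lt_effective_median) {
      return std::min<uint64_t>(block_weight, lt_effective_median * 7 / 5);
    }

    // LongTermEffectiveMedianBlockWeight = max(300000, Median over previous 100000 blocks)
    uint64_t lt_effective_median(std::vector<uint64_t> last_100000_lt_weights) {
      return std::max<uint64_t>(300000, median(std::move(last_100000_lt_weights)));
    }

    // EffectiveMedianBlockWeight =
    //   min(max(300000, Median over previous 100 blocks), 50 * LongTermEffectiveMedianBlockWeight)
    uint64_t effective_median(std::vector<uint64_t> last_100_weights,
                              uint64_t lt_effective_median_weight) {
      uint64_t short_term = std::max<uint64_t>(300000, median(std::move(last_100_weights)));
      return std::min<uint64_t>(short_term, 50 * lt_effective_median_weight);
    }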
|
|
The blockchain prunes seven eighths of prunable tx data.
This saves about two thirds of the blockchain size, while
keeping the node useful as a sync source for an eighth
of the blockchain.
No other data is currently pruned.
There are three ways to prune a blockchain:
- run monerod with --prune-blockchain
- run "prune_blockchain" in the monerod console
- run the monero-blockchain-prune utility
The first two will prune in place. Due to how LMDB works, this
will not reduce the blockchain size on disk. Instead, it will
mark parts of the file as free, so that future data will use
that free space, causing the file to not grow until free space
grows scarce.
The third way will create a second database, a pruned copy of
the original one. Since this is a new file, this one will be
smaller than the original one.
Once the database is pruned, it will stay pruned as it syncs.
That is, there is no need to use --prune-blockchain again, etc.
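A sketch of the arithmetic behind "an eighth of the blockchain", assuming 8 stripes of 4096 consecutive blocks each — these constants are an assumption here, not stated above:

    #include <cstdint>

    constexpr uint32_t PRUNING_STRIPES     = 8;     // 2^3: prune 7/8 of prunable data
    constexpr uint32_t PRUNING_STRIPE_SIZE = 4096;  // blocks per stripe (assumed)

    // Which stripe a given block height belongs to (1..8).
    constexpr uint32_t pruning_stripe_for_height(uint64_t height) {
      return static_cast<uint32_t>((height / PRUNING_STRIPE_SIZE) % PRUNING_STRIPES) + 1;
    }

    // A pruned node keeps prunable tx data only for blocks in its own stripe,
    // so it remains a useful sync source for that eighth of the chain.
    constexpr bool keeps_prunable_data(uint64_t height, uint32_t my_stripe) {
      return pruning_stripe_for_height(height) == my_stripe;
    }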
|
|
Since the commitment has to be calculated for non-rct outputs,
it slows things down a lot unnecessarily if we don't need it
|
|
008647d7 blockchain_db: speedup tx output gathering (moneromooo-monero)
|
|
The get_output_key method is commonly used when working with txs and their key images. Because the method is not const, passing the blockchain object through const& or pointers to const is not possible in this context. This is especially problematic in external projects (e.g., projects in moneroexamples) that use the monero C++ API to operate on the blockchain and txs.
Thus, having a const get_output_key method will simplify moving the blockchain object around through const references and pointers to const objects.
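An illustrative before/after of the const change; the types are simplified stand-ins:

    #include <cstdint>

    struct output_data_t { /* pubkey, unlock time, height, ... */ };

    struct BlockchainDB {
      // Before: non-const, so a const BlockchainDB& cannot call it.
      // output_data_t get_output_key(uint64_t amount, uint64_t index);

      // After: const, callable through const& and pointers to const.
      virtual output_data_t get_output_key(uint64_t amount, uint64_t index) const = 0;
      virtual ~BlockchainDB() = default;
    };

    // External code can now take the blockchain by const reference:
    output_data_t lookup(const BlockchainDB &db) {
      return db.get_output_key(0, 123);
    }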
|
|
We know all the data we'll want for getblocks.bin is contiguous
|
|
a48f2dab blockchain_prune_known_spent_data: blackball file is now optional (moneromooo-monero)
17b45725 Outputs where all amounts are known spent can now be pruned (moneromooo-monero)
|
|
bd98e99c Removed a lot of unnecessary includes (Martijn Otto)
|
|
Only for pre-rct outputs, for obvious reasons.
Note: DO NOT use a known spent list which includes outputs
which are not known spent. If the list includes any output
that's just strongly thought to be spent, but not provably
so, you risk finding yourself unable to sync past the point
where that output is spent.
I estimate only 200 MB saved on current mainnet though,
unless the new blackballing rule unearths a good amount of
large-amount-set extra spent outs.
|
|
Coverity 136568
|
|
|
|
It was actually incorrect, as it would not return commitment
|
|
e5592c4 rpc: add blockchain disk size to getinfo (moneromooo-monero)
|
|
45e419b db: store cumulative rct output distribution in the db for speed (moneromooo-monero)
|
|
149da42 db_lmdb: enable batch transactions by default (stoffu)
34cb6b4 add --regtest and --fixed-difficulty for regression testing (vicsn)
9e1403e update get_info RPC and bump RPC version (vicsn)
207b66e first new functional tests (vicsn)
|
|
This gets rid of the temporary precalc cache.
Also make the RPC able to send data back in binary or JSON,
since there can be a lot of data
This bumps the LMDB database format to v3, with migration.
|
|
The on_generateblocks RPC call combines functionality from the on_getblocktemplate and on_submitblock RPC calls to allow rapid block creation. Difficulty is set permanently to 1 for regtest.
It makes use of the FAKECHAIN network type, but takes hard fork heights from mainchain.
The default reserve_size in the generate_blocks RPC call is now 1. If it is 0, the following error occurs: 'Failed to calculate offset for'.
It also queries hard fork heights info of other network types.
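A rough sketch of the combined flow under fixed difficulty 1 — get a template, find a nonce passing difficulty 1 (any hash does), submit, repeat. All names here are illustrative, not the daemon's actual internals:

    #include <cstdint>

    struct block { uint32_t nonce = 0; /* ... */ };

    // Assumed helpers standing in for the template/submit RPC plumbing.
    block get_block_template(const char *wallet_address, uint64_t reserve_size);
    bool check_proof_of_work(const block &b, uint64_t difficulty);
    bool submit_block(const block &b);

    void generate_blocks(uint64_t count, const char *address) {
      for (uint64_t i = 0; i < count; ++i) {
        block b = get_block_template(address, /*reserve_size=*/1);  // 0 triggers the offset error
        while (!check_proof_of_work(b, /*difficulty=*/1))           // trivially satisfied at diff 1
          ++b.nonce;
        submit_block(b);
      }
    }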
|
|
This should help new nodes predict how much disk space will be
needed for a full sync
|
|
Avoids valgrind reporting uninitialized data usage
|
|
This bumps DB version to 2, migration code will run for v1 DBs
|
|
|
|
This skips the vast majority of "dust" output amounts with just
one instance on the chain. Clocks in at 0.15% of the original
time on testnet.
|
|
|
|
It's cleaner this way, since it's a base class field
Coverity 136568
|
|
10013e94 Protect node privacy by proper filtering in restricted-mode RPC answers (binaryFate)
|
|
This patch allows filtering out sensitive information for queries that rely on the pool state, when running in restricted mode.
This filtering is only applied to data sent back to RPC queries. Results of inline commands typed locally in the daemon are not affected.
In practice, when running with `--restricted-rpc`:
* get_transaction_pool will list relayed transactions with the fields "last relayed time" and "received time" set to zero.
* get_transaction_pool will not list transactions that have do_not_relay set to true, and will not list key images that are used only by such transactions
* get_transaction_pool_hashes.bin will not list such transactions
* get_transaction_pool_stats will not count such transactions in any of the aggregated values that are computed
The implementation does not make filtering the default, so developers should be mindful of this if they add new RPC functionality.
Fixes #2590.
|
|
Transactions in the txpool are marked when another transaction
is seen double spending one or more of its inputs.
This is then exposed wherever appropriate.
Note that being marked with this "double spend seen" flag does
NOT mean this transaction IS a double spend and will never be
mined: it just means that the network has seen at least another
transaction spending at least one of the same inputs, so care
should be taken to wait for a few confirmations before acting
upon that transaction (ie, mostly of use for merchants wanting
to accept unconfirmed transactions).
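A sketch of the marking logic: when a new tx arrives whose key image is already spent by a different pool tx, flag both sides. The structures are simplified stand-ins for the real txpool types:

    #include <array>
    #include <map>
    #include <vector>

    using key_image = std::array<unsigned char, 32>;

    struct pool_tx {
      std::vector<key_image> key_images;
      bool double_spend_seen = false;
    };

    // key image -> txs in the pool spending it
    std::map<key_image, std::vector<pool_tx*>> spent_key_images;

    void on_new_pool_tx(pool_tx &tx) {
      for (const key_image &ki : tx.key_images) {
        auto &spenders = spent_key_images[ki];
        if (!spenders.empty()) {
          // Another pool tx already spends this input: mark everyone involved.
          tx.double_spend_seen = true;
          for (pool_tx *other : spenders)
            other->double_spend_seen = true;
        }
        spenders.push_back(&tx);
      }
    }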
|
|
And optimize import startup:
Remember the start_height position during the initial count_blocks pass
to avoid having to reread the entire file again to arrive at start_height
|
|
Avoids common depending on blockchain_db, which can cause
link errors.
|
|
If monerod is started with default sync mode, set it to SAFE after
synchronization completes. Set it back to FAST if synchronization
restarts (e.g. because another peer has a longer blockchain).
If monerod is started with an explicit sync mode, none of this
automation takes effect.
|
|
Hide LMDB-specific stuff behind blockchain_db.h. Nobody besides blockchain_db.cpp
should ever be including DB-specific headers any more.
|
|
Avoids exception spam for the "nope, not found" case
|
|
Changed Blockchain::for_all_blocks() to for_blocks_range()
Operate on blockchain in-place instead of building a copy first.
|
|
Integration could go further (ie, return_tx_to_pool calls should
not be needed anymore, possibly other things).
poolstate.bin is now obsolete.
|
|
BlockchainDB functions virtual again to avoid missing symbols error
|
|
should fix a cross dependency between cryptonote_basic and
blockchain_db
|
|
|
|
0288310e blockchain_db: add "raw" blobdata getters for block and transaction (moneromooo-monero)
|
|
This speeds up operations such as serving blocks to syncing peers
|
|
When scanning for outputs used in a set of incoming blocks,
we expect that some of the inputs in their transactions will
not be found in the blockchain, as they could be in previous
blocks in that set. Those outputs will be scanned there at
a later point. In this case, we add a flag to control whether
an output not being found is expected or not.
|
|
|
|
3ff54bdd Check for correct thread before ending batch transaction (Howard Chu)
eaf8470b Must wait for previous batch to finish before starting new one (Howard Chu)
c903c554 Don't cache block height, always get from DB (Howard Chu)
eb1fb601 Tweak default db-sync-mode to fast:async:1 (Howard Chu)
0693cff9 Use batch transactions when syncing (Howard Chu)
|
|
Faster throughput while avoiding corruption, i.e., makes
running with the 'safe' --db-sync-mode more tolerable.
|
|
This is a normal occurrence in many cases, and there is no need
to spam the log with those when it is.
|
|
25% of the outputs are selected from the last 5 days (if possible),
in order to avoid the common case of sending recently received
outputs again. 25% and 5 days are subject to review later, since
it's just a wallet level change.
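A sketch of the biased pick described above: 25% of the time, choose from the newest outputs (roughly the last 5 days); otherwise choose over everything. The real wallet picker is more subtle; this only illustrates the 25%/5-day bias, and all names are illustrative:

    #include <cstdint>
    #include <random>

    uint64_t pick_fake_out(uint64_t num_outs, uint64_t num_recent_outs,
                           std::mt19937_64 &rng) {
      std::uniform_real_distribution<double> coin(0.0, 1.0);
      if (coin(rng) < 0.25 && num_recent_outs > 0 && num_recent_outs <= num_outs) {
        // recent zone: the last num_recent_outs global indices (~5 days of outputs)
        std::uniform_int_distribution<uint64_t> recent(num_outs - num_recent_outs,
                                                       num_outs - 1);
        return recent(rng);
      }
      std::uniform_int_distribution<uint64_t> any(0, num_outs - 1);
      return any(rng);
    }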
|
|
Since this queries block heights for blocks that may or may not
exist, queries for non-existing blocks would throw an exception,
and that would slow down the loop a lot: 7 seconds to go through
a 30-hash list.
Fix this by adding an optional return block height to block_exists
and using this instead. Actual errors will still throw an
exception.
This also cuts down on log exception spam.
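A sketch of the exception-free lookup: an optional out-parameter returns the height when the block exists, so the hot loop avoids throw/catch. The surrounding types are simplified stand-ins:

    #include <cstddef>
    #include <cstdint>

    struct hash { unsigned char data[32]; };

    struct BlockchainDB {
      // Returns false instead of throwing when the block is unknown;
      // on success, *height (if non-null) receives the block height.
      virtual bool block_exists(const hash &h, uint64_t *height = nullptr) const = 0;
      virtual ~BlockchainDB() = default;
    };

    // Hot path: scanning a peer's 30-hash list without exception overhead.
    uint64_t find_first_known_height(const BlockchainDB &db, const hash *hashes, size_t n) {
      uint64_t height = 0;
      for (size_t i = 0; i < n; ++i)
        if (db.block_exists(hashes[i], &height))
          return height;  // first hash we know about
      return 0;
    }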
|
|
Since these are needed at the same time as the output pubkeys,
this is a whole lot faster, and takes less space. Only outputs
of 0 amount store the commitment. When reading other outputs,
a fake commitment is regenerated on the fly. This avoids having
to rewrite the database to add space for fake commitments for
existing outputs.
This code relies on two things:
- LMDB must support fixed size records per key, rather than
per database (ie, all records on key 0 are the same size, all
records for non 0 keys are same size, but records from key 0
and non 0 keys do have different sizes).
- the commitment must be directly after the rest of the data
in outkey and output_data_t.
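A sketch of the layout trick: the commitment sits at the very end of the record, so key-0 (rct) records are stored at full size while non-zero-amount records are stored without the trailing commitment. The field types are simplified stand-ins:

    #include <cstddef>
    #include <cstdint>

    struct pubkey     { unsigned char data[32]; };
    struct commitment { unsigned char data[32]; };

    #pragma pack(push, 1)
    struct outkey {
      pubkey     key;
      uint64_t   unlock_time;
      uint64_t   height;
      commitment mask;  // must stay last: omitted on disk for non-rct outputs
    };
    #pragma pack(pop)

    // Record sizes per LMDB key (fixed per key, not per database):
    constexpr size_t rct_record_size    = sizeof(outkey);                      // amount == 0
    constexpr size_t prerct_record_size = sizeof(outkey) - sizeof(commitment); // amount != 0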
|
|
|
|
This constrains the number of instances of any amount
to the unlocked ones (as defined by the default unlock time
setting: outputs with non default unlock time are not
considered, so may be counted as unlocked even if they are
not actually unlocked).
|
|
It's not really needed, it used to be an optimization for when
that code was not using the db and needed to recalculate things
fast on startup.
|
|
Delete old indices and recreate them, rather than updating them
Maybe not quite as slow as before.
|
|
|
|
drop obsolete remove_output()
fix get_output_key(global), fix crash in blockchain_dump
|
|
Try to rationalize the variable names, document usage.
|
|
This speeds up wallet refresh by directly retrieving a tx's amount output indices.
It removes the indirection of walking the amount output duplicate list
for every amount in each requested tx.
"tx_outputs" is used by:
Amount output indices are needed for wallet refresh.
Global output indices are needed for removing a tx.
Both amount output indices and global output indices are now stored in
an array of 64-bit unsigned ints:
tx_outputs[<tx_hash>] -> [ <a1_oi, a1_gi, a2_oi, a2_gi, ...> ]
Previously it was:
tx_outputs[<tx_hash>] -> duplicate list of <a1_gi, a2_gi, a3_gi, ...>
The amount output list had to be walked for every amount in order to
find each amount's output index, by comparing the amount's global output
index with each one in the duplicate list until a match was found.
See also d045dfa7ce0bf131681193c97560da26f9f37900
|
|
This is a list of existing output amounts along with the number
of outputs of that amount in the blockchain.
The daemon command takes:
- no parameters: all outputs with at least 3 instances
- one parameter: all outputs with at least that many instances
- two parameters: all outputs within that many instances
The default starts at 3 to avoid massive spamming of all dust
outputs in the blockchain, and is the current minimum mixin
requirement.
An optional vector of amounts may be passed, to request
histogram only for those outputs.
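A sketch of the query shape implied above; the field names mirror the description, not necessarily the actual RPC structs:

    #include <cstdint>
    #include <map>
    #include <vector>

    struct histogram_request {
      std::vector<uint64_t> amounts;  // optional: restrict to these amounts
      uint64_t min_count = 3;         // default cutoff, skips one-off dust
      uint64_t max_count = 0;         // 0 = no upper bound
    };

    // amount -> number of outputs of that amount on the chain
    using histogram_response = std::map<uint64_t, uint64_t>;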
|
|
bfd4a28 Update BlockchainDB documentation (Thomas Winget)
797357e Change Doxyfile, Blockchain not blockchain_storage (Thomas Winget)
c835215 remove defunct code from cryptonote::core (Thomas Winget)
50dba6d cryptonote::core doxygen documentation (Thomas Winget)
8ac329d doxygen documentation for difficulty functions (Thomas Winget)
540a76c Move checkpoint functions into checkpoints class (Thomas Winget)
1b0c98e doxygen documentation for checkpoints.{h,cpp} (Thomas Winget)
89c24ac Remove unnecessary or defunct code (Thomas Winget)
ab0ed14 doxygen include private and static members (Thomas Winget)
3a48449 Updated documentation for blockchain.* (Thomas Winget)
|
|
This reverts commit 7fa63a82a1c3a0243f6757c1689855ed3ca61695, reversing
changes made to cb6be986c36b78eddb4b7f16e9ad440af8567dc4.
|
|
BlockchainDB is now Doxygen-compliant and its documentation is
up-to-date with recent changes.
|
|
Ain't nobody got time for link/cmake skullduggery.
This reverts commit fff238ec94ac6d45fc18c315d7bc590ddfaad63d.
|
|
Useful for debugging users' logs
|
|
Could wrap more later.
|
|
1995923 BlockchainLMDB: Deal with DB exceptions at block level with particularity (warptangent)
c16cc20 BlockchainLMDB: Add sanity check for inconsistent state (warptangent)
9118d0a BlockchainLMDB: Call destructor on allocated txn if setup fails (warptangent)
f5581c3 BlockchainLMDB: Replace remaining txn pointer NULLs with nullptr (warptangent)
|
|
Add another DB error exception type to distinguish failed txn setup from
general use of txn.
This keeps the error handling flow the same as before the block-level
txn setup changes that moved control up a layer to BlockchainDB.
|
|
|
|
This will later allow the HardFork object's DB update functions to be
called when the DB transaction that persists across block add/remove is
open.
|
|
Delete the hf tables, so the next open will rescan and regenerate
|
|
Remove trailing whitespace in same files.
|
|
Early DB versions did not store key images for inputs if the
transaction spending them had no outputs (ie, all fee). This
is not correct, as this would allow these outputs to be double
spent. This was fixed in 533acc30eda7792c802ea8b6417917fa99b8bc2b
a few months ago, but databases having synced blocks 2021612 and
685498 with a faulty version will be missing those key images
in the spent keys database. This code checks for this, and adds
those key images if they are missing.
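A sketch of that repair pass: re-derive the key images spent by the affected txs and insert any that the spent-keys table is missing. The names are stand-ins for the real DB interface:

    #include <vector>

    struct key_image   { unsigned char data[32]; };
    struct transaction { std::vector<key_image> key_images; };

    struct BlockchainDB {
      virtual bool has_key_image(const key_image &ki) const = 0;
      virtual void add_spent_key(const key_image &ki) = 0;
      virtual ~BlockchainDB() = default;
    };

    // Old versions skipped these key images for all-fee txs; add them back.
    void repair_missing_key_images(BlockchainDB &db,
                                   const std::vector<transaction> &affected_txs) {
      for (const transaction &tx : affected_txs)
        for (const key_image &ki : tx.key_images)
          if (!db.has_key_image(ki))
            db.add_spent_key(ki);
    }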
|
|
|
|
git history's here if needed to get any of this back
|
|
|
|
There will be a delay on first load of an existing blockchain
as it gets reparsed for this state data.
|
|
It was only used by the older blockchain_storage.
We also move the code to the calling blockchain level, to avoid
replicating the code in every DB implementation. This also makes
the get_random_out method obsolete, and we delete it.
|
|
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
2. Optim: Cache verified txs and return result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk querying database and multi-thread when possible.
4. Optim: Disable double spend check on block verification, double spend is already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on called functions from check_tx_inputs which causes slowdowns (only seems to happen on ubuntu/VMs??? Reason: TBD)
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block is needed when a new block arrives (instead of 1470 reads).
Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
8. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.
LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect)
2. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
3. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto resize to +1GB instead of multiplier x1.5
ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.
[PENDING ISSUES]
1. Berkeley db has a very slow "pop-block" operation. This is very noticeable on the RPI2 as it sometimes takes > 10 MINUTES to pop a block during reorganization.
This does not happen very often however, most reorgs seem to take a few seconds but it possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.
[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
a. 0 = Compute long hash per block (may take a while depending on CPU)
b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but safest option to protect against power-out/crash conditions.
b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery operated devices or STABLE systems with UPS and/or systems with battery backed write cache/solid state cache.
Fast - Write meta-data but defer data flush.
Fastest - Defer meta-data and data flush.
Sync - Flush data after nblocks_per_sync and wait.
Async - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
For berkeley-db only. Auto remove logs if enabled.
**Note: lmdb and berkeley-db have changes to the tables and are not compatible with official git head version.
At the moment, you need a full resync to use this optimized version.
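A sketch of how a --db-sync-mode value such as "fastest:async:1000" (the format described under [NEW OPTIONS] above) can be split into its three parts; the parsing below is illustrative, not the daemon's actual option handling:

    #include <cstdint>
    #include <sstream>
    #include <string>

    struct db_sync_mode {
      std::string level = "fastest";  // safe | fast | fastest
      bool async = true;              // sync | async
      uint64_t nblocks_per_sync = 1000;
    };

    db_sync_mode parse_db_sync_mode(const std::string &arg) {
      db_sync_mode m;
      std::istringstream ss(arg);
      std::string token;
      if (std::getline(ss, token, ':') && !token.empty()) m.level = token;
      if (std::getline(ss, token, ':') && !token.empty()) m.async = (token == "async");
      if (std::getline(ss, token, ':') && !token.empty()) m.nblocks_per_sync = std::stoull(token);
      return m;
    }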
[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K+SSD+(with full pow computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours plus download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with memory can process blocks up to 585K in ~4.2 hours plus download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours plus download time and usually takes 36 hours to sync the full chain.
Average processing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437fd41a5498ee5e74e2422ea6177aa3e)
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block
**Note: The following data denotes processing times only (does not include p2p download time)
lmdb-optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, Dual-core / 4-threads U4200 (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).
lmdb-optimized processing times (with per-block-checkpoint)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with full pow computation)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2. Improved from estimated 3 months(???) into 2.5 days (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with per-block-checkpoint)
1. RPI2. 12-15 hours (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
|
|
|
|
This will assist in a DB resize check.
|
|
|
|
There will need to be some more refactoring for these changes to be
considered complete/correct, but for now it's working.
New daemon CLI argument "--db-type" works for LMDB and BerkeleyDB.
A good deal of refactoring is also present in this commit, namely
Blockchain no longer instantiates BlockchainDB, but rather is passed a
pointer to an already-instantiated BlockchainDB on init().
|
|
Add support to:
- BlockchainDB, BlockchainLMDB
- blockchain_import utility to open LMDB database with one or more
LMDB flags.
Sample use:
$ blockchain_import --database lmdb#nosync
$ blockchain_import --database lmdb#nosync,nometasync
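A sketch of splitting such a "--database lmdb#nosync,nometasync" value into a db type and LMDB environment flags. MDB_NOSYNC/MDB_NOMETASYNC are real LMDB flags; the parsing itself is illustrative:

    #include <string>
    #include <lmdb.h>

    unsigned int parse_db_arg(const std::string &arg, std::string &db_type) {
      unsigned int flags = 0;
      const auto hash_pos = arg.find('#');
      db_type = arg.substr(0, hash_pos);            // e.g. "lmdb"
      if (hash_pos == std::string::npos)
        return flags;
      std::string rest = arg.substr(hash_pos + 1);  // e.g. "nosync,nometasync"
      std::string::size_type start = 0;
      while (start <= rest.size()) {
        const auto comma = rest.find(',', start);
        const std::string flag = rest.substr(start, comma - start);
        if (flag == "nosync")     flags |= MDB_NOSYNC;
        if (flag == "nometasync") flags |= MDB_NOMETASYNC;
        if (comma == std::string::npos) break;
        start = comma + 1;
      }
      return flags;
    }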
|
|
In order to make things more general, BlockchainDB now has get_db_name()
which should return a string with the "name" of that type of db.
This "name" will be the subfolder name that holds that db type's files
within the monero folder.
Small bugfix: blockchain_converter was not correctly appending this in
the prior hard-coded-string implementation of the subfolder data
directory concept.
|
|
Ostensibly janitorial work, but should be more relevant later down the
line. Things that depend on core cryptonote things (i.e.
cryptonote_core) don't necessarily depend on BlockchainDB and thus
have no need to have BlockchainDB baked in with them.
|