|
The check interferes with raw device/partition support.
|
|
This reverts commit bd96536637724413173271e8d5df1777f7879c29.
The check interferes with raw device/partition support.
|
|
|
|
Implements view tags as proposed by @UkoeHB in MRL issue
https://github.com/monero-project/research-lab/issues/73
At tx construction, the sender adds a 1-byte view tag to each
output. The view tag is derived from the sender-receiver
shared secret. When scanning for outputs, the receiver can
check the view tag for a match, in order to reduce scanning
time. When the view tag does not match, the wallet avoids the
more expensive EC operations when deriving the output public
key using the shared secret.
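A minimal sketch of that receiver-side fast path, assuming a 1-byte tag derived from the shared secret and the output index; derive_view_tag and output_may_be_ours are illustrative names, and std::hash stands in for the real domain-separated hash, so this is not Monero's actual API:
```cpp
// Minimal sketch of the receiver-side fast path. derive_view_tag and
// output_may_be_ours are illustrative names, and std::hash stands in for the
// real domain-separated hash of the shared secret; this is not Monero's API.
#include <cstdint>
#include <functional>
#include <string>

using view_tag_t = std::uint8_t;

// Derive a 1-byte tag from the sender-receiver shared secret (key derivation)
// and the output index.
view_tag_t derive_view_tag(const std::string &derivation, std::uint64_t output_index)
{
    std::string buf = "view_tag";
    buf += derivation;
    buf.append(reinterpret_cast<const char *>(&output_index), sizeof(output_index));
    return static_cast<view_tag_t>(std::hash<std::string>{}(buf) & 0xff);
}

// Scanning: only fall through to the expensive EC derivation of the output
// public key when the cheap 1-byte comparison matches.
bool output_may_be_ours(const std::string &derivation, std::uint64_t output_index,
                        view_tag_t tag_in_tx)
{
    return derive_view_tag(derivation, output_index) == tag_in_tx;
}
```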
|
|
|
|
It avoids dividing by 8 when deserializing a tx, which is a slow
operation, and multiplies by 8 when verifying and extracting the
amount, which is much faster as well as less frequent
|
|
|
|
|
|
|
|
Turns out at least one arch (armel-based) does not have unique_path
implemented and throws
|
|
If an invalid input type were to get to this, the code could
remove key images that might be present already in the chain,
which could allow a double spend, even if this is impossible
with the current code.
Reported by KeyboardWarrior.
|
|
There are quite a few variables in the code that are no longer
(or perhaps never were) in use. These were discovered by enabling
compiler warnings for unused variables and cleaning them up.
In most cases where an unused variable was the result of a function
call, the call was kept but the variable assignment removed, unless
it was obvious that the call was a simple getter with no side effects.
|
|
|
|
|
|
These are functions that check whether a tx is in the db, so the tx
not being there is an expected outcome rather than an error, and the
log message seems to scare people from time to time
|
|
|
|
Difficulty recalculation
On startup, the daemon checks against the difficulty checkpoints, and if any mismatch is found, it recalculates the difficulties of all blocks with wrong values. Additionally, once a week it recalculates the difficulties of blocks after the last difficulty checkpoint.
|
|
Reported by adrelanos
|
|
|
|
It'll make it clearer when a DB init failure is due to being
on a filesystem which does not support mmap
|
|
|
|
Update copyright year to 2020
|
|
- New flag in NOTIFY_NEW_TRANSACTION to indicate stem mode
- Stem loops detected in tx_pool.cpp
- Embargo timeout for a blackhole attack during stem phase
|
|
for example, in the RPC server
|
|
If a db resize happened, the txpool meta cursor might be stale,
and was not being renewed when necessary.
It would cause this SIGSEGV:
in mdb_cursor_set ()
in mdb_cursor_get ()
in cryptonote::BlockchainLMDB::get_txpool_tx_blob(crypto::hash const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&, cryptonote::relay_category) const ()
in cryptonote::tx_memory_pool::get_transaction(crypto::hash const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&, cryptonote::relay_category) const ()
in cryptonote::t_cryptonote_protocol_handler<cryptonote::core>::handle_notify_new_fluffy_block(int, epee::misc_utils::struct_init<cryptonote::NOTIFY_NEW_FLUFFY_BLOCK::request_t>&, cryptonote::cryptonote_connection_context&) ()
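A hedged sketch of the underlying LMDB pattern (a simplification; refresh_read_cursor is an illustrative helper, and the actual BlockchainLMDB code tracks per-thread cursor flags): after the environment is resized, a stale read txn and its cursor must be renewed rather than reused.
```cpp
// Hedged sketch of the underlying LMDB pattern (not the BlockchainLMDB code,
// which tracks per-thread cursor flags): after the environment is resized,
// a stale read txn/cursor must be renewed instead of reused.
#include <lmdb.h>
#include <cstdio>
#include <cstdlib>

static void check(int rc, const char *what)
{
    if (rc != MDB_SUCCESS)
    {
        std::fprintf(stderr, "%s: %s\n", what, mdb_strerror(rc));
        std::exit(EXIT_FAILURE);
    }
}

// Renew a read-only txn and the cursor opened in it, so a later
// mdb_cursor_get() does not dereference stale state.
void refresh_read_cursor(MDB_txn *rtxn, MDB_cursor *cursor)
{
    mdb_txn_reset(rtxn);
    check(mdb_txn_renew(rtxn), "mdb_txn_renew");
    check(mdb_cursor_renew(rtxn, cursor), "mdb_cursor_renew");
}
```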
|
|
Useful for wallet refresh or node sync
|
|
Cleaning up a little around the code base.
|
|
|
|
About twice as fast, very roughly
|
|
|
|
|
|
as opposed to an absent record
|
|
If the peer (whether pruned or not itself) supports sending pruned blocks
to syncing nodes, the pruned version will be sent along with the hash
of the pruned data and the block weight. The original tx hashes can be
reconstructed from the pruned txes and their prunable data hash. Those
hashes and the block weights are hashed and checked against the set of
precompiled hashes, ensuring the data we received is the original data.
It is currently not possible to use this system when not using the set
of precompiled hashes, since block weights cannot otherwise be checked
for validity.
This is off by default for now, and is enabled by --sync-pruned-blocks
|
|
|
|
|
|
We've added a lot of new indices recently, and 20 isn't enough for them plus
new DBs opened during format migrations.
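For context, the limit in question is the one set via mdb_env_set_maxdbs() before the environment is opened; the value 32 below is illustrative, not the number the commit actually picked.
```cpp
// Hedged sketch of where the limit lives: mdb_env_set_maxdbs() must be called
// before mdb_env_open(), and the value must cover every named DB plus any
// extra DBs opened during a format migration. 32 is illustrative, not the
// number the commit actually picked.
#include <lmdb.h>
#include <cstdio>
#include <cstdlib>

int main()
{
    MDB_env *env = nullptr;
    if (mdb_env_create(&env))
        return EXIT_FAILURE;
    mdb_env_set_maxdbs(env, 32);  // 20 was no longer enough
    if (mdb_env_open(env, "./testdb", 0, 0644))
    {
        std::fprintf(stderr, "mdb_env_open failed\n");
        mdb_env_close(env);
        return EXIT_FAILURE;
    }
    mdb_env_close(env);
    return EXIT_SUCCESS;
}
```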
|
|
to avoid errors when the txn is too large
|
|
Passing NULL is harmless in practice when size is 0, but memcpy is
declared with nonnull attributes, so let's not poke the bear
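A minimal illustration of the guard, with append_bytes as a hypothetical caller: skip the memcpy entirely when there is nothing to copy and the source pointer may be null.
```cpp
// Minimal illustration of the guard: memcpy is declared with nonnull
// attributes, so skip the call entirely when there is nothing to copy
// and the source pointer may be null. append_bytes is a hypothetical caller.
#include <cstddef>
#include <cstring>
#include <vector>

void append_bytes(std::vector<unsigned char> &out, const unsigned char *data, std::size_t len)
{
    if (len == 0)
        return;  // data may legitimately be NULL here; don't poke the bear
    const std::size_t old_size = out.size();
    out.resize(old_size + len);
    std::memcpy(out.data() + old_size, data, len);
}
```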
|
|
|
|
Alternative blocks are cleared on startup unless --keep-alt-blocks
is passed on the command line
|
|
|
|
|
|
and delete obsolete BlockchainBDB::get_tx_output_indices along the way
|
|
The db txn in add_block ending caused the entire overarching
batch txn to stop.
Also add a new guard class so a db txn can be stopped in the
face of exceptions.
Also use a read only db txn in init when the db itself is
read only, and do not save the max tx size in that case.
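A rough sketch of the guard-class idea, with illustrative names (db_txn_guard, add_block_checked) rather than the class actually added: the destructor ends the txn if it is still active, so an exception cannot leave a dangling txn, and commit() is the explicit success path.
```cpp
// Rough sketch of the guard-class idea: the destructor ends the txn if it is
// still active, so an exception cannot leave a dangling txn; commit() is the
// explicit success path. Names are illustrative, not the class the commit adds.
struct db_txn_guard
{
    explicit db_txn_guard(bool readonly) : m_active(true), m_readonly(readonly)
    {
        // begin a txn here; a read-only txn would be used when the DB is read-only
    }
    ~db_txn_guard()
    {
        if (m_active)
            abort_txn();  // exception-safe: always ends the txn
    }
    void commit()    { /* commit the txn */ m_active = false; }
    void abort_txn() { /* abort the txn */  m_active = false; }

    db_txn_guard(const db_txn_guard &) = delete;
    db_txn_guard &operator=(const db_txn_guard &) = delete;

private:
    bool m_active;
    bool m_readonly;
};

void add_block_checked()
{
    db_txn_guard guard(false);
    // ... write block data; if anything throws, the dtor aborts the txn ...
    guard.commit();
}
```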
|
|
Use the actual block weight limit, assuming that weight is always
greater or equal to size
|
|
Based on Boolberry work by:
jahrsg <jahr@jahr.me>
cr.zoidberg <crypto.zoidberg@gmail.com>
|
|
|
|
|
|
If mdb_block_info changes again, the v2 to v3 conversion would
convert to an incorrect format.
|
|
|
|
|
|
by avoiding repeated (de)serialization
|
|
This curbs runaway growth while still allowing substantial
spikes in block weight
Original specification from ArticMine:
here is the scaling proposal
Define: LongTermBlockWeight
Before fork:
LongTermBlockWeight = BlockWeight
At or after fork:
LongTermBlockWeight = min(BlockWeight, 1.4*LongTermEffectiveMedianBlockWeight)
Note: To avoid possible consensus issues over rounding, the LongTermBlockWeight for a given block should be calculated to the nearest byte and stored as an integer in the block itself. The stored LongTermBlockWeight is then used for future calculations of the LongTermEffectiveMedianBlockWeight and not recalculated each time.
Define: LongTermEffectiveMedianBlockWeight
LongTermEffectiveMedianBlockWeight = max(300000, MedianOverPrevious100000Blocks(LongTermBlockWeight))
Change Definition of EffectiveMedianBlockWeight
From (current definition)
EffectiveMedianBlockWeight = max(300000, MedianOverPrevious100Blocks(BlockWeight))
To (proposed definition)
EffectiveMedianBlockWeight = min(max(300000, MedianOverPrevious100Blocks(BlockWeight)), 50*LongTermEffectiveMedianBlockWeight)
Notes:
1) There are no other changes to the existing penalty formula, median calculation, fees etc.
2) There is the requirement to store the LongTermBlockWeight of a block unencrypted in the block itself. This is to avoid possible consensus issues over rounding and also to prevent the calculations from becoming unwieldy as we move away from the fork.
3) When the EffectiveMedianBlockWeight cap is reached it is still possible to mine blocks up to 2x the EffectiveMedianBlockWeight by paying the corresponding penalty.
Note: the long term block weight is stored in the database, but not in the actual block itself,
since it requires recalculating anyway for verification.
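The definitions above transcribe fairly directly into code; the sketch below follows the stated formulas, with the median helper and the integer rounding of 1.4*x as assumptions rather than consensus code.
```cpp
// Sketch of the definitions above; median() and the integer rounding of
// 1.4*x are assumptions here, not the consensus implementation.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

uint64_t median(std::vector<uint64_t> v)
{
    std::sort(v.begin(), v.end());
    const std::size_t n = v.size();
    return n % 2 ? v[n / 2] : (v[n / 2 - 1] + v[n / 2]) / 2;
}

// LongTermBlockWeight, at or after the fork
uint64_t long_term_block_weight(uint64_t block_weight, uint64_t long_term_effective_median)
{
    return std::min<uint64_t>(block_weight, long_term_effective_median * 14 / 10);
}

// LongTermEffectiveMedianBlockWeight over the previous 100000 blocks
uint64_t long_term_effective_median(const std::vector<uint64_t> &last_100000_ltbw)
{
    return std::max<uint64_t>(300000, median(last_100000_ltbw));
}

// EffectiveMedianBlockWeight (proposed definition)
uint64_t effective_median_block_weight(const std::vector<uint64_t> &last_100_weights,
                                       uint64_t ltembw)
{
    const uint64_t short_term = std::max<uint64_t>(300000, median(last_100_weights));
    return std::min<uint64_t>(short_term, 50 * ltembw);
}
```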
|
|
Fixed by hyc
|
|
The blockchain prunes seven eighths of prunable tx data.
This saves about two thirds of the blockchain size, while
keeping the node useful as a sync source for an eighth
of the blockchain.
No other data is currently pruned.
There are three ways to prune a blockchain:
- run monerod with --prune-blockchain
- run "prune_blockchain" in the monerod console
- run the monero-blockchain-prune utility
The first two will prune in place. Due to how LMDB works, this
will not reduce the blockchain size on disk. Instead, it will
mark parts of the file as free, so that future data will use
that free space, causing the file to not grow until free space
grows scarce.
The third way will create a second database, a pruned copy of
the original one. Since this is a new file, this one will be
smaller than the original one.
Once the database is pruned, it will stay pruned as it syncs.
That is, there is no need to use --prune-blockchain again, etc.
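A hedged sketch of the "keep one eighth" rule: a pruned node keeps prunable data only for blocks whose stripe matches its own. The 4096-block stripe size and the 8 stripes below are illustrative assumptions consistent with the seven-eighths figure above, not values read from the source.
```cpp
// Hedged sketch of the "one eighth" rule: a pruned node keeps prunable data
// only for blocks whose stripe matches its own. The 4096-block stripe size
// and the 8 stripes are illustrative assumptions consistent with the
// seven-eighths figure above.
#include <cstdint>

constexpr uint64_t PRUNING_STRIPE_SIZE = 4096;  // blocks per stripe (assumed)
constexpr uint32_t PRUNING_STRIPES     = 8;     // keep 1 stripe, prune 7

// Stripe (1..8) that a given block height falls into.
uint32_t block_stripe(uint64_t height)
{
    return static_cast<uint32_t>((height / PRUNING_STRIPE_SIZE) % PRUNING_STRIPES) + 1;
}

// Whether this node keeps the prunable data for that height.
bool keeps_prunable_data(uint64_t height, uint32_t node_stripe)
{
    return block_stripe(height) == node_stripe;
}
```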
|
|
|
|
Since the commitment has to be calculated for non-rct outputs,
this slows things down a lot unnecessarily if we don't need it
|
|
The get_output_key method is commonly used when working with txs and their key images. Because the method is not const, passing the blockchain object through const& or pointers to const is not possible in this context. This is especially problematic in external projects (e.g., projects in moneroexamples) that use the monero C++ api to operate on the blockchain and txs.
Thus, having a const get_output_key method will simplify moving the blockchain object around through const references and pointers to const objects.
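A small illustration of why the const overload matters, with blockchain_view and output_data as simplified stand-ins for the real Blockchain interface: only a const-qualified method can be called through a const reference or pointer to const.
```cpp
// Why the const overload matters: only a const-qualified method can be called
// through a const reference or pointer to const. blockchain_view and
// output_data are simplified stand-ins for the real Blockchain interface.
#include <cstdint>

struct output_data { /* pubkey, unlock time, ... */ };

class blockchain_view
{
public:
    // const overload: callable through const blockchain_view& / const blockchain_view*
    output_data get_output_key(uint64_t /*amount*/, uint64_t /*index*/) const { return {}; }
};

void inspect(const blockchain_view &bc)
{
    output_data od = bc.get_output_key(0, 1234);  // compiles only because the method is const
    (void)od;
}
```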
|
|
We know all the data we'll want for getblocks.bin is contiguous
|
|
|
|
|
|
|
|
|
|
Only for pre-rct outputs, for obvious reasons.
Note: DO NOT use a known spent list which includes outputs
which are not known spent. If the list includes any output
that's just strongly thought to be spent, but not provably
so, you risk finding yourself unable to sync past the point
where that output is spent.
I estimate only 200 MB saved on current mainnet though,
unless the new blackballing rule unearths a good amount of
large-amount-set extra spent outs.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Coverity 136568
|
|
Should only stop the rtxn if we actually started it
Fixes Coverity 184960
|
|
bcf3f6af fuzz_tests: catch unhandled exceptions (moneromooo-monero)
3ebd05d4 miner: restore stream flags after changing them (moneromooo-monero)
a093092e levin_protocol_handler_async: do not propagate exception through dtor (moneromooo-monero)
1eebb82b net_helper: do not propagate exceptions through dtor (moneromooo-monero)
fb6a3630 miner: do not propagate exceptions through dtor (moneromooo-monero)
2e2139ff epee: do not propagate exception through dtor (moneromooo-monero)
0749a8bd db_lmdb: do not propagate exceptions in dtor (moneromooo-monero)
1b0afeeb wallet_rpc_server: exit cleanly on unhandled exceptions (moneromooo-monero)
418a9936 unit_tests: catch unhandled exceptions (moneromooo-monero)
ea7f9543 threadpool: do not propagate exceptions through the dtor (moneromooo-monero)
6e855422 gen_multisig: nice exit on unhandled exception (moneromooo-monero)
53df2deb db_lmdb: catch error in mdb_stat calls during migration (moneromooo-monero)
e67016dd blockchain_blackball: catch failure to commit db transaction (moneromooo-monero)
661439f4 mlog: don't remove old logs if we failed to rename the current file (moneromooo-monero)
5fdcda50 easylogging++: test for NULL before dereference (moneromooo-monero)
7ece1550 performance_test: fix bad last argument calling add_arg (moneromooo-monero)
a085da32 unit_tests: add check for page size > 0 before dividing (moneromooo-monero)
d8b1ec8b unit_tests: use std::shared_ptr to shut coverity up about leaks (moneromooo-monero)
02563bf4 simplewallet: top level exception catcher to print nicer messages (moneromooo-monero)
c57a65b2 blockchain_blackball: fix shift range for 32 bit archs (moneromooo-monero)
|
|
fe125647 Fixup RENAME_DB() macro (Howard Chu)
|
|
Make sure target DB's record is on a writable page
|
|
it's confusing and needlessly complicated
|
|
|
|
|
|
|
|
instead of a random ratio from 60% to 90%.
|
|
Blocks have a very wide range, whereas actual size is the relevant
quantity to consider when syncing
|
|
It was actually incorrect, as it would not return commitment
|
|
|
|
This gets rid of the temporary precalc cache.
Also make the RPC able to send data back in binary or JSON,
since there can be a lot of data
This bumps the LMDB database format to v3, with migration.
|
|
This would only throw
|
|
The on_generateblocks RPC call combines functionality from the on_getblocktemplate and on_submitblock RPC calls to allow rapid block creation. Difficulty is set permanently to 1 for regtest.
Makes use of the FAKECHAIN network type, but takes hard fork heights from mainchain.
Default reserve_size in the generate_blocks RPC call is now 1. If it is 0, the following error occurs: 'Failed to calculate offset for'.
Queries hard fork heights info of other network types.
|
|
This should help new nodes predict how much disk space will be
needed for a full sync
|
|
|
|
|
|
|
|
Avoids valgrind reporting uninitialized data usage
|
|
|
|
This bumps DB version to 2, migration code will run for v1 DBs
|
|
|
|
This skips the vast majority of "dust" output amounts with just
one instance on the chain. Clocks in at 0.15% of the original
time on testnet.
|
|
|
|
|
|
reported by Brad Richards
|
|
|
|
It's cleaner this way, since it's a base class field
Coverity 136568
|
|
Coverity 136364
|
|
|
|
|
|
Reset thread-local info if it doesn't match the current env.
Only happens when a process opens/closes env multiple times in the
same process, doesn't affect monerod.
|
|
Reset thread-specific flags when a write txn is started.
Also remove some redundant start-readtxn code.
|
|
|
|
|
|
|
|
|
|
It could trip on a corrupt/crafted file if the user has disabled
input verification.
|
|
It's nasty, and actually breaks on Solaris, where if.h fails to
build due to:
struct map *if_memmap;
|
|
This patch allows filtering out sensitive information for queries that rely on the pool state, when running in restricted mode.
This filtering is only applied to data sent back to RPC queries. Results of inline commands typed locally in the daemon are not affected.
In practice, when running with `--restricted-rpc`:
* get_transaction_pool will list relayed transactions with the fields "last relayed time" and "received time" set to zero.
* get_transaction_pool will not list transactions that have do_not_relay set to true, and will not list key images that are used only by such transactions
* get_transaction_pool_hashes.bin will not list such transactions
* get_transaction_pool_stats will not count such transactions in any of the aggregated values that are computed
The implementation does not make filtering the default, so developers should be mindful of this if they add new RPC functionality.
Fixes #2590.
|
|
Transactions in the txpool are marked when another transaction
is seen double spending one or more of its inputs.
This is then exposed wherever appropriate.
Note that being marked with this "double spend seen" flag does
NOT mean this transaction IS a double spend and will never be
mined: it just means that the network has seen at least another
transaction spending at least one of the same inputs, so care
should be taken to wait for a few confirmations before acting
upon that transaction (ie, mostly of use for merchants wanting
to accept unconfirmed transactions).
|
|
|
|
|
|
To help debugging logs.
|
|
Level 1 logs map to INFO, so setting log level to 1 should
show these. Demote some stuff to DEBUG to avoid spam, though.
|
|
|
|
And optimize import startup:
Remember start_height position during initial count_blocks pass
to avoid having to reread entire file again to arrive at start_height
|
|
|
|
Avoids common depending on blockchain_db, which can cause
link errors.
|
|
If monerod is started with default sync mode, set it to SAFE after
synchronization completes. Set it back to FAST if synchronization
restarts (e.g. because another peer has a longer blockchain).
If monerod is started with an explicit sync mode, none of this
automation takes effect.
|
|
Hide DB types from db_types.h - no reason to recompile dependencies
when DB types change.
Also remove lingering in-memory DB references, they've been
obsolete since 9e82b694da120708652871b55f639d1ef306a7ec
|
|
Hide LMDB-specific stuff behind blockchain_db.h. Nobody besides blockchain_db.cpp
should ever be including DB-specific headers any more.
|
|
Use to load the database when the primary meta page is corrupted
|
|
|
|
Avoids exception spam for the "nope, not found" case
|
|
Changed Blockchain::for_all_blocks() to for_blocks_range()
Operate on blockchain in-place instead of building a copy first.
|
|
Integration could go further (ie, return_tx_to_pool calls should
not be needed anymore, possibly other things).
poolstate.bin is now obsolete.
|
|
fix a cmakelist
|
|
Don't allow use of existing batch txn if it's from the wrong thread
|
|
BlockchainDB functions virtual again to avoid missing symbols error
|
|
Cleanup of bf1348b7e2b2c72a6d40b23567afaa46b53e6cb7
|
|
should fix a cross dependency between cryptonote_basic and
blockchain_db
|
|
|
|
|
|
Same reason as 3ff54bdd7a8b5e08e4e8ac17b7fff23ad3a82312
|
|
Slight perf gain, but mainly to reduce spam at loglevel 3
|
|
This speeds up operations such as serving blocks to syncing peers
|
|
When scanning for outputs used in a set of incoming blocks,
we expect that some of the inputs in their transactions will
not be found in the blockchain, as they could be in previous
blocks in that set. Those outputs will be scanned there at
a later point. In this case, we add a flag to control whether
an output not being found is expected or not.
|
|
The recent change to not keep separate track of the blockchain
height caused the reported height to jump early in the lmdb
transaction (when the block data is added to the blocks table),
rather than at the end, after everything succeeded. Since the
block data is added before the transaction data, this caused
the transaction data to be saved with a height one more than
its expected value.
Fix this by saving the block data last. This should have no
side effects.
|
|
|
|
|
|
|
|
|
|
This replaces the epee and data_loggers logging systems with
a single one, and also adds filename:line and explicit severity
levels. Categories may be defined, and logging severity set
by category (or set of categories). epee style 0-4 log level
maps to a sensible severity configuration. Log files now also
rotate when reaching 100 MB.
To select which logs to output, use the MONERO_LOGS environment
variable, with a comma separated list of categories (globs are
supported), with their requested severity level after a colon.
If a log matches more than one such setting, the last one in
the configuration string applies. A few examples:
This one is (mostly) silent, only outputting fatal errors:
MONERO_LOGS=*:FATAL
This one is very verbose:
MONERO_LOGS=*:TRACE
This one is totally silent (logwise):
MONERO_LOGS=""
This one outputs all errors and warnings, except for the
"verify" category, which prints just fatal errors (the verify
category is used for logs about incoming transactions and
blocks, and it is expected that some/many will fail to verify,
hence we don't want the spam):
MONERO_LOGS=*:WARNING,verify:FATAL
Log levels are, in decreasing order of priority:
FATAL, ERROR, WARNING, INFO, DEBUG, TRACE
Subcategories may be added using prefixes and globs. This
example will output net.p2p logs at the TRACE level, but all
other net* logs only at INFO:
MONERO_LOGS=*:ERROR,net*:INFO,net.p2p:TRACE
Logs which are intended for the user (which Monero was using
a lot through epee, but that really isn't a nice way to do things)
should use the "global" category. There are a few helper macros
for using this category, eg: MGINFO("this shows up by default")
or MGINFO_RED("this is red"), to try to keep a similar look
and feel for now.
Existing epee log macros still exist, and map to the new log
levels, but since they're used as a "user facing" UI element
as much as a logging system, they often don't map well to log
severities (ie, a log level 0 log may be an error, or may be
something we want the user to see, such as an important info).
In those cases, I tried to use the new macros. In other cases,
I left the existing macros in. When modifying logs, it is
probably best to switch to the new macros with explicit levels.
The --log-level options and set_log commands now also accept
category settings, in addition to the epee style log levels.
|
|
|
|
|
|
Faster throughput while avoiding corruption. I.e., makes
running with '--db-sync-mode safe' more tolerable.
|
|
|
|
This is a normal occurrence in many cases, and there is no need
to spam the log with those when it is.
|
|
Will be useful to debug
|
|
m_num_outputs keeps track of the number of outputs, which should
be the same as the size of both the output_txs and output_amounts
databases. If one goes out of sync, we need to throw to abort
whatever it is we were doing.
|
|
Add consts in a few places where it makes sense, avoid unnecessary
memory reallocation where we know the full size needed at the outset,
simplify and avoid memory copy.
|
|
For safety, though it seems to have been the case already.
Also add a comment about the necessary layout identity.
|
|
25% of the outputs are selected from the last 5 days (if possible),
in order to avoid the common case of sending recently received
outputs again. 25% and 5 days are subject to review later, since
it's just a wallet level change.
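A rough sketch of that bias, assuming a flat global output index space in which indices at or above recent_start were created in the last 5 days; the names and the selection scheme are simplified stand-ins for the wallet code.
```cpp
// Rough sketch of the bias: ~25% of picks are restricted to the recent window
// (indices >= recent_start), the rest are drawn from the whole range.
// Assumes num_outputs >= 1; names are illustrative, not the wallet code.
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

std::vector<uint64_t> pick_fake_outs(uint64_t num_outputs, uint64_t recent_start,
                                     std::size_t count, std::mt19937_64 &rng)
{
    std::vector<uint64_t> picks;
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    for (std::size_t i = 0; i < count; ++i)
    {
        // 25% of the time, restrict the pick to the recent window (if it is non-empty)
        const bool use_recent = coin(rng) < 0.25 && recent_start + 1 < num_outputs;
        const uint64_t lo = use_recent ? recent_start : 0;
        std::uniform_int_distribution<uint64_t> pick(lo, num_outputs - 1);
        picks.push_back(pick(rng));
    }
    return picks;
}
```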
|
|
|
|
|
|
|
|
Message observed while synchronizing a node from scratch.
"LMDB memory map needs resized"
Proposing a change to:
"LMDB memory map needs to be resized"
|
|
Keep the immediate direct deps at the library that depends on them,
declare deps as PUBLIC so that targets that link against that library
get the library's deps as transitive deps.
Break dep cycle between blockchain_db <-> cryptonote_core.
No code refactoring, just hide cycle from cmake so that
it doesn't complain (cycles are allowed only between
static libs, not shared libs).
This is in preparation for supporting BUILD_SHARED_LIBS cmake
built-in option for building internal libs as shared.
|
|
Since this queries block heights for blocks that may or may not
exist, queries for non existing blocks would throw an exception,
and that would slow down the loop a lot: 7 seconds to go through
a 30-hash list.
Fix this by adding an optional return block height to block_exists
and using this instead. Actual errors will still throw an
exception.
This also cuts down on log exception spam.
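A sketch of the pattern described, with block_index as a simplified stand-in for the DB interface: report existence via the return value and the height via an optional out-parameter, so probing unknown hashes never throws.
```cpp
// Sketch of the pattern: report existence via the return value and the height
// via an optional out-parameter, so probing unknown hashes never throws.
// block_index is a simplified stand-in for the DB interface.
#include <cstdint>
#include <string>
#include <unordered_map>

class block_index
{
public:
    bool block_exists(const std::string &hash, uint64_t *height = nullptr) const
    {
        const auto it = m_heights.find(hash);
        if (it == m_heights.end())
            return false;      // not found: a normal outcome, no exception
        if (height)
            *height = it->second;
        return true;
    }

private:
    std::unordered_map<std::string, uint64_t> m_heights;
};
```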
|
|
|
|
When RingCT is enabled, outputs from coinbase transactions
are created as a single output, and stored as RingCT output,
with a fake mask. Their amount is not hidden on the blockchain
itself, but they are then able to be used as fake inputs in
a RingCT ring. Since the output amounts are hidden, their
"dustiness" is not an obstacle anymore to mixing, and this
makes the coinbase transactions a lot smaller, as well as
helping the TXO set to grow more slowly.
Also add a new "Null" type of rct signature, which decreases
the size required when no signatures are to be stored, as
in a coinbase tx.
|
|
Since these are needed at the same time as the output pubkeys,
this is a whole lot faster, and takes less space. Only outputs
of 0 amount store the commitment. When reading other outputs,
a fake commitment is regenerated on the fly. This avoids having
to rewrite the database to add space for fake commitments for
existing outputs.
This code relies on two things:
- LMDB must support fixed size records per key, rather than
per database (ie, all records on key 0 are the same size, all
records for non 0 keys are same size, but records from key 0
and non 0 keys do have different sizes).
- the commitment must be directly after the rest of the data
in outkey and output_data_t.
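A sketch of the layout identity the second point refers to, with simplified field types standing in for the real structures; the static_assert captures the requirement that the commitment is the trailing field, so a record written without it is simply a prefix of one written with it.
```cpp
// Sketch of the layout identity relied on: the commitment must be the trailing
// field, so a record written without it is simply a prefix of one written with
// it. Field types are simplified stand-ins for the real structures.
#include <cstddef>
#include <cstdint>

#pragma pack(push, 1)
struct output_data_t
{
    uint8_t  pubkey[32];
    uint64_t unlock_time;
    uint64_t height;
    uint8_t  commitment[32];  // must come directly after the rest of the data
};
#pragma pack(pop)

static_assert(offsetof(output_data_t, commitment) + sizeof(output_data_t::commitment)
                  == sizeof(output_data_t),
              "commitment must be the trailing field of output_data_t");
```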
|
|
It is not yet constrained to a fork, so don't use on the real network
or you'll be orphaned or rejected.
|
|
- we need to drop the new m_tx_indices database
- we reset the version to current version
This fixes the core tests failing to initialize.
|
|
|
|
|
|
This db is now dropped unconditionally, so may or may not be there
in the first place.
|
|
This constrains the number of instances of any amount
to the unlocked ones (as defined by the default unlock time
setting: outputs with non default unlock time are not
considered, so may be counted as unlocked even if they are
not actually unlocked).
|
|
|
|
It's not really needed, it used to be an optimization for when
that code was not using the db and needed to recalculate things
fast on startup.
|
|
It sets the max number of threads to use for a parallel job.
This is different from the total number of threads, since monero
binaries typically start a lot of them.
|
|
Delete old indices and recreate them, rather than updating them
Maybe not quite as slow as before.
|
|
Migrate from DB version 0 to version 1 on startup
|
|
drop obsolete remove_output()
fix get_output_key(global), fix crash in blockchain_dump
|
|
Try to rationalize the variable names, document usage.
|
|
Helps when they're called repeatedly in one txn
|
|
Saves another ~150MB or so on the full blockchain
|
|
Also bumped DB VERSION to 1
Another significant speedup and space savings:
Get rid of global_output_indices, remove indirection from output to keys
This is the change warptangent described on irc but never got to finish.
|
|
Saves another 90MB on 200000 block import.
Had to bring back compare_uint64 for this, but it's safe since
this table is always 64-bit aligned.
|
|
Small space savings, no measurable speedup
|
|
Only a small savings...
|
|
|
|
m_tx_outputs doesn't need to be changed, as it's no longer a dup list.
|
|
This is possible on those using a tx index as a key.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This speeds up wallet refresh by directly retrieving a tx's amount output indices.
It removes the indirection and the walk of the amount output duplicate
list for every amount in each requested tx.
"tx_outputs" is used for both:
- amount output indices, needed for wallet refresh
- global output indices, needed for removing a tx
Both amount output indices and global output indices are now stored in
an array of 64-bit unsigned ints:
tx_outputs[<tx_hash>] -> [ <a1_oi, a1_gi, a2_oi, a2_gi, ...> ]
Previously it was:
tx_outputs[<tx_hash>] -> duplicate list of <a1_gi, a2_gi, a3_gi, ...>
The amount output list had to be walked for every amount in order to
find each amount's output index, by comparing the amount's global output
index with each one in the duplicate list until a match was found.
See also d045dfa7ce0bf131681193c97560da26f9f37900
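A sketch of that packed layout, with illustrative helper names: one flat array of 64-bit values per tx, alternating amount output index and global output index.
```cpp
// Sketch of the packed layout: one flat array of 64-bit values per tx,
// alternating amount output index and global output index. Helper names
// are illustrative.
#include <cstddef>
#include <cstdint>
#include <vector>

using packed_indices = std::vector<uint64_t>;

void append_output_indices(packed_indices &v, uint64_t amount_index, uint64_t global_index)
{
    v.push_back(amount_index);  // a_oi
    v.push_back(global_index);  // a_gi
}

// Wallet refresh only needs the amount output indices: every even slot.
std::vector<uint64_t> amount_output_indices(const packed_indices &v)
{
    std::vector<uint64_t> out;
    for (std::size_t i = 0; i + 1 < v.size(); i += 2)
        out.push_back(v[i]);
    return out;
}

// Removing a tx needs the global output indices: every odd slot.
std::vector<uint64_t> global_output_indices(const packed_indices &v)
{
    std::vector<uint64_t> out;
    for (std::size_t i = 1; i < v.size(); i += 2)
        out.push_back(v[i]);
    return out;
}
```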
|
|
|
|
This is a list of existing output amounts along with the number
of outputs of that amount in the blockchain.
The daemon command takes:
- no parameters: all outputs with at least 3 instances
- one parameter: all outputs with at least that many instances
- two parameters: all outputs with an instance count within that range
The default starts at 3 to avoid massive spamming of all dust
outputs in the blockchain, and is the current minimum mixin
requirement.
An optional vector of amounts may be passed, to request
histogram only for those outputs.
|