Age | Commit message | Author | Files | Lines |
|
- Fixed undefined behavior after a call to `remove_tx_from_transient_lists` (it used an invalid iterator)
- Fixed `txCompare` (it wasn't a strict weak ordering)
|
|
std::sort is unstable, so it can return random sets of transactions when the mempool has many transactions with the same fee/byte. This can sometimes result in p2pool mining empty blocks, because it doesn't pick up "new" transactions immediately.
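A minimal sketch of this kind of comparator fix, assuming a hypothetical TxEntry type: fee-per-byte is compared by cross-multiplication, and ties are broken by hash, so the ordering is a strict weak ordering and deterministic even under an unstable std::sort.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical mempool entry; field names are illustrative, not p2pool's.
struct TxEntry {
    uint64_t fee;
    uint64_t weight;
    uint8_t  hash[32];
};

// Strict weak ordering: higher fee/byte first, ties broken by hash, so an
// unstable std::sort still produces a deterministic order. Cross-multiplying
// (fee_a * weight_b vs fee_b * weight_a) avoids floating-point division.
// unsigned __int128 is a GCC/Clang extension, used here to avoid overflow.
static bool tx_compare(const TxEntry &a, const TxEntry &b)
{
    const unsigned __int128 lhs = (unsigned __int128)a.fee * b.weight;
    const unsigned __int128 rhs = (unsigned __int128)b.fee * a.weight;
    if (lhs != rhs)
        return lhs > rhs;
    return std::memcmp(a.hash, b.hash, sizeof(a.hash)) < 0;
}

void sort_backlog(std::vector<TxEntry> &backlog)
{
    std::sort(backlog.begin(), backlog.end(), tx_compare);
}
```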
|
|
Nodes that see different txs in a double spend attack will drop each other, splitting the network.
Issue found by @boog900.
|
|
|
|
|
|
The long term block weight cache performed an incorrect calculation when
adding a new block to the cache.
|
|
|
|
|
|
reported by sech1
|
|
for a slight performance boost in functional tests
|
|
Force sync every 100k blocks instead of every 1k blocks. Bumping this
value is reported to make a big difference in sync performance, e.g.:
https://github.com/monero-project/monero/issues/8189
|
|
Also: txs whose tx_extra is too large will not get published to ZMQ
Co-authored-by: SChernykh <sergey.v.chernykh@gmail.com>
|
|
|
|
- `/getblocks.bin` respects the `RESTRICTED_TX_COUNT` (=100) when
returning pool txs via a restricted RPC daemon.
- A restricted RPC daemon includes a max of `RESTRICTED_TX_COUNT` txs
in the `added_pool_txs` field, and returns any remaining pool hashes
in the `remaining_added_pool_txids` field. The client then requests
the remaining txs via `/gettransactions` in chunks.
- `/gettransactions` no longer does expensive no-ops for ALL pool txs
if the client requests a subset of pool txs. Instead it searches for
the txs the client explicitly requests.
- Reset `m_pool_info_query_time` when a user:
(1) rescans the chain (so the wallet re-requests the whole pool)
(2) changes the daemon their wallet points to (a new daemon would
have a different view of the pool)
- `/getblocks.bin` respects the `req.prune` field when returning
pool txs.
- Pool extension fields in response to `/getblocks.bin` are optional
with default 0'd values.
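A hypothetical client-side sketch of the chunked follow-up described in the list above: the wallet takes `remaining_added_pool_txids` and requests the txs in `RESTRICTED_TX_COUNT`-sized batches. `fetch_transactions()` is a placeholder for the actual `/gettransactions` round trip, not the real API.

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

static constexpr size_t RESTRICTED_TX_COUNT = 100;  // from the commit message

// Placeholder for one /gettransactions RPC round trip.
void fetch_transactions(const std::vector<std::string> &txids);

void fetch_remaining_pool_txs(const std::vector<std::string> &remaining_txids)
{
    for (size_t i = 0; i < remaining_txids.size(); i += RESTRICTED_TX_COUNT)
    {
        const size_t end = std::min(remaining_txids.size(), i + RESTRICTED_TX_COUNT);
        // One request per chunk, so a restricted daemon never has to return
        // more than RESTRICTED_TX_COUNT txs at once.
        const std::vector<std::string> chunk(remaining_txids.begin() + i,
                                             remaining_txids.begin() + end);
        fetch_transactions(chunk);
    }
}
```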
|
|
|
|
Co-authored-by: plowsof <plowsof@protonmail.com>
extra files
|
|
- Straight-forward call interface: `void rx_slow_hash(const char *seedhash, const void *data, size_t length, char *result_hash)`
- Consensus chain seed hash is now updated by calling `rx_set_main_seedhash` whenever a block is added/removed or a reorg happens
- `rx_slow_hash` will compute the correct hash whether or not `rx_set_main_seedhash` was called (the only difference is performance)
- New environment variable `MONERO_RANDOMX_FULL_MEM` to force use of the full dataset for PoW verification (faster block verification)
- When dataset is used for PoW verification, dataset updates don't stall other threads (verification is done in light mode then)
- When mining is running, PoW checks now also use dataset for faster verification
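A usage sketch of the call interface quoted in the first bullet above; `rx_set_main_seedhash`'s exact signature is not given here, so it is only referenced in a comment.

```cpp
#include <cstddef>

// Interface exactly as quoted in the first bullet above.
extern "C" void rx_slow_hash(const char *seedhash, const void *data,
                             size_t length, char *result_hash);

void verify_pow(const char *seedhash, const void *blob, size_t blob_len)
{
    char pow_hash[32];
    // If rx_set_main_seedhash() was previously called with this seed hash,
    // this runs against the prepared cache/dataset; otherwise it still
    // returns the correct hash, just more slowly.
    rx_slow_hash(seedhash, blob, blob_len, pow_hash);
}
```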
|
|
|
|
|
|
|
|
update_checkpoints() makes a few DNS requests and can take up to 20-30 seconds to complete (3-6 seconds on average). It is currently called from core::handle_incoming_block(), which holds m_incoming_tx_lock, so it blocks all processing of incoming transactions and blocks while update_checkpoints() is running. This PR moves it to after a new block has been processed and relayed, to avoid locking up all of monerod.
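A minimal sketch of the pattern this describes, with illustrative structure (monerod's actual code differs): the slow, DNS-bound call runs only after the lock guarding incoming tx/block processing has been released.

```cpp
#include <mutex>

class core_sketch {
    std::mutex m_incoming_tx_lock;
    void update_checkpoints() { /* DNS lookups; can take seconds */ }
public:
    void handle_incoming_block()
    {
        {
            std::lock_guard<std::mutex> lock(m_incoming_tx_lock);
            // ... verify the block, add it to the chain, relay it ...
        }
        // Moved here: no longer holds m_incoming_tx_lock, so a slow DNS
        // lookup cannot stall all incoming tx and block processing.
        update_checkpoints();
    }
};
```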
|
|
|
|
|
|
|
|
|
|
Before the fix, it processed all transactions in the mempool, which could be very slow when the mempool grows to several MBs in size. I observed `get_block_template_backlog` taking up to 15 seconds of CPU time under high mempool load.
After the fix, only transactions that can potentially be mined in the next block will be processed (a bit more than the current block median weight).
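A sketch of the shape of this fix, under illustrative assumptions (PoolTx and the exact cutoff are placeholders): walk the fee-sorted mempool and stop once roughly one block's worth of weight has been collected.

```cpp
#include <cstdint>
#include <vector>

struct PoolTx { uint64_t fee, weight; };  // illustrative mempool entry

std::vector<PoolTx> backlog_sketch(
    const std::vector<PoolTx> &txs_by_fee_per_byte,  // best fee/byte first
    uint64_t median_block_weight)
{
    std::vector<PoolTx> backlog;
    uint64_t total_weight = 0;
    for (const PoolTx &tx : txs_by_fee_per_byte)
    {
        // Stop once we already have "a bit more than" one median block's
        // weight; the rest of the mempool cannot make the next block anyway.
        if (total_weight > median_block_weight)
            break;
        backlog.push_back(tx);
        total_weight += tx.weight;
    }
    return backlog;
}
```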
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Implements view tags as proposed by @UkoeHB in MRL issue
https://github.com/monero-project/research-lab/issues/73
At tx construction, the sender adds a 1-byte view tag to each
output. The view tag is derived from the sender-receiver
shared secret. When scanning for outputs, the receiver can
check the view tag for a match, in order to reduce scanning
time. When the view tag does not match, the wallet avoids the
more expensive EC operations when deriving the output public
key using the shared secret.
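A sketch of the derivation described above, assuming a Keccak-style hash and a simplified fixed-width index encoding; the real salt layout and encoding should be taken from the MRL issue and the implementation, not from this sketch.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Placeholder for the hash function used by the implementation.
void keccak256(const void *in, size_t len, uint8_t out[32]);

uint8_t derive_view_tag_sketch(const uint8_t derivation[32], uint64_t output_index)
{
    // "view_tag" domain separator || sender-receiver shared secret || index.
    uint8_t buf[8 + 32 + sizeof(uint64_t)];
    std::memcpy(buf, "view_tag", 8);
    std::memcpy(buf + 8, derivation, 32);
    std::memcpy(buf + 40, &output_index, sizeof(output_index));  // simplified
    uint8_t h[32];
    keccak256(buf, sizeof(buf), h);
    // The receiver compares this single byte first and skips the expensive
    // EC derivation of the output public key when it doesn't match.
    return h[0];
}
```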
|
|
https://github.com/ArticMine/Monero-Documents/blob/master/MoneroScaling2021-02.pdf
with a change to use 1.7 instead of 2.0 for the max long term increase rate
|
|
|
|
|
|
This comment suggests the check is unnecessary, when it is completely necessary: miner TXs can have multiple outputs, a fact the comment directly, and incorrectly, contradicts. While I don't expect anyone to remove this code and get their edits merged into Monero, someone inexperienced who thinks they're cleaning up old code may break their own work, and there's really just zero benefit to keeping this comment around.
|
|
|
|
|
|
|
|
Calculate PoW hash for a block candidate
|
|
|
|
Avoids mining txes after a fork that are invalid under the new fork's
rules, but were valid under the previous fork's rules at the time
they were verified and added to the txpool.
|
|
|
|
Adds the following:
- "get_miner_data" to RPC API
- "json-miner-data" to ZeroMQ subscriber contexts
Both provide the necessary data to create a custom block template. They are used by p2pool.
Data provided:
- major fork version
- current height
- previous block id
- RandomX seed hash
- network difficulty
- median block weight
- coins mined by the network so far
- mineable mempool transactions
|
|
Co-authored-by: selsta <selsta@sent.at>
Co-authored-by: tobtoht <thotbot@protonmail.com>
|
|
|
|
|
|
|
|
The heavy rolling median reset only has to be performed after
all blocks are popped
|
|
This reverts commit 63c7ca07fba2f063c760f786a986fb3e02fb040e, reversing
changes made to 2218e23e84a89e9a1e4c0be5d50f891ab836754f.
|
|
|
|
It only needs to parse the tx headers, not the full tx data
|
|
On Mac, size_t is a distinct type from uint64_t, and some
types (in wallet cache as well as cold/hot wallet transfer
data) use pairs/containers with size_t as fields. Mac would
save those as full size, while other platforms would save
them as varints. Might apply to other platforms where the
types are distinct.
There's a nasty hack for backward compatibility, which can
go after a couple forks.
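A minimal sketch of the portable pattern implied above, with a hypothetical Archive type: widen size_t to uint64_t before writing, so Mac and other platforms where the types are distinct emit the same varint. `write_varint()` is a placeholder for the serializer's varint writer, not the wallet's actual API.

```cpp
#include <cstddef>
#include <cstdint>

// Archive stands in for the wallet cache serializer.
template <typename Archive>
void serialize_size_portably(Archive &ar, size_t value)
{
    const uint64_t v = static_cast<uint64_t>(value);  // identical on all platforms
    ar.write_varint(v);
}
```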
|
|
There are quite a few variables in the code that are no longer
(or perhaps never were) in use. These were discovered by enabling
compiler warnings for unused variables and cleaning them up.
In most cases where an unused variable was the result of a
function call, the call was left but the variable assignment
removed, unless it was obviously a simple getter with no side
effects.
|
|
|
|
|
|
|
|
it is accessed both when adding and when prevalidating a set
of new hashes from a peer
|
|
|
|
|
|
|
|
A 20% fluff probability increases the precision of a spy connected to
every node by 10% on average, compared to a network using 0% fluff
probability. The current value (10% fluff) should increase precision by
~5% compared to baseline.
This decreases the expected stem length from 10 to 5. The embargo
timeout was therefore lowered to 39s; the fifth node in a stem is
expected to have a 90% chance of being the first to timeout, which is
the same probability we currently have with an expected stem length of
10 nodes.
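The arithmetic behind the two stem lengths above: if every relay independently fluffs with probability p, the stem length L is geometrically distributed, so its expectation is 1/p. A sketch of the calculation:

```latex
% Stem length under per-hop fluff probability p (geometric distribution):
%   p = 0.10  =>  E[L] = 10
%   p = 0.20  =>  E[L] = 5
% which is why the embargo timeout is rescaled (to 39s) so the node at the
% expected stem end keeps the same ~90% chance of being first to time out.
E[L] = \sum_{k=1}^{\infty} k\,(1-p)^{k-1}\,p = \frac{1}{p}
```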
|
|
|
|
This is already done
|
|
This also removes a potential thread safety bug in that function.
|
|
|
|
Miners which had already verified those MLSAG txes included
a couple of them in that block, but the consensus rules had changed
in the meantime, so the block is technically invalid, and any
node which did not already have those two txes in its txpool
could not sync. Grandfather them in, since it has no effect in
practice.
|
|
|
|
|
|
Based on a patch by TheCharlatan <seb.kung@gmail.com>
|
|
|
|
|
|
Those exceptions would, if uncaught, exit run() and leave the waiter
waiting indefinitely for the number of active jobs to reach 0
|
|
|
|
|
|
They are allowed from v12, and MLSAGs are rejected from v13.
|
|
Claiming a slightly lesser amount does not yield the size gains
that were seen pre-RCT, so this closes a fingerprinting vector
|
|
|
|
Thanks iDunk for testing
|
|
Reporter requested credit to be given to Decred
|
|
|
|
This fixes the functional tests, since txes would not be mined
after being sent to the daemon (they'd be waiting for the
dandelion timeout first)
|
|
The cache is discarded when a block is popped, but then gets
rebuilt when the difficulty for the next block is requested.
While this is all properly locked, it does not take into account
the delay caused by a database transaction being committed (and
thus its effects made visible to other threads) only later on.
Another thread could therefore request difficulty between the
pop and the commit, building the cache from a stale database
view; that cache would not be invalidated again when the
transaction gets committed, leaving it out of sync with the new
database data.
To fix this, we now keep track of when the cache is invalidated,
so we can invalidate it again upon database transaction commit,
ensuring it gets recalculated from fresh data the next time it
is needed.
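A structural sketch of that bookkeeping (illustrative, not the actual Blockchain code): record that an invalidation happened while a DB transaction was open, and invalidate once more at commit time.

```cpp
#include <mutex>

class difficulty_cache_sketch {
    std::mutex m_lock;
    bool m_valid = false;
    bool m_invalidated_during_txn = false;
public:
    void invalidate(bool db_txn_open)
    {
        std::lock_guard<std::mutex> lock(m_lock);
        m_valid = false;
        if (db_txn_open)
            m_invalidated_during_txn = true;  // remember for commit time
    }
    void on_db_txn_commit()
    {
        std::lock_guard<std::mutex> lock(m_lock);
        if (m_invalidated_during_txn)
        {
            m_valid = false;                  // drop any cache rebuilt from a stale view
            m_invalidated_during_txn = false;
        }
    }
};
```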
|
|
|
|
|
|
recalculation
On startup, it checks against the difficulty checkpoints, and if any mismatch is found, recalculates all the blocks with wrong difficulties. Additionally, once a week it recalculates difficulties of blocks after the last difficulty checkpoint.
|
|
|
|
It's time-based and we don't have forks every 6 months anymore
|
|
|
|
|
|
|
|
|
|
Update copyright year to 2020
|
|
It doesn't really work anymore since we don't have a fork soon
|
|
|
|
|
|
|
|
An automatic tx variable is initialized properly on the first
run through the loop, but not the second. Moving the variable
inside the loop ensures the ctor is called again to init it.
|
|
|
|
|
|
|
|
|
|
- New flag in NOTIFY_NEW_TRANSACTION to indicate stem mode
- Stem loops detected in tx_pool.cpp
- Embargo timeout for a blackhole attack during stem phase
|
|
A newly synced Alice sends a (typically quite small) list of
txids in her local txpool to a random peer Bob, who then uses
the existing tx relay system to send Alice any tx in his txpool
which is not in the list Alice sent
|
|
|
|
|
|
for example, in the RPC server
|
|
Useful for wallet refresh or node sync
|
|
|
|
Cleaning up a little around the code base.
|
|
Coverity 196626
|
|
|
|
|
|
M100 = max{300kB, min{100-block median, m_long_term_effective_median_block_weight}}
not
M100 = max{300kB, m_long_term_effective_median_block_weight}
Fix the base reward in get_dynamic_base_fee_estimate();
get_dynamic_base_fee_estimate() should match check_fee().
The fee is calculated based on the block reward, and the reward penalty takes into account 0.5*max_block_weight (both before and after HF_VERSION_EFFECTIVE_SHORT_TERM_MEDIAN_IN_PENALTY).
Moved the median calculation according to the best practice of keeping definitions close to where they are used.
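A minimal sketch of the corrected formula above, with illustrative names (`median_100_blocks` stands for the 100-block median; the actual code is structured differently):

```cpp
#include <algorithm>
#include <cstdint>

// M100 = max{300kB, min{100-block median, long term effective median}}
uint64_t effective_fee_median(uint64_t median_100_blocks,
                              uint64_t long_term_effective_median)
{
    const uint64_t floor_weight = 300000;  // 300 kB
    return std::max(floor_weight,
                    std::min(median_100_blocks, long_term_effective_median));
}
```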
|
|
|
|
The tail emission will bring the total above 64 bits
|
|
Flushes m_invalid_blocks in Blockchain.
|
|
About twice as fast, very roughly
|
|
|
|
|
|
|
|
|
|
e4d1674e8 0.15.0.0 release engineering (Riccardo Spagni)
|
|
|
|
This is handy when doing tests that generate a lot of transactions:
since that takes time, it's preferable to re-use the database for future runs.
|
|
It causes link errors at least on mac
|
|
It causes link errors at least on mac
|
|
|
|
|
|
|
|
|
|
|
|
This allows flushing internal caches (for now, the bad tx cache,
which will allow debugging a stuck monerod after it has failed to
verify a transaction in a block, since it would otherwise not try
again, making subsequent log changes pointless)
|
|
Coverity 205394
|
|
add a 128/64 division routine so we can use a > 32 bit median block
size in calculations
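A sketch of such a routine on compilers with the unsigned __int128 extension; a portable implementation would instead do long division on 64-bit halves.

```cpp
#include <cstdint>

// Divide a 128-bit dividend (hi:lo) by a 64-bit divisor. The caller must
// ensure the quotient fits in 64 bits (i.e. dividend_hi < divisor).
uint64_t div128_64_sketch(uint64_t dividend_hi, uint64_t dividend_lo,
                          uint64_t divisor, uint64_t *remainder)
{
    const unsigned __int128 n =
        ((unsigned __int128)dividend_hi << 64) | dividend_lo;
    if (remainder)
        *remainder = (uint64_t)(n % divisor);
    return (uint64_t)(n / divisor);
}
```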
|
|
as a safety to reject if it somehow does not get initialised
|
|
It was using the raw block weight median, which was not what was
intended in ArticMine's design
|
|
In case of a 0 tx weight, we use a placeholder value to insert in the
fee-per-byte set. This set is used for pruning and mining; those txes
are pruned, so they will not be too large, nor added to the block
template when mining, so this is safe.
CID 204465
|
|
CID 204467
|
|
|
|
|
|
|
|
|
|
Use the lesser of the short and long term medians, rather than
the long term median alone.
From ArticMine:
I found a bug in the new fee calculation formula with using only the
long term median. It actually needs to be the lesser of the long term
median and the old (modified) short term median, the short term median
with the last 10 blocks calculated as empty.
The issue occurs if there is a large long term median and the short
term median then falls and tries to rise again: the fees could be not
high enough. For example, LTM and STM rise to say 2000000 bytes, then
STM falls back to 300000 bytes. Fees are now based on 2000000 bytes
until the LTM also falls, so the STM could be prevented from rising
back up.
(STM = short term median, LTM = long term median)
|
|
If the peer (whether pruned or not itself) supports sending pruned blocks
to syncing nodes, the pruned version will be sent along with the hash
of the pruned data and the block weight. The original tx hashes can be
reconstructed from the pruned txes and their prunable data hash. Those
hashes and the block weights are hashed and checked against the set of
precompiled hashes, ensuring the data we received is the original data.
It is currently not possible to use this system when not using the set
of precompiled hashes, since block weights can not otherwise be checked
for validity.
This is off by default for now, and is enabled by --sync-pruned-blocks
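A sketch of the hash reconstruction this relies on, under the assumption (not stated in this message) that an RCT txid is a hash over three component hashes: the prefix hash, the RCT base hash, and the prunable-data hash. keccak256() is a placeholder for the actual hash function.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

void keccak256(const void *in, size_t len, uint8_t out[32]);  // placeholder

// A node holding only the pruned part plus the prunable-data hash can
// recompute the original txid without the prunable data itself.
void reconstruct_txid_sketch(const uint8_t prefix_hash[32],
                             const uint8_t base_hash[32],
                             const uint8_t prunable_hash[32],
                             uint8_t txid[32])
{
    uint8_t buf[96];
    std::memcpy(buf,      prefix_hash,   32);
    std::memcpy(buf + 32, base_hash,     32);
    std::memcpy(buf + 64, prunable_hash, 32);
    keccak256(buf, sizeof(buf), txid);
}
```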
|
|
Support RandomX PoW algorithm
|
|
PoW is expensive to verify, so be strict
|
|
So it can be used by others without encumbrance
|
|
Some custom wallet code apparently ignores this, which causes users
of that code to be fingerprinted
|
|
As a side effect, colouring on Windows should now work
regardless of version
|
|
|
|
Such a template would yield an invalid block, though it would require
an attacker to have mined a long blockchain with drifting times
(assuming the miner's clock is roughly correct).
Fixed by crCr62U0
|
|
|
|
|
|
The 98th percentile position in the agebytes map was incorrectly
calculated: it assumed the transactions in the mempool all have unique
timestamps at second-granularity. This commit fixes this by correctly
finding the right cumulative number of transactions in the map suffix.
This bug could lead to an out-of-bounds write in the rare case that
all transactions in the mempool were received (and added to the mempool)
at a rate of at least 50 transactions per second. (More specifically,
the number of *unique* receive_time values, which have second-
granularity, must be at most 2% of the number of transactions in the
mempool for this crash to trigger.) If this condition is satisfied, 'it'
points to *before* the agebytes map, 'delta' gets a nonsense value, and
the value of 'i' in the first stats.histo-filling loop will be out of
bounds of stats.histo.
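A sketch of the corrected suffix computation, with an illustrative agebytes layout: accumulate transaction counts from the newest receive_time backwards until at least 2% of the pool is covered, instead of stepping back a fixed number of map entries.

```cpp
#include <cstdint>
#include <map>

struct AgeBytes { uint64_t txs; uint64_t bytes; };  // illustrative map value

// Key is receive_time (second granularity). Returns the cutoff time at
// which the newest ~2% of transactions begins.
uint64_t find_98th_percentile_cutoff(const std::map<uint64_t, AgeBytes> &agebytes,
                                     uint64_t total_txs)
{
    const uint64_t threshold = (total_txs + 49) / 50;  // ceil(2% of total)
    uint64_t seen = 0;
    for (auto it = agebytes.rbegin(); it != agebytes.rend(); ++it)
    {
        seen += it->second.txs;  // count txs, not map entries
        if (seen >= threshold)
            return it->first;    // cumulative count reached in the map suffix
    }
    return agebytes.empty() ? 0 : agebytes.begin()->first;
}
```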
|
|
|
|
|
|
We're supposed to have a fixed ring size now.
This is already checked by MLSAG verification, but checking here seems more intuitive
|
|
|
|
Apply the overflow logic used for computing already_generated_coins in
the main chain to alternative chains.
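A sketch of that overflow logic, assuming (per the tail emission note earlier in this log) that the intended behavior is to clamp rather than wrap; the clamp target in the real code may be MONEY_SUPPLY rather than the literal used here.

```cpp
#include <cstdint>

// Clamp instead of wrapping when the running total would overflow 64 bits;
// applied identically on the main chain and on alternative chains.
uint64_t add_generated_coins(uint64_t already_generated_coins, uint64_t reward)
{
    const uint64_t sum = already_generated_coins + reward;
    return (sum < already_generated_coins) ? UINT64_MAX : sum;
}
```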
|
|
According to [1], std::random_shuffle is deprecated in C++14 and removed
in C++17. Since std::shuffle is available since C++11 as a replacement
and monero already requires C++11, this is a good replacement.
A cryptographically secure random number generator is used in all cases
to prevent people from perhaps copying an insecure std::shuffle call
over to a place where a secure one would be warranted. A form of
defense-in-depth.
[1]: https://en.cppreference.com/w/cpp/algorithm/random_shuffle
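A minimal sketch of the replacement pattern; Monero wires in its own cryptographically secure generator, for which std::random_device and std::mt19937_64 here are only stand-ins.

```cpp
#include <algorithm>
#include <random>
#include <vector>

// std::random_shuffle is deprecated in C++14 and removed in C++17;
// std::shuffle takes an explicit uniform random bit generator instead.
template <typename T>
void secure_shuffle(std::vector<T> &v)
{
    std::random_device rd;                  // stand-in for a CSPRNG
    std::mt19937_64 gen(rd());              // std::shuffle needs a URBG
    std::shuffle(v.begin(), v.end(), gen);
}
```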
|
|
|
|
|
|
|
|
|
|
|
|
we don't want to prevent bona fide txes, just obvious bad ones
|
|
|
|
This happens often when a pre-pruning node asks a pruned node
for data it does not have
|
|
|
|
In that case, we'll still keep the "Monero is now disconnected
from the network" near the end of the log
|
|
|
|
|
|
Avoids a massive amount of spurious warnings if the last update before
the daemon exited was a while ago and the daemon was syncing
|
|
|
|
|
|
Make sure the tip hash still matches the cached block
|
|
|
|
Alternative blocks are cleared on startup unless --keep-alt-blocks
is passed on the command line
|
|
|
|
|
|
|
|
|
|
|
|
It triggers easily on testnet
|
|
We want to get all blocks here, even pruned ones
|
|
|
|
Ending the db txn in add_block caused the entire overarching
batch txn to stop.
Also add a new guard class so a db txn can be stopped in the
face of exceptions.
Also use a read only db txn in init when the db itself is
read only, and do not save the max tx size in that case.
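A sketch of such a guard class, with db_txn as an illustrative handle type (not LMDB's or Monero's actual API): the destructor stops the transaction even when an exception unwinds the stack.

```cpp
struct db_txn { void stop() { /* abort/stop the transaction */ } };  // illustrative

class db_txn_guard_sketch {
    db_txn &m_txn;
    bool m_stopped = false;
public:
    explicit db_txn_guard_sketch(db_txn &txn) : m_txn(txn) {}
    ~db_txn_guard_sketch() { if (!m_stopped) stop(); }  // runs on throw too
    void stop() { m_txn.stop(); m_stopped = true; }

    db_txn_guard_sketch(const db_txn_guard_sketch &) = delete;
    db_txn_guard_sketch &operator=(const db_txn_guard_sketch &) = delete;
};
```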
|
|
This will weed out some transactions with silly rings
|
|
Related to emission, reorgs, getting tx data back, output
distribution and histogram
|
|
|
|
|
|
Use the actual block weight limit, assuming that weight is always
greater or equal to size
|
|
|
|
This check is now not needed anymore, and would prevent people
from using --prune-blockchain when starting a new sync
|
|
It's now needed for CNv4, and was not retained when cached
|
|
|
|
|
|
New scheme for key destination control
Fix dummy decryption in debug mode
|