Age | Commit message | Author | Files | Lines |
|
|
The 10 minute window will never trigger for 0 blocks, since an empty
10 minute window is still fairly likely even without the actual hash
rate changing much. So we add a 20 minute window, where it will trigger
for 0 blocks, and a one hour window.
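For intuition, a rough calculation assuming block arrivals are Poisson
with Monero's 2 minute target time (an idealization; real arrival times
drift with hashrate):
  P(0 blocks in t) = exp(-t / 2 min)
  t = 10 min: exp(-5)  ~ 6.7e-3  (several times a day on average)
  t = 20 min: exp(-10) ~ 4.5e-5
  t = 60 min: exp(-30) ~ 9.4e-14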
|
|
This runs a command whenever the block rate deviates too much
from the expectation
|
|
This curbs runaway growth while still allowing substantial
spikes in block weight
Original specification from ArticMine:
here is the scaling proposal
Define: LongTermBlockWeight
Before fork:
LongTermBlockWeight = BlockWeight
At or after fork:
LongTermBlockWeight = min(BlockWeight, 1.4*LongTermEffectiveMedianBlockWeight)
Note: To avoid possible consensus issues over rounding, the LongTermBlockWeight for a given block should be calculated to the nearest byte, and stored as an integer in the block itself. The stored LongTermBlockWeight is then used for future calculations of the LongTermEffectiveMedianBlockWeight and not recalculated each time.
Define: LongTermEffectiveMedianBlockWeight
LongTermEffectiveMedianBlockWeight = max(300000, MedianOverPrevious100000Blocks(LongTermBlockWeight))
Change Definition of EffectiveMedianBlockWeight
From (current definition)
EffectiveMedianBlockWeight = max(300000, MedianOverPrevious100Blocks(BlockWeight))
To (proposed definition)
EffectiveMedianBlockWeight = min(max(300000, MedianOverPrevious100Blocks(BlockWeight)), 50*LongTermEffectiveMedianBlockWeight)
Notes:
1) There are no other changes to the existing penalty formula, median calculation, fees etc.
2) There is the requirement to store the LongTermBlockWeight of a block unencrypted in the block itself. This is to avoid possible consensus issues over rounding and also to prevent the calculations from becoming unwieldy as we move away from the fork.
3) When the EffectiveMedianBlockWeight cap is reached it is still possible to mine blocks up to 2x the EffectiveMedianBlockWeight by paying the corresponding penalty.
Note: the long term block weight is stored in the database, but not in the actual block itself,
since it requires recalculating anyway for verification.
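A minimal C++ sketch of the proposed arithmetic (the median() helper
over the previous 100000 stored weights is assumed, and rounding is
simplified; needs <algorithm>, <cstdint>, <vector>):
  uint64_t long_term_effective_median(const std::vector<uint64_t> &long_term_weights)
  {
    return std::max<uint64_t>(300000, median(long_term_weights));
  }
  uint64_t long_term_block_weight(uint64_t block_weight, uint64_t lt_median)
  {
    return std::min<uint64_t>(block_weight, 1.4 * lt_median); // at/after fork
  }
  uint64_t effective_median_block_weight(uint64_t median_100, uint64_t lt_median)
  {
    return std::min<uint64_t>(std::max<uint64_t>(300000, median_100), 50 * lt_median);
  }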
|
|
Implies protocol version management.
|
|
This was noticed because GCC warned about using an enum value in a
boolean context.
|
|
The original code did not compile with GCC 8.2.1 in C++17 mode, since
comparison functions for std::set must be invocable as const.
|
|
This will trigger if a reorg is seen. This may be used to do things
like stop automated withdrawals on large reorgs.
%s is replaced by the height at the split point
%h is replaced by the height of the new chain
%n is replaced by the number of new blocks after the reorg
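For example (assuming the option is named --reorg-notify, in line with
the other *-notify options):
  monerod --reorg-notify='/path/to/reorg-handler %s %h %n'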
|
|
This makes it easier to modify the bulletproof format
|
|
The blockchain prunes seven eighths of prunable tx data.
This saves about two thirds of the blockchain size, while
keeping the node useful as a sync source for an eighth
of the blockchain.
No other data is currently pruned.
There are three ways to prune a blockchain:
- run monerod with --prune-blockchain
- run "prune_blockchain" in the monerod console
- run the monero-blockchain-prune utility
The first two will prune in place. Due to how LMDB works, this
will not reduce the blockchain size on disk. Instead, it will
mark parts of the file as free, so that future data will use
that free space, causing the file to not grow until free space
grows scarce.
The third way will create a second database, a pruned copy of
the original one. Since this is a new file, this one will be
smaller than the original one.
Once the database is pruned, it will stay pruned as it syncs.
That is, there is no need to use --prune-blockchain again, etc.
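A sketch of how the kept eighth can be chosen (C++; the 4096 block
stripe size and the per-node stripe are assumptions for illustration,
and blocks near the tip are kept in full regardless):
  const uint64_t STRIPE_SIZE = 4096, NUM_STRIPES = 8;
  // prunable data is kept only for one stripe of blocks out of eight
  bool keeps_prunable_data(uint64_t height, uint64_t my_stripe) // my_stripe in [0, 8)
  {
    return (height / STRIPE_SIZE) % NUM_STRIPES == my_stripe;
  }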
|
|
For better transaction uniformity, even though this wastes space.
|
|
extra is arbitrary, and the user may well want to send custom data
|
|
Coverity 190660
|
|
Since we keep track of the hf version in the db, we pick it up
from there instead of doing the full reorg call, which is quite
expensive
|
|
While the lookups are faster, the zeroCommit calls have to be
done again when storing the new outputs in the db, which ends
up making the whole thing slower after all. The ways this can
be cached aren't very nice code-wise, so let's forget it, since
the gains aren't very large anyway.
|
|
We know all the data we'll want for getblocks.bin is contiguous
|
|
This ensures the io service that runs in another thread cannot
access data after it's deleted
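A common shape for this fix (illustrative boost::asio teardown):
  // stop the io service and join its thread first, so no handler can
  // run against members that are about to be destroyed
  m_io_service.stop();
  if (m_thread.joinable())
    m_thread.join();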
|
|
This avoids the miner erroring out trying to submit blocks
to a core that's already shut down (and avoids pegging
the CPU while we're busy shutting down).
|
|
Add a new public method to Blockchain and update according to code review.
Updated after review: better lock/unlock, try/catch, and coding style.
|
|
Some of the inputs for block in a span will be from other earlier
blocks in that span. Keep track of those outputs so we don't have
to look them up again after those early blocks are added to the
blockchain.
|
|
Found by codacy.com
|
|
Found by codacy.com
|
|
block already has a default ctor, and the extra object
churn due to its innards (vectors, etc) is pointless.
|
|
This prevents asking for just 0, and the RPC layer already does this
|
|
This reverts commit b2bb9312a75781e714acf3c406634b3d4cded418.
|
|
To help protect one's privacy from traffic volume analysis
for people using Tor or I2P. This will really fly once we
relay txes on a timer rather than on demand, though.
Off by default for now since it's wasteful and doesn't bring
anything until I2P's in.
|
|
This inconsistent state would not actually be used in practice
|
|
for consistency
|
|
This happens for every historical tx when syncing, and the
unnecessary parsing is actually showing up on profile.
Since these are kept cached for just one block, this does
not increase memory usage after syncing.
|
|
This should make it possible to have two daemons running on the
same database again.
|
|
It was left over from a change that was undone before commit,
but the comment change was let through.
|
|
Lest people get scared again
|
|
get_transaction_pool RPC
Inspired by https://github.com/masari-project/masari/issues/93
|
|
This reverts commit 79d46c4d551a9b1261801960095bf4d24967211a, reversing
changes made to c9fc61dbb56cca442c775faa2554a7460879b637.
|
|
Coverity 188616
|
|
This removes some small amount of fingerprinting entropy.
There is no consensus rule to require this since this field
is technically free form, and a transaction is free to have
custom data in it.
|
|
73403004 add --block-notify to monerod and --tx-notify to monero-wallet-{cli,rpc} (moneromooo-monero)
|
|
The warning threshold is set to allow a false positive every
ten days on average.
|
|
Make it easier for a user to be told when to update
|
|
So that bulletproofs become mandatory
|
|
Also constrains bulletproofs to simple rct, for simplicity
|
|
Ported from sarang's Java code
|
|
and v9 a day later
|
|
This avoids constant rechecking of the same things each time
a miner asks for the block template. The tx pool maintains
a cookie to allow users to detect when the pool state changed,
which means the block template needs rebuilding.
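A sketch of the cookie mechanism (hypothetical names):
  struct tx_memory_pool_cookie
  {
    std::atomic<uint64_t> m_cookie{0};   // bumped on every pool change
    void on_changed() { ++m_cookie; }
    uint64_t cookie() const { return m_cookie.load(); }
  };
  // the miner caches pool.cookie() alongside its block template and
  // rebuilds only when the cached value no longer matches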
|
|
Blocks have a very wide range, whereas actual size is the relevant
quantity to consider when syncing
|
|
This gets rid of the temporary precalc cache.
Also make the RPC able to send data back in binary or JSON,
since there can be a lot of data
This bumps the LMDB database format to v3, with migration.
|
|
It's not 100% certain it'll be needed, but it avoids getinfo
needing the blockchain lock and potentially blocking
|
|
The on_generateblocks RPC call combines functionality from the on_getblocktemplate and on_submitblock RPC calls to allow rapid block creation. Difficulty is set permanently to 1 for regtest.
Makes use of the FAKECHAIN network type, but takes hard fork heights from mainchain.
Default reserve_size in the generate_blocks RPC call is now 1. If it is 0, the following error occurs: 'Failed to calculate offset for'.
Queries hard fork heights info of other network types.
|
|
when a block being added to the main chain is invalid.
This ensures the peer is banned after a number of these.
|
|
Decrease the number of worker threads by one to account
for the fact that the calling thread acts as a worker thread now.
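Illustrative sketch:
  // the calling thread is now the last worker, so spawn one fewer
  const unsigned hw = std::thread::hardware_concurrency();
  const unsigned n_spawned = hw > 1 ? hw - 1 : 0;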
|
|
also use reserve where appropriate
|
|
This is called a lot when creating a block template, and
does not change until the blockchain changes.
This also avoids tx parsing when cached.
|
|
Signed-off-by: Jean Pierre Dudey <jeandudey@hotmail.com>
|
|
testnet ? config::testnet::X : stagenet ? config::stagenet::X : config::X
|
|
Avoids valgrind reporting uninitialized data usage
|
|
This data comes from untrusted peers, and validation failures
are therefore normal.
|
|
avoids the RPC thread dying, causing the wallet to time out
|
|
This bumps DB version to 2, migration code will run for v1 DBs
|
|
This gets rid of an innocuous race trying to add the same tx
twice to the txpool
|
|
This can happen if a peer tries to obtain the next span from other
peers when that span is needed but not yet downloaded. It can also
happen if the peer maliciously requests a non-existent block hash.
|
|
Might be a bit heavy-handed, but conservative.
|
|
Eases up debugging
|
|
This skips the vast majority of "dust" output amounts with just
one instance on the chain. Clocks in at 0.15% of the original
time on testnet.
|
|
A key image may be present more than once if all but one of the
txes spending that key image are coming from blocks. When loading
a txpool from storage, we must load the one that's not from a
block first to avoid rejection
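A sketch of that loading order (illustrative names; needs <algorithm>):
  // move the tx that is not kept_by_block to the front, so it is not
  // rejected as a double spend of the kept_by_block duplicates
  std::stable_partition(txes.begin(), txes.end(),
    [](const txpool_entry &e) { return !e.kept_by_block; });
  for (const auto &e : txes)
    add_tx(e);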
|
|
Takes about 10 ms, which accounts for pretty much all of the get_info
RPC, which is called pretty often from wallets.
Also add a new lock so we don't need to take the blockchain lock,
which avoids blocking for a long time when calling the get_info
RPC while syncing. Users of get_difficulty_for_next_block who need
the lock will have locked it already.
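A sketch of the caching (hypothetical members inside the blockchain class):
  std::mutex m_difficulty_lock;        // separate from the blockchain lock
  uint64_t m_difficulty_for_height = 0;
  uint64_t m_cached_difficulty = 1;
  uint64_t get_difficulty_for_next_block()
  {
    std::lock_guard<std::mutex> lock(m_difficulty_lock);
    const uint64_t height = get_current_blockchain_height();
    if (height != m_difficulty_for_height)
    {
      m_cached_difficulty = compute_next_difficulty(); // the ~10 ms call, now once per block
      m_difficulty_for_height = height;
    }
    return m_cached_difficulty;
  }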
|
|
They were already forbidden implicitly, but let's make that
explicit for robustness.
|
|
This function isn't used in the codebase.
Signed-off-by: Jean Pierre Dudey <jeandudey@hotmail.com>
|
|
When #3303 was merged, a cyclic dependency chain was generated:
libdevice <- libcncrypto <- libringct <- libdevice
This was because libdevice needs access to a set of basic crypto operations
implemented in libringct such as scalarmultBase(), while libringct also needs
access to abstracted crypto operations implemented in libdevice such as
ecdhEncode(). To untangle this cyclic dependency chain, this patch splits libringct
into libringct_basic and libringct, where the basic crypto ops previously in
libringct are moved into libringct_basic. The cyclic dependency is now resolved
thanks to this separation:
libcncrypto <- libringct_basic <- libdevice <- libcryptonote_basic <- libringct
This eliminates the need for crypto_device.cpp and rctOps_device.cpp.
Also, many abstracted interfaces of hw::device such as encrypt_payment_id() and
get_subaddress_secret_key() were previously implemented in libcryptonote_basic
(cryptonote_format_utils.cpp) and were then called from hw::core::device_default,
which is odd because libdevice is supposed to be independent of libcryptonote_basic.
Therefore, those functions were moved to device_default.cpp.
|
|
Fix the way the REAL mode is handled:
Let create_transactions_2 and create_transactions_from construct the vector of transactions,
then iterate on it and re-sign.
We just need to add an 'outs' list in the TX struct for that.
Fix default secret key values when DEBUG_HWDEVICE mode is off:
The magic values (00...00 for the view key and FF...FF for the spend key) were not correctly set
when DEBUG_HWDEVICE was off. Both were set to 00...00.
Add sub-address info in the ABP map in order to correctly display the destination sub-address on the device.
Fix DEBUG_HWDEVICE mode:
- Fix compilation errors.
- Fix control device init in ledger device.
- Add more logging.
Fix sub-address control.
Fix debug info.
|
|
* src/cryptonote_config.h: The constant `config::testnet::GENESIS_TX` was
changed to be the same as `config::GENESIS_TX` (the mainnet's transaction)
because the mainnet's transaction was being used for both networks.
* src/cryptonote_core/cryptonote_tx_utils.cpp: The `generate_genesis_block` function
was ignoring the `genesis_tx` parameter, and instead it was using the `config::GENESIS_TX`
constant. That's why the testnet genesis transaction was changed. Also five lines of unused
code were removed.
Signed-off-by: Jean Pierre Dudey <jeandudey@hotmail.com>
|
|
The basic approach is to delegate all sensitive data (master key, secret
ephemeral key, key derivation, ....) and related operations to the device.
As the device has low memory, it does not keep the values itself
(except for the view/spend keys): once computed, they are encrypted (with AES
or equivalent) and returned to monero-wallet-cli. When they need to be
manipulated by the device, they are decrypted on receipt.
Moreover, using the client to store the values in encrypted form limits
the modifications in the client code. Those values are transferred from one
C-structure to another as previously.
The code modification has been done with the wish to be open to any
other hardware wallet. To achieve that, a C++ class hw::device has been
introduced. Two initial implementations are provided: the "default", which
remaps all calls to the initial Monero code, and the "Ledger", which delegates
all calls to the Ledger device.
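A simplified sketch of the abstraction (the real interface is much richer):
  namespace hw {
    struct device {
      virtual ~device() {}
      virtual bool ecdhEncode(rct::ecdhTuple &unmasked, const rct::key &shared_sec) = 0;
      virtual bool derive_secret_key(const crypto::key_derivation &d, size_t index,
                                     const crypto::secret_key &base, crypto::secret_key &out) = 0;
      // ...every sensitive operation goes through a virtual like these
    };
    struct device_default : device { /* remaps to the original software code */ };
    struct device_ledger : device { /* forwards each call to the Ledger token */ };
  }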
|
|
It would fail to send, thinking it needs a destination address,
since the destination matches the change address in this case.
|
|
When a block is added as part of a chunk (when syncing historical
blocks), a block may end up already in the blockchain if it was
added to the queue before being added to the chain (though it's
not clear how that could happen; it's an implementation detail)
and thus may not be added to the chain when add_block is called.
This would cause m_blocks_txs_check to not be cleared, causing it
to get out of sync at the next call, and thus wrongfully reject the
next block.
|
|
An update may or may not be available now, but should be soon if not.
This will prevent too many people from freaking out.
|
|
Previously, when blob_size == tx_size_limit, the "m_too_big" property was set
and the transaction was rejected. This should not have been the case.
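Illustrative fix:
  if (blob_size > tx_size_limit) // was >=, wrongly rejecting blob_size == tx_size_limit
  {
    tvc.m_too_big = true;
    return false;
  }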
|
|
* src/cryptonote_config.h: The constant `config::testnet::GENESIS_TX` was
changed to be the same as `config::GENESIS_TX` (the mainnet's transaction)
because the mainnet's transaction was being used for both networks.
* src/cryptonote_core/cryptonote_tx_utils.cpp: The `generate_genesis_block` function
was ignoring the `genesis_tx` parameter, and instead it was using the `config::GENESIS_TX`
constant. That's why the testnet genesis transaction was changed. Also five lines of unused
code were removed.
Signed-off-by: Jean Pierre Dudey <jeandudey@hotmail.com>
|
|
It's freed when we've synced past its end, but we might still
find an old chain somewhere
|
|
Coverity 142951
|
|
Previously, when outputs_amount == inputs_amount, the "m_overspend" property
was set, whereas "m_fee_too_low" would have been the correct property to set.
This is unlikely to ever occur, and is just something I noticed while reading
through the code.
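Illustrative fix:
  if (outputs_amount > inputs_amount)
    tvc.m_overspend = true;     // outputs exceed inputs
  else if (outputs_amount == inputs_amount)
    tvc.m_fee_too_low = true;   // zero fee, not an overspend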
|
|
and set v7 height to 1057027 on testnet (one block earlier)
This is to easily dump current nodes since we're going to change
the v7 rules with this.
|
|
Thanks to kenshi84 for help getting this working
|
|
Scheme by luigi1111:
Multisig for RingCT on Monero
2 of 2
User A (coordinator):
Spendkey b,B
Viewkey a,A (shared)
User B:
Spendkey c,C
Viewkey a,A (shared)
Public Address: C+B, A
Both have their own watch only wallet via C+B, a
A will coordinate the spending process (though B could just as easily; a coordinator is more needed with more participants)
A and B watch for incoming outputs
B creates "half" key images for discovered output D:
I2_D = (Hs(aR)+c) * Hp(D)
B also creates 1.5 random keypairs (one scalar and 2 pubkeys; one on base G and one on base Hp(D)) for each output, storing the scalar(k) (linked to D),
and sending the pubkeys with I2_D.
A also creates "half" key images:
I1_D = (Hs(aR)+b) * Hp(D)
Then I_D = I1_D + I2_D
Having I_D allows A to check spent status of course, but more importantly allows A to actually build a transaction prefix (and thus transaction).
A builds the transaction until most of the way through MLSAG_Gen, adding the 2 pubkeys (per input) provided with I2_D
to his own generated ones where they are needed (secret row L, R).
At this point, A has a mostly completed transaction (but with an invalid/incomplete signature). A sends over the tx and includes r,
which allows B (with the recipient's address) to verify the destination and amount (by reconstructing the stealth address and decoding ecdhInfo).
B then finishes the signature by computing ss[secret_index][0] = ss[secret_index][0] + k - cc[secret_index]*c (secret indices need to be passed as well).
B can then broadcast the tx, or send it back to A for broadcasting. Once B has completed the signing (and verified the tx to be valid), he can add the full I_D
to his cache, allowing him to verify spent status as well.
NOTE:
A and B *must* present keys A and B to each other with a valid signature proving they know a and b respectively.
Otherwise, trickery like the following becomes possible:
A creates viewkey a,A, spendkey b,B, and sends a,A,B to B.
B creates a fake key C = zG - B. B sends C back to A.
The combined spendkey C+B then equals zG, allowing B to spend funds at any time!
The signature fixes this, because B does not know a c corresponding to C (and thus can't produce a signature).
2 of 3
User A (coordinator)
Shared viewkey a,A
"spendkey" j,J
User B
"spendkey" k,K
User C
"spendkey" m,M
A collects K and M from B and C
B collects J and M from A and C
C collects J and K from A and B
A computes N = nG, n = Hs(jK)
A computes O = oG, o = Hs(jM)
B and C compute P = pG, p = Hs(kM) || Hs(mK)
B and C can also compute N and O respectively if they wish to be able to coordinate
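(These pairwise keys work because, e.g., kM = k(mG) = m(kG) = mK, so
Hs(kM) = Hs(mK): each pair shares a Diffie-Hellman secret.)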
Address: N+O+P, A
The rest follows as above. The coordinator possesses 2 of 3 needed keys; he can get the other
needed part of the signature/key images from either of the other two.
Alternatively, if secure communication exists between parties:
A gives j to B
B gives k to C
C gives m to A
Address: J+K+M, A
3 of 3
Identical to 2 of 2, except the coordinator must collect the key images from both of the others.
The transaction must also be passed an additional hop: A -> B -> C (or A -> C -> B), who can then broadcast it
or send it back to A.
N-1 of N
Generally the same as 2 of 3, except participants need to be arranged in a ring to pass their keys around
(using either the secure or insecure method).
For example (ignoring viewkey so letters line up):
[4 of 5]
User: spendkey
A: a
B: b
C: c
D: d
E: e
a -> B, b -> C, c -> D, d -> E, e -> A
Order of signing does not matter, it just must reach n-1 users. A "remaining keys" list must be passed around with
the transaction so the signers know if they should use 1 or both keys.
Collecting key image parts becomes a little messy, but basically every wallet sends over both of their parts with a tag for each.
This way the coordinating wallet can keep track of which images have been added and which wallet they came from. Reasoning:
1. The key images must be added only once (the coordinator will get key images for key a from both A and B; he must add only one to get the proper actual key image)
2. The coordinator must keep track of which helper pubkeys came from which wallet (discussed in 2 of 2 section). The coordinator
must choose only one set to use, then include his choice in the "remaining keys" list so the other wallets know which of their keys to use.
You can generalize it further to N-2 of N or even M of N, but I'm not sure there's legitimate demand to justify the complexity. It might
also be straightforward enough to support with minimal changes from N-1 format.
You basically just give each user additional keys for each additional "-1" you desire. N-2 would be 3 keys per user, N-3 4 keys, etc.
The process is somewhat cumbersome:
To create a N/N multisig wallet:
- each participant creates a normal wallet
- each participant runs "prepare_multisig", and sends the resulting string to every other participant
- each participant runs "make_multisig N A B C D...", with N being the threshold and A B C D... being the strings received from other participants (the threshold must currently equal N)
As txes are received, participants' wallets will need to synchronize so that those new outputs may be spent:
- each participant runs "export_multisig FILENAME", and sends the FILENAME file to every other participant
- each participant runs "import_multisig A B C D...", with A B C D... being the filenames received from other participants
Then, a transaction may be initiated:
- one of the participants runs "transfer ADDRESS AMOUNT"
- this partly signed transaction will be written to the "multisig_monero_tx" file
- the initiator sends this file to another participant
- that other participant runs "sign_multisig multisig_monero_tx"
- the resulting transaction is written to the "multisig_monero_tx" file again
- if the threshold was not reached, the file must be sent to another participant, until enough have signed
- the last participant to sign runs "submit_multisig multisig_monero_tx" to relay the transaction to the Monero network
|
|
As a follow-on side effect, this makes a lot of inline code
included only in particular cpp files (and instantiated
only when necessary).
|
|
Add period to second sentence
|
|