|
It seems to be buggy on reorgs, and prevents the use of
a blockchain with two nodes.
We'll speed this up again if/when the need arises.
|
It introduces random integer math into the main loop.
|
|
saves space in the tx and is safe
Found by knaccc
|
|
Found by knaccc
|
|
This makes it easier to modify the bulletproof format
|
This curbs runaway growth while still allowing substantial
spikes in block weight
Original specification from ArticMine:
here is the scaling proposal
Define: LongTermBlockWeight
Before fork:
LongTermBlockWeight = BlockWeight
At or after fork:
LongTermBlockWeight = min(BlockWeight, 1.4*LongTermEffectiveMedianBlockWeight)
Note: To avoid possible consensus issues over rounding, the LongTermBlockWeight for a given block should be calculated to the nearest byte and stored as an integer in the block itself. The stored LongTermBlockWeight is then used for future calculations of the LongTermEffectiveMedianBlockWeight and not recalculated each time.
Define: LongTermEffectiveMedianBlockWeight
LongTermEffectiveMedianBlockWeight = max(300000, MedianOverPrevious100000Blocks(LongTermBlockWeight))
Change Definition of EffectiveMedianBlockWeight
From (current definition)
EffectiveMedianBlockWeight = max(300000, MedianOverPrevious100Blocks(BlockWeight))
To (proposed definition)
EffectiveMedianBlockWeight = min(max(300000, MedianOverPrevious100Blocks(BlockWeight)), 50*LongTermEffectiveMedianBlockWeight)
Notes:
1) There are no other changes to the existing penalty formula, median calculation, fees etc.
2) There is the requirement to store the LongTermBlockWeight of a block unencrypted in the block itself. This is to avoid possible consensus issues over rounding and also to prevent the calculations from becoming unwieldy as we move away from the fork.
3) When the EffectiveMedianBlockWeight cap is reached it is still possible to mine blocks up to 2x the EffectiveMedianBlockWeight by paying the corresponding penalty.
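For reference, a minimal C++ sketch of the proposed formulas in integer arithmetic (the 100000-block median helper and the truncating treatment of the 1.4 factor are illustrative assumptions, not consensus code):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Hypothetical helper: median of the stored LongTermBlockWeight values of
    // the previous 100000 blocks.
    uint64_t median_over_previous_100000_blocks(const std::vector<uint64_t>& long_term_weights);

    // LongTermBlockWeight at or after the fork (truncating division stands in
    // for the "nearest byte" rounding mandated above).
    uint64_t long_term_block_weight(uint64_t block_weight, uint64_t long_term_effective_median)
    {
      return std::min<uint64_t>(block_weight, long_term_effective_median * 14 / 10);
    }

    // LongTermEffectiveMedianBlockWeight
    uint64_t long_term_effective_median_block_weight(const std::vector<uint64_t>& long_term_weights)
    {
      return std::max<uint64_t>(300000, median_over_previous_100000_blocks(long_term_weights));
    }

    // Proposed EffectiveMedianBlockWeight
    uint64_t effective_median_block_weight(uint64_t median_over_previous_100_blocks,
                                           uint64_t long_term_effective_median)
    {
      return std::min<uint64_t>(std::max<uint64_t>(300000, median_over_previous_100_blocks),
                                50 * long_term_effective_median);
    }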
|
|
Apparently some people seem to think it's a censorship list...
|
|
9acf42d3 Multisig M/N functionality core tests added (naughtyfox)
9f3963e8 Arbitrary M/N multisig schemes: * support in wallet2 * support in monero-wallet-cli * support in monero-wallet-rpc * support in wallet api * support in monero-gen-trusted-multisig * unit tests for multisig wallets creation (naughtyfox)
|
|
f9485a36 tests: update crypto tests data file after PRNG changes (moneromooo-monero)
|
|
7f2ad1a7 functional_tests: fix linking on Windows (iDunk5400)
|
|
bef1750f unit_tests: fix longstanding DNS related unit test (moneromooo-monero)
|
|
Coverity 175293, 175312, 175266
|
|
Coverity 182560
|
|
Coverity 182572
|
|
Coverity 188426
|
|
Coverity 188436, 188433, 188428, 188415, 188416, 188410, 188400,
188298, 188299, 188321, 188342, 188343, 188355, 188357, 188361,
188366, 188374
|
- fix integer overflow in n_bulletproof_amounts
- check input scalars are in range
- remove use of environment variable to tweak straus performance
- do not use implementation defined signed shift for signum
|
Stats are: min, median, standard deviation
|
Based on sarang's python code
|
Also constrains bulletproofs to simple rct, for simplicity
|
Ported from sarang's java code
|
Contains two modifications to improve ASIC resistance: shuffle and integer math.
Shuffle makes use of the whole 64-byte cache line instead of 16 bytes only, making Cryptonight 4 times more demanding for memory bandwidth.
Integer math adds 64:32 bit integer division followed by 64 bit integer square root, adding large and unavoidable computational latency to the main loop.
More details and performance numbers: https://github.com/SChernykh/xmr-stak-cpu/blob/master/README.md
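As a rough illustration only (operand selection and mixing are simplified here and do not follow the real specification in the linked README), the kind of dependency chain the integer math step adds looks like:

    #include <cmath>
    #include <cstdint>

    // 64-bit integer square root, corrected for double-precision rounding.
    static uint64_t isqrt64(uint64_t n)
    {
      uint64_t r = static_cast<uint64_t>(std::sqrt(static_cast<double>(n)));
      while (r > 0 && r > n / r) --r;
      while ((r + 1) <= n / (r + 1)) ++r;
      return r;
    }

    // A 64:32 integer division followed by a 64-bit integer square root: both
    // are high-latency operations the main loop cannot avoid or reorder away.
    static uint64_t division_then_sqrt(uint64_t dividend, uint32_t divisor)
    {
      divisor |= 1;                                   // avoid division by zero
      const uint64_t quotient  = dividend / divisor;  // 64:32 division
      const uint64_t remainder = dividend % divisor;
      return isqrt64(quotient ^ (remainder << 32));   // simplified mixing
    }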
|
This class will allow mlocking small objects, of which there
may be several per page. It adds refcounting so pages are only
munlocked when the last object on that page munlocks.
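A minimal sketch of the idea (assuming 4 KiB pages and ignoring errors and thread safety; the class and member names below are illustrative, not the ones this commit adds):

    #include <sys/mman.h>
    #include <cstddef>
    #include <cstdint>
    #include <map>

    class page_lock_refcount
    {
      std::map<uintptr_t, unsigned> refs_;          // page base -> live objects on it
      static constexpr uintptr_t page_size = 4096;  // assumed page size

    public:
      void lock(const void* ptr, std::size_t len)
      {
        const uintptr_t first = reinterpret_cast<uintptr_t>(ptr) & ~(page_size - 1);
        const uintptr_t last  = (reinterpret_cast<uintptr_t>(ptr) + len - 1) & ~(page_size - 1);
        for (uintptr_t page = first; page <= last; page += page_size)
          if (refs_[page]++ == 0)                   // first object on this page
            mlock(reinterpret_cast<void*>(page), page_size);
      }

      void unlock(const void* ptr, std::size_t len)
      {
        const uintptr_t first = reinterpret_cast<uintptr_t>(ptr) & ~(page_size - 1);
        const uintptr_t last  = (reinterpret_cast<uintptr_t>(ptr) + len - 1) & ~(page_size - 1);
        for (uintptr_t page = first; page <= last; page += page_size)
          if (--refs_[page] == 0)                   // last object on this page
          {
            munlock(reinterpret_cast<void*>(page), page_size);
            refs_.erase(page);
          }
      }
    };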
|
|
The secret spend key is kept encrypted in memory, and
decrypted on the fly when needed.
Both spend and view secret keys are kept encrypted in a JSON
field in the keys file. This avoids leaving the keys in
memory due to being manipulated by the JSON I/O API.
|
This actually prevents copy elision
|
alleged to speed things up
|
- Support for classes
- Added `remove_prefix` function
- Added `to_mut_span` and `as_mut_byte_span`
|
It was actually incorrect, as it would not return the commitment
|
Fixes a failing test during the Arch package build (due to an attempt to write to
~/.bitmonero/...).
Prefix the temp dir path with "monero-" because we are not putting it on the
system, so it is good to identify ourselves in case the dir gets left over due
to a crash, etc.
|
This gets rid of the temporary precalc cache.
Also makes the RPC able to send data back in binary or JSON,
since there can be a lot of data.
This bumps the LMDB database format to v3, with migration.
|
The on_generateblocks RPC call combines functionality from the on_getblocktemplate and on_submitblock RPC calls to allow rapid block creation. Difficulty is set permanently to 1 for regtest.
It makes use of the FAKECHAIN network type, but takes hard fork heights from mainchain.
The default reserve_size in the generate_blocks RPC call is now 1. If it is 0, the following error occurs: 'Failed to calculate offset for'.
Also queries hard fork heights info of other network types.
|
|
For some reason, this confuses and kills ASAN on startup
as it thinks const uint8_t ipv4_network_address::ID is
defined multiple times.
|
Helps a bit when running with valgrind
|
|
Decrease the number of worker threads by one to account
for the fact that the calling thread acts as a worker thread now
|
|
also use reserve where appropriate
|
This should help new nodes predict how much disk space will be
needed for a full sync
|
|
for privacy reasons, so an untrusted node can't easily track
wallets from IP address to IP address, etc. The granularity
is 1024 blocks, which is about a day and a half.
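One plausible way to apply such a granularity on the wallet side (illustrative only, not the exact code):

    #include <cstdint>

    constexpr uint64_t RESTORE_GRANULARITY = 1024;  // roughly a day and a half of blocks

    // Round the requested start height down so the node only ever sees coarse values.
    uint64_t coarse_start_height(uint64_t wanted_height)
    {
      return (wanted_height / RESTORE_GRANULARITY) * RESTORE_GRANULARITY;
    }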
|
|
a connection's timeout is halved for every extra connection
from the same host.
Also keep track of when we don't need to use a connection
anymore, so we can close it and free the resource for another
connection.
Also use the longer timeout for non-routable local addresses.
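A sketch of the halving rule as described (names and the base timeout handling are illustrative):

    #include <algorithm>
    #include <chrono>

    // Timeout granted to a new connection: the base timeout, halved once for
    // every extra connection already open from the same host.
    std::chrono::seconds connection_timeout(std::chrono::seconds base, unsigned connections_from_host)
    {
      const unsigned extra = connections_from_host > 1 ? connections_from_host - 1 : 0;
      return base / (1u << std::min(extra, 30u));   // clamp the shift
    }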
|
non-existent versions
|
|
This bumps the DB version to 2; migration code will run for v1 DBs
|
This reverts commit 20ef37bbcac7715d5299dd77d401583420e07ced, reversing
changes made to 40070a661fd2ff503e07f4ed48dfe9fe67cfa297.
|
Annoyingly, its locking semantics are borked since it does not
do any locking
|
hash: add prehashed version cn_slow_hash_prehashed
slow-hash: let cn_slow_hash take 4th parameter for deciding prehashed or not
slow-hash: add support for prehashed version for the other 3 platforms
|
When #3303 was merged, a cyclic dependency chain was generated:
libdevice <- libcncrypto <- libringct <- libdevice
This was because libdevice needs access to a set of basic crypto operations
implemented in libringct such as scalarmultBase(), while libringct also needs
access to abstracted crypto operations implemented in libdevice such as
ecdhEncode(). To untangle this cyclic dependency chain, this patch splits libringct
into libringct_basic and libringct, where the basic crypto ops previously in
libringct are moved into libringct_basic. The cyclic dependency is now resolved
thanks to this separation:
libcncrypto <- libringct_basic <- libdevice <- libcryptonote_basic <- libringct
This eliminates the need for crypto_device.cpp and rctOps_device.cpp.
Also, many abstracted interfaces of hw::device such as encrypt_payment_id() and
get_subaddress_secret_key() were previously implemented in libcryptonote_basic
(cryptonote_format_utils.cpp) and were then called from hw::core::device_default,
which is odd because libdevice is supposed to be independent of libcryptonote_basic.
Therefore, those functions were moved to device_default.cpp.
|
versions (and make libwallet_api_tests compilable)
|
This is the first variant of many, with the intent to improve
Monero's resistance to ASICs and encourage mining decentralization.
|
The basic approach is to delegate all sensitive data (master key, secret
ephemeral key, key derivation, ...) and related operations to the device.
As the device has low memory, it does not keep the values itself
(except for the view/spend keys): once computed, they are encrypted (with AES
or equivalent) and returned to monero-wallet-cli. When they need to be
manipulated by the device, they are decrypted on receipt.
Moreover, using the client for storing the values in encrypted form limits
the modifications to the client code. Those values are transferred from one
C structure to another as previously.
The code modification has been done with the wish to be open to any
other hardware wallet. To achieve that, a C++ class hw::Device has been
introduced. Two initial implementations are provided: the "default", which
remaps all calls to the initial Monero code, and the "Ledger", which delegates
all calls to the Ledger device.
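A minimal sketch of the abstraction described above (the method names and signatures here are illustrative; the real interface is much larger):

    #include <cstddef>
    #include <string>

    namespace hw_sketch
    {
      // Abstract device: sensitive operations are delegated here; the wallet
      // only ever holds the results in encrypted form.
      struct device
      {
        virtual ~device() = default;
        virtual bool derive_secret_key(const std::string& derivation, std::size_t output_index,
                                       const std::string& base_sec, std::string& derived_sec) = 0;
        virtual bool ecdh_encode(std::string& unmasked, const std::string& shared_secret) = 0;
      };

      // "default" device: remaps every call to the existing software crypto.
      struct device_default : device { /* ... */ };

      // "Ledger" device: forwards every call to the hardware wallet, which
      // returns values encrypted so the host never sees them in clear.
      struct device_ledger : device { /* ... */ };
    }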
|
DNSSEC-aware servers picked from https://wiki.ipfire.org/dns/public-servers
|
Coverity 136394 136397 136409 136526 136529 136533 175302
|
|
Coverity 182572
|
|
and comment it out, it's only used to generate a starting test case
Coverity 182506
|
decodeRct returns the amount, which may be zero
|
monero/tests/unit_tests/memwipe.cpp:50:8: Warning: suggest explicit braces to avoid ambiguous 'else' [-Wdangling-else]
if (wipe) ASSERT_TRUE(memcmp(quux, "bar", 3));
|
|
Signed-off-by: Maxithi <34792056+Maxithi@users.noreply.github.com>
|
|
Removes a good bit of annoyance when running those
|
See https://wiki.debian.org/Hardening#User_Space
|
Looks like there's some kind of header/signature
|
The author doesn't seem to be finishing/fixing this, and it
doesn't do anything.
|
and remove a leftover debugging sanity check
|
While there, move the wallet2 ctor to the cpp file as it's a huge
amount of init list now, and remove an unused one.
|
|
Thanks to kenshi84 for help getting this to work
|
Useful to speed tests up and avoid unnecessary leftover files
|
Scheme by luigi1111:
Multisig for RingCT on Monero
2 of 2
User A (coordinator):
Spendkey b,B
Viewkey a,A (shared)
User B:
Spendkey c,C
Viewkey a,A (shared)
Public Address: C+B, A
Both have their own watch only wallet via C+B, a
A will coordinate spending process (though B could easily as well, coordinator is more needed for more participants)
A and B watch for incoming outputs
B creates "half" key images for discovered output D:
I2_D = (Hs(aR)+c) * Hp(D)
B also creates 1.5 random keypairs (one scalar and 2 pubkeys; one on base G and one on base Hp(D)) for each output, storing the scalar(k) (linked to D),
and sending the pubkeys with I2_D.
A also creates "half" key images:
I1_D = (Hs(aR)+b) * Hp(D)
Then I_D = I1_D + I2_D
Having I_D allows A to check spent status of course, but more importantly allows A to actually build a transaction prefix (and thus transaction).
A builds the transaction until most of the way through MLSAG_Gen, adding the 2 pubkeys (per input) provided with I2_D
to his own generated ones where they are needed (secret row L, R).
At this point, A has a mostly completed transaction (but with an invalid/incomplete signature). A sends over the tx and includes r,
which allows B (with the recipient's address) to verify the destination and amount (by reconstructing the stealth address and decoding ecdhInfo).
B then finishes the signature by computing ss[secret_index][0] = ss[secret_index][0] + k - cc[secret_index]*c (secret indices need to be passed as well).
B can then broadcast the tx, or send it back to A for broadcasting. Once B has completed the signing (and verified the tx to be valid), he can add the full I_D
to his cache, allowing him to verify spent status as well.
NOTE:
A and B *must* present key A and B to each other with a valid signature proving they know a and b respectively.
Otherwise, trickery like the following becomes possible:
A creates viewkey a,A, spendkey b,B, and sends a,A,B to B.
B creates a fake key C = zG - B. B sends C back to A.
The combined spendkey C+B then equals zG, allowing B to spend funds at any time!
The signature fixes this, because B does not know a c corresponding to C (and thus can't produce a signature).
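In equation form (just restating the attack above):
    C = zG - B                    (B picks z and sends C to A)
    C + B = (zG - B) + B = zG
so B alone knows the discrete log z of the combined spend key and could spend unilaterally; the required signature forces B to prove knowledge of a c with C = cG, which B cannot do for C = zG - B without knowing b.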
2 of 3
User A (coordinator)
Shared viewkey a,A
"spendkey" j,J
User B
"spendkey" k,K
User C
"spendkey" m,M
A collects K and M from B and C
B collects J and M from A and C
C collects J and K from A and B
A computes N = nG, n = Hs(jK)
A computes O = oG, o = Hs(jM)
B and C compute P = pG, p = Hs(kM) || Hs(mK)
B and C can also compute N and O respectively if they wish to be able to coordinate
Address: N+O+P, A
The rest follows as above. The coordinator possesses 2 of 3 needed keys; he can get the other
needed part of the signature/key images from either of the other two.
Alternatively, if secure communication exists between parties:
A gives j to B
B gives k to C
C gives m to A
Address: J+K+M, A
3 of 3
Identical to 2 of 2, except the coordinator must collect the key images from both of the others.
The transaction must also be passed an additional hop: A -> B -> C (or A -> C -> B), who can then broadcast it
or send it back to A.
N-1 of N
Generally the same as 2 of 3, except participants need to be arranged in a ring to pass their keys around
(using either the secure or insecure method).
For example (ignoring viewkey so letters line up):
[4 of 5]
User: spendkey
A: a
B: b
C: c
D: d
E: e
a -> B, b -> C, c -> D, d -> E, e -> A
Order of signing does not matter, it just must reach n-1 users. A "remaining keys" list must be passed around with
the transaction so the signers know if they should use 1 or both keys.
Collecting key image parts becomes a little messy, but basically every wallet sends over both of their parts with a tag for each.
This way the coordinating wallet can keep track of which images have been added and which wallet they come from. Reasoning:
1. The key images must be added only once (the coordinator will get key images for key a from both A and B; he must add only one to get the proper actual key image)
2. The coordinator must keep track of which helper pubkeys came from which wallet (discussed in 2 of 2 section). The coordinator
must choose only one set to use, then include his choice in the "remaining keys" list so the other wallets know which of their keys to use.
You can generalize it further to N-2 of N or even M of N, but I'm not sure there's legitimate demand to justify the complexity. It might
also be straightforward enough to support with minimal changes from N-1 format.
You basically just give each user additional keys for each additional "-1" you desire. N-2 would be 3 keys per user, N-3 4 keys, etc.
The process is somewhat cumbersome:
To create a N/N multisig wallet:
- each participant creates a normal wallet
- each participant runs "prepare_multisig", and sends the resulting string to every other participant
- each participant runs "make_multisig N A B C D...", with N being the threshold and A B C D... being the strings received from other participants (the threshold must currently equal N)
As txes are received, participants' wallets will need to synchronize so that those new outputs may be spent:
- each participant runs "export_multisig FILENAME", and sends the FILENAME file to every other participant
- each participant runs "import_multisig A B C D...", with A B C D... being the filenames received from other participants
Then, a transaction may be initiated:
- one of the participants runs "transfer ADDRESS AMOUNT"
- this partly signed transaction will be written to the "multisig_monero_tx" file
- the initiator sends this file to another participant
- that other participant runs "sign_multisig multisig_monero_tx"
- the resulting transaction is written to the "multisig_monero_tx" file again
- if the threshold was not reached, the file must be sent to another participant, until enough have signed
- the last participant to sign runs "submit_multisig multisig_monero_tx" to relay the transaction to the Monero network
|
|
free might overwrite the memory, so we can't expect to see
the NULs we overwrote with, but at least we shouldn't see
the original data.
|
|
As a follow-on side effect, this makes a lot of inline code
included only in particular cpp files (and instantiated
only when necessary).
|
Partially implements #74.
Securely erases keys from memory after they are no longer needed. Might have a
performance impact, which I haven't measured (perf measurements aren't
generally reliable on laptops).
Thanks to @stoffu for the suggestion to specialize the pod_to_hex/hex_to_pod
functions. Using overloads + SFINAE instead generalizes it so other types can
be marked as scrubbed without adding more boilerplate.
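A minimal sketch of the overloads + SFINAE approach (the trait and the placeholder bodies below are illustrative, not the code in this commit):

    #include <string>
    #include <type_traits>

    // Trait to specialize for any type whose hex conversion should go through
    // the scrubbed (securely wiped) path.
    template <typename T> struct is_scrubbed : std::false_type {};

    struct example_secret_key { unsigned char data[32]; };
    template <> struct is_scrubbed<example_secret_key> : std::true_type {};

    // Ordinary types use the plain conversion...
    template <typename T, typename std::enable_if<!is_scrubbed<T>::value, int>::type = 0>
    std::string pod_to_hex(const T& pod)
    {
      return std::string(/* plain conversion of pod's bytes */);
    }

    // ...while scrubbed types automatically pick an overload that can wipe any
    // temporary buffers before returning, without extra per-type boilerplate.
    template <typename T, typename std::enable_if<is_scrubbed<T>::value, int>::type = 0>
    std::string pod_to_hex(const T& pod)
    {
      return std::string(/* conversion that wipes its temporaries */);
    }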
|
Based on Java code from Sarang Noether
|
It's meant to avoid being optimized out
memory_cleanse lifted from bitcoin
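For reference, the usual shape of such a function is a memset followed by a compiler barrier so the zeroing cannot be optimized out as a dead store; a minimal GCC/Clang-style sketch:

    #include <cstddef>
    #include <cstring>

    static void memory_cleanse_sketch(void* ptr, std::size_t len)
    {
      std::memset(ptr, 0, len);
    #if defined(__GNUC__)
      // Tell the compiler the memory is observed, so the memset is not elided.
      __asm__ __volatile__("" : : "r"(ptr) : "memory");
    #endif
    }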
|
This speeds up building a lot when wallet2.h (or something it
includes) changes, since all the API includes wallet2.h
|
|
While there, also use the new is_arg_defaulted API instead of
going to poke the internal API directly.
|
Those have no reason to be in a generic module
|
It's nasty, and actually breaks on Solaris, where if.h fails to
build due to:
struct map *if_memmap;
|
This patch allows filtering out sensitive information for queries that rely on the pool state, when running in restricted mode.
This filtering is only applied to data sent back to RPC queries. Results of inline commands typed locally in the daemon are not affected.
In practice, when running with `--restricted-rpc`:
* get_transaction_pool will list relayed transactions with the fields "last relayed time" and "received time" set to zero.
* get_transaction_pool will not list transactions that have do_not_relay set to true, and will not list key images that are used only by such transactions
* get_transaction_pool_hashes.bin will not list such transactions
* get_transaction_pool_stats will not count such transactions in any of the aggregated values that are computed
The implementation does not make filtering the default, so developers should be mindful of this if they add new RPC functionality.
Fixes #2590.
|
|
|