Age | Commit message | Author | Files | Lines |
|
Unrelated, but similar code-wise to #8643. There is a check in `DNSResolver` which automatically fails to resolve hostnames which do not contain the `.` character. This PR removes that check.
|
|
Fixes #8633. The function `append_net_address` did not parse hostname + port addresses (e.g. `bar:29080`) correctly if the hostname did not contain a `'.'` character.
@vtnerd comments 1
clear up 2nd conditional statement
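For illustration, a minimal sketch of splitting such an address (hypothetical helper names, not the actual `append_net_address` code; IPv6 literals are ignored here):
```
#include <cstddef>
#include <cstdint>
#include <string>

// Hypothetical helper, not the actual append_net_address implementation:
// split "host:port" where the host may or may not contain a '.' (e.g. "bar:29080").
struct host_port { std::string host; std::uint16_t port; };

static bool split_host_port(const std::string& address, std::uint16_t default_port, host_port& out)
{
    const std::size_t colon = address.rfind(':');
    if (colon == std::string::npos) {             // no port given, e.g. "bar" or "foo.bar"
        out.host = address;
        out.port = default_port;
        return !out.host.empty();
    }
    out.host = address.substr(0, colon);          // "bar" and "foo.bar" are handled alike
    unsigned long port = 0;
    try { port = std::stoul(address.substr(colon + 1)); }
    catch (...) { return false; }                 // empty or non-numeric port
    if (out.host.empty() || port == 0 || port > 65535)
        return false;
    out.port = static_cast<std::uint16_t>(port);
    return true;
}
```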
|
|
|
|
update_checkpoints() makes a few DNS requests and can take up to 20-30 seconds to complete (3-6 seconds on average). It is currently called from core::handle_incoming_block(), which holds m_incoming_tx_lock, so it blocks processing of all incoming transactions and blocks while update_checkpoints() is running. This PR moves the call to after a new block has been processed and relayed, to avoid locking up monerod entirely.
|
|
|
|
|
|
|
|
Being offline is not a good enough heuristic, so we keep track
of whether the wallet ever refreshed from a daemon, which is a
lot better, and probably the best we can do without manual user
designation (which would break existing cold wallet setups till
the user designates those wallets)
|
|
this will make it easier for huge wallets to do so without hitting
random limits (e.g., max string size in node).
|
|
- only allow offline wallets to import outputs
- don't import empty outputs
- export subaddress indexes when exporting outputs
|
|
- spend secret key is no longer the sum of multisig key shares;
no need to check that is the case upon restore.
- restoring a multisig wallet from multisig info means that the
wallet must have already completed all setup rounds. Upon restore,
set the number of rounds completed accordingly.
|
|
- BP+ support added for Trezor
- old Trezor firmware version support removed, code cleanup
|
|
|
|
key images
|
|
|
|
|
|
|
|
|
|
|
|
There are vulnerabilities in multisig protocol if the parties do not
trust each other, and while there is a patch for it, it has not been
thoroughly reviewed yet, so it is felt safer to disable multisig by
default for now.
If all parties in a multisig setup trust each other, then it is safe
to enable multisig.
|
|
P2Pool can create transactions with more than 128 outputs, which makes output_index's varint larger than 1 byte. Added this test case.
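A quick sketch of the varint encoding involved (not the actual serializer) shows why an output_index past 127 needs a second byte:
```
#include <cstdint>
#include <vector>

// Sketch of a little-endian base-128 varint writer (not the actual serializer):
// 7 payload bits per byte, continuation bit set on every byte except the last.
static std::vector<std::uint8_t> write_varint(std::uint64_t value)
{
    std::vector<std::uint8_t> out;
    while (value >= 0x80) {
        out.push_back(static_cast<std::uint8_t>((value & 0x7f) | 0x80));
        value >>= 7;
    }
    out.push_back(static_cast<std::uint8_t>(value));
    return out;
}

// write_varint(127).size() == 1, but write_varint(128).size() == 2,
// hence the new test case with more than 128 outputs.
```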
|
|
have completed the multisig address
|
|
Update Makefile and LICENSE
|
|
|
|
|
|
|
|
Implements view tags as proposed by @UkoeHB in MRL issue
https://github.com/monero-project/research-lab/issues/73
At tx construction, the sender adds a 1-byte view tag to each
output. The view tag is derived from the sender-receiver
shared secret. When scanning for outputs, the receiver can
check the view tag for a match, in order to reduce scanning
time. When the view tag does not match, the wallet avoids the
more expensive EC operations when deriving the output public
key using the shared secret.
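Conceptually, the check looks roughly like the sketch below (simplified relative to the real derivation, which for instance varint-encodes the output index; the hash helper signature is assumed):
```
#include <cstddef>
#include <cstring>

// Assumed helper with a Keccak-256 style interface; not a real declaration from the codebase.
void hash256(const void* in, std::size_t inlen, unsigned char out[32]);

struct key_derivation { unsigned char data[32]; };  // sender-receiver shared secret

static unsigned char derive_view_tag(const key_derivation& d, std::size_t output_index)
{
    unsigned char buf[8 + 32 + sizeof(output_index)];
    std::memcpy(buf, "view_tag", 8);                              // domain separator
    std::memcpy(buf + 8, d.data, 32);                             // shared secret
    std::memcpy(buf + 40, &output_index, sizeof(output_index));   // real code varint-encodes this
    unsigned char hash[32];
    hash256(buf, sizeof(buf), hash);
    return hash[0];                                               // keep only the first byte
}

// Receiver fast path: only derive the full output public key when the tags match.
// A false positive happens ~1/256 of the time and is then rejected by the full check.
static bool view_tag_matches(const key_derivation& d, std::size_t index, unsigned char tag)
{
    return derive_view_tag(d, index) == tag;
}
```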
|
|
https://github.com/ArticMine/Monero-Documents/blob/master/MoneroScaling2021-02.pdf
with a change to use 1.7 instead of 2.0 for the max long term increase rate
|
|
|
|
The integrated address functional test fails in the workflows due
to an assertion for missing payment id that is no longer needed.
Remove the assertion and update the assertion count.
Fixes 7dcfccb: ("wallet_rpc_server: fix make_integrated_address with no payment id")
|
|
It avoids dividing by 8 when deserializing a tx, which is a slow
operation, and multiplies by 8 when verifying and extracting the
amount, which is much faster as well as less frequent
|
|
|
|
|
|
* Remove `match_string()`, `match_number()`, and `match_word()`
* Remove `match_word_with_extrasymb()` and `match_word_til_equal_mark()`
* Adapt unit test for `match_number()` to `match_number2()`
* Adapt unit test for `match_string()` to `match_string2()`
Note: the unit tests were testing for the old version of the functions, and
the interfaces for these functions changed slightly, so I had to also edit
the tests.
As of writing, this PR has no merge conflicts with #8211
Additional changes during review:
* Explicitly set up is_[float/signed]_val to be changed before each call
* Structify the tests and fix uninitialized variables
|
|
|
|
|
|
|
|
|
|
haven't been reduced by the field order
|
|
|
|
avoids mining txes after a fork that are invalid by this fork's
rules, but were valid by the previous fork rules at the time
they were verified and added to the txpool.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
boosted_tcp_server: check condition before sleep too
cryptonote_protocol_handler: each instance of BlockchainLMDB requires a separate thread due to private thread-local fields
|
|
|
|
CID 1446559
|
|
|
|
|
|
|
|
|
|
MINING_SILENT and MINING_NO_MEASUREMENT env vars
|
|
|
|
|
|
This reverts commit 63c7ca07fba2f063c760f786a986fb3e02fb040e, reversing
changes made to 2218e23e84a89e9a1e4c0be5d50f891ab836754f.
|
|
|
|
|
|
Also print available RAM. Add a comprehensive description.
|
|
|
|
|
|
|
|
|
|
On Mac, size_t is a distinct type from uint64_t, and some
types (in wallet cache as well as cold/hot wallet transfer
data) use pairs/containers with size_t as fields. Mac would
save those as full size, while other platforms would save
them as varints. Might apply to other platforms where the
types are distinct.
There's a nasty hack for backward compatibility, which can
go after a couple forks.
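A sketch of the trap (hypothetical trait, not the wallet serializer):
```
#include <cstdint>
#include <type_traits>

// Hypothetical trait, not the wallet serializer: decide which integer fields get
// the varint encoding.
template <typename T>
constexpr bool uses_varint =
    std::is_same<T, std::uint8_t>::value  || std::is_same<T, std::uint16_t>::value ||
    std::is_same<T, std::uint32_t>::value || std::is_same<T, std::uint64_t>::value;

// On Linux x86-64, size_t is the same type as uint64_t, so uses_varint<std::size_t>
// is true; on Mac, size_t is a distinct type (unsigned long vs unsigned long long),
// so it is false and such a field would be written full width -- two incompatible
// encodings of the same wallet data.
```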
|
|
|
|
|
|
|
|
|
|
There are quite a few variables in the code that are no longer
(or perhaps never were) in use. These were discovered by enabling
compiler warnings for unused variables and cleaning them up.
In most cases where the unused variables were the result
of a function call, the call was left but the variable
assignment removed, unless it was obvious that it was
a simple getter with no side effects.
|
|
|
|
|
|
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=973196
|
|
|
|
|
|
|
|
|
|
|
|
Only build fuzz tests in a fuzz build, and don't build other tests.
Keeps fuzz compilers from instrumenting other tests, which are not fuzzed.
Resolves #7232
|
|
|
|
it'd trigger on reorgs
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
It would otherwise be possible for a peer to send bad blocks,
then disconnect and reconnect again, escaping bans
|
|
|
|
|
|
|
|
|
|
monero_add_executable/monero_add_library where possible (mj-xmr)
Add monero_add_minimal_executable and use in tests
This is done so that targets do not have to be relinked when just an .so changed, but not its interface.
|
|
|
|
|
|
|
|
|
|
|
|
Tests running after being compiled with `make debug-test` failed with
```
[ FAILED ] block_reward_and_current_block_weight.fails_on_huge_median_size
[ FAILED ] block_reward_and_current_block_weight.fails_on_huge_block_weight
```
With the introduction of the patch in
https://github.com/monero-project/monero/commit/be82c40703d267184ee07bf7be71002122c86656#diff-1a57d4e6013984c420da98d1adde0eafL113
the assertions checking the weight of the median and current block
against a size limit were removed. Since the limit is now enforced by a
long divisor and a uint64_t type, checking in a separate test makes
little sense, so they are removed here.
|
|
|
|
|
|
|
|
Those would, if uncaught, exit run and leave the waiter to wait
indefinitely for the number of active jobs to reach 0
|
|
|
|
|
|
v13 enforces claiming the full block reward, so we need to keep
track of tx fees to add them to the coinbase
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
They are allowed from v12, and MLSAGs are rejected from v13.
|
|
|
|
|
|
Tests tx/block propagation and reorgs
|
|
Claiming a slightly lesser amount does not yield the size gains
that were seen pre-RCT, so this closes a fingerprinting vector
|
|
|
|
This reverts commit 921dd8dde5d381052d0aa2936304a3541a230c55.
|
|
This reduces the attack surface for data that can come from
malicious sources (exported output and key images, multisig
transactions...) since the monero serialization is already
exposed to the outside, and the boost lib we were using had
a few known crashers.
For interoperability, a new load-deprecated-formats wallet
setting is added (off by default). This allows loading boost
format data if there is no alternative. It will likely go
at some point, along with the ability to load those.
Notably, the peer lists file still uses the boost serialization
code, as the data it stores is defined in epee, while the new
serialization code is in monero, and migrating it was fairly
hairy. Since this file is local and not obtained from anyone
else, the marginal risk is minimal, but it could be migrated
later if needed.
Some tests and tools also do; this will stay as-is for now.
|
|
|
|
|
|
|
|
include all public proof parameters in Schnorr challenges, along with hash function domain separators. Includes new randomized unit tests.
|
|
unit test
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The Bug:
1. Construct `byte_slice.portion_` with `epee::span(buffer)` which copies a pointer to the SSO buffer to `byte_slice.portion_`
2. It constructs `byte_slice.storage_` with `std::move(buffer)` (normally this swaps pointers, but SSO means a memcpy and clear on the original SSO buffer)
3. `slice.data()` returns a pointer from `slice.portion_` that points to the original, now cleared, SSO buffer, while `slice.storage_` has the actual string.
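A boiled-down illustration of the pattern (not the epee `byte_slice` code itself):
```
#include <cstddef>
#include <string>
#include <utility>

// Boiled-down illustration of the bug pattern, not the actual epee byte_slice.
struct slice_like {
    const char* portion;   // stands in for byte_slice.portion_
    std::size_t size;
    std::string storage;   // stands in for byte_slice.storage_
};

slice_like make_slice(std::string buffer)
{
    slice_like s;
    s.portion = buffer.data();       // (1) points into buffer's SSO array
    s.size    = buffer.size();
    s.storage = std::move(buffer);   // (2) short string: bytes are copied into
                                     //     s.storage and buffer is cleared; the
                                     //     pointer taken in (1) is not updated
    return s;                        // (3) s.portion refers to the cleared source
                                     //     buffer, while s.storage holds the data
}
```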
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Update copyright year to 2020
|
|
|
|
Too many iterations cause std::bad_alloc for the timings vector,
and the micro prefix displays as some other character, so use u.
Reported by iDunk
|
|
- choice where to enter passphrase is now made on the host
- use wipeable string in the comm stack
- wipe passphrase memory
- protocol optimizations, prepare for new firmware version
- minor fixes and improvements
- tests fixes, HF12 support
|
|
|
|
|
|
- Add abstract_http_client.h which http_client.h extends.
- Replace simple_http_client with abstract_http_client in wallet2,
message_store, message_transporter, and node_rpc_proxy.
- Import and export wallet data in wallet2.
- Use #if defined __EMSCRIPTEN__ directives to skip incompatible code.
|
|
|
|
|
|
|
|
This fixes a test failure now that timestamps are more constrained
|
|
|
|
|
|
|
|
|
|
|
|
|
|
- New flag in NOTIFY_NEW_TRANSACTION to indicate stem mode
- Stem loops detected in tx_pool.cpp
- Embargo timeout for a blackhole attack during stem phase
|
|
Executing a new binary for each signature can get really slow
|
|
Otherwise the daemon will start rejecting
|
|
A newly synced Alice sends a (typically quite small) list of
txids in the local txpool to a random peer Bob, who then uses
the existing tx relay system to send Alice any tx in his txpool
which is not in the list Alice sent
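Conceptually, Bob's side reduces to a set difference (a sketch, not the actual protocol handler):
```
#include <set>
#include <string>
#include <vector>

// Sketch only, not the actual protocol handler: relay every txpool tx whose id
// is absent from the list the newly synced peer sent.
using txid = std::string;  // stand-in for the real hash type

std::vector<txid> txs_to_relay(const std::set<txid>& my_txpool,
                               const std::set<txid>& peers_list)
{
    std::vector<txid> missing;
    for (const txid& id : my_txpool)
        if (peers_list.count(id) == 0)
            missing.push_back(id);
    return missing;
}
```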
|
|
|
|
|
|
|
|
|
|
- Finding handling function in ZMQ JSON-RPC now uses binary search
- Temporary `std::vector`s in JSON output now use `epee::span` to
prevent allocations.
- Binary -> hex in JSON output no longer allocates temporary buffer
- C++ structs -> JSON skips intermediate DOM creation, and instead
writes directly to an output stream.
|
|
|
|
|
|
Cleaning up a little around the code base.
|
|
|
|
|
|
|
|
|
|
- e.g., fixes gen_block_big_major_version test, error: generation failed: what=events not set, cannot compute valid RandomX PoW
- ask for events only if difficulty > 1 (when it really matters)
- throwing an exception was changed to logging, so it is easy to spot a problem if tests start to fail.
|
|
|
|
|
|
The tail emission will bring the total above 64 bits
|
|
It was intended to check a case which is actually valid (0 gamma),
but was actually duplicating the bad amount test.
Reported by WhatDo_ on IRC.
|
|
Coverity 205411
|
|
|
|
|
|
|
|
Avoids a DB error (leading to an assert) where a thread uses
a read txn previously created with an environment that was
since closed and reopened. While this usually works since
BlockchainLMDB renews txns if it detects the environment has
changed, this will not work if objects end up being allocated
at the same address as the previous instance, leading to stale
data usage.
Thanks hyc for the LMDB debugging.
|
|
|
|
- Removed copy of field names in binary deserialization
- Removed copy of array values in binary deserialization
- Removed copy of string values in json deserialization
- Removed unhelpful allocation in json string value parsing
- Removed copy of blob data on binary and json serialization
|
|
|
|
|
|
It causes link errors at least on mac
|
|
It causes link errors at least on mac
|
|
|
|
|
|
this prevents messing up any subsequent test too
|
|
|
|
Daemons intended for public use can be set up to require payment
in the form of hashes in exchange for RPC service. This enables
public daemons to receive payment for their work over a large
number of calls. This system behaves similarly to a pool, so
payment takes the form of valid blocks every so often, yielding
a large one-off payment, rather than constant micropayments.
This system can also be used by third parties as a "paywall"
layer, where users of a service can pay for use by mining Monero
to the service provider's address. An example of this for web
site access is Primo, a Monero mining based website "paywall":
https://github.com/selene-kovri/primo
This has some advantages:
- incentive to run a node providing RPC services, thereby promoting the availability of third party nodes for those who can't run their own
- incentive to run your own node instead of using a third party's, thereby promoting decentralization
- decentralized: payment is done between a client and server, with no third party needed
- private: since the system is "pay as you go", you don't need to identify yourself to claim a long lived balance
- no payment occurs on the blockchain, so there is no extra transactional load
- one may mine with a beefy server, and use those credits from a phone, by reusing the client ID (at the cost of some privacy)
- no barrier to entry: anyone may run an RPC node, and your expected revenue depends on how much work you do
- Sybil resistant: if you run 1000 idle RPC nodes, you don't magically get more revenue
- no large credit balance maintained on servers, so they have no incentive to exit scam
- you can use any/many node(s), since there's little cost in switching servers
- market based prices: competition between servers to lower costs
- incentive for a distributed third party node system: if some public nodes are overused/slow, traffic can move to others
- increases network security
- helps counteract mining pools' share of the network hash rate
- zero incentive for a payer to "double spend" since a reorg does not give any money back to the miner
And some disadvantages:
- low power clients will have difficulty mining (but one can optionally mine in advance and/or with a faster machine)
- payment is "random", so a server might go a long time without a block before getting one
- a public node's overall expected payment may be small
Public nodes are expected to compete to find a suitable level for
cost of service.
The daemon can be set up this way to require payment for RPC services:
monerod --rpc-payment-address 4xxxxxx \
--rpc-payment-credits 250 --rpc-payment-difficulty 1000
These values are an example only.
The --rpc-payment-difficulty switch selects how hard each "share" should
be, similar to a mining pool. The higher the difficulty, the fewer
shares a client will find.
The --rpc-payment-credits switch selects how many credits are awarded
for each share a client finds.
Considering both options, clients will be awarded credits/difficulty
credits for every hash they calculate. For example, in the command line
above, 0.25 credits per hash. A client mining at 100 H/s will therefore
get an average of 25 credits per second.
For reference, in the current implementation, a credit is enough to
sync 20 blocks, so a 100 H/s client that's just starting to use Monero
and uses this daemon will be able to sync 500 blocks per second.
The wallet can be set to automatically mine if connected to a daemon
which requires payment for RPC usage. It will try to keep a balance
of 50000 credits, stopping mining when it's at this level, and starting
again as credits are spent. With the example above, a new client will
mine this many credits in about half an hour, and this target is enough
to sync 500000 blocks (currently about a third of the monero blockchain).
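For illustration, the arithmetic behind those figures (a standalone sketch, not daemon code):
```
#include <iostream>

int main()
{
    const double payment_credits    = 250;     // --rpc-payment-credits
    const double payment_difficulty = 1000;    // --rpc-payment-difficulty
    const double hashrate           = 100;     // client hash rate, H/s
    const double credit_target      = 50000;   // default wallet target

    const double credits_per_hash  = payment_credits / payment_difficulty;  // 0.25
    const double credits_per_sec   = credits_per_hash * hashrate;           // 25
    const double seconds_to_target = credit_target / credits_per_sec;       // 2000 s

    std::cout << credits_per_hash << " credits/hash, "
              << credits_per_sec  << " credits/s, about "
              << seconds_to_target / 60 << " minutes to reach the target\n";
}
```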
There are three new settings in the wallet:
- credits-target: this is the amount of credits a wallet will try to
reach before stopping mining. The default of 0 means 50000 credits.
- auto-mine-for-rpc-payment-threshold: this controls the minimum
credit rate which the wallet considers worth mining for. If the
daemon credits less than this ratio, the wallet will consider mining
to be not worth it. In the example above, the rate is 0.25
- persistent-rpc-client-id: if set, this allows the wallet to reuse
a client id across runs. This means a public node can tell that a
wallet connecting now is the same one that connected previously, but
it allows the wallet to keep its credit balance from one run to the
next. Since the wallet only mines to keep a small credit balance,
this is not normally worth doing. However, someone may want to mine
on a fast server, and use that credit balance on a low power device
such as a phone. If left unset, a new client ID is generated at
each wallet start, for privacy reasons.
To mine and use a credit balance on two different devices, you can
use the --rpc-client-secret-key switch. A wallet's client secret key
can be found using the new rpc_payments command in the wallet.
Note: anyone knowing your RPC client secret key is able to use your
credit balance.
The wallet has a few new commands too:
- start_mining_for_rpc: start mining to acquire more credits,
regardless of the auto mining settings
- stop_mining_for_rpc: stop mining to acquire more credits
- rpc_payments: display information about current credits with
the currently selected daemon
The node has an extra command:
- rpc_payments: display information about clients and their
balances
The node will forget about any balance for clients which have
been inactive for 6 months. Balances carry over on node restart.
|
|
add a 128/64 division routine so we can use a > 32 bit median block
size in calculations
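A minimal sketch of such a routine, assuming a compiler with the non-standard unsigned __int128 type (the routine actually added is presumably written without that dependency):
```
#include <cstdint>

// Sketch only: divide a 128-bit numerator, given as (hi, lo), by a 64-bit divisor.
// Returns the remainder and writes the 128-bit quotient to (quotient_hi, quotient_lo).
static std::uint64_t div128_64(std::uint64_t hi, std::uint64_t lo, std::uint64_t divisor,
                               std::uint64_t* quotient_hi, std::uint64_t* quotient_lo)
{
    const unsigned __int128 n = ((unsigned __int128)hi << 64) | lo;
    const unsigned __int128 q = n / divisor;
    *quotient_hi = (std::uint64_t)(q >> 64);
    *quotient_lo = (std::uint64_t)q;
    return (std::uint64_t)(n % divisor);
}
```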
|