|
[release-v0.18]
- `/getblocks.bin` respects the `RESTRICTED_TX_COUNT` (=100) when
returning pool txs via a restricted RPC daemon.
- A restricted RPC daemon includes a max of `RESTRICTED_TX_COUNT` txs
in the `added_pool_txs` field, and returns any remaining pool hashes
in the `remaining_added_pool_txids` field. The client then requests
the remaining txs via `/gettransactions` in chunks.
- `/gettransactions` no longer does expensive no-ops for ALL pool txs
if the client requests a subset of pool txs. Instead it searches for
the txs the client explicitly requests.
- Reset `m_pool_info_query_time` when a user:
(1) rescans the chain (so the wallet re-requests the whole pool)
(2) changes the daemon their wallet points to (a new daemon would
have a different view of the pool)
- `/getblocks.bin` respects the `req.prune` field when returning
pool txs.
- Pool extension fields in response to `/getblocks.bin` are optional
with default 0'd values.
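To illustrate the client-side chunking described above, here is a minimal sketch (hypothetical helper and parameter names, not the actual wallet2 code) of how the hashes returned in `remaining_added_pool_txids` could be fetched via `/gettransactions` in restricted-size chunks:

#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical stand-in for a /gettransactions call taking a list of txids.
std::vector<std::string> fetch_pool_txs(const std::vector<std::string> &txids);

void fetch_remaining_pool_txs(const std::vector<std::string> &remaining_added_pool_txids,
                              std::vector<std::string> &added_pool_txs,
                              std::size_t chunk_size = 100 /* RESTRICTED_TX_COUNT */)
{
  for (std::size_t i = 0; i < remaining_added_pool_txids.size(); i += chunk_size)
  {
    const std::size_t n = std::min(chunk_size, remaining_added_pool_txids.size() - i);
    const std::vector<std::string> chunk(remaining_added_pool_txids.begin() + i,
                                         remaining_added_pool_txids.begin() + i + n);
    // Only the txs the daemon announced are requested, so the restricted
    // daemon does no work for txs the client does not need.
    const std::vector<std::string> txs = fetch_pool_txs(chunk);
    added_pool_txs.insert(added_pool_txs.end(), txs.begin(), txs.end());
  }
}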
|
|
[release-v0.18]
|
|
|
|
|
|
- Straight-forward call interface: `void rx_slow_hash(const char *seedhash, const void *data, size_t length, char *result_hash)`
- Consensus chain seed hash is now updated by calling `rx_set_main_seedhash` whenever a block is added/removed or a reorg happens
- `rx_slow_hash` will compute correct hash no matter if `rx_set_main_seedhash` was called or not (the only difference is performance)
- New environment variable `MONERO_RANDOMX_FULL_MEM` to force using the full dataset for PoW verification (faster block verification)
- When dataset is used for PoW verification, dataset updates don't stall other threads (verification is done in light mode then)
- When mining is running, PoW checks now also use dataset for faster verification
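A minimal usage sketch of the interface above (the rx_slow_hash prototype is as stated; the 32-byte buffer sizes and the blob/blob_len variables are assumptions for illustration):

char seedhash[32];   /* seed hash of the current RandomX epoch */
char result[32];     /* receives the 32-byte PoW hash */
/* Optionally announce the consensus chain's seed hash first so the
   dataset/cache can be built and the fast path used (exact prototype
   not shown here): rx_set_main_seedhash(...) */
rx_slow_hash(seedhash, blob, blob_len, result);
/* The hash is correct whether or not rx_set_main_seedhash was called;
   only the speed differs. */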
|
|
|
|
|
|
This reverts commit 50410d1f7d04bf60053f2263410c39e81d3ddad1, reversing
changes made to d054def63f9b8950fe20b2d8e841f5a9ae09418f.
|
|
While copying my data dir to another drive, I missed copying the rpc_ssl.key file b/c of the file permissions.
This change will give a much clearer, more descriptive error in that scenario.
|
|
reported by m31007
|
|
|
|
https://github.com/ArticMine/Monero-Documents/blob/master/MoneroScaling2021-02.pdf
with a change to use 1.7 instead of 2.0 for the max long term increase rate
|
|
- grab an lmdb db_rtxn_guard to ensure consistent data from the db
- fixed on_getblockhash error resp when requested height >= blockchain height
- left functions that read shared memory untouched for now
|
|
|
|
This PR removes the requirement for --rpc-login to be specified if --rpc-access-control-origins is.
This will allow public nodes to serve cross-origin requests. You can still use --rpc-login with
--rpc-access-control-origins, but it is no longer mandatory.
Original Issue: #8168
|
|
|
|
Calculate PoW hash for a block candidate
|
|
This will prevent people spending old pre-rct outputs using a
stranger's node, which may be a good thing
|
|
|
|
|
|
Adds the following:
- "get_miner_data" to RPC API
- "json-miner-data" to ZeroMQ subscriber contexts
Both provide the necessary data to create a custom block template. They are used by p2pool.
Data provided:
- major fork version
- current height
- previous block id
- RandomX seed hash
- network difficulty
- median block weight
- coins mined by the network so far
- mineable mempool transactions
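For example, the RPC variant can be queried on a local daemon's JSON-RPC endpoint (default mainnet port assumed); the response carries the fields listed above:

curl http://127.0.0.1:18081/json_rpc -d '{"jsonrpc":"2.0","id":"0","method":"get_miner_data"}' -H 'Content-Type: application/json'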
|
|
Co-authored-by: selsta <selsta@sent.at>
Co-authored-by: tobtoht <thotbot@protonmail.com>
|
|
|
|
if the wallet does it, it would get a wrong result (possibly even
negative) if its local chain is not synced up to the daemon's yet
|
|
|
|
Some RPCs return an error string in status, and the code must return
true on error (with a status string).
|
|
|
|
|
|
There are quite a few variables in the code that are no longer
(or perhaps never were) in use. These were discovered by enabling
compiler warnings for unused variables and cleaning them up.
In most cases where the unused variables were the result
of a function call, the call was left but the variable
assignment removed, unless it was obvious that it was
a simple getter with no side effects.
|
|
|
|
|
|
|
|
|
|
do not include blocked hosts in peer lists or public node lists by default,
warn about no https on clearnet and about untrusted peers likely being spies
|
|
true if and pretty much only if new blocks are being added
|
|
It would otherwise be possible for a peer to send bad blocks,
then disconnect and reconnect again, escaping bans
|
|
since it only makes sense when syncing, and it confuses people
|
|
|
|
|
|
|
|
|
|
|
|
Fixes #6369
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
They are allowed from v12, and MLSAGs are rejected from v13.
|
|
|
|
This reduces the attack surface for data that can come from
malicious sources (exported output and key images, multisig
transactions...) since the monero serialization is already
exposed to the outside, and the boost lib we were using had
a few known crashers.
For interoperability, a new load-deprecated-formats wallet
setting is added (off by default). This allows loading boost
format data if there is no alternative. It will likely go away
at some point, along with the ability to load those.
Notably, the peer lists file still uses the boost serialization
code, as the data it stores is defined in epee, while the new
serialization code is in monero, and migrating it was fairly
hairy. Since this file is local and not obtained from anyone
else, the marginal risk is minimal, but it could be migrated
later if needed.
Some tests and tools also do; this will stay as is for now.
|
|
It turns out that some remote (bootstrap) nodes silently drop /
don't broadcast clients' transactions.
|
|
|
|
|
|
|
|
Reporter requested credit to be given to Decred
|
|
It's more obvious there's no txid, and it saves space
|
|
Got broken after making one of those micro optimizations requested in review.
|
|
|
|
|
|
|
|
It's not something the user needs to know, and it would display
attacker-controlled data
|
|
|
|
|
|
Update copyright year to 2020
|
|
|
|
|
|
|
|
warning)
|
|
|
|
|
|
|
|
|
|
|
|
- New flag in NOTIFY_NEW_TRANSACTION to indicate stem mode
- Stem loops detected in tx_pool.cpp
- Embargo timeout for a blackhole attack during stem phase
|
|
This allows RPC coming from the loopback interface to not have
to pay for service. This makes it possible to run an externally
accessible RPC server for payment while also having a local RPC
server that can be run unrestricted and payment free.
|
|
|
|
- Finding handling function in ZMQ JSON-RPC now uses binary search
- Temporary `std::vector`s in JSON output now use `epee::span` to
prevent allocations.
- Binary -> hex in JSON output no longer allocates temporary buffer
- C++ structs -> JSON skips intermediate DOM creation, and instead
write directly to an output stream.
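As an illustration of the last two points, a minimal sketch of writing binary data as hex directly into an output stream, with no intermediate buffer (names here are illustrative, not the actual epee code):

#include <cstddef>
#include <cstdint>
#include <ostream>

void write_hex(std::ostream &out, const std::uint8_t *data, std::size_t len)
{
  static constexpr char hex[] = "0123456789abcdef";
  for (std::size_t i = 0; i < len; ++i)
  {
    out.put(hex[data[i] >> 4]);   // high nibble
    out.put(hex[data[i] & 0x0f]); // low nibble
  }
}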
|
|
|
|
|
|
|
|
|
|
It was removed to save duplicated generation time, but it can
be copied from another instance instead
|
|
Since we now get pruned data in the first place, the "unpruned" data
size will in fact be the pruned data size, leading to confusion
|
|
The tail emission will bring the total above 64 bits
|
|
|
|
Flushes m_invalid_blocks in Blockchain.
|
|
|
|
Coverity 205410
|
|
Coverity 205414
|
|
Coverity 205415
|
|
Coverity 205416
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
It causes link errors at least on mac
|
|
It causes link errors at least on mac
|
|
|
|
|
|
|
|
|
|
This allows flushing internal caches (for now, the bad tx cache,
which will allow debugging a stuck monerod after it has failed to
verify a transaction in a block, since it would otherwise not try
again, making subsequent log changes pointless)
|
|
Lists nodes exposing their RPC port for public use
|
|
Daemons intended for public use can be set up to require payment
in the form of hashes in exchange for RPC service. This enables
public daemons to receive payment for their work over a large
number of calls. This system behaves similarly to a pool, so
payment takes the form of valid blocks every so often, yielding
a large one off payment, rather than constant micropayments.
This system can also be used by third parties as a "paywall"
layer, where users of a service can pay for use by mining Monero
to the service provider's address. An example of this for web
site access is Primo, a Monero mining based website "paywall":
https://github.com/selene-kovri/primo
This has some advantages:
- incentive to run a node providing RPC services, thereby promoting the availability of third party nodes for those who can't run their own
- incentive to run your own node instead of using a third party's, thereby promoting decentralization
- decentralized: payment is done between a client and server, with no third party needed
- private: since the system is "pay as you go", you don't need to identify yourself to claim a long lived balance
- no payment occurs on the blockchain, so there is no extra transactional load
- one may mine with a beefy server, and use those credits from a phone, by reusing the client ID (at the cost of some privacy)
- no barrier to entry: anyone may run an RPC node, and your expected revenue depends on how much work you do
- Sybil resistant: if you run 1000 idle RPC nodes, you don't magically get more revenue
- no large credit balance maintained on servers, so they have no incentive to exit scam
- you can use any/many node(s), since there's little cost in switching servers
- market based prices: competition between servers to lower costs
- incentive for a distributed third party node system: if some public nodes are overused/slow, traffic can move to others
- increases network security
- helps counteract mining pools' share of the network hash rate
- zero incentive for a payer to "double spend" since a reorg does not give any money back to the miner
And some disadvantages:
- low power clients will have difficulty mining (but one can optionally mine in advance and/or with a faster machine)
- payment is "random", so a server might go a long time without a block before getting one
- a public node's overall expected payment may be small
Public nodes are expected to compete to find a suitable level for
cost of service.
The daemon can be set up this way to require payment for RPC services:
monerod --rpc-payment-address 4xxxxxx \
--rpc-payment-credits 250 --rpc-payment-difficulty 1000
These values are an example only.
The --rpc-payment-difficulty switch selects how hard each "share" should
be, similar to a mining pool. The higher the difficulty, the fewer
shares a client will find.
The --rpc-payment-credits switch selects how many credits are awarded
for each share a client finds.
Considering both options, clients will be awarded credits/difficulty
credits for every hash they calculate. For example, in the command line
above, 0.25 credits per hash. A client mining at 100 H/s will therefore
get an average of 25 credits per second.
For reference, in the current implementation, a credit is enough to
sync 20 blocks, so a 100 H/s client that's just starting to use Monero
and uses this daemon will be able to sync 500 blocks per second.
The wallet can be set to automatically mine if connected to a daemon
which requires payment for RPC usage. It will try to keep a balance
of 50000 credits, stopping mining when it's at this level, and starting
again as credits are spent. With the example above, a new client will
mine this many credits in about half an hour, and this target is enough
to sync 500000 blocks (currently about a third of the monero blockchain).
There are three new settings in the wallet:
- credits-target: this is the amount of credits a wallet will try to
reach before stopping mining. The default of 0 means 50000 credits.
- auto-mine-for-rpc-payment-threshold: this controls the minimum
credit rate which the wallet considers worth mining for. If the
daemon credits less than this ratio, the wallet will consider mining
to be not worth it. In the example above, the rate is 0.25
- persistent-rpc-client-id: if set, this allows the wallet to reuse
a client id across runs. This means a public node can tell that a
connecting wallet is the same as one that connected previously, but
it allows the wallet to keep its credit balance from one run to the
next. Since the wallet only mines to keep a small credit balance,
this is not normally worth doing. However, someone may want to mine
on a fast server, and use that credit balance on a low power device
such as a phone. If left unset, a new client ID is generated at
each wallet start, for privacy reasons.
To mine and use a credit balance on two different devices, you can
use the --rpc-client-secret-key switch. A wallet's client secret key
can be found using the new rpc_payments command in the wallet.
Note: anyone knowing your RPC client secret key is able to use your
credit balance.
The wallet has a few new commands too:
- start_mining_for_rpc: start mining to acquire more credits,
regardless of the auto mining settings
- stop_mining_for_rpc: stop mining to acquire more credits
- rpc_payments: display information about current credits with
the currently selected daemon
The node has an extra command:
- rpc_payments: display information about clients and their
balances
The node will forget about any balance for clients which have
been inactive for 6 months. Balances carry over on node restart.
|
|
|
|
|
|
|
|
|
|
If the peer (whether pruned or not itself) supports sending pruned blocks
to syncing nodes, the pruned version will be sent along with the hash
of the pruned data and the block weight. The original tx hashes can be
reconstructed from the pruned txes and their prunable data hash. Those
hashes and the block weights are hashed and checked against the set of
precompiled hashes, ensuring the data we received is the original data.
It is currently not possible to use this system when not using the set
of precompiled hashes, since block weights can not otherwise be checked
for validity.
This is off by default for now, and is enabled by --sync-pruned-blocks
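The reconstruction works because a v2 txid is a hash over three component hashes, only the last of which covers the prunable data; with the pruned tx and the transmitted prunable data hash, the prunable data itself is not needed. A rough sketch (helper names are illustrative, not the exact Monero functions):

// txid = H( H(prefix) || H(rct signature base) || H(rct prunable) )
crypto::hash parts[3];
parts[0] = get_transaction_prefix_hash(pruned_tx);   // computable from pruned data
parts[1] = get_rct_base_hash(pruned_tx);             // computable from pruned data (illustrative name)
parts[2] = prunable_hash;                            // received along with the pruned block
crypto::hash txid;
crypto::cn_fast_hash(parts, sizeof(parts), txid);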
|
|
Support RandomX PoW algorithm
|
|
PoW is expensive to verify, so be strict
|
|
Also fix part of the RPC results being returned as binary.
This makes the RPC backward incompatible.
|
|
|
|
The issue is triggered by the captured `this` in RPC server, which
passes reference to throwable `core_rpc_server`:
`core_rpc_server.cpp:164: m_bootstrap_daemon.reset(new bootstrap_daemon([this]{ return get_random_public_node(); }));`
The solution is to simply remove noexcept from the remaining `bootstrap_daemon`
constructors because noexcept is false in this context.
>"An exception of type "boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::asio::invalid_service_owner>>" is thrown but the throw list "noexcept" doesn't allow it to be thrown. This will cause a call to unexpected() which usually calls terminate()."
|
|
|
|
Best case is an address mined previously and it'll get returned,
worst case it was never initialized in the first place
|
|
|
|
|
|
It does not leak much since you can make a fair guess by RPC
version already, and some people want to avoid non release
clients when using third parties' nodes (because they'd never
lie about it)
|
|
|
|
|
|
new cli options (RPC ones also apply to wallet):
--p2p-bind-ipv6-address (default = "::")
--p2p-bind-port-ipv6 (default same as ipv4 port for given nettype)
--rpc-bind-ipv6-address (default = "::1")
--p2p-use-ipv6 (default false)
--rpc-use-ipv6 (default false)
--p2p-require-ipv4 (default true, if ipv4 bind fails and this is
true, will not continue even if ipv6 bind
successful)
--rpc-require-ipv4 (default true, description as above)
ipv6 addresses are to be specified as "[xx:xx:xx::xx:xx]:port" except
in the cases of the cli args for bind address. For those the square
braces can be omitted.
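An illustrative invocation using the defaults listed above plus an extra IPv6 peer (addresses are examples only):

monerod --p2p-use-ipv6 --rpc-use-ipv6 \
        --p2p-bind-ipv6-address "::" --rpc-bind-ipv6-address "::1" \
        --add-peer "[2001:db8::1]:18080"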
|
|
|
|
|
|
|
|
rather than their string representation
|
|
Circumvents the need to create a new blockhashing blob when you already
know the data you want to set in the extra_nonce (so use this instead of
reserve_size).
|
|
|
|
|
|
|
|
Make bans control RPC sessions too. And auto-ban some bad requests.
Drops HTTP connections whenever response code is 500.
|
|
|
|
In static member function ‘static boost::optional<cryptonote::rpc::output_distribution_data> cryptonote::rpc::RpcHandler::get_output_distribution(const std::function<bool(long unsigned int, long unsigned int, long unsigned int, long unsigned int&, std::vector<long unsigned int>&, long unsigned int&)>&, uint64_t, uint64_t, uint64_t, const std::function<crypto::hash(long unsigned int)>&, bool, uint64_t)’:
cc1plus: warning: ‘void* __builtin_memset(void*, int, long unsigned int)’: specified size 18446744073709551536 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
|
|
It would try to get their prunable hash, but v1 txes don't have one
|
|
|
|
issue: #5568
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
SHA1 is too close to bruteforceable
|
|
It can now handle small reorgs without having to rescan the
whole blockchain.
Also add a test for it.
|
|
|
|
|
|
We want to get all blocks here, even pruned ones
|
|
|
|
|
|
|
|
Coverity 197653
|
|
This will weed out some transactions with silly rings
|
|
|
|
|
|
Related to emission, reorgs, getting tx data back, output
distribution and histogram
|
|
|
|
Currently if a user specifies a ca file or fingerprint to verify peer,
the default behavior is SSL autodetect which allows for mitm downgrade
attacks. It should be investigated whether a manual override should be
allowed - the configuration is likely always invalid.
|
|
Specifying SSL certificates for peer verification does an exact match,
making it a not-so-obvious alias for the fingerprints option. This
changes the checks to use OpenSSL, which loads concatenated certificate(s)
from a single file and does a certificate-authority (chain of trust)
check instead. There is no drop in security - a compromised exact-match
fingerprint has the same worst case failure. There is increased security
in allowing a separate long-term CA key and short-term SSL server keys.
This also removes loading of the system-default CA files if a custom
CA file or certificate fingerprint is specified.
|
|
|
|
This should be friendlier for clients which don't have bignum support
|
|
The setup-background-mining option can be used to select
background mining when a wallet loads. The user will be asked
the first time the wallet is created.
|
|
|
|
|
|
Based on Boolberry work by:
jahrsg <jahr@jahr.me>
cr.zoidberg <crypto.zoidberg@gmail.com>
|
|
|
|
|
|
|
|
It has nothing to do with it
|
|
|
|
It's slow work, so let's not expose it
|
|
|
|
RPC connections now have optional transparent SSL.
An optional private key and certificate file can be passed,
using the --{rpc,daemon}-ssl-private-key and
--{rpc,daemon}-ssl-certificate options. Those have as
argument a path to a PEM format private key and
certificate, respectively.
If not given, a temporary self signed certificate will be used.
SSL can be enabled or disabled using --{rpc}-ssl, which
accepts autodetect (default), disabled or enabled.
Access can be restricted to particular certificates using the
--rpc-ssl-allowed-certificates, which takes a list of
paths to PEM encoded certificates. This can allow a wallet to
connect to only the daemon they think they're connected to,
by forcing SSL and listing the paths to the known good
certificates.
To generate long term certificates:
openssl genrsa -out /tmp/KEY 4096
openssl req -new -key /tmp/KEY -out /tmp/REQ
openssl x509 -req -days 999999 -sha256 -in /tmp/REQ -signkey /tmp/KEY -out /tmp/CERT
/tmp/KEY is the private key, and /tmp/CERT is the certificate,
both in PEM format. /tmp/REQ can be removed. Adjust the last
command to set expiration date, etc, as needed. It doesn't
make a whole lot of sense for monero, since most servers
will run with one-time temporary self-signed certificates anyway.
SSL support is transparent, so all communication is done on the
existing ports, with SSL autodetection. This means you can start
using an SSL daemon now, but you should not enforce SSL yet or
nothing will talk to you.
|
|
|
|
This curbs runaway growth while still allowing substantial
spikes in block weight
Original specification from ArticMine:
here is the scaling proposal
Define: LongTermBlockWeight
Before fork:
LongTermBlockWeight = BlockWeight
At or after fork:
LongTermBlockWeight = min(BlockWeight, 1.4*LongTermEffectiveMedianBlockWeight)
Note: To avoid possible consensus issues over rounding, the LongTermBlockWeight for a given block should be calculated to the nearest byte, and stored as an integer in the block itself. The stored LongTermBlockWeight is then used for future calculations of the LongTermEffectiveMedianBlockWeight and not recalculated each time.
Define: LongTermEffectiveMedianBlockWeight
LongTermEffectiveMedianBlockWeight = max(300000, MedianOverPrevious100000Blocks(LongTermBlockWeight))
Change Definition of EffectiveMedianBlockWeight
From (current definition)
EffectiveMedianBlockWeight = max(300000, MedianOverPrevious100Blocks(BlockWeight))
To (proposed definition)
EffectiveMedianBlockWeight = min(max(300000, MedianOverPrevious100Blocks(BlockWeight)), 50*LongTermEffectiveMedianBlockWeight)
Notes:
1) There are no other changes to the existing penalty formula, median calculation, fees etc.
2) There is the requirement to store the LongTermBlockWeight of a block unencrypted in the block itself. This is to avoid possible consensus issues over rounding and also to prevent the calculations from becoming unwieldy as we move away from the fork.
3) When the EffectiveMedianBlockWeight cap is reached it is still possible to mine blocks up to 2x the EffectiveMedianBlockWeight by paying the corresponding penalty.
Note: the long term block weight is stored in the database, but not in the actual block itself,
since it requires recalculating anyway for verification.
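A compact restatement of the weight rules above (illustrative code, not the consensus implementation; medians are taken as inputs and rounding is simplified):

#include <algorithm>
#include <cstdint>

constexpr uint64_t MIN_MEDIAN_WEIGHT = 300000; // floor used by both medians above

// LongTermBlockWeight = min(BlockWeight, 1.4 * LongTermEffectiveMedianBlockWeight)
uint64_t long_term_block_weight(uint64_t block_weight, uint64_t lt_effective_median)
{
  return std::min<uint64_t>(block_weight, lt_effective_median * 14 / 10);
}

// LongTermEffectiveMedianBlockWeight = max(300000, MedianOverPrevious100000Blocks(LongTermBlockWeight))
uint64_t long_term_effective_median(uint64_t median_100000_long_term_weights)
{
  return std::max<uint64_t>(MIN_MEDIAN_WEIGHT, median_100000_long_term_weights);
}

// EffectiveMedianBlockWeight = min(max(300000, MedianOverPrevious100Blocks(BlockWeight)),
//                                  50 * LongTermEffectiveMedianBlockWeight)
uint64_t effective_median_block_weight(uint64_t median_100_block_weights, uint64_t lt_effective_median)
{
  return std::min<uint64_t>(std::max<uint64_t>(MIN_MEDIAN_WEIGHT, median_100_block_weights),
                            50 * lt_effective_median);
}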
|
|
|
|
|
|
|
|
RPC connections now have optional transparent SSL.
An optional private key and certificate file can be passed,
using the --{rpc,daemon}-ssl-private-key and
--{rpc,daemon}-ssl-certificate options. Those have as
argument a path to a PEM format private key and
certificate, respectively.
If not given, a temporary self signed certificate will be used.
SSL can be enabled or disabled using --{rpc}-ssl, which
accepts autodetect (default), disabled or enabled.
Access can be restricted to particular certificates using the
--rpc-ssl-allowed-certificates, which takes a list of
paths to PEM encoded certificates. This can allow a wallet to
connect to only the daemon they think they're connected to,
by forcing SSL and listing the paths to the known good
certificates.
To generate long term certificates:
openssl genrsa -out /tmp/KEY 4096
openssl req -new -key /tmp/KEY -out /tmp/REQ
openssl x509 -req -days 999999 -sha256 -in /tmp/REQ -signkey /tmp/KEY -out /tmp/CERT
/tmp/KEY is the private key, and /tmp/CERT is the certificate,
both in PEM format. /tmp/REQ can be removed. Adjust the last
command to set expiration date, etc, as needed. It doesn't
make a whole lot of sense for monero, since most servers
will run with one-time temporary self-signed certificates anyway.
SSL support is transparent, so all communication is done on the
existing ports, with SSL autodetection. This means you can start
using an SSL daemon now, but you should not enforce SSL yet or
nothing will talk to you.
|
|
- Support for ".onion" in --add-exclusive-node and --add-peer
- Add --anonymizing-proxy for outbound Tor connections
- Add --anonymous-inbounds for inbound Tor connections
- Support for sharing ".onion" addresses over Tor connections
- Support for broadcasting transactions received over RPC exclusively
over Tor (else broadcast over public IP when Tor not enabled).
|
|
|
|
The blockchain prunes seven eighths of prunable tx data.
This saves about two thirds of the blockchain size, while
keeping the node useful as a sync source for an eighth
of the blockchain.
No other data is currently pruned.
There are three ways to prune a blockchain:
- run monerod with --prune-blockchain
- run "prune_blockchain" in the monerod console
- run the monero-blockchain-prune utility
The first two will prune in place. Due to how LMDB works, this
will not reduce the blockchain size on disk. Instead, it will
mark parts of the file as free, so that future data will use
that free space, causing the file to not grow until free space
grows scarce.
The third way will create a second database, a pruned copy of
the original one. Since this is a new file, this one will be
smaller than the original one.
Once the database is pruned, it will stay pruned as it syncs.
That is, there is no need to use --prune-blockchain again, etc.
|
|
|
|
We know all the data we'll want for getblocks.bin is contiguous
|
|
|
|
|
|
add a new public method to Blockchain and update it according to code review
update after review: better lock/unlock, try/catch and coding style
|
|
|
|
|
|
Found by codacy.com
|
|
Makes more sense than uint64_t for an offset, and agrees with
the %zu used to print results.
Found by codacy.com
|
|
|