Age | Commit message | Author | Files | Lines |
|
If zmq was built with libnorm, libproto or libpgm, pkg-config can inform
us directly instead of having to find them. The location of zmq.h can be
obtained the same way.
The motivation is to remove openpgm from the list of mandatory dependencies,
since monero does not depend on openpgm directly, but only when zeromq is
built with pgm support, which itself has a hard dependency on python2,
which is now EOL.
|
|
|
|
It causes link errors, at least on Mac
|
|
|
|
|
|
This allows flushing internal caches (for now, the bad tx cache,
which will allow debugging a stuck monerod after it has failed to
verify a transaction in a block, since it would otherwise not try
again, making subsequent log changes pointless)
|
|
Lists nodes exposing their RPC port for public use
|
|
Daemons intended for public use can be set up to require payment
in the form of hashes in exchange for RPC service. This enables
public daemons to receive payment for their work over a large
number of calls. This system behaves similarly to a pool, so
payment takes the form of valid blocks every so often, yielding
a large one off payment, rather than constant micropayments.
This system can also be used by third parties as a "paywall"
layer, where users of a service can pay for use by mining Monero
to the service provider's address. An example of this for web
site access is Primo, a Monero mining based website "paywall":
https://github.com/selene-kovri/primo
This has some advantages:
- incentive to run a node providing RPC services, thereby promoting the availability of third party nodes for those who can't run their own
- incentive to run your own node instead of using a third party's, thereby promoting decentralization
- decentralized: payment is done between a client and server, with no third party needed
- private: since the system is "pay as you go", you don't need to identify yourself to claim a long lived balance
- no payment occurs on the blockchain, so there is no extra transactional load
- one may mine with a beefy server, and use those credits from a phone, by reusing the client ID (at the cost of some privacy)
- no barrier to entry: anyone may run a RPC node, and your expected revenue depends on how much work you do
- Sybil resistant: if you run 1000 idle RPC nodes, you don't magically get more revenue
- no large credit balance maintained on servers, so they have no incentive to exit scam
- you can use any/many node(s), since there's little cost in switching servers
- market based prices: competition between servers to lower costs
- incentive for a distributed third party node system: if some public nodes are overused/slow, traffic can move to others
- increases network security
- helps counteract mining pools' share of the network hash rate
- zero incentive for a payer to "double spend" since a reorg does not give any money back to the miner
And some disadvantages:
- low power clients will have difficulty mining (but one can optionally mine in advance and/or with a faster machine)
- payment is "random", so a server might go a long time without a block before getting one
- a public node's overall expected payment may be small
Public nodes are expected to compete to find a suitable level for
cost of service.
The daemon can be set up this way to require payment for RPC services:
monerod --rpc-payment-address 4xxxxxx \
--rpc-payment-credits 250 --rpc-payment-difficulty 1000
These values are an example only.
The --rpc-payment-difficulty switch selects how hard each "share" should
be, similar to a mining pool. The higher the difficulty, the fewer
shares a client will find.
The --rpc-payment-credits switch selects how many credits are awarded
for each share a client finds.
Considering both options, clients will be awarded credits/difficulty
credits for every hash they calculate. For example, in the command line
above, 0.25 credits per hash. A client mining at 100 H/s will therefore
get an average of 25 credits per second.
For reference, in the current implementation, a credit is enough to
sync 20 blocks, so a 100 H/s client that's just starting to use Monero
and uses this daemon will be able to sync 500 blocks per second.
The wallet can be set to automatically mine if connected to a daemon
which requires payment for RPC usage. It will try to keep a balance
of 50000 credits, stopping mining when it's at this level, and starting
again as credits are spent. With the example above, a new client will
mine this many credits in about half an hour, and this target is enough
to sync 500000 blocks (currently about a third of the monero blockchain).
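The arithmetic above can be sanity checked with a quick sketch (illustration
only, not part of the implementation; the helper names are made up here):
# Average credit rate and time to reach the wallet's credit target,
# using the example numbers above (250 credits, difficulty 1000, 100 H/s).
def credits_per_second(hashrate, credits=250, difficulty=1000):
    return hashrate * credits / difficulty          # 0.25 credits per hash

def seconds_to_target(hashrate, target=50000, credits=250, difficulty=1000):
    return target / credits_per_second(hashrate, credits, difficulty)

print(credits_per_second(100))      # 25.0 credits per second
print(seconds_to_target(100) / 60)  # ~33 minutes, i.e. about half an hour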
There are three new settings in the wallet:
- credits-target: this is the amount of credits a wallet will try to
reach before stopping mining. The default of 0 means 50000 credits.
- auto-mine-for-rpc-payment-threshold: this controls the minimum
credit rate which the wallet considers worth mining for. If the
daemon credits less than this ratio, the wallet will not consider
mining to be worth it. In the example above, the rate is 0.25.
- persistent-rpc-client-id: if set, this allows the wallet to reuse
a client id across runs. This means a public node can tell that a wallet
connecting now is the same one that connected previously, but it allows
the wallet to keep its credit balance from one run to the next.
Since the wallet only mines to keep a small credit balance,
this is not normally worth doing. However, someone may want to mine
on a fast server, and use that credit balance on a low power device
such as a phone. If left unset, a new client ID is generated at
each wallet start, for privacy reasons.
To mine and use a credit balance on two different devices, you can
use the --rpc-client-secret-key switch. A wallet's client secret key
can be found using the new rpc_payments command in the wallet.
Note: anyone knowing your RPC client secret key is able to use your
credit balance.
The wallet has a few new commands too:
- start_mining_for_rpc: start mining to acquire more credits,
regardless of the auto mining settings
- stop_mining_for_rpc: stop mining to acquire more credits
- rpc_payments: display information about current credits with
the currently selected daemon
The node has an extra command:
- rpc_payments: display information about clients and their
balances
The node will forget about any balance for clients which have
been inactive for 6 months. Balances carry over on node restart.
|
|
|
|
|
|
|
|
|
|
If the peer (whether pruned or not itself) supports sending pruned blocks
to syncing nodes, the pruned version will be sent along with the hash
of the pruned data and the block weight. The original tx hashes can be
reconstructed from the pruned txes and their prunable data hash. Those
hashes and the block weights are hashed and checked against the set of
precompiled hashes, ensuring the data we received is the original data.
It is currently not possible to use this system when not using the set
of precompiled hashes, since block weights can not otherwise be checked
for validity.
This is off by default for now, and is enabled by --sync-pruned-blocks
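For illustration, a minimal sketch of the reconstruction step, assuming the
v2 transaction hash is the legacy Keccak-256 of the three concatenated
32-byte component hashes (prefix, RingCT base, prunable data); pycryptodome
is used here as a stand-in for the hash function:
# Sketch only: recompute a tx hash from its component hashes, assuming
# tx_hash = keccak256(prefix_hash || rct_base_hash || prunable_hash).
from Crypto.Hash import keccak   # pycryptodome; legacy Keccak-256, not SHA3

def reconstruct_tx_hash(prefix_hash, rct_base_hash, prunable_hash):
    assert all(len(h) == 32 for h in (prefix_hash, rct_base_hash, prunable_hash))
    k = keccak.new(digest_bits=256)
    k.update(prefix_hash + rct_base_hash + prunable_hash)
    return k.digest()

# The syncing node hashes the pruned tx it received, combines that with the
# prunable data hash sent by the peer, and checks the result (and the block
# weight) against the precompiled hash set.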
|
|
Support RandomX PoW algorithm
|
|
PoW is expensive to verify, so be strict
|
|
Also fix part of the RPC results being returned as binary.
This makes the RPC backward incompatible.
|
|
|
|
The issue is triggered by the captured `this` in the RPC server, which
passes a reference to `core_rpc_server`, whose `get_random_public_node()` can throw:
`core_rpc_server.cpp:164: m_bootstrap_daemon.reset(new bootstrap_daemon([this]{ return get_random_public_node(); }));`
The solution is to simply remove noexcept from the remaining `bootstrap_daemon`
constructors because noexcept is false in this context.
>"An exception of type "boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::asio::invalid_service_owner>>" is thrown but the throw list "noexcept" doesn't allow it to be thrown. This will cause a call to unexpected() which usually calls terminate()."
|
|
|
|
Best case, it's an address mined previously and it'll get returned;
worst case, it was never initialized in the first place
|
|
|
|
|
|
It does not leak much, since you can make a fair guess from the RPC
version already, and some people want to avoid non-release
clients when using third parties' nodes (because they'd never
lie about it)
|
|
|
|
|
|
new cli options (RPC ones also apply to wallet):
--p2p-bind-ipv6-address (default = "::")
--p2p-bind-port-ipv6 (default same as ipv4 port for given nettype)
--rpc-bind-ipv6-address (default = "::1")
--p2p-use-ipv6 (default false)
--rpc-use-ipv6 (default false)
--p2p-require-ipv4 (default true; if the ipv4 bind fails and this is true, the daemon will not continue even if the ipv6 bind succeeds)
--rpc-require-ipv4 (default true, description as above)
ipv6 addresses are to be specified as "[xx:xx:xx::xx:xx]:port" except
in the case of the cli args for bind addresses. For those, the square
brackets can be omitted.
|
|
|
|
|
|
|
|
rather than their string representation
|
|
Circumvents the need to create a new blockhashing blob when you already
know the data you want to set in the extra_nonce (so use this instead of
reserve_size).
|
|
|
|
|
|
|
|
Make bans control RPC sessions too. And auto-ban some bad requests.
Drops HTTP connections whenever response code is 500.
|
|
|
|
In static member function ‘static boost::optional<cryptonote::rpc::output_distribution_data> cryptonote::rpc::RpcHandler::get_output_distribution(const std::function<bool(long unsigned int, long unsigned int, long unsigned int, long unsigned int&, std::vector<long unsigned int>&, long unsigned int&)>&, uint64_t, uint64_t, uint64_t, const std::function<crypto::hash(long unsigned int)>&, bool, uint64_t)’:
cc1plus: warning: ‘void* __builtin_memset(void*, int, long unsigned int)’: specified size 18446744073709551536 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
|
|
It would try to get their prunable hash, but v1 txes don't have one
|
|
|
|
issue: #5568
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
SHA1 is too close to bruteforceable
|
|
It can now handle small reorgs without having to rescan the
whole blockchain.
Also add a test for it.
|
|
|
|
|
|
We want to get all blocks here, even pruned ones
|
|
|
|
|
|
|
|
Coverity 197653
|
|
This will weed out some transactions with silly rings
|
|
|
|
|
|
Related to emission, reorgs, getting tx data back, output
distribution and histogram
|
|
|
|
Currently, if a user specifies a CA file or fingerprint to verify the peer,
the default behavior is SSL autodetect, which allows for MITM downgrade
attacks. It should be investigated whether a manual override should be
allowed - the configuration is likely always invalid.
|
|
Specifying SSL certificates for peer verification does an exact match,
making it a not-so-obvious alias for the fingerprints option. This
changes the checks to OpenSSL which loads concatenated certificate(s)
from a single file and does a certificate-authority (chain of trust)
check instead. There is no drop in security - a compromised exact match
fingerprint has the same worst case failure. There is increased security
in allowing separate long-term CA key and short-term SSL server keys.
This also removes loading of the system-default CA files if a custom
CA file or certificate fingerprint is specified.
|
|
|
|
This should be friendlier for clients which don't have bignum support
|
|
The setup-background-mining option can be used to select
background mining when a wallet loads. The user will be asked
the first time the wallet is created.
|
|
|
|
|
|
Based on Boolberry work by:
jahrsg <jahr@jahr.me>
cr.zoidberg <crypto.zoidberg@gmail.com>
|
|
|
|
|
|
|
|
It's got nothing to do with it
|
|
|
|
It's slow work, so let's not expose it
|
|
|
|
RPC connections now have optional transparent SSL.
An optional private key and certificate file can be passed,
using the --{rpc,daemon}-ssl-private-key and
--{rpc,daemon}-ssl-certificate options. Those have as
argument a path to a PEM format private key and
certificate, respectively.
If not given, a temporary self signed certificate will be used.
SSL can be enabled or disabled using --{rpc,daemon}-ssl, which
accepts autodetect (default), disabled or enabled.
Access can be restricted to particular certificates using the
--rpc-ssl-allowed-certificates, which takes a list of
paths to PEM encoded certificates. This can allow a wallet to
connect to only the daemon they think they're connected to,
by forcing SSL and listing the paths to the known good
certificates.
To generate long term certificates:
openssl genrsa -out /tmp/KEY 4096
openssl req -new -key /tmp/KEY -out /tmp/REQ
openssl x509 -req -days 999999 -sha256 -in /tmp/REQ -signkey /tmp/KEY -out /tmp/CERT
/tmp/KEY is the private key, and /tmp/CERT is the certificate,
both in PEM format. /tmp/REQ can be removed. Adjust the last
command to set expiration date, etc, as needed. It doesn't
make a whole lot of sense for monero anyway, since most servers
will run with one-time temporary self-signed certificates.
SSL support is transparent, so all communication is done on the
existing ports, with SSL autodetection. This means you can start
using an SSL daemon now, but you should not enforce SSL yet or
nothing will talk to you.
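On the client side, the "allowed certificates" idea boils down to trusting
only the known certificate. A hypothetical sketch (requests and the endpoint
below are illustrative, not wallet code):
# Hypothetical client-side pinning sketch: trust only the daemon's own
# (self signed) certificate when talking to its RPC port over SSL.
import requests  # illustration only, not part of monero

resp = requests.get("https://127.0.0.1:18081/get_info",  # assumed RPC endpoint
                    verify="/tmp/CERT")                   # the PEM cert from above
print(resp.status_code)  # fails with an SSL error if the daemon's cert differs
# Note: the certificate must name the host (CN/SAN) for hostname checks to pass.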
|
|
|
|
This curbs runaway growth while still allowing substantial
spikes in block weight
Original specification from ArticMine:
here is the scaling proposal
Define: LongTermBlockWeight
Before fork:
LongTermBlockWeight = BlockWeight
At or after fork:
LongTermBlockWeight = min(BlockWeight, 1.4*LongTermEffectiveMedianBlockWeight)
Note: To avoid possible consensus issues over rounding, the LongTermBlockWeight for a given block should be calculated to the nearest byte, and stored as an integer in the block itself. The stored LongTermBlockWeight is then used for future calculations of the LongTermEffectiveMedianBlockWeight and not recalculated each time.
Define: LongTermEffectiveMedianBlockWeight
LongTermEffectiveMedianBlockWeight = max(300000, MedianOverPrevious100000Blocks(LongTermBlockWeight))
Change Definition of EffectiveMedianBlockWeight
From (current definition)
EffectiveMedianBlockWeight = max(300000, MedianOverPrevious100Blocks(BlockWeight))
To (proposed definition)
EffectiveMedianBlockWeight = min(max(300000, MedianOverPrevious100Blocks(BlockWeight)), 50*LongTermEffectiveMedianBlockWeight)
Notes:
1) There are no other changes to the existing penalty formula, median calculation, fees etc.
2) There is the requirement to store the LongTermBlockWeight of a block unencrypted in the block itself. This is to avoid possible consensus issues over rounding and also to prevent the calculations from becoming unwieldy as we move away from the fork.
3) When the EffectiveMedianBlockWeight cap is reached it is still possible to mine blocks up to 2x the EffectiveMedianBlockWeight by paying the corresponding penalty.
Note: the long term block weight is stored in the database, but not in the actual block itself,
since it requires recalculating anyway for verification.
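A minimal sketch of the rules above (illustrative Python, not the actual
consensus code):
# Illustrative sketch of the scaling rules above; not the real implementation.
from statistics import median

MIN_MEDIAN = 300000

def long_term_block_weight(block_weight, lt_effective_median):
    # At or after the fork; the spec rounds to the nearest byte.
    return min(block_weight, round(1.4 * lt_effective_median))

def long_term_effective_median(prev_100000_long_term_weights):
    return max(MIN_MEDIAN, median(prev_100000_long_term_weights))

def effective_median_block_weight(prev_100_block_weights, lt_effective_median):
    short_term = max(MIN_MEDIAN, median(prev_100_block_weights))
    return min(short_term, 50 * lt_effective_median)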
|
|
|
|
|
|
|
|
RPC connections now have optional transparent SSL.
An optional private key and certificate file can be passed,
using the --{rpc,daemon}-ssl-private-key and
--{rpc,daemon}-ssl-certificate options. Those have as
argument a path to a PEM format private key and
certificate, respectively.
If not given, a temporary self signed certificate will be used.
SSL can be enabled or disabled using --{rpc,daemon}-ssl, which
accepts autodetect (default), disabled or enabled.
Access can be restricted to particular certificates using the
--rpc-ssl-allowed-certificates, which takes a list of
paths to PEM encoded certificates. This can allow a wallet to
connect to only the daemon they think they're connected to,
by forcing SSL and listing the paths to the known good
certificates.
To generate long term certificates:
openssl genrsa -out /tmp/KEY 4096
openssl req -new -key /tmp/KEY -out /tmp/REQ
openssl x509 -req -days 999999 -sha256 -in /tmp/REQ -signkey /tmp/KEY -out /tmp/CERT
/tmp/KEY is the private key, and /tmp/CERT is the certificate,
both in PEM format. /tmp/REQ can be removed. Adjust the last
command to set expiration date, etc, as needed. It doesn't
make a whole lot of sense for monero anyway, since most servers
will run with one-time temporary self-signed certificates.
SSL support is transparent, so all communication is done on the
existing ports, with SSL autodetection. This means you can start
using an SSL daemon now, but you should not enforce SSL yet or
nothing will talk to you.
|
|
- Support for ".onion" in --add-exclusive-node and --add-peer
- Add --anonymizing-proxy for outbound Tor connections
- Add --anonymous-inbounds for inbound Tor connections
- Support for sharing ".onion" addresses over Tor connections
- Support for broadcasting transactions received over RPC exclusively
over Tor (else broadcast over public IP when Tor not enabled).
|
|
|
|
The blockchain prunes seven eighths of prunable tx data.
This saves about two thirds of the blockchain size, while
keeping the node useful as a sync source for an eighth
of the blockchain.
No other data is currently pruned.
There are three ways to prune a blockchain:
- run monerod with --prune-blockchain
- run "prune_blockchain" in the monerod console
- run the monero-blockchain-prune utility
The first two will prune in place. Due to how LMDB works, this
will not reduce the blockchain size on disk. Instead, it will
mark parts of the file as free, so that future data will use
that free space, causing the file to not grow until free space
grows scarce.
The third way will create a second database, a pruned copy of
the original one. Since this is a new file, this one will be
smaller than the original one.
Once the database is pruned, it will stay pruned as it syncs.
That is, there is no need to use --prune-blockchain again, etc.
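For intuition, the "one eighth" selection can be sketched as follows (the
4096-block stripe size is an assumption here, not stated above, and the
actual code also keeps recent blocks unpruned):
# Rough sketch of 1-in-8 pruning stripes; the stripe size is an assumption.
STRIPE_SIZE = 4096   # assumed run length of consecutive blocks per stripe
NUM_STRIPES = 8      # a pruned node keeps prunable data for one stripe in eight

def keeps_prunable_data(height, my_stripe):
    # my_stripe is the stripe (0..7) this node was assigned when pruning
    return (height // STRIPE_SIZE) % NUM_STRIPES == my_stripe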
|
|
|
|
We know all the data we'll want for getblocks.bin is contiguous
|
|
|
|
|
|
add new public method to Blockchain and update according to code review
update after review: better lock/unlock, try catch and coding style
|
|
|
|
|
|
Found by codacy.com
|
|
Makes more sense than uint64_t for an offset, and agrees with
the %zu used to print results.
Found by codacy.com
|
|
|
|
and decrease the amount of data carried around
|
|
block already has a default ctor, and the extra object
churn due to its innards (vectors, etc) is pointless.
|
|
|
|
This can go out of sync with m_core's nettype if you run in fakechain
mode, since entering fakechain mode is done through code, not the command
line and core_rpc_server only looks at the command line to figure out
the nettype.
|
|
|
|
Undefined symbols for architecture x86_64:
"cryptonote::core::get_output_distribution(unsigned long long, unsigned long long, unsigned long long, unsigned long long&, std::__1::vector<unsigned long long, std::__1::allocator<unsigned long long> >&, unsigned long long&) const", referenced from:
cryptonote::rpc::RpcHandler::get_output_distribution(cryptonote::core&, unsigned long long, unsigned long long, unsigned long long, bool) in rpc_handler.cpp.o
|
|
|
|
|
|
|
|
|
|
|
|
Fix for #4399.
Also unifies code for serializing pruned tx to binary/json into one.
|
|
0 is a placeholder for the whole chain, so we should compare chain
height changes rather than chain-height-or-zero. Even this isn't
totally foolproof if blocks are popped and the same number
added again, but it is much better as it prevents the data from
slowly going out of sync.
|
|
|
|
Coverity 182501
|
|
Also prevents coverity from moaning about them not initializing fields
|
|
|
|
|
|
|
|
|
|
Make it easier for a user to be told when to update
|
|
|
|
|
|
|
|
|
|
- passing by parameter is insecure as it is shown in the process list
|
|
|
|
This gets rid of the temporary precalc cache.
Also make the RPC able to send data back in binary or JSON,
since there can be a lot of data.
This bumps the LMDB database format to v3, with migration.
|
|
|
|
The on_generateblocks RPC call combines functionality from the on_getblocktemplate and on_submitblock RPC calls to allow rapid block creation. Difficulty is set permanently to 1 for regtest.
Makes use of the FAKECHAIN network type, but takes hard fork heights from mainchain.
Default reserve_size in the generate_blocks RPC call is now 1. If it is 0, the following error occurs: 'Failed to calculate offset for'.
Queries hard fork heights info of other network types.
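A hypothetical way to drive this from a script against a regtest daemon
(the parameter names below are assumptions, not taken from the text above):
# Hypothetical JSON-RPC call to the new method; parameter names are assumed.
import json, urllib.request

def generate_blocks(count, wallet_address,
                    url="http://127.0.0.1:18081/json_rpc"):
    payload = {"jsonrpc": "2.0", "id": "0", "method": "generateblocks",
               "params": {"amount_of_blocks": count,
                          "wallet_address": wallet_address}}
    req = urllib.request.Request(url, json.dumps(payload).encode(),
                                 {"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g. generate_blocks(10, "4xxxxxx...")  # daemon started with --regtest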
|
|
For some reason, this confuses and kills ASAN on startup
as it thinks const uint8_t ipv4_network_address::ID is
defined multiple times.
|
|
|
|
also use reserve where appropriate
|
|
|
|
|
|
|
|
|
|
|
|
This should help new nodes predict how much disk space will be
needed for a full sync
|
|
|
|
|
|
This avoids double conversion on a later cache hit
|
|
The distribution was not converted to cumulative after a cache hit
|
|
|
|
This bumps DB version to 2, migration code will run for v1 DBs
|
|
|
|
|
|
This skips the vast majority of "dust" output amounts with just
one instance on the chain. Clocks in at 0.15% of the original
time on testnet.
|
|
This should cache the vast majority of calls for long running wallets
|
|
|
|
and get them pruned in find_and_save_rings, since it does not need
the pruned data in the first place.
Also set decode_to_json to false where missing; we don't need this
either.
|
|
|
|
|
|
so that those nodes can still be used for sending transactions
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
It was already possible to limit outgoing connections. One might want
to do this on home network connections with high bandwidth but low
usage caps.
|
|
This rename is needed so that delete_in_connections can be added.
|
|
This is needed so that a max_in_connection_count can be added.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The shared RPC code is now moved off into a separate lib
|
|
As a follow-on side effect, this makes a lot of inline code
included only in particular cpp files (and instantiated
when necessary).
|
|
|
|
|
|
Deleted 3 out of 4 calls to method connection_basic::sleep_before_packet
that were erroneous / superfluous, which enabled the elimination of a
"fudge" factor of 2.1 in connection_basic::set_rate_up_limit;
also ended the multiplying of limit values and numbers of bytes
transferred by 1024 before handing them over to the global throttle
objects
|
|
It's getting hit too easily
|
|
It's sent as JSON, so raw binary is not appropriate
|
|
|
|
|
|
wallet2 is a library, and should not prompt for stdin. Instead,
pass a function so simplewallet can prompt on stdin, and a GUI
might display a window, etc.
|
|
Those have no reason to be in a generic module
|
|
It's nasty, and actually breaks on Solaris, where if.h fails to
build due to:
struct map *if_memmap;
|
|
|
|
|
|
This patch allows filtering out sensitive information for queries that rely on the pool state, when running in restricted mode.
This filtering is only applied to data sent back to RPC queries. Results of inline commands typed locally in the daemon are not affected.
In practice, when running with `--restricted-rpc`:
* get_transaction_pool will list relayed transactions with the fields "last relayed time" and "received time" set to zero.
* get_transaction_pool will not list transactions that have do_not_relay set to true, and will not list key images that are used only for such transactions
* get_transaction_pool_hashes.bin will not list such transactions
* get_transaction_pool_stats will not count such transactions in any of the aggregated values that are computed
The implementation does not make filtering the default, so developers should be mindful of this if they add new RPC functionality.
Fixes #2590.
|
|
|
|
Transactions in the txpool are marked when another transaction
is seen double spending one or more of its inputs.
This is then exposed wherever appropriate.
Note that being marked with this "double spend seen" flag does
NOT mean this transaction IS a double spend and will never be
mined: it just means that the network has seen at least another
transaction spending at least one of the same inputs, so care
should be taken to wait for a few confirmations before acting
upon that transaction (ie, mostly of use for merchants wanting
to accept unconfirmed transactions).
|
|
This branch fixes a file permission issue introduced by https://github.com/monero-project/monero/commit/69c37200aa87f100f731e755bdca7a0dc6ae820a
|
|
|
|
Enable with perf:DEBUG
|
|
|
|
|
|
It's pretty slow and I/O intensive
|
|
|
|
|
|
- internal nullptr checks
- prevent modifications to network_address (shallow copy issues)
- automagically works with any type containing interface functions
- removed fnv1a hashing
- ipv4_network_address now flattened with no base class
|