author     moneromooo-monero <moneromooo-monero@users.noreply.github.com>  2018-02-11 15:15:56 +0000
committer  moneromooo-monero <moneromooo-monero@users.noreply.github.com>  2019-10-25 09:34:38 +0000
commit     2899379791b7542e4eb920b5d9d58cf232806937
tree       7e97a95103837852d5c4e86ee544815d9a7a6671 /src/p2p
parent     add a quick early out to get_blocks.bin when up to date
daemon, wallet: new pay for RPC use system
Daemons intended for public use can be set up to require payment
in the form of hashes in exchange for RPC service. This enables
public daemons to receive payment for their work over a large
number of calls. This system behaves similarly to a mining pool, so
payment takes the form of valid blocks every so often, yielding
a large one-off payment, rather than constant micropayments.
This system can also be used by third parties as a "paywall"
layer, where users of a service can pay for use by mining Monero
to the service provider's address. An example of this for website
access is Primo, a Monero-mining based "paywall":
https://github.com/selene-kovri/primo
This has some advantages:
- incentive to run a node providing RPC services, thereby promoting the availability of third party nodes for those who can't run their own
- incentive to run your own node instead of using a third party's, thereby promoting decentralization
- decentralized: payment is done between a client and server, with no third party needed
- private: since the system is "pay as you go", you don't need to identify yourself to claim a long lived balance
- no payment occurs on the blockchain, so there is no extra transactional load
- one may mine with a beefy server, and use those credits from a phone, by reusing the client ID (at the cost of some privacy)
- no barrier to entry: anyone may run an RPC node, and your expected revenue depends on how much work you do
- Sybil resistant: if you run 1000 idle RPC nodes, you don't magically get more revenue
- no large credit balance maintained on servers, so they have no incentive to exit scam
- you can use any/many node(s), since there's little cost in switching servers
- market based prices: competition between servers to lower costs
- incentive for a distributed third party node system: if some public nodes are overused/slow, traffic can move to others
- increases network security
- helps counteract mining pools' share of the network hash rate
- zero incentive for a payer to "double spend" since a reorg does not give any money back to the miner
And some disadvantages:
- low power clients will have difficulty mining (but one can optionally mine in advance and/or with a faster machine)
- payment is "random", so a server might go a long time without a block before getting one
- a public node's overall expected payment may be small
Public nodes are expected to compete with each other to find a
suitable price for their services.
The daemon can be set up this way to require payment for RPC services:
monerod --rpc-payment-address 4xxxxxx \
--rpc-payment-credits 250 --rpc-payment-difficulty 1000
These values are an example only.
The --rpc-payment-difficulty switch selects how hard each "share" should
be, similar to a mining pool. The higher the difficulty, the fewer
shares a client will find.
The --rpc-payment-credits switch selects how many credits are awarded
for each share a client finds.
Considering both options together, clients are awarded
credits/difficulty credits for every hash they calculate. For example,
with the command line above, that is 0.25 credits per hash. A client
mining at 100 H/s will therefore earn an average of 25 credits per
second.
For reference, in the current implementation, a credit is enough to
sync 20 blocks, so a 100 H/s client that's just starting to use Monero
and uses this daemon will be able to sync 500 blocks per second.
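To make that rate arithmetic concrete, here is a minimal,
self-contained sketch (the 20 blocks per credit figure and the
100 H/s client come from this description; nothing here talks to a
real daemon):

  // Sketch: expected credit and sync rates for the example above.
  #include <cstdio>

  int main()
  {
    const double payment_credits    = 250;   // --rpc-payment-credits
    const double payment_difficulty = 1000;  // --rpc-payment-difficulty
    const double client_hashrate    = 100;   // H/s, example client
    const double blocks_per_credit  = 20;    // current implementation

    const double credits_per_hash = payment_credits / payment_difficulty; // 0.25
    const double credits_per_sec  = credits_per_hash * client_hashrate;   // 25
    const double blocks_per_sec   = credits_per_sec * blocks_per_credit;  // 500

    std::printf("%.2f credits/hash, %.1f credits/s, %.0f blocks/s\n",
                credits_per_hash, credits_per_sec, blocks_per_sec);
    return 0;
  }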
The wallet can be set to automatically mine if connected to a daemon
which requires payment for RPC usage. It will try to keep a balance
of 50000 credits, stopping mining when it reaches this level, and
starting again as credits are spent. With the example above, a new
client will mine this many credits in about half an hour, and this
target is enough to sync 1000000 blocks (currently about half of the
Monero blockchain).
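For reference, the arithmetic behind that target, using the example
rate above (a back-of-the-envelope check, not output from any tool):

  time to reach target:  50000 credits / 25 credits per second
                         = 2000 seconds, about 33 minutes
  blocks covered:        50000 credits * 20 blocks per credit
                         = 1000000 blocks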
There are three new settings in the wallet (an example of setting
them is shown after this list):
- credits-target: this is the number of credits a wallet will try to
reach before stopping mining. The default of 0 means 50000 credits.
- auto-mine-for-rpc-payment-threshold: this controls the minimum
credit rate (credits per hash) which the wallet considers worth
mining for. If the daemon awards less than this rate, the wallet
will consider mining not worth it. In the example above, the
daemon's rate is 0.25 credits per hash.
- persistent-rpc-client-id: if set, this allows the wallet to reuse
a client ID across runs. This means a public node can tell that a
connecting wallet is the same one that connected previously, but it
allows the wallet to keep its credit balance from one run to the
next. Since the wallet only mines to keep a small credit balance,
this is not normally worth doing. However, someone may want to mine
on a fast server, and use that credit balance on a low power device
such as a phone. If left unset, a new client ID is generated at
each wallet start, for privacy reasons.
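For example, assuming these settings are exposed through the wallet's
usual "set" command (the exact CLI syntax is not shown in this commit
and is only an assumption here; the values are illustrative):

  set credits-target 100000
  set auto-mine-for-rpc-payment-threshold 0.25
  set persistent-rpc-client-id 1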
To mine and use a credit balance on two different devices, you can
use the --rpc-client-secret-key switch. A wallet's client secret key
can be found using the new rpc_payments command in the wallet.
Note: anyone who knows your RPC client secret key can use your
credit balance.
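For instance, to reuse on a phone a balance mined on a server (the
daemon address is a placeholder and the command shape is illustrative;
only the --rpc-client-secret-key switch itself comes from this
description):

  monero-wallet-cli --daemon-address node.example.com:18081 \
    --rpc-client-secret-key <key shown by rpc_payments on the server>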
The wallet has a few new commands too:
- start_mining_for_rpc: start mining to acquire more credits,
regardless of the auto mining settings
- stop_mining_for_rpc: stop mining to acquire more credits
- rpc_payments: display information about current credits with
the currently selected daemon
The node has an extra command:
- rpc_payments: display information about clients and their
balances
The node will forget about any balance for clients which have
been inactive for 6 months. Balances carry over on node restart.
Diffstat (limited to 'src/p2p')
-rw-r--r--  src/p2p/net_node.h                          |  7
-rw-r--r--  src/p2p/net_node.inl                        | 11
-rw-r--r--  src/p2p/net_peerlist.h                      |  5
-rw-r--r--  src/p2p/net_peerlist_boost_serialization.h  |  9
-rw-r--r--  src/p2p/p2p_protocol_defs.h                 |  9
5 files changed, 33 insertions, 8 deletions
diff --git a/src/p2p/net_node.h b/src/p2p/net_node.h
index 0c9c285e8..29db5403e 100644
--- a/src/p2p/net_node.h
+++ b/src/p2p/net_node.h
@@ -231,6 +231,7 @@ namespace nodetool
       : m_payload_handler(payload_handler),
         m_external_port(0),
         m_rpc_port(0),
+        m_rpc_credits_per_hash(0),
         m_allow_local_ip(false),
         m_hide_my_port(false),
         m_igd(no_igd),
@@ -431,6 +432,11 @@ namespace nodetool
       m_rpc_port = rpc_port;
     }
 
+    void set_rpc_credits_per_hash(uint32_t rpc_credits_per_hash)
+    {
+      m_rpc_credits_per_hash = rpc_credits_per_hash;
+    }
+
   private:
     std::string m_config_folder;
 
@@ -440,6 +446,7 @@ namespace nodetool
     uint32_t m_listening_port_ipv6;
     uint32_t m_external_port;
     uint16_t m_rpc_port;
+    uint32_t m_rpc_credits_per_hash;
     bool m_allow_local_ip;
     bool m_hide_my_port;
     igd_t m_igd;
diff --git a/src/p2p/net_node.inl b/src/p2p/net_node.inl
index 9150ebb1b..a5921b8bf 100644
--- a/src/p2p/net_node.inl
+++ b/src/p2p/net_node.inl
@@ -1057,7 +1057,8 @@ namespace nodetool
 
       pi = context.peer_id = rsp.node_data.peer_id;
       context.m_rpc_port = rsp.node_data.rpc_port;
-      m_network_zones.at(context.m_remote_address.get_zone()).m_peerlist.set_peer_just_seen(rsp.node_data.peer_id, context.m_remote_address, context.m_pruning_seed, context.m_rpc_port);
+      context.m_rpc_credits_per_hash = rsp.node_data.rpc_credits_per_hash;
+      m_network_zones.at(context.m_remote_address.get_zone()).m_peerlist.set_peer_just_seen(rsp.node_data.peer_id, context.m_remote_address, context.m_pruning_seed, context.m_rpc_port, context.m_rpc_credits_per_hash);
 
       // move
       for (auto const& zone : m_network_zones)
@@ -1123,7 +1124,7 @@ namespace nodetool
         add_host_fail(context.m_remote_address);
       }
       if(!context.m_is_income)
-        m_network_zones.at(context.m_remote_address.get_zone()).m_peerlist.set_peer_just_seen(context.peer_id, context.m_remote_address, context.m_pruning_seed, context.m_rpc_port);
+        m_network_zones.at(context.m_remote_address.get_zone()).m_peerlist.set_peer_just_seen(context.peer_id, context.m_remote_address, context.m_pruning_seed, context.m_rpc_port, context.m_rpc_credits_per_hash);
       if (!m_payload_handler.process_payload_sync_data(rsp.payload_data, context, false))
       {
         m_network_zones.at(context.m_remote_address.get_zone()).m_net_server.get_config_object().close(context.m_connection_id );
@@ -1292,6 +1293,7 @@ namespace nodetool
       pe_local.last_seen = static_cast<int64_t>(last_seen);
       pe_local.pruning_seed = con->m_pruning_seed;
       pe_local.rpc_port = con->m_rpc_port;
+      pe_local.rpc_credits_per_hash = con->m_rpc_credits_per_hash;
       zone.m_peerlist.append_with_peer_white(pe_local);
       //update last seen and push it to peerlist manager
 
@@ -1922,6 +1924,7 @@ namespace nodetool
     else
       node_data.my_port = 0;
     node_data.rpc_port = zone.m_can_pingback ? m_rpc_port : 0;
+    node_data.rpc_credits_per_hash = zone.m_can_pingback ? m_rpc_credits_per_hash : 0;
     node_data.network_id = m_network_id;
     return true;
   }
@@ -2366,6 +2369,7 @@ namespace nodetool
     context.peer_id = arg.node_data.peer_id;
     context.m_in_timedsync = false;
     context.m_rpc_port = arg.node_data.rpc_port;
+    context.m_rpc_credits_per_hash = arg.node_data.rpc_credits_per_hash;
 
     if(arg.node_data.my_port && zone.m_can_pingback)
     {
@@ -2393,6 +2397,7 @@ namespace nodetool
         pe.id = peer_id_l;
         pe.pruning_seed = context.m_pruning_seed;
         pe.rpc_port = context.m_rpc_port;
+        pe.rpc_credits_per_hash = context.m_rpc_credits_per_hash;
         this->m_network_zones.at(context.m_remote_address.get_zone()).m_peerlist.append_with_peer_white(pe);
         LOG_DEBUG_CC(context, "PING SUCCESS " << context.m_remote_address.host_str() << ":" << port_l);
       });
@@ -2710,7 +2715,7 @@ namespace nodetool
       }
       else
       {
-        zone.second.m_peerlist.set_peer_just_seen(pe.id, pe.adr, pe.pruning_seed, pe.rpc_port);
+        zone.second.m_peerlist.set_peer_just_seen(pe.id, pe.adr, pe.pruning_seed, pe.rpc_port, pe.rpc_credits_per_hash);
         LOG_PRINT_L2("PEER PROMOTED TO WHITE PEER LIST IP address: " << pe.adr.host_str() << " Peer ID: " << peerid_type(pe.id));
       }
     }
diff --git a/src/p2p/net_peerlist.h b/src/p2p/net_peerlist.h
index c65b9dd82..58b704f73 100644
--- a/src/p2p/net_peerlist.h
+++ b/src/p2p/net_peerlist.h
@@ -111,7 +111,7 @@ namespace nodetool
     bool append_with_peer_white(const peerlist_entry& pr);
     bool append_with_peer_gray(const peerlist_entry& pr);
     bool append_with_peer_anchor(const anchor_peerlist_entry& ple);
-    bool set_peer_just_seen(peerid_type peer, const epee::net_utils::network_address& addr, uint32_t pruning_seed, uint16_t rpc_port);
+    bool set_peer_just_seen(peerid_type peer, const epee::net_utils::network_address& addr, uint32_t pruning_seed, uint16_t rpc_port, uint32_t rpc_credits_per_hash);
     bool set_peer_unreachable(const peerlist_entry& pr);
     bool is_host_allowed(const epee::net_utils::network_address &address);
     bool get_random_gray_peer(peerlist_entry& pe);
@@ -315,7 +315,7 @@ namespace nodetool
   }
   //--------------------------------------------------------------------------------------------------
   inline
-  bool peerlist_manager::set_peer_just_seen(peerid_type peer, const epee::net_utils::network_address& addr, uint32_t pruning_seed, uint16_t rpc_port)
+  bool peerlist_manager::set_peer_just_seen(peerid_type peer, const epee::net_utils::network_address& addr, uint32_t pruning_seed, uint16_t rpc_port, uint32_t rpc_credits_per_hash)
   {
     TRY_ENTRY();
     CRITICAL_REGION_LOCAL(m_peerlist_lock);
@@ -326,6 +326,7 @@ namespace nodetool
     ple.last_seen = time(NULL);
     ple.pruning_seed = pruning_seed;
     ple.rpc_port = rpc_port;
+    ple.rpc_credits_per_hash = rpc_credits_per_hash;
     return append_with_peer_white(ple);
     CATCH_ENTRY_L0("peerlist_manager::set_peer_just_seen()", false);
   }
diff --git a/src/p2p/net_peerlist_boost_serialization.h b/src/p2p/net_peerlist_boost_serialization.h
index c2773981c..bd5063510 100644
--- a/src/p2p/net_peerlist_boost_serialization.h
+++ b/src/p2p/net_peerlist_boost_serialization.h
@@ -42,7 +42,7 @@
 #include "common/pruning.h"
 #endif
 
-BOOST_CLASS_VERSION(nodetool::peerlist_entry, 2)
+BOOST_CLASS_VERSION(nodetool::peerlist_entry, 3)
 
 namespace boost
 {
@@ -241,6 +241,13 @@ namespace boost
         return;
       }
       a & pl.rpc_port;
+      if (ver < 3)
+      {
+        if (!typename Archive::is_saving())
+          pl.rpc_credits_per_hash = 0;
+        return;
+      }
+      a & pl.rpc_credits_per_hash;
     }
 
     template <class Archive, class ver_type>
diff --git a/src/p2p/p2p_protocol_defs.h b/src/p2p/p2p_protocol_defs.h
index 85774fcd5..44b278589 100644
--- a/src/p2p/p2p_protocol_defs.h
+++ b/src/p2p/p2p_protocol_defs.h
@@ -77,6 +77,7 @@ namespace nodetool
     int64_t last_seen;
     uint32_t pruning_seed;
     uint16_t rpc_port;
+    uint32_t rpc_credits_per_hash;
 
     BEGIN_KV_SERIALIZE_MAP()
       KV_SERIALIZE(adr)
@@ -85,6 +86,7 @@ namespace nodetool
       KV_SERIALIZE_OPT(last_seen, (int64_t)0)
      KV_SERIALIZE_OPT(pruning_seed, (uint32_t)0)
       KV_SERIALIZE_OPT(rpc_port, (uint16_t)0)
+      KV_SERIALIZE_OPT(rpc_credits_per_hash, (uint32_t)0)
     END_KV_SERIALIZE_MAP()
   };
   typedef peerlist_entry_base<epee::net_utils::network_address> peerlist_entry;
@@ -132,6 +134,7 @@ namespace nodetool
     {
       ss << pe.id << "\t" << pe.adr.str()
         << " \trpc port " << (pe.rpc_port > 0 ? std::to_string(pe.rpc_port) : "-")
+        << " \trpc credits per hash " << (pe.rpc_credits_per_hash > 0 ? std::to_string(pe.rpc_credits_per_hash) : "-")
         << " \tpruning seed " << pe.pruning_seed
         << " \tlast_seen: " << (pe.last_seen == 0 ? std::string("never") : epee::misc_utils::get_time_interval_string(now_time - pe.last_seen))
         << std::endl;
@@ -166,6 +169,7 @@ namespace nodetool
     uint64_t local_time;
     uint32_t my_port;
     uint16_t rpc_port;
+    uint32_t rpc_credits_per_hash;
     peerid_type peer_id;
 
     BEGIN_KV_SERIALIZE_MAP()
@@ -174,6 +178,7 @@ namespace nodetool
       KV_SERIALIZE(local_time)
      KV_SERIALIZE(my_port)
       KV_SERIALIZE_OPT(rpc_port, (uint16_t)(0))
+      KV_SERIALIZE_OPT(rpc_credits_per_hash, (uint32_t)0)
     END_KV_SERIALIZE_MAP()
   };
 
@@ -220,7 +225,7 @@ namespace nodetool
       {
         const epee::net_utils::network_address &na = p.adr;
         const epee::net_utils::ipv4_network_address &ipv4 = na.as<const epee::net_utils::ipv4_network_address>();
-        local_peerlist.push_back(peerlist_entry_base<network_address_old>({{ipv4.ip(), ipv4.port()}, p.id, p.last_seen, p.pruning_seed, p.rpc_port}));
+        local_peerlist.push_back(peerlist_entry_base<network_address_old>({{ipv4.ip(), ipv4.port()}, p.id, p.last_seen, p.pruning_seed, p.rpc_port, p.rpc_credits_per_hash}));
       }
       else
         MDEBUG("Not including in legacy peer list: " << p.adr.str());
@@ -235,7 +240,7 @@ namespace nodetool
       std::vector<peerlist_entry_base<network_address_old>> local_peerlist;
       epee::serialization::selector<is_store>::serialize_stl_container_pod_val_as_blob(local_peerlist, stg, hparent_section, "local_peerlist");
       for (const auto &p: local_peerlist)
-        ((response&)this_ref).local_peerlist_new.push_back(peerlist_entry({epee::net_utils::ipv4_network_address(p.adr.ip, p.adr.port), p.id, p.last_seen, p.pruning_seed, p.rpc_port}));
+        ((response&)this_ref).local_peerlist_new.push_back(peerlist_entry({epee::net_utils::ipv4_network_address(p.adr.ip, p.adr.port), p.id, p.last_seen, p.pruning_seed, p.rpc_port, p.rpc_credits_per_hash}));
     }
   }
 END_KV_SERIALIZE_MAP()
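The net_peerlist_boost_serialization.h hunk above keeps old peerlist
files loadable by bumping the boost class version from 2 to 3 and
defaulting the new field when an older archive is read. A minimal
standalone sketch of that pattern, using a simplified stand-in type
rather than the real nodetool::peerlist_entry:

  // Sketch: backward-compatible versioned serialization. Archives
  // written before version 3 lack the field, so it is defaulted to 0
  // on load, matching the KV_SERIALIZE_OPT default used on the wire.
  #include <boost/archive/text_iarchive.hpp>
  #include <boost/archive/text_oarchive.hpp>
  #include <boost/serialization/version.hpp>
  #include <cstdint>
  #include <iostream>
  #include <sstream>

  struct peer_info  // simplified stand-in, not nodetool::peerlist_entry
  {
    std::uint16_t rpc_port;
    std::uint32_t rpc_credits_per_hash;  // field added in "version 3"
  };

  BOOST_CLASS_VERSION(peer_info, 3)

  namespace boost { namespace serialization {
    template <class Archive>
    void serialize(Archive &a, peer_info &p, const unsigned int ver)
    {
      a & p.rpc_port;
      if (ver < 3)
      {
        // old archive: the field is absent, default it when loading
        if (!Archive::is_saving::value)
          p.rpc_credits_per_hash = 0;
        return;
      }
      a & p.rpc_credits_per_hash;
    }
  }}

  int main()
  {
    const peer_info out = {18081, 250};
    peer_info in = {0, 0};
    std::stringstream ss;
    { boost::archive::text_oarchive oa(ss); oa << out; }  // stores version 3
    { boost::archive::text_iarchive ia(ss); ia >> in; }   // version read back
    std::cout << in.rpc_port << " " << in.rpc_credits_per_hash << "\n";  // 18081 250
    return 0;
  }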