|
|
|
Signed-off-by: Jean Pierre Dudey <me@jeandudey.tech>
|
|
|
|
|
|
|
|
Update copyright year to 2020
|
|
- choice where to enter passphrase is now made on the host
- use wipeable string in the comm stack
- wipe passphrase memory
- protocol optimizations, prepare for new firmware version
- minor fixes and improvements
- test fixes, HF12 support
|
|
- Add abstract_http_client.h which http_client.h extends.
- Replace simple_http_client with abstract_http_client in wallet2,
message_store, message_transporter, and node_rpc_proxy.
- Import and export wallet data in wallet2.
- Use #if defined __EMSCRIPTEN__ directives to skip incompatible code.
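A minimal sketch of what the split looks like, with illustrative method names (the real interface may differ):

  // Hypothetical shape of the abstraction: wallet2, message_store,
  // message_transporter and node_rpc_proxy hold an abstract_http_client,
  // and the existing http_client (or an Emscripten-specific client)
  // implements it.
  #include <chrono>
  #include <string>

  struct abstract_http_client
  {
    virtual ~abstract_http_client() = default;
    virtual bool connect(const std::string& host, const std::string& port,
                         std::chrono::milliseconds timeout) = 0;
    virtual bool disconnect() = 0;
    virtual bool is_connected() = 0;
  };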
|
|
A handshake can fail due to a timeout or a destroyed connection,
in which case the connection will be, or already is, closed,
and we don't want to close it twice.
Additionally, when closing a connection directly from the top
level code, ensure the connection is gone from the m_connects
list so it won't be used again.
AFAICT this is now clean in netstat, /proc/PID/fd and print_cn.
This fixes a noisy (but harmless) exception.
|
|
Cleaning up a little around the code base.
|
|
|
|
|
|
This is a bug waiting to happen
|
|
|
|
This fixes rapid reconnections failing as the peer hasn't yet
worked out the other side is gone, and will reject "duplicate"
connections until a timeout.
|
|
|
|
|
|
If a response handler is added after the protocol is released,
it can never be cancelled, and ends up keeping
a ref that never goes away
|
|
and fix the message grammar
|
|
|
|
|
|
|
|
|
|
Resetting the timer after shutdown was initiated would keep
a reference to the object inside ASIO, which would keep the
connection alive until the timer timed out
|
|
|
|
The problem actually exists in two parts:
1. When sending chunks over a connection, if the queue size is
greater than N, the seed is predictable across every monero node.
>"If rand() is used before any calls to srand(), rand() behaves as if
it was seeded with srand(1). Each time rand() is seeded with the same seed, it
must produce the same sequence of values."
2. The CID speaks for itself: "'rand' should not be used for security-related
applications, because linear congruential algorithms are too easy to break."
*But* this is an area of contention.
One could argue that a CSPRNG is warranted in order to fully mitigate any
potential timing attacks based on crafting chunk responses. Others could argue
that the existing LCG, or even an MTG, would suffice (if properly seeded). As a
compromise, I've used an MTG with a full bit space. This should give a healthy
balance of security and speed without relying on the existing crypto library
(which I'm told might break on some systems since epee is not (shouldn't be)
dependent upon the existing crypto library).
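A minimal sketch of the approach, with illustrative names (not the exact change): seed a full-width Mersenne Twister once from std::random_device instead of relying on rand(), which starts from the equivalent of srand(1) on every node:

  #include <cstdint>
  #include <random>

  // Illustrative only: a per-process engine with a non-constant seed,
  // used wherever the chunk-sending path needs a random value.
  static uint64_t random_chunk_delay(uint64_t max_value)
  {
    static std::mt19937_64 engine{std::random_device{}()};
    std::uniform_int_distribution<uint64_t> dist(0, max_value);
    return dist(engine);
  }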
|
|
IP addresses are stored in network byte order even on little
endian hosts
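As an illustration (the helper is hypothetical), anything that inspects an address numerically has to convert out of network byte order first:

  #include <arpa/inet.h>
  #include <cstdint>

  // m_ip is stored in network byte order, so convert before comparing
  // against host-order constants such as 127.0.0.0/8.
  static bool is_loopback(uint32_t ip_in_network_order)
  {
    return (ntohl(ip_in_network_order) >> 24) == 127;
  }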
|
|
IPv4 addresses are kept in network byte order in memory
|
|
|
|
Fixed by crCr62U0
|
|
new cli options (RPC ones also apply to wallet):
--p2p-bind-ipv6-address (default = "::")
--p2p-bind-port-ipv6 (default same as ipv4 port for given nettype)
--rpc-bind-ipv6-address (default = "::1")
--p2p-use-ipv6 (default false)
--rpc-use-ipv6 (default false)
--p2p-require-ipv4 (default true, if ipv4 bind fails and this is
true, will not continue even if ipv6 bind
successful)
--rpc-require-ipv4 (default true, description as above)
ipv6 addresses are to be specified as "[xx:xx:xx::xx:xx]:port", except
in the case of the cli args for the bind address. For those, the square
brackets can be omitted.
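For example, an invocation enabling both stacks with the defaults spelled out (illustrative):

  monerod --p2p-use-ipv6 --p2p-bind-ipv6-address "::" \
          --rpc-use-ipv6 --rpc-bind-ipv6-address "::1"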
|
|
|
|
|
|
|
|
|
|
Make bans control RPC sessions too. And auto-ban some bad requests.
Drops HTTP connections whenever response code is 500.
|
|
|
|
|
|
GCC wants operator= and copy ctor to be both defined, or neither
|
|
|
|
The lock is meant for the network throttle object only,
and this should help coverity get unconfused
|
|
|
|
|
|
add two RSA based ciphers for Windows/depends compatibility
also enforce server cipher ordering
also set ECDH to auto because vtnerd says it is good :)
When built with the depends system, openssl does not include any
cipher on the current whitelist, so add this one, which fixes the
problem, and does seem sensible.
|
|
SHA1 is too close to bruteforceable
|
|
|
|
It can allocate a lot when getting a lot of connections
(in particular, the stress test on windows apparently pushes
that memory to actual use, rather than just allocated)
|
|
It will avoid connecting to a daemon (useful for cold signing
using an RPC wallet), and will not perform DNS queries.
|
|
|
|
When closing connections due to exiting, the IO service is
already gone, so the data exchange needed for a graceful SSL
shutdown cannot happen. We just close the socket in that case.
|
|
|
|
displays total sent and received bytes
|
|
If `--daemon-ssl enabled` is set in the wallet, then a user certificate,
fingerprint, or onion/i2p address must be provided.
|
|
An override for the wallet to daemon connection is provided, but not for
other SSL contexts. The intent is to prevent users from supplying a
system CA as the "user" whitelisted certificate, which is less secure
since the key is controlled by a third party.
|
|
If the verification mode is `system_ca`, clients will now do hostname
verification. Thus, only certificates from expected hostnames are
allowed when SSL is enabled. This can be overridden by forcibly setting
the SSL mode to autodetect.
Clients will also send the hostname even when `system_ca` is not being
performed. This leaks possible metadata, but allows servers providing
multiple hostnames to respond with the correct certificate. One example
is cloudflare, which getmonero.org is currently using.
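A rough Boost.Asio sketch of the two behaviours (not the actual epee code; names are assumptions):

  #include <boost/asio/ip/tcp.hpp>
  #include <boost/asio/ssl.hpp>
  #include <openssl/ssl.h>
  #include <string>

  using ssl_socket = boost::asio::ssl::stream<boost::asio::ip::tcp::socket>;

  void setup_client_tls(ssl_socket& sock, const std::string& host, bool system_ca)
  {
    if (system_ca)
    {
      // Only accept certificates matching the expected hostname.
      sock.set_verify_mode(boost::asio::ssl::verify_peer);
      sock.set_verify_callback(boost::asio::ssl::rfc2818_verification(host));
    }
    // Send SNI either way, so servers hosting several names
    // (e.g. cloudflare) can return the matching certificate.
    SSL_set_tlsext_host_name(sock.native_handle(), host.c_str());
  }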
|
|
If SSL is "enabled" via command line without specifying a fingerprint or
certificate, the system CA list is checked for server verification and
_now_ fails the handshake if that check fails. This change was made to
remain consistent with standard SSL/TLS client behavior. This can still
be overridden by using the allow any certificate flag.
If the SSL behavior is autodetect, the system CA list is still checked
but a warning is logged if this fails. The stream is not rejected
because a re-connect will be attempted - it's better to have an
unverified encrypted stream than an unverified + unencrypted stream.
|
|
|
|
Specifying SSL certificates for peer verification does an exact match,
making it a not-so-obvious alias for the fingerprints option. This
changes the checks to OpenSSL which loads concatenated certificate(s)
from a single file and does a certificate-authority (chain of trust)
check instead. There is no drop in security - a compromised exact match
fingerprint has the same worst-case failure. There is increased security
in allowing separate long-term CA key and short-term SSL server keys.
This also removes loading of the system-default CA files if a custom
CA file or certificate fingerprint is specified.
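A sketch of the difference using Boost.Asio's context wrapper (illustrative, not the actual code):

  #include <boost/asio/ssl.hpp>
  #include <string>

  boost::asio::ssl::context make_verify_context(const std::string& ca_file)
  {
    boost::asio::ssl::context ctx(boost::asio::ssl::context::tlsv12_client);
    if (!ca_file.empty())
      ctx.load_verify_file(ca_file);   // concatenated PEM CA(s): chain-of-trust check
    else
      ctx.set_default_verify_paths();  // no custom CA given: fall back to system CAs
    ctx.set_verify_mode(boost::asio::ssl::verify_peer);
    return ctx;
  }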
|
|
|
|
|
|
|
|
get_io_service was deprecated, and got removed
|
|
|
|
Use SSL API directly, skip boost layer
|
|
|
|
RPC connections now have optional transparent SSL.
An optional private key and certificate file can be passed,
using the --{rpc,daemon}-ssl-private-key and
--{rpc,daemon}-ssl-certificate options. Those have as
argument a path to a PEM format private key and
certificate, respectively.
If not given, a temporary self signed certificate will be used.
SSL can be enabled or disabled using --{rpc}-ssl, which
accepts autodetect (default), disabled or enabled.
Access can be restricted to particular certificates using the
--rpc-ssl-allowed-certificates, which takes a list of
paths to PEM encoded certificates. This can allow a wallet to
connect to only the daemon they think they're connected to,
by forcing SSL and listing the paths to the known good
certificates.
To generate long term certificates:
openssl genrsa -out /tmp/KEY 4096
openssl req -new -key /tmp/KEY -out /tmp/REQ
openssl x509 -req -days 999999 -sha256 -in /tmp/REQ -signkey /tmp/KEY -out /tmp/CERT
/tmp/KEY is the private key, and /tmp/CERT is the certificate,
both in PEM format. /tmp/REQ can be removed. Adjust the last
command to set expiration date, etc, as needed. It doesn't
make a whole lot of sense for monero, since most servers
will run with one-time temporary self-signed certificates anyway.
SSL support is transparent, so all communication is done on the
existing ports, with SSL autodetection. This means you can start
using an SSL daemon now, but you should not enforce SSL yet or
nothing will talk to you.
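For example (an illustrative invocation, reusing the paths from the openssl commands above), a daemon pinned to a long term certificate could be started with:

  monerod --rpc-ssl enabled --rpc-ssl-private-key /tmp/KEY --rpc-ssl-certificate /tmp/CERT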
|
|
|
|
|
|
|
|
RPC connections now have optional transparent SSL.
An optional private key and certificate file can be passed,
using the --{rpc,daemon}-ssl-private-key and
--{rpc,daemon}-ssl-certificate options. Those have as
argument a path to a PEM format private key and
certificate, respectively.
If not given, a temporary self signed certificate will be used.
SSL can be enabled or disabled using --{rpc}-ssl, which
accepts autodetect (default), disabled or enabled.
Access can be restricted to particular certificates using the
--rpc-ssl-allowed-certificates, which takes a list of
paths to PEM encoded certificates. This can allow a wallet to
connect to only the daemon they think they're connected to,
by forcing SSL and listing the paths to the known good
certificates.
To generate long term certificates:
openssl genrsa -out /tmp/KEY 4096
openssl req -new -key /tmp/KEY -out /tmp/REQ
openssl x509 -req -days 999999 -sha256 -in /tmp/REQ -signkey /tmp/KEY -out /tmp/CERT
/tmp/KEY is the private key, and /tmp/CERT is the certificate,
both in PEM format. /tmp/REQ can be removed. Adjust the last
command to set expiration date, etc, as needed. It doesn't
make a whole lot of sense for monero, since most servers
will run with one-time temporary self-signed certificates anyway.
SSL support is transparent, so all communication is done on the
existing ports, with SSL autodetection. This means you can start
using an SSL daemon now, but you should not enforce SSL yet or
nothing will talk to you.
|
|
|
|
|
|
- Support for ".onion" in --add-exclusive-node and --add-peer
- Add --anonymizing-proxy for outbound Tor connections
- Add --anonymous-inbounds for inbound Tor connections
- Support for sharing ".onion" addresses over Tor connections
- Support for broadcasting transactions received over RPC exclusively
over Tor (else broadcast over public IP when Tor not enabled).
|
|
|
|
The blockchain prunes seven eighths of prunable tx data.
This saves about two thirds of the blockchain size, while
keeping the node useful as a sync source for an eighth
of the blockchain.
No other data is currently pruned.
There are three ways to prune a blockchain:
- run monerod with --prune-blockchain
- run "prune_blockchain" in the monerod console
- run the monero-blockchain-prune utility
The first two will prune in place. Due to how LMDB works, this
will not reduce the blockchain size on disk. Instead, it will
mark parts of the file as free, so that future data will use
that free space, causing the file to not grow until free space
grows scarce.
The third way will create a second database, a pruned copy of
the original one. Since this is a new file, this one will be
smaller than the original one.
Once the database is pruned, it will stay pruned as it syncs.
That is, there is no need to use --prune-blockchain again, etc.
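The same three options, in command form (the trailing comments are annotations, not part of the commands):

  monerod --prune-blockchain       # prune in place while running/syncing
  prune_blockchain                 # same, typed in the monerod console
  monero-blockchain-prune          # write a pruned copy of the database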
|
|
avoids pointless allocs and memcpy
|
|
|
|
|
|
|
|
server
Updated assert message
Use a local variable that won't destruct at the end of the if-branch
Updated comment
|
|
It's useful info to have when investigating logs
|
|
|
|
|
|
This is where it is actually used, and initialized
|
|
|
|
bcf3f6af fuzz_tests: catch unhandled exceptions (moneromooo-monero)
3ebd05d4 miner: restore stream flags after changing them (moneromooo-monero)
a093092e levin_protocol_handler_async: do not propagate exception through dtor (moneromooo-monero)
1eebb82b net_helper: do not propagate exceptions through dtor (moneromooo-monero)
fb6a3630 miner: do not propagate exceptions through dtor (moneromooo-monero)
2e2139ff epee: do not propagate exception through dtor (moneromooo-monero)
0749a8bd db_lmdb: do not propagate exceptions in dtor (moneromooo-monero)
1b0afeeb wallet_rpc_server: exit cleanly on unhandled exceptions (moneromooo-monero)
418a9936 unit_tests: catch unhandled exceptions (moneromooo-monero)
ea7f9543 threadpool: do not propagate exceptions through the dtor (moneromooo-monero)
6e855422 gen_multisig: nice exit on unhandled exception (moneromooo-monero)
53df2deb db_lmdb: catch error in mdb_stat calls during migration (moneromooo-monero)
e67016dd blockchain_blackball: catch failure to commit db transaction (moneromooo-monero)
661439f4 mlog: don't remove old logs if we failed to rename the current file (moneromooo-monero)
5fdcda50 easylogging++: test for NULL before dereference (moneromooo-monero)
7ece1550 performance_test: fix bad last argument calling add_arg (moneromooo-monero)
a085da32 unit_tests: add check for page size > 0 before dividing (moneromooo-monero)
d8b1ec8b unit_tests: use std::shared_ptr to shut coverity up about leaks (moneromooo-monero)
02563bf4 simplewallet: top level exception catcher to print nicer messages (moneromooo-monero)
c57a65b2 blockchain_blackball: fix shift range for 32 bit archs (moneromooo-monero)
|
|
|
|
When this throws in a loop, stack trace generation can take
a significant amount of CPU
|
|
|
|
|
|
It was accepting any character for the dot (yeah, massive bug, I know)
|
|
|
|
|
|
|
|
|
|
a connection's timeout is halved for every extra connection
from the same host.
Also keep track of when we don't need to use a connection
anymore, so we can close it and free the resource for another
connection.
Also use the longer timeout for non-routable local addresses.
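A sketch of the halving rule (not the exact code):

  #include <chrono>

  // The timeout applied to a connection shrinks by half for every
  // extra connection already open from the same host.
  std::chrono::milliseconds effective_timeout(std::chrono::milliseconds base,
                                              unsigned connections_from_same_host)
  {
    for (unsigned i = 1; i < connections_from_same_host; ++i)
      base /= 2;
    return base;
  }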
|
|
This would make monerod use 100% CPU when running with torsocks
without Tor running
|
|
|
|
|
|
|
|
|
|
In file included from src/cryptonote_basic/hardfork.cpp:33:
In file included from src/blockchain_db/blockchain_db.h:42:
In file included from src/cryptonote_basic/hardfork.h:31:
contrib/epee/include/syncobj.h:37:10: fatal error: 'boost/thread/v2/thread.hpp' file not found
#include <boost/thread/v2/thread.hpp>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from src/rpc/daemon_handler.cpp:29:
In file included from src/rpc/daemon_handler.h:36:
In file included from src/p2p/net_node.h:41:
In file included from contrib/epee/include/net/levin_server_cp2.h:32:
In file included from contrib/epee/include/net/abstract_tcp_server2.h:324:
contrib/epee/include/net/abstract_tcp_server2.inl:44:10: fatal error: 'boost/thread/v2/thread.hpp' file not found
#include <boost/thread/v2/thread.hpp> // TODO
^~~~~~~~~~~~~~~~~~~~~~~~~~~~
contrib/epee/include/math_helper.h: In member function 'bool epee::math_helper::average<val, default_base>::set_base()':
contrib/epee/include/syncobj.h:227:56: error: 'sleep_for' is not a member of 'boost::this_thread'
#define CRITICAL_REGION_LOCAL(x) {boost::this_thread::sleep_for(boost::chrono::milliseconds(epee::debug::g_test_dbg_lock_sleep()));} epee::critical_region_t<decltype(x)> critical_region_var(x)
^
contrib/epee/include/syncobj.h:227:56: note: in definition of macro 'CRITICAL_REGION_LOCAL'
#define CRITICAL_REGION_LOCAL(x) {boost::this_thread::sleep_for(boost::chrono::milliseconds(epee::debug::g_test_dbg_lock_sleep()));} epee::critical_region_t<decltype(x)> critical_region_var(x)
^~~~~~~~~
contrib/epee/include/syncobj.h:227:56: note: suggested alternative: 'sleep'
#define CRITICAL_REGION_LOCAL(x) {boost::this_thread::sleep_for(boost::chrono::milliseconds(epee::debug::g_test_dbg_lock_sleep()));} epee::critical_region_t<decltype(x)> critical_region_var(x)
^
contrib/epee/include/syncobj.h:227:56: note: in definition of macro 'CRITICAL_REGION_LOCAL'
#define CRITICAL_REGION_LOCAL(x) {boost::this_thread::sleep_for(boost::chrono::milliseconds(epee::debug::g_test_dbg_lock_sleep()));} epee::critical_region_t<decltype(x)> critical_region_var(x)
^~~~~~~~~
|
|
|
|
|
|
Coverity 136394 136397 136409 136526 136529 136533 175302
|
|
It was already possible to limit outgoing connections. One might want
to do this on home network connections with high bandwidth but low
usage caps.
|
|
|
|
|
|
|
|
boost::regex is stupendously atrocious at parsing malformed data
|
|
|
|
|
|
so they show up by default
|
|
|
|
|
|
|
|
These even had the epee namespace.
This fixes some ugly circular dependencies.
|
|
|
|
|
|
This reverts commit f2939bdce8c86b0f96921f731184c361106390c8.
|
|
|
|
|
|
Fields are written with their "name" as key, and that name changed.
|
|
Deleted 3 out of 4 calls to method connection_basic::sleep_before_packet
that were erroneous / superfluous, which enabled the elimination of a
"fudge" factor of 2.1 in connection_basic::set_rate_up_limit;
also ended the multiplying of limit values and numbers of bytes
transferred by 1024 before handing them over to the global throttle
objects
|
|
|
|
|
|
It's nasty, and actually breaks on Solaris, where if.h fails to
build due to:
struct map *if_memmap;
|
|
|
|
This branch fixes a file permission issue introduced by https://github.com/monero-project/monero/commit/69c37200aa87f100f731e755bdca7a0dc6ae820a
|
|
|
|
|
|
Fixes compile error when building with OpenSSL v1.1:
contrib/epee/include/net/net_helper.h: In member function ‘void epee::net_utils::blocked_mode_client::shutdown_ssl()’:
contrib/epee/include/net/net_helper.h:579:106: error: ‘SSL_R_SHORT_READ’ was not declared in this scope
if (ec.category() == boost::asio::error::get_ssl_category() && ec.value() != ERR_PACK(ERR_LIB_SSL, 0, SSL_R_SHORT_READ))
^
contrib/epee/include/net/net_helper.h:579:106: note: suggested alternative: ‘SSL_F_SSL_READ’
See boost/asio/ssl/error.hpp.
Boost handles differences between OpenSSL versions.
cmake: fail if Boost is too old for OpenSSL v1.1
|
|
|
|
The commands handler must not be destroyed before the config
object, or we'll be accessing freed memory.
An earlier attempt at using boost::shared_ptr to control object
lifetime turned out to be very invasive, though would be a
better solution in theory.
|
|
Level 1 logs map to INFO, so setting log level to 1 should
show these. Demote some stuff to DEBUG to avoid spam, though.
|
|
- internal nullptr checks
- prevent modifications to network_address (shallow copy issues)
- automagically works with any type containing interface functions
- removed fnv1a hashing
- ipv4_network_address now flattened with no base class
|
|
CID 161879
|
|
close might end up dropping a ref, removing the
connection from m_connects, as the lock is recursive. This'd
cause an out of bounds exception and kill the idle connection
maker thread
|
|
It has virtual functions and is used as a base class
|
|
|
|
We don't actually need to keep them past the call to start, as this
adds them to the config object list, and so they'll then be cancelled
already when the stop signal arrives. This allows removing the periodic
call to cleanup connections.
|
|
A block queue is now placed between block download and
block processing. Blocks are now requested only from one
peer (unless starved).
Includes a new sync_info command.
|
|
It's got no place in the base class as it's a P2P-specific field
|
|
According to the HTTP spec: "The HEAD method is identical to GET
except that the server MUST NOT return a message-body in the
response".
|
|
|
|
Due to bad refactoring in PR #2073.
timeout_handler() doesn't work as a virtual function.
|
|
|
|
Since I had to add an ID to the derived classes anyway,
this can be used instead. This removes an apparently
pointless warning from CLANG too.
|
|
|
|
A timedsync is issued every minute on a connection, but the input
timeout is 2 minutes. This means a new sync request could be issued
while a slow sync request was already in progress. The additional
request will further clog the network on a slow connection, and
cause a premature timeout.
|
|
If we got at least MIN_BYTES_WANTED (default 512) during any network
poll, reset the timeout to allow more time for data to arrive.
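As a sketch (MIN_BYTES_WANTED is the documented default; the surrounding code is assumed):

  #include <cstddef>

  constexpr std::size_t MIN_BYTES_WANTED = 512;

  // If this returns true, the connection's deadline timer is re-armed
  // instead of letting the current timeout fire.
  bool should_extend_timeout(std::size_t bytes_received_this_poll)
  {
    return bytes_received_this_poll >= MIN_BYTES_WANTED;
  }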
|
|
All code which was using ip and port now uses a new IPv4 object,
subclass of a new network_address class. This will allow easy
addition of I2P addresses later (and also IPv6, etc).
Both old style and new style peer lists are now sent in the P2P
protocol, which is inefficient but allows peers using both
codebases to talk to each other. This will be removed in the
future. No other subclasses than IPv4 exist yet.
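An illustrative shape of that hierarchy (member names are assumptions):

  #include <arpa/inet.h>
  #include <cstdint>
  #include <string>

  // Generic base class, leaving room for I2P / IPv6 subclasses later.
  struct network_address
  {
    virtual ~network_address() = default;
    virtual std::string str() const = 0;
  };

  struct ipv4_network_address final : network_address
  {
    uint32_t m_ip;    // network byte order
    uint16_t m_port;

    std::string str() const override
    {
      const uint32_t ip = ntohl(m_ip);
      return std::to_string(ip >> 24) + "." + std::to_string((ip >> 16) & 0xff) + "."
           + std::to_string((ip >> 8) & 0xff) + "." + std::to_string(ip & 0xff)
           + ":" + std::to_string(m_port);
    }
  };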
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
- http_simple_client now uses std::chrono for timeouts
- http_simple_client accepts timeouts per connect / invoke call
- shortened names of epee http invoke functions
- invoke command functions only take relative path, connection
is not automatically performed
|
|
This replaces the epee and data_loggers logging systems with
a single one, and also adds filename:line and explicit severity
levels. Categories may be defined, and logging severity set
by category (or set of categories). epee style 0-4 log level
maps to a sensible severity configuration. Log files now also
rotate when reaching 100 MB.
To select which logs to output, use the MONERO_LOGS environment
variable, with a comma separated list of categories (globs are
supported), with their requested severity level after a colon.
If a log matches more than one such setting, the last one in
the configuration string applies. A few examples:
This one is (mostly) silent, only outputting fatal errors:
MONERO_LOGS=*:FATAL
This one is very verbose:
MONERO_LOGS=*:TRACE
This one is totally silent (logwise):
MONERO_LOGS=""
This one outputs all errors and warnings, except for the
"verify" category, which prints just fatal errors (the verify
category is used for logs about incoming transactions and
blocks, and it is expected that some/many will fail to verify,
hence we don't want the spam):
MONERO_LOGS=*:WARNING,verify:FATAL
Log levels are, in decreasing order of priority:
FATAL, ERROR, WARNING, INFO, DEBUG, TRACE
Subcategories may be added using prefixes and globs. This
example will output net.p2p logs at the TRACE level, but all
other net* logs only at INFO:
MONERO_LOGS=*:ERROR,net*:INFO,net.p2p:TRACE
Logs which are intended for the user (which Monero was using
a lot through epee, but really isn't a nice way to do things)
should use the "global" category. There are a few helper macros
for using this category, eg: MGINFO("this shows up by default")
or MGINFO_RED("this is red"), to try to keep a similar look
and feel for now.
Existing epee log macros still exist, and map to the new log
levels, but since they're used as a "user facing" UI element
as much as a logging system, they often don't map well to log
severities (ie, a log level 0 log may be an error, or may be
something we want the user to see, such as an important info).
In those cases, I tried to use the new macros. In other cases,
I left the existing macros in. When modifying logs, it is
probably best to switch to the new macros with explicit levels.
The --log-level options and set_log commands now also accept
category settings, in addition to the epee style log levels.
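For example (invocation illustrative), the same category configuration can be supplied via the environment or via --log-level:

  MONERO_LOGS='*:ERROR,net*:INFO,net.p2p:TRACE' ./monerod
  ./monerod --log-level='*:ERROR,net*:INFO,net.p2p:TRACE'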
|
|
|
|
|
|
|
|
|
|
|
|
This is more canonical, and avoids some helgrind spam
|
|
|
|
When receiving an answer packet, the command code was passed
to the callback instead of the error code. This was hiding
the "command not found" failure from the peer, and in turn
causing the code to attempt to deserialize a non-existent
reply string.
|
|
|
|
This is intended to catch traffic coming from a web browser,
so we avoid issues with a web page sending a transfer RPC to
the wallet. Requiring a particular user agent can act as a
simple password scheme, while we wait for 0MQ and proper
authentication to be merged.
|
|
Fixes the wallet being unable to connect to the daemon
when there is no NIC.
|
|
The noexcept specs were added to make GCC 6.1.1 happy (#846), but this
one was missing (because GCC did not complain about it on Linux, but
does complain on OSX).
|
|
The destructors get a noexcept(true) spec by default, but these
destructors in fact throw exceptions. An alternative fix might be to not
throw (most if not all of these throws are non-essential
error-reporting/logging).
|
|
restricted mode.
|
|
When the send queue limit is reached, it is likely to not drain
any time soon. If we call close on the connection, it will stay
alive, waiting for the queue to drain before actually closing,
and will hit that check again and again. Since the queue size
limit is the reason we're closing in the first place, we call
shutdown directly.
|
|
If we reach the send queue size limit, we need to release the lock,
or we will deadlock and it will never drain.
If we reach that limit, it's likely there's another problem in the
first place though, so it will probably not drain in practice either,
unless there is some kind of transient network timeout.
|
|
|
|
Ain't nobody got time for link/cmake skullduggery.
This reverts commit fff238ec94ac6d45fc18c315d7bc590ddfaad63d.
|
|
Also close sockets on failure, just in case
|
|
Useful for debugging users' logs
|
|
|
|
and all other associated IPC
|
|
Use boost::unordered_map instead.
|
|
|
|
Instead of using boost::uuids::generate_random, which uses
uninitialized stuff *on purpose*, just to annoy people who
use valgrind
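One possible way to do that (a sketch, not necessarily what this change uses) is to fill the UUID bytes from an explicitly seeded engine:

  #include <algorithm>
  #include <boost/uuid/uuid.hpp>
  #include <cstddef>
  #include <cstdint>
  #include <random>

  boost::uuids::uuid make_connection_id()
  {
    static std::mt19937_64 engine{std::random_device{}()};
    std::uint8_t bytes[16];
    for (std::size_t i = 0; i < sizeof(bytes); i += 8)
    {
      const std::uint64_t v = engine();
      for (std::size_t j = 0; j < 8; ++j)
        bytes[i + j] = static_cast<std::uint8_t>(v >> (8 * j));
    }
    boost::uuids::uuid id{};
    std::copy(bytes, bytes + sizeof(bytes), id.begin());
    return id;
  }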
|
|
|
|
|
|
|
|
Since connections from the ::connect method are now kept in
a deque to be able to cancel them on exit, this leaks both
memory and a file descriptor. Here, we clean those up after
30 seconds, to avoid this. 30 seconds is higher than the
5 second timeout used in the async code, so this should be
safe. However, this is an assumption which would break if
that async code was to start relying on longer timeouts.
|
|
When the boost ioservice is stopped, pending work notifications
will not happen. This includes deadline timers, which would
otherwise time out the now cancelled I/O operations. When this
happens just after starting a new connect operation, this can
leave that operation in a state where it won't receive either
the completion notification or a timeout, causing a hang.
This is fixed by keeping a list of connections corresponding
to the connect operations, and cancelling them before stopping
the boost ioservice.
Note that the list of these connections can grow unbounded, as
they're never cleaned up. Cleaning them up would involve
working out which connections do not have any pending work,
and it's not quite clear yet how to go about this.
|
|
It does not expose the RPC for commands like start_mining, etc
(ie, commands a public node operator might want to be restricted)
|
|
With minor cleanup and fixes (spelling, indent) by moneromooo
|
|
The old unbound #warning does not block compilation.
Unit tests build fine, even though the RPC/P2P network type is required again.
|
|
works for unit tests build, too
|
|
|
|
|
|
Daemon interactive mode is now working again.
RPC mapped calls in daemon and wallet have both had connection_context
removed as an argument as that argument was not being used anywhere.
|
|
new update of the pr with network limits
more debug options:
discarding downloaded blocks all or after given height.
trying to trigger the locking errors.
debug levels polished/tuned to sane values.
debug/logging improved.
warning: this pr should be correct code, but it could make
an existing (in master version) locking error appear more often.
it's a race on the list (map) of peers, e.g. between closing/deleting
them versus working on them in net-limit sleep in sending chunk.
the bug is not in this code/this pr, but in the master version.
the locking problem of master will be fixed in other pr.
the problem is UB, and in practice it seems to usually cause program abort
(tested on debian stable with updated gcc). see --help for the option
to add a sleep to trigger the error faster.
|