Age | Commit message | Author | Files | Lines |
|
|
|
This fixes coretests, which does not register daemon specific arguments,
but uses core, which uses those arguments. Also gets rid of an unwanted
dependency on daemon code from core.
|
|
It will not try to connect to the monero network, nor listen
|
|
|
|
This is a precaution for older Berkeley DB versions.
- smooth reports an issue running with 4.7:
DB_ENV->log_set_config: DB_LOG_IN_MEMORY: method not permitted
after handle's open method
- this works just fine with 5.3
- we do not use DB_LOG_IN_MEMORY, but we use DB_LOG_AUTO_REMOVE
- libdb docs say some flags must be set before open, and some
may be set at any time, but never say some must be set after
open
- moving the call to log_set_config before open works with 5.3
Therefore, it seems best to move the call before open.
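
As a rough sketch of that ordering (Berkeley DB C++ API; the function, flags and error handling below are illustrative, not the actual BlockchainBDB code), the auto-remove configuration is applied to the environment before open():

    #include <db_cxx.h>

    // Illustrative only: configure log auto-removal before opening the
    // environment, which both 4.7 and 5.3 accept. Error handling omitted.
    void open_env(DbEnv &env, const char *path)
    {
        env.log_set_config(DB_LOG_AUTO_REMOVE, 1); // before open
        env.open(path, DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOG | DB_INIT_TXN, 0);
    }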
|
|
|
|
Clear any partially loaded data, and start with a default config
|
|
Early DB versions did not store key images for inputs if the
transaction spending them had no outputs (ie, all fee). This
is not correct, as this would allow these outputs to be double
spent. This was fixed in 533acc30eda7792c802ea8b6417917fa99b8bc2b
a few months ago, but databases having synced blocks 2021612 and
685498 with a faulty version will be missing those key images
in the spent keys database. This code checks for this, and adds
those key images if they are missing.
|
|
instead of a command line setting. It makes sense that it is
a long lived setting.
|
|
|
|
|
|
|
|
Berkeley DB uses 1 based indices for RECNO databases, and the
implementation of BlockchainDB for Berkeley DB assumes 1 based
indices are passed to the API, whereas the LMDB one assumes
0 based indices. This is all internally consistent, but since
the BDB code stores 1 based indices in the database, external
users have to be aware of this, as the indices will be off by
one depending on which DB is used.
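
For illustration only (not actual project code), an external reader of BDB-stored data would account for the offset roughly like this:

    #include <cstdint>

    // BDB RECNO keys are 32-bit and 1 based, so a 0-based block height maps
    // to height + 1; data stored by the LMDB code uses the height directly.
    uint32_t bdb_recno_for_height(uint64_t height)
    {
        return static_cast<uint32_t>(height + 1);
    }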
|
|
Keys in Berkeley DB are 32 bits. We don't want to read random
bits in the high part.
|
|
|
|
This reverts commit c6bf73131aaf804cb17f24c856f826be2761a566, reversing
changes made to 8a52cf4055d247dd4b162985c931e99683992e3c.
|
|
^C while in manual refresh will cancel the refresh, since that's
often an annoying thing to have to wait for. Also, a manual refresh
command will interrupt any running background refresh and take
over, rather than wait for the background refresh to be done, and
appear to hang.
|
|
|
|
Green is now used for incoming transfers, and magenta for outgoing
transfers. This is consistent with the scheme used by other logging.
|
|
|
|
|
|
The daemon will be polled every 90 seconds for new blocks.
It is enabled by default, and can be turned on/off with
set auto-refresh 1 and set auto-refresh 0 in the wallet.
|
|
This allows them to be saved as a fixed (one byte) chunk whatever
the value. Using a varint will use two bytes as the high bit gets
set.
This is backward compatible with current usage (0-2 values).
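
A generic LEB128-style varint writer (illustrative, not the project's serializer) shows why any value with the high bit set costs a second byte, while a fixed one-byte field never does:

    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> write_varint(uint64_t v)
    {
        std::vector<uint8_t> out;
        while (v >= 0x80)
        {
            out.push_back(static_cast<uint8_t>((v & 0x7f) | 0x80)); // continuation bit set
            v >>= 7;
        }
        out.push_back(static_cast<uint8_t>(v));
        return out;
    }

    // write_varint(2).size()    == 1 -- current values (0-2) fit in one byte
    // write_varint(0xf0).size() == 2 -- the high bit forces a second byte
    // A fixed one-byte field stores 0-255 in exactly one byte regardless.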
|
|
It does not expose the RPC for commands like start_mining, etc
(ie, commands a public node operator might want restricted)
|
|
This required locking around the use of m_http_client, to avoid collisions
in I/O.
|
|
|
|
|
|
This fixes the hash rate being wrong on testnet after the switch
to 2 minute blocks
|
|
|
|
m_blocked_ips now stores the unblocking time, rather than the
blocking time.
Also change > to >=, since banning for 0 seconds should not ban
|
|
|
|
Also add some more tests, and rename some instances of
"version" and "add" for clarity.
NOTE: the starting height values are sometimes wrong.
I suspect this is due to the hard fork reorg code being
buggy, since they're good when syncing after the fact.
However, they're not actually used by the consensus code,
so I'm ignoring this for now, but this needs debugging.
|
|
|
|
With minor cleanup and fixes (spelling, indent) by moneromooo
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Take the opportunity to add a no-coinbase case too, for even faster
sync when an address is known to never have been mined to.
|
|
Assume the whole of a coinbase goes to the same address (so that
if the first output isn't for us, none of it is), and only look
for payment id when we received something in the transaction.
|
|
- use std::vector / std::deque to not leak when exceptions happen
- use std::unique_ptr instead of the deprecated std::auto_ptr
|
|
Use the NoodleDoodle threading technique to speed up a couple
code blocks on the main path when refreshing blocks without
any transactions for us.
|
|
|
|
The info is stored encrypted, and is pretty useful, often after
the fact.
|
|
With backward compatibility
|
|
More information is now saved and displayed
|
|
|
|
|
|
|
|
|
|
Height seemed to be flying all over the place on a rescan here.
Logging to a file shows the heights are actually correct, and
this is some kind of screen refresh artifact. Flush after \r
and update less often to reduce this effect a lot.
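
A minimal sketch of that kind of display update (the interval and names are assumptions, not the actual simplewallet code):

    #include <cstdint>
    #include <iostream>

    void show_refresh_height(uint64_t height)
    {
        static uint64_t last_shown = 0;
        if (height < last_shown + 100) // hypothetical throttle interval
            return;
        last_shown = height;
        std::cout << "\rHeight " << height << std::flush; // flush right after \r
    }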
|
|
There are various locale related bugs in various versions of boost,
where exceptions are thrown in boost::filesystem APIs when the
current locale is not to boost's liking. It's not clear what "not
to boost's liking" means in detail, though "en" and "en_US.UTF-8"
are not to its liking.
Fix it by running a test function that's known to throw in such
a case, and resetting LANG and LC_ALL to C if an exception is
thrown. In simplewallet, the locale is queried before that so the
correct translations will still be used.
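
A minimal sketch of the workaround, assuming a hypothetical sanitize_locale() helper and a filesystem call that throws under an affected locale (the exact call and names are assumptions):

    #include <cstdlib>
    #include <string>
    #include <boost/filesystem/path.hpp>

    void sanitize_locale()
    {
        try
        {
            // Exercise an operation known to throw under a problematic locale.
            boost::filesystem::path p(std::string("test"));
            p /= "test";
        }
        catch (const std::exception &)
        {
            setenv("LANG", "C", 1);   // POSIX setenv; reset to a safe locale
            setenv("LC_ALL", "C", 1);
        }
    }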
|
|
If empty, it will still be fetched from the environment
|
|
The last relayed time of a transaction is maintained, and
transactions will be relayed again if they are still in the
pool after a certain amount of time, which increases with
the transaction's age. All such transactions are resent,
whether or not they originated on the local node.
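
An illustrative sketch of such a backoff (the base gap and growth rule are assumptions, not the actual tx pool code):

    #include <ctime>

    bool should_relay_again(time_t now, time_t received, time_t last_relayed)
    {
        const time_t age = now - received;
        const time_t min_gap = 300 + age / 4; // hypothetical: 5 min base, growing with age
        return now - last_relayed >= min_gap;
    }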
|
|
It's a user friendly display of incoming and outgoing transfers,
listed by height, within an optional height range.
|
|
It looks like some of the indices passed to the DB access functions
are already bumped by 1. Moreover, the existing code was not
throwing DB errors with 0 keys, and this is unlikely if it really
was using 0 keys. Last, this patch broke sync from scratch in at
least one case. So I'm calling it bad and reverting it.
This reverts commit bfc97401ae81bb30278a318de7f048c653bf6582.
|
|
Use the correct block time for realtime fuzz on locktime
Use the correct block time to calculate next_difficulty on alt chains (will not work as-is with voting)
Lock unit tests to original block time for now
|
|
version 2
|
|
It returns the ideal version for a given height, which is
based on the minimum height for a fork, disregarding votes
|
|
And setup the first fork to not vote
|
|
|
|
The default value for the default mixin is 4. It can now be changed per wallet.
|
|
|
|
They check whether they're running on testnet by accessing the
m_rpc_server object, which does not exist when in RPC mode.
Also, fix hard_fork_info being called with the wrong API.
|
|
Replace boolean values and exceptions where appropriate
|
|
git history's here if needed to get any of this back
|
|
|
|
|
|
|
|
|
|
|
|
|
|
I had never tested it, obviously
|
|
The method names for the "json_rpc" commands are plain names, not
parts of URLs.
|
|
Displays current block height and target, net hash, hard fork
basic info, and connections.
Useful as a basic user friendly "what's going on here" command.
|
|
The wallet and the daemon applied different height considerations
when selecting outputs to use. This can leak information on which
input in a ring signature is the real one.
Found and originally fixed by smooth on Aeon.
|
|
It dumps data from the blockchain to a JSON format, and is
intended to help detect differences between data held in
different database formats.
|
|
|
|
|
|
|
|
|
|
Using major version would cause older daemons to reject those
blocks as they fail to deserialize blocks with a major version
which is not 1. There is no such restriction on the minor
version, so switching allows older daemons to coexist with
newer ones till the actual fork date, when most will hopefully
have updated already.
Also, for the same reason, we consider a vote for 0 to be a
vote for 1, since older daemons set minor version to 0.
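
The vote-counting rule amounts to something like this (illustrative only):

    #include <cstdint>

    // Older daemons leave minor_version at 0, so a 0 vote counts as a vote for 1.
    uint8_t effective_vote(uint8_t minor_version)
    {
        return minor_version == 0 ? 1 : minor_version;
    }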
|
|
Also make the number of blocks endian independent, and add
support for testnet
|
|
This will happen if the chosen output file does not have a
path specified
|
|
It speeds up a lot, which can be significant when reorganizing
from the genesis block to create the hard fork data.
|
|
There is no need to fully recalculate and rewrite state, just
refill state from the DB.
|
|
|
|
It allows one to check the amount of monero sent to a particular
address in a particular transaction, given that transaction's tx key
|
|
|
|
|
|
|
|
because it leaks your standard address
|
|
It allows enabling the rescan_spent command only when using a
trusted daemon
|
|
As recommended in MRL-0004
|
|
The wallet decomposes fully as of now too.
|
|
The small leftover is carried forward
|
|
|
|
upnpDiscover() takes a new argument for TTL.
Use the suggested default of 2.
|
|
|
|
Berkeley DB requires RECNO keys to be 32 bits, and forbids a key
value of 0.
|
|
An unsigned quantity is always >= 0
|
|
|
|
|
|
Add a block height before which version 1 is assumed
Use DB transactions
|
|
|
|
So they can avoid dust if they so wish
|
|
This allows knowing the hard fork a block must obey in order to be
added to the blockchain. The previous semantics would use that new
block's version vote to determine this hard fork, which made it
impossible to use the rules to validate transactions entering the
tx pool (and made it impossible to validate a block before adding
it to the blockchain).
|
|
|
|
Braino.
|
|
There will be a delay on first load of an existing blockchain
as it gets reparsed for this state data.
|
|
I coded the whole thing from scratch.
|
|
|
|
Since the state isn't actually saved anywhere, as the archive
code isn't called in the new DB version.
|
|
|
|
|
|
|
|
This keeps track of voting via block version, in order to decide
when to enable a particular fork's code.
|
|
|
|
|
|
|
|
|
|
|
|
This avoids races which could result in two objects being created
|
|
This ensures one can't instantiate a DNSResolver object by
mistake, and uses the singleton instead. A separate static create
function is added for cases where a new object is explicitly
needed.
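
A hedged sketch of the pattern (member and return types are assumptions, not the actual DNSResolver code):

    class DNSResolver
    {
    public:
        static DNSResolver &instance()
        {
            static DNSResolver staticInstance; // single shared instance
            return staticInstance;
        }

        static DNSResolver create() // for the rare case a fresh object is wanted
        {
            return DNSResolver();
        }

    private:
        DNSResolver() = default; // cannot be instantiated by mistake
    };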
|
|
(disabling it was unintentional)
|
|
|
|
crypto::rand is now used for output selection
|
|
To enable storing tx keys in the (now encrypted) wallet cache.
|
|
To get the tx keys returned via RPC, set the "get_tx_key" or
"get_tx_keys" request field to true (defaults to false).
|
|
crypto-ops with a version straight from Bernstein's ref 10
|
|
|
|
have bitmonero's crypto code come from bernstein et al's ref 10 code
|
|
|
|
|
|
It contains private data, such as a record of transactions.
The key is derived from the view and spend secret keys.
The encryption currently is one shot, so may require a lot of
memory for large wallet caches.
|
|
|
|
They are also stored in the cache file, to be retrieved using
a new get_tx_key command.
|
|
Default to "simplewallet.log" in current directory when file path isn't
obtained from epee.
In this situation previously, it defaulted to the file name of ".log"
("" + ".log") in the current directory.
(Thanks to @sammy007 for reporting bug.)
An earlier version yet used "" + "/" + ".log" = "/.log", which resulted
in silently not logging in most cases, due to lack of permission.
Test:
PATH=$PATH:</path/to/simplewallet/folder> && simplewallet --wallet-file /dev/null
This results in epee not finding the executable's file path, so
simplewallet will now use a default log filename.
|
|
And I'd like a comment from tewinget or someone else
|
|
Block addition can fail, and the old code would not update the
cumulative size in that case.
|
|
|
|
This is an unintended difference from the old code. Though I don't
think it can actually happen in practice with the current take_tx
implementation.
|
|
The height function apparently used to return the index of
the last block, rather than the height of the chain. This now
seems to be incorrect, judging by the code, so we remove the
now wrong comment, as well as a couple +/- 1 adjustments
which now cause the median calculation to differ from the
original blockchain_storage version.
|
|
Each thread can use the same resolver.
|
|
This option specifies the input file path for importing.
The default remains <data-dir>/export/blockchain.raw
|
|
|
|
This option will export to the specified file path.
The default file path remains <data-dir>/export/blockchain.raw
|
|
|
|
vector<bool> causes issues in serialization with Boost 1.56
|
|
|
|
This obsoletes the need for a lengthy blockchain rescan when
a transaction doesn't end up in the chain after being accepted
by the daemon, or any other reason why the wallet's idea of
spent and unspent outputs gets out of sync from the blockchain's.
|
|
|
|
The original code removed key images from a tx from the blockchain
when an input that was neither to-key nor gen was found in that tx. Additionally,
the remainder of the tx data was added to the blockchain only after
the double spend check passed.
|
|
|
|
|
|
|
|
It has to stay constant as we get the blockchain lock for the
entire function. Avoids some unnecessary DB accesses.
|
|
It was only used by the older blockchain_storage.
We also move the code to the calling blockchain level, to avoid
replicating the code in every DB implementation. This also makes
the get_random_out method obsolete, and we delete it.
|
|
The string conversion already adds them
|
|
in addition to the raw hex representation
|
|
It was not even trying to
|
|
Pros:
- smaller on the blockchain
- shorter integrated addresses
Cons:
- less sparseness
- less ability to embed actual information
The boolean argument to encrypt payment ids is now gone from the
RPC calls, since the decision is made based on the length of the
payment id passed.
|
|
A payment ID may be encrypted using the tx secret key and the
receiver's public view key. The receiver can decrypt it with
the tx public key and the receiver's secret view key.
Using integrated addresses now causes the payment IDs to be
encrypted. Payment IDs used manually are not encrypted by default,
but can be encrypted using the new 'encrypt_payment_id' field
in the transfer and transfer_split RPC calls. It is not possible
to use an encrypted payment ID by specifying a manual simplewallet
transfer/transfer_new command, though this is just a limitation
due to input parsing.
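
The symmetric part of the scheme reduces to XORing the 8-byte payment ID with a keystream both sides can derive from their shared secret; a sketch with hypothetical helper names (not Monero's actual crypto API):

    #include <cstddef>
    #include <cstdint>

    // Sender derives the keystream from (tx secret key, receiver's public view
    // key); receiver derives the same from (receiver's secret view key, tx
    // public key). XOR is self-inverse, so this both encrypts and decrypts.
    void xor_payment_id(uint8_t payment_id[8], const uint8_t keystream[8])
    {
        for (size_t i = 0; i < 8; ++i)
            payment_id[i] ^= keystream[i];
    }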
|
|
If there's no blocks in database (m_height == 0):
Don't assign incorrect block range to check.
Skip average block size check.
Test:
Run blockchain_converter with an existing source blockchain.bin and
a non-existent LMDB destination database.
The converter creates a BlockchainLMDB instance with zero height, due to
not being initialized with a genesis block, normally done by
Blockchain::init(). While different from the behavior of bitmonerod,
blockchain_import, and blockchain_export, the initialization hasn't been
strictly necessary.
The db batch size estimation normally uses an average block size, or a
default minimum block size, whichever is greater. In this case, as
there's no existing blocks to check for an average block size, the
default should be used.
|
|
|
|
haven't used git to submit a PR, so we're creating the wordlist on their behalf
|
|
|
|
I ran some tests, and all prefixes seem to be unique for len(3)
|
|
|
|
It should avoid a lot of the issues seen when sending more than
half the wallet's contents, due to change.
Actual output selection is still random. Changing this would
improve the matching of transaction amounts to output sizes,
but may have non obvious effects on blockchain analysis.
Mapped to the new transfer_new command in simplewallet, and
transfer uses the existing algorithm.
To use in RPC, add "new_algorithm": true in the transfer_split
JSON command. It is not used in the transfer command.
|
|
|
|
|
|
|
|
boost doesn't support %zu for size_t, and the previous change
to %u could technically lose bits (though it would require splitting
a transfer into 4 billion transactions, which seems unlikely).
|
|
This can be useful if you want to be given a veto over the tx fee,
or if you want to see what a tx fee would be without actually sending.
|
|
|
|
These are mainnet blocks, and would cause syncing on testnet to
reject all incoming blocks.
|
|
*Thanks to freshman for reporting bug.
|
|
This allows blockchain_import to work with batch and verify modes enabled
(the default).
|
|
Added option to cache tx-input verification results.
|
|
Fixed OSX compilation issues due to random lmdb resize points.
Fixed infinite loop bug when calling core::get_block_template(..).
|
|
|
|
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
2. Optim: Cache verified txs and return result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk querying database and multi-thread when possible.
4. Optim: Disable double spend check on block verification, double spend is already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on called functions from check_tx_inputs which causes slowdowns (only seems to happen on ubuntu/VMs??? Reason: TBD)
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads are needed when a new block arrives (instead of 1470 reads).
Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
8. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.
LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect)
2. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
3. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto resize to +1GB instead of multiplier x1.5
ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.
[PENDING ISSUES]
1. Berkeley db has a very slow "pop-block" operation. This is very noticeable on the RPI2 as it sometimes takes > 10 MINUTES to pop a block during reorganization.
This does not happen very often however, most reorgs seem to take a few seconds but it possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.
[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
a. 0 = Compute long hash per block (may take a while depending on CPU)
b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but safest option to protect against power-out/crash conditions.
b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery operated devices or STABLE systems with UPS and/or systems with battery backed write cache/solid state cache.
Fast - Write meta-data but defer data flush.
Fastest - Defer meta-data and data flush.
Sync - Flush data after nblocks_per_sync and wait.
Async - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
For berkeley-db only. Auto remove logs if enabled.
**Note: lmdb and berkeley-db have changes to the tables and are not compatible with official git head version.
At the moment, you need a full resync to use this optimized version.
[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K+SSD+(with full pow computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with memory can process blocks up to 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time and usually takes 36 hours to sync the full chain.
Average processing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437fd41a5498ee5e74e2422ea6177aa3e)
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block
**Note: The following data denotes processing times only (does not include p2p download time)
lmdb-optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, Dual-core / 4-threads U4200 (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).
lmdb-optimized processing times (with per-block-checkpoint)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with full pow computation)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2. Improved from estimated 3 months(???) into 2.5 days (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with per-block-checkpoint)
1. RPI2. 12-15 hours (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
|
|
Fix compilation error
|
|
- bugfix: prevent re-entering db->get when current buffer contains all possible index values.
|
|
|
|
|
|
|
|
|
|
The system is mostly the Qt system, but we don't use Qt to avoid
the dependencies.
See README.i18n for details.
|
|
This currently only affects blockchain_import and blockchain_converter.
When the number of blocks expected for the batch transaction is
provided, make an estimate of the DB space needed. If not enough free
space remains, resize the DB.
The estimate is made based on:
- the average size of the last 500 blocks, or if larger, a min. block
size of 4k
- a factor for the expanded size a block occupies in the DB across the
sub-dbs/tables
- a safety factor (1.7) to allow for a "reasonable" average block size
increase over the batch
Increase the DB size by whichever is greater: the estimated size needed
or a minimum increase size, currently 128 MB.
The conservative factors in the estimate help in testing that the resize
occurs when needed, and without gratuitous size increases. For common
use, the safety factor and minimum increase size could reasonably be
increased.
For testing, setting DEFAULT_MAPSIZE (blockchain_db/lmdb/db_lmdb.h) to 1
<< 27 (128 MB) and recompiling will ensure DB resizes take place sooner
and more frequently.
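
A rough sketch of the estimate (the 4k floor, 1.7 safety factor and 128 MB minimum come from the description above; the sub-db expansion factor and names are assumptions):

    #include <algorithm>
    #include <cstdint>

    uint64_t estimate_batch_increase(uint64_t num_blocks, uint64_t avg_block_size_last_500)
    {
        const uint64_t min_block_size = 4 * 1024;            // 4k floor per block
        const double   db_expansion_factor = 4.0;            // hypothetical sub-db factor
        const double   safety_factor = 1.7;                  // from the commit message
        const uint64_t min_increase = 128ull * 1024 * 1024;  // 128 MB minimum growth

        const uint64_t block_size = std::max(avg_block_size_last_500, min_block_size);
        const uint64_t estimate = static_cast<uint64_t>(
            num_blocks * block_size * db_expansion_factor * safety_factor);
        return std::max(estimate, min_increase);
    }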
|
|
|
|
|
|
This will assist in a DB resize check.
|
|
to lock up.
|
|
|
|
|
|
|