Age | Commit message | Author | Files | Lines |
|
|
|
The contributor hasn't used git to submit a PR, so we're creating the wordlist on their behalf
|
|
|
|
I ran some tests, and all prefixes seem to be unique for len(3)
|
|
|
|
It should avoid many of the issues with sending more than half the
wallet's contents due to change.
Actual output selection is still random. Changing this would
improve the matching of transaction amounts to output sizes,
but may have non-obvious effects on blockchain analysis.
Mapped to the new transfer_new command in simplewallet;
transfer uses the existing algorithm.
To use it in RPC, add "new_algorithm: true" to the transfer_split
JSON command. It is not used in the transfer command.
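A minimal sketch of such a transfer_split request body; apart from "new_algorithm" itself, every field here (destinations, amount, address, mixin) is an assumed example of the usual wallet JSON-RPC shape, not something specified by this commit:

{
  "jsonrpc": "2.0",
  "id": "0",
  "method": "transfer_split",
  "params": {
    "destinations": [{"amount": 1000000000000, "address": "4..."}],
    "mixin": 3,
    "new_algorithm": true
  }
}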
|
|
|
|
|
|
|
|
boost doesn't support %zu for size_t, and the previous change
to %u could technically lose bits (though it would require splitting
a transfer into 4 billion transactions, which seems unlikely).
|
|
This can be useful if you want to be given a veto over the tx fee,
or if you want to see what a tx fee would be without actually sending.
|
|
|
|
These are mainnet blocks, and would cause syncing on testnet to
reject all incoming blocks.
|
|
*Thanks to freshman for reporting the bug.
|
|
This allows blockchain_import to work with batch and verify modes enabled
(the default).
|
|
Added option to cache tx-input verification results.
|
|
Fixed OS X compilation issues due to random LMDB resize points.
Fixed an infinite loop bug when calling core::get_block_template(..).
|
|
|
|
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
2. Optim: Cache verified txs and return the result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk querying the database, and multi-thread when possible.
4. Optim: Disable the double-spend check on block verification; double spends are already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on functions called from check_tx_inputs, which causes slowdowns (only seems to happen on Ubuntu/VMs; reason TBD).
7. Optim: Removed looped full-tx hash computation when retrieving transactions from the pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads are needed when a new block arrives (instead of 1470 reads).
Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc.).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: "Insufficient locks" error when running a full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from the "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified the output_keys table to store public_key+unlock_time+height for a single lookup per transaction (vs 3).
8. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see the --db-sync-mode option for details).
11. Mod: Added a checkpoint thread and an auto-remove-logs option.
12. *Now usable on 32-bit systems like the RPI2.
LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up; TBD: measure the actual effect).
2. Optim: Modified the output_keys table to store public_key+unlock_time+height for a single lookup per transaction (vs 3).
3. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
4. Optim: Added support for sync/writemap options for improved performance (*see the --db-sync-mode option for details).
5. Mod: Auto-resize by +1GB increments instead of a x1.5 multiplier.
ETC:
1. Minor optimizations to slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing the next difficulty on large blocks.
[PENDING ISSUES]
1. Berkeley DB has a very slow "pop-block" operation. This is very noticeable on the RPI2, as it sometimes takes > 10 MINUTES to pop a block during reorganization.
This does not happen very often, however; most reorgs seem to take a few seconds, but it possibly depends on the number of outputs present. TBD.
2. Berkeley DB: possible bug, "unable to allocate memory". TBD.
[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
a. 0 = Compute long hash per block (may take a while depending on CPU)
b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but the safest option to protect against power-out/crash conditions.
b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery-operated devices, or STABLE systems with a UPS and/or battery-backed write cache/solid-state cache.
Fast - Write meta-data but defer the data flush.
Fastest - Defer both meta-data and data flush.
Sync - Flush data after nblocks_per_sync and wait.
Async - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
For berkeley-db only. Auto remove logs if enabled.
**Note: lmdb and berkeley-db have changes to the tables and are not compatible with the official git HEAD version.
At the moment, you need a full resync to use this optimized version (see the usage sketch below).
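A hypothetical invocation combining the new options above (the flag names and defaults come from this changelog; the daemon binary name and chosen values are illustrative only):

$ bitmonerod --db-sync-mode=fastest:async:1000 --fast-block-sync=1 \
    --prep-blocks-threads=4 --show-time-stats=1 --db-auto-remove-logs=1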
[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K+SSD (with full pow computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours plus download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with the in-memory db can process blocks up to 585K in ~4.2 hours plus download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours plus download time, and usually takes 36 hours to sync the full chain.
Average processing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437fd41a5498ee5e74e2422ea6177aa3e)
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block
**Note: The following data denotes processing times only (does not include p2p download time)
lmdb-optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, Dual-core / 4-threads U4200 (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).
lmdb-optimized processing times (with per-block-checkpoint)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with full pow computation)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2 - improved from an estimated 3 months (???) to 2.5 days (*needs a 2A supply + 1GHz clock + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with per-block-checkpoint)
1. RPI2 - 12-15 hours (*needs a 2A supply + 1GHz clock + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
|
|
Fix compilation error
|
|
- bugfix: prevent re-entering db->get when current buffer contains all possible index values.
|
|
|
|
|
|
|
|
|
|
The system is mostly the Qt system, but we don't use Qt to avoid
the dependencies.
See README.i18n for details.
|
|
This currently only affects blockchain_import and blockchain_converter.
When the number of blocks expected for the batch transaction is
provided, make an estimate of the DB space needed. If not enough free
space remains, resize the DB.
The estimate is made based on:
- the average size of the last 500 blocks, or if larger, a min. block
size of 4k
- a factor for the expanded size a block occupies in the DB across the
sub-dbs/tables
- a safety factor (1.7) to allow for a "reasonable" average block size
increase over the batch
Increase the DB size by whichever is greater: the estimated size needed
or a minimum increase size, currently 128 MB.
The conservative factors in the estimate help in testing that the resize
occurs when needed, and without gratuitous size increases. For common
use, the safety factor and minimum increase size could reasonably be
increased.
For testing, setting DEFAULT_MAPSIZE (blockchain_db/lmdb/db_lmdb.h) to 1
<< 27 (128 MB) and recompiling will ensure DB resizes take place sooner
and more frequently.
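A minimal C++ sketch of that estimate; the function and parameter names are hypothetical, and the expansion factor value is an assumption (only the 500-block average, the 4k floor, the 1.7 safety factor, and the 128 MB minimum come from this commit):

#include <algorithm>  // std::max
#include <cstdint>

uint64_t estimate_resize_needed(uint64_t batch_num_blocks, uint64_t avg_block_size_last_500)
{
  const uint64_t min_block_size   = 4 * 1024;             // 4k floor per block
  const double   db_expand_factor = 4.5;                  // space a block occupies across sub-dbs (assumed value)
  const double   safety_factor    = 1.7;                  // allow for average block size growth over the batch
  const uint64_t min_increase     = 128ULL * 1024 * 1024; // 128 MB minimum increase

  const uint64_t block_size = std::max(avg_block_size_last_500, min_block_size);
  const uint64_t estimate   = static_cast<uint64_t>(
      batch_num_blocks * block_size * db_expand_factor * safety_factor);
  return std::max(estimate, min_increase);  // grow by whichever is greater
}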
|
|
|
|
|
|
This will assist in a DB resize check.
|
|
to lock up.
|
|
|
|
|
|
|
|
|
|
|
|
The needed information is supplied via a triple:
--generate-from-view-key address:viewkey:filename
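For example (a hypothetical invocation; the binary name and placeholder values are assumptions):

$ simplewallet --generate-from-view-key 4ADDRESS...:VIEWKEY...:watch-wallet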
|
|
|
|
|
|
They do not make sense
|
|
|
|
This ensures even massive wallets full of dust can sweep.
|
|
|
|
It's supposed to load all records and pick one that it finds twice.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
OK, I admit I wanted to template this struct for fun too.
|
|
|
|
It uses the async console handler differently than simplewallet,
and wasn't running the same exit code, causing it to never actually
exit after breaking out of the console entry loop.
|
|
The new save_watch_only saves a copy of the keys file without the
spend key. It can then be given away to be used as a normal keys
file, but with no spend ability.
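A sketch of intended use at the wallet prompt (the prompt format and wallet name are hypothetical; only the save_watch_only command comes from this commit):

[wallet 49fXyz]: save_watch_only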
|
|
|
|
|
|
|
|
Sends all the dust to your own wallet. May fail (if the fee required
is more than the dust total). May end up paying most of the dust in fees.
Unlocked dust total is now also displayed in "balance".
|
|
This was likely the intent.
|
|
On an existing database, don't set LMDB map size to be the initial size
for a new database.
Check if resize is needed at startup.
|
|
|
|
|
|
|
|
It's helpful when you don't otherwise know something failed (especially as
everything ends up returning true, so the caller thinks all is fine).
|
|
|
|
|
|
Looking at how these are called confirms this must have been a mistake
|
|
The log file previously used the default data dir even if --data-dir was
set to something else.
Document data dir and log file path.
|
|
Replace --set_log with --log-level for consistency.
Show default log level in usage.
Add --log-file for specifying log file path.
Document log file path.
Display log file path at startup.
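For instance (hypothetical values; only the flag names come from this commit):

$ bitmonerod --log-level 1 --log-file /var/log/bitmonero.log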
|
|
As with display of seed, don't log view key and spend key.
Includes:
- display of viewkey at wallet creation
- "viewkey" command output
- "spendkey" command output
|
|
|
|
because const is always appropriate
|
|
Add fix for compile error with multiple uses of peerid_type (uint64_t)
variable in lambda expression.
- known GCC issue: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65843
epee: replace return value of nullptr for expected boolean with false.
Fixes #231.
|
|
The random selection of a node shouldn't favor repeats that occur in the
hardcoded and DNS seed node lists.
Remove hardcoded ":18080" address which gives parse error.
Test: bitmonerod --log-level 2
The seed node list displayed at startup shouldn't show duplicate
addresses (includes port).
|
|
|
|
Based on tewinget's update.
Make OpenAlias address format independent of existing DNS functions.
Add tests.
Test:
make debug-test
cd build/debug/tests/unit_tests
# test that regular DNS functions work, including IPv4 lookups.
# also test function that converts OpenAlias address format
make && ./unit_tests --gtest_filter=DNSResolver*
# test that OpenAlias addresses like donate@getmonero.org work from
# wallet tools
make && ./unit_tests --gtest_filter=AddressFromURL.Success
|
|
This reverts commit b18368b635ba08aea541ef52ebc74180822644a2.
|
|
(this was 10 for testing purposes)
|
|
|
|
|
|
It's more user-friendly than an error message saying the command
does not exist.
|
|
Add public method blockchain_storage::debug_pop_block_from_blockchain()
Ensure blockchain_import calls destructors before exit.
To test:
DATABASE=memory make release
// create blockchain.bin from blockchain.raw if needed
build/release/bin/blockchain_import --block-stop 1000
// try popping a single block
build/release/bin/blockchain_import --pop-blocks 1
|
|
Some filesystems (*cough* NTFS *cough*) aren't good with sparse files,
so this makes LMDB dynamically resize its mapsize as needed. Note: the
check interval is currently every 10 blocks (for testing) and will
probably need to change to 1000 or something. Default mapsize set to
1GiB.
Blockchain conversion tools using batching will probably segfault, I'll
fix that in the next commit.
|
|
Have blockchain_export use read-only mode when source is BlockchainLMDB.
|
|
|
|
Update help output for this and other options.
|
|
This corrects an unnecessary check and fixes compile error on OS X.
|
|
|
|
Update appropriate files (CMakeLists.txt, README.md)
|
|
This enables the importer to stop after reaching a specified block
number (zero-based index), before reaching the end of the source
blockchain.
|
|
Remove repeated coinbase tx in each exported block's data.
Add resume from last exported height to blockchain_export, making it the
default behavior when the file already exists.
Start reorganizing the utilities.
Various cleanup.
Update output, including referring to both height and block numbers as
zero-based instead of one-based. This better matches the block data,
rather than just some parts of the existing codebase.
Use smaller default batch sizes for importer when verifying, so progress
is saved more frequently.
Use small default batch size (1000) for importer on Windows, due to
current issue with big transaction sizes on LMDB.
file format
-----------
[4-byte magic | variable-length header | block data]
header
------
4-byte file_info length
file_info struct
file format major version
file format minor version
header length (includes file_info struct)
[rest of header, padded with 0 bytes up to header length]
block data
----------
4-byte chunk/block_package length
block_package struct
block
txs (coinbase/miner tx included already in block)
block_size
cumulative_difficulty
coins_generated
4-byte chunk/block_package length
block_package struct
[...]
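A rough C++ sketch of walking the block-data section of this format; the function name is hypothetical and block_package deserialization is omitted:

#include <cstdint>
#include <fstream>
#include <vector>

// Reads the sequence of [4-byte length | serialized block_package]
// chunks. 'in' must be opened with std::ios::binary and positioned
// just past the header.
void read_chunks(std::ifstream& in)
{
  uint32_t chunk_len = 0;
  while (in.read(reinterpret_cast<char*>(&chunk_len), sizeof(chunk_len)))
  {
    std::vector<char> chunk(chunk_len);
    if (!in.read(chunk.data(), chunk_len))
      break;  // truncated file
    // deserialize a block_package from 'chunk': block, txs, block_size,
    // cumulative_difficulty, coins_generated
  }
}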
|
|
|
|
|
|
Usage: blockchain_import --pop-blocks <num_blocks>
|
|
Use filesystem path conversion to string() instead of c_str().
Windows may otherwise output a pointer address.
|
|
Instantiate BlockchainDB in blockchain exporter to reflect recent
updates.
This applies when blockchain_export.h defines SOURCE_DB as DB_LMDB.
|
|
|
|
|
|
|
|
When a stuck tx is removed from memory pool, first remove the associated
spent key images.
|
|
This is for the "print_pool" command and "get_transaction_pool" RPC
method.
Add mempool's spent key images to the results.
|
|
Only one thread will be doing the updating.
Two valid responses must match, and the first two that match will be
used.
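A sketch of the matching rule described above, under assumed types (the real code's containers and names will differ):

#include <cstddef>
#include <string>
#include <vector>

// Returns the first response that at least one other response agrees
// with; an empty string means no two valid responses matched.
std::string first_matching_pair(const std::vector<std::string>& responses)
{
  for (std::size_t i = 0; i < responses.size(); ++i)
    for (std::size_t j = i + 1; j < responses.size(); ++j)
      if (responses[i] == responses[j])
        return responses[i];
  return std::string();
}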
|
|
|
|
|
|
The old unbound #warning does not block compilation.
Unit tests build fine, even though the RPC/P2P network type is required again.
|
|
|
|
|
|
Works for the unit tests build, too.
|
|
|
|
|
|
|
|
|
|
Really, really final version of these changes, I hope.
|
|
Due to a bug in unbound, we were passing a string containing a null
character to ub_ctx_resolvconf and ub_ctx_hosts rather than a NULL
pointer. On *nix this wasn't causing headache, but on Windows this was
causing unbound to not correctly load DNS settings from the OS.
Note on the bug: in a Windows-specific code branch in the function
ub_ctx_hosts(), if the hosts file specified was a NULL pointer, a call
to getenv() was stored in a local char* and later freed. This is
incorrect, as we do not own that data, and caused the program to crash.
|
|
|
|
-dr.monero
Once more merged all work into the current official monero version.
|
|
|
|
|
|
Daemon interactive mode is now working again.
RPC mapped calls in daemon and wallet have both had connection_context
removed as an argument as that argument was not being used anywhere.
|
|
There will need to be some more refactoring for these changes to be
considered complete/correct, but for now it's working.
new daemon cli argument "--db-type", works for LMDB and BerkeleyDB.
A good deal of refactoring is also present in this commit, namely
Blockchain no longer instantiates BlockchainDB, but rather is passed a
pointer to an already-instantiated BlockchainDB on init().
|
|
|
|
DNSSEC is now implemented with the hardcoded key from unbound.
This will need to be not hardcoded in the future, but is okay for now.
Unit tests updated for DNSSEC (as well as for the fact that, contrary to
previous assumption, example.com does not have a static IP address).
|
|
Add option "--resume <on|off>" where default is on.
|
|
|
|
|
|
Where this method is used, a BlockchainDB object is always expected, so
a pointer is unnecessary and less safe.
|
|
|
|
|
|
|
|
|
|
|
|
This allows an LMDB database to be used as the blockchain to export.
Adjust SOURCE_DB in src/blockchain_converter/blockchain_export.h
depending on needs. Defaults to DB_MEMORY.
DB_MEMORY is a sensible default for users migrating to LMDB, as it
allows the exporter to use the in-memory blockchain while the other
binaries work with LMDB, without recompiling anything.
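Concretely, the switch is a one-line change in blockchain_export.h (a sketch; the macro names come from this commit, the exact surrounding code does not):

// src/blockchain_converter/blockchain_export.h
#define SOURCE_DB DB_MEMORY   // default: export from the in-memory blockchain
// #define SOURCE_DB DB_LMDB  // alternative: export from an LMDB database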
|
|
Everything except actually *using* BlockchainBDB is wired up, but the db
itself is not yet working. Some error about user mem not large enough.
I think I know what this error means, but I can't determine the cause.
Notes: BerkeleyDB does not allow 0-indexing in its recno type databases,
so block numbers *in the database* will be 1-indexed. Modifications
to indexing have been made as needed.
|
|
Make CMake aware of BerkeleyDB and BlockchainBDB.
Make the BlockchainDB unit tests aware of BlockchainBDB.
|
|
LMDB implementation code copy/paste/modified into the Berkeley DB
implementation. Need to test if it builds, then if it works, and so on,
but the code is all there.
|
|
|
|
Basically verbatim copy of LMDB implementation, but with the guts ripped
out and includes changed, etc.
|
|
Based on work by tomerkon.
See https://github.com/tomerkon/bitmonero
src/cryptonote_core/bootfilesaver.{h,cpp}
src/bootfilegen/bootfilegen.cpp
|
|
Add support to:
- BlockchainDB, BlockchainLMDB
- blockchain_import utility to open LMDB database with one or more
LMDB flags.
Sample use:
$ blockchain_import --database lmdb#nosync
$ blockchain_import --database lmdb#nosync,nometasync
|
|
|
|
This imports to the blockchain database from an exported blockchain
file.
It can be used to bootstrap a new database or to add blocks to an
existing one.
Supports:
- both the in-memory and LMDB implementations
- optional: batching, verification, testnet
See help for usage.
Based on work by tomerkon.
See https://github.com/tomerkon
src/cryptonote_core/bootfileloader.{h,cpp}
|
|
This handling may be changed in the future.
|
|
Add log level support.
Add testnet support.
Add command-line options:
--help
--data-dir
--testnet-data-dir
--testnet
--log-level
--batch
--batch-size
--block-number
See help for usage. Run at log level 1 to see profiling stats.
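A hypothetical example run (paths and values are illustrative, and --batch is assumed to be a plain switch):

$ blockchain_import --data-dir ~/.bitmonero --batch --batch-size 20000 \
    --block-number 200000 --log-level 1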
|
|
|
|
|
|
|
|
In order to make things more general, BlockchainDB now has get_db_name()
which should return a string with the "name" of that type of db.
This "name" will be the subfolder name that holds that db type's files
within the monero folder.
Small bugfix: blockchain_converter was not correctly appending this in
the prior hard-coded-string implementation of the subfolder data
directory concept.
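As a sketch of the idea (the method name comes from this commit; the class shape and the returned string are assumptions):

#include <string>

class BlockchainDB
{
public:
  // The "name" of this db type, used as the subfolder that holds
  // this db type's files within the monero folder.
  virtual std::string get_db_name() const = 0;
};

class BlockchainLMDB : public BlockchainDB
{
public:
  std::string get_db_name() const override { return "lmdb"; }
};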
|
|
|
|
Ostensibly janitorial work, but should be more relevant later down the
line. Things that depend on core cryptonote things (i.e.
cryptonote_core) don't necessarily depend on BlockchainDB and thus
have no need to have BlockchainDB baked in with them.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
+toc -doc -drmonero
Fixed the Windows path, and improved logging and data logging
(for graphs); fixed some locks and added more checks.
There is still a locking error, not introduced by my patches,
but present in the master version
(locking of the map/list of peers).
|
|
This was changed because sometimes the daemon does not complete its exit
routine with this method, but as it does correctly wind most things down
even if it gets stuck I've changed it back.
|
|
The RPC calls the daemon executable uses to talk to the running daemon
instance have mostly been added back in. Rate limiting has not been
added in upstream, but is on its way in a separate effort, so those
calls are still NOPed out.
|
|
Many RPC functions added by the daemonize changes
(and related changes on the upstream dev branch that were not merged)
were commented out (apart from their return). Other than that, this *should*
work... at any rate, it builds, and that's something.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
BlockchainLMDB::add_block()
BlockchainLMDB::add_transaction_data()
BlockchainDB::add_transaction()
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
of height
This reflects the behavior of blockchain_storage::get_tail_id().
Fixes #27 so that RPC method getlastblockheader works.
|
|
This reverts commit 6f1c4b4c2c78c930fe30ed648e855a6ce55f7dcd.
|
|
|
|
New update of the PR with network limits.
More debug options:
discarding downloaded blocks, either all or after a given height;
trying to trigger the locking errors.
Debug levels polished/tuned to sane values.
Debug/logging improved.
Warning: this PR should be correct code, but it could make
an existing (in the master version) locking error appear more often.
It's a race on the list (map) of peers, e.g. between closing/deleting
them versus working on them in the net-limit sleep while sending a chunk.
The bug is not in this code/this PR, but in the master version;
the locking problem in master will be fixed in another PR.
The problem is UB, and in practice it seems to usually cause a program abort
(tested on Debian stable with updated gcc). See --help for the option
that adds a sleep to trigger the error faster.
|
|
|
|
Update of the PR with network limits.
Works very well for all speeds
(but remember that a low download speed can stall upload,
because we then also slow down the downloading of blockchain
requests).
More debug options.
Fixed pedantic warnings in our code.
Should work again on Mac OS X and FreeBSD.
Fixed a warning about size_t.
Tested on Debian and Ubuntu; testing on Windows now.
TCP options and ToS (QoS) flag.
FIXED the peer number limit.
FIXED some spikes in ingress/download.
FIXED problems when the up and down limits differ.
|
|
Commands and options for network limiting.
Works very well, e.g. for 50 KiB/sec up and down.
ToS (QoS) flag.
Peer number limit.
TODO: some spikes in ingress/download.
TODO: problems when the up and down limits differ.
Added "otshell utils" - simple logging (with colors, text-file channels).
|
|
|
|
Usage:
default is lmdb for blockchain branch:
$ make release
same as:
$ DATABASE=lmdb make release
for original in-memory implementation:
$ DATABASE=memory make release
|
|
See commit 4ba680f2946966df2030e5765e40ee0a36b112c4
|
|
See commit cf5a8b1d6c3df615641e81328bb3d8cf80cd70e3
|
|
Update to match LOG_PRINT_RED_Lx statements.
See commit cf5a8b1d6c3df615641e81328bb3d8cf80cd70e3
|
|
get_tx_outputs_gindexs()
|
|
This would have been triggered if the function was called without the fourth
parameter and the ring signature check failed.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|