This is the second part of my sysbench guide; here I’ll cover some basic benchmarks for databases. Feel free to check out the first part for an introduction.
First things first, we’ll need a test database. We’ll stick to the default “sbtest”, so go ahead and create it. I’ll wait.
Now sysbench needs the connection parameters. The actual values depend on the target. Sysbench supports MySQL and PostgreSQL.
For MySQL, MariaDB or Percona Server the command line options are:
--db-driver=mysql --mysql-host= --mysql-port= --mysql-user= --mysql-password=
For PostgreSQL, we have:
--db-driver=pgsql --pgsql-host= --pgsql-port= --pgsql-user= --pgsql-password=
For brevity’s sake, I will be omitting these parameters in the examples below, but they are required so don’t you forget them.
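Put together, a full invocation looks something like this. The host, port, user, and password below are placeholders for illustration only; substitute your own values:

```shell
# Illustrative template only: replace host, port, user, and password
# with your own, and <testname>/<command> with the test you're running.
sysbench <testname> <command> \
  --db-driver=mysql \
  --mysql-host=127.0.0.1 \
  --mysql-port=3306 \
  --mysql-user=sbtest \
  --mysql-password=secret
```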
Here we have the prototypical test table:
CREATE TABLE `sbtest1` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `k` int(11) NOT NULL DEFAULT 0,
  `c` char(120) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `pad` char(60) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  KEY `k_1` (`k`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
That’s it: 4 columns, 2 integers and 2 chars. Unfortunately we don’t have as many built-in options to tweak the table as in other tools such as mysqlslap. However, some control is possible.
--auto_inc: sets the id column as autoincrement/serial [on]
--create_secondary: creates a secondary index on column k [on]
--mysql_storage_engine: storage engine, only applies to MySQL [innodb]
The id column is populated either by the DB (with autoincrement/serial) or by sequential integers generated by sysbench. k is assigned a random number between 1 and the table size. The char columns are filled with random digits, in groups of 11 separated by dashes; no index is created for either.
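For illustration, here is a rough Python sketch of that fill pattern. This is just an approximation of what sysbench generates, not its actual code:

```python
import random

def sbtest_string(groups: int) -> str:
    """Approximate the sysbench char-column fill: groups of 11 random
    digits joined by dashes (not sysbench's actual implementation)."""
    return "-".join(
        "".join(random.choice("0123456789") for _ in range(11))
        for _ in range(groups)
    )

# 10 groups come to 119 characters, which fits in c's char(120);
# 5 groups come to 59 characters, which fits in pad's char(60).
c_value = sbtest_string(10)
pad_value = sbtest_string(5)
```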
Bulk insert does concurrent multi-row inserts: we specify how many threads we want, and each one inserts into its own table, so the total number of tables equals the number of threads.
The default is 1 thread, but that is easily changed. Let’s create 20 tables.
(remember that the db connection parameters are omitted)
sysbench bulk_insert prepare --threads=20
Now let’s insert 1 million rows and see how long it takes. The rows are distributed over all the tables.
# Insert 1M rows total, concurrently on the 20 tables
-> sysbench bulk_insert run --threads=20 --events=1000000 --time=0
sysbench 1.0.15 (using system LuaJIT 2.0.5)

Running the test with following options:
Number of threads: 20
Initializing random number generator from current time

Initializing worker threads...

Threads started!

SQL statistics:
    queries performed:
        read:                            0
        write:                           43
        other:                           0
        total:                           43
    transactions:                        1000000 (144368.49 per sec.)
    queries:                             43      (6.21 per sec.)
    ignored errors:                      0       (0.00 per sec.)
    reconnects:                          0       (0.00 per sec.)

General statistics:
    total time:                          6.9213s
    total number of events:              1000000

Latency (ms):
         min:                                    0.00
         avg:                                    0.07
         max:                                 2932.57
         95th percentile:                        0.00
         sum:                                67550.48

Threads fairness:
    events (avg/stddev):           50000.0000/13956.14
    execution time (avg/stddev):   3.3775/0.26
We can see that the 1M rows were added using a total of 43 INSERT statements (an average of about 23,255.8 rows per INSERT). The total time was about 6.92 seconds.
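Those averages are easy to check from the raw report numbers; the small gap between the computed rate and the 144,368.49 tps sysbench reports comes from how sysbench times its event loop internally:

```python
# Figures taken from the sysbench report above.
rows = 1_000_000
inserts = 43
total_time = 6.9213  # seconds

rows_per_insert = rows / inserts     # ~23,255.8 rows per INSERT
rows_per_second = rows / total_time  # ~144,481 rows/s
print(f"{rows_per_insert:.1f} rows/INSERT, {rows_per_second:.0f} rows/s")
```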
The latency statistics are interesting here: the average time to insert a set of rows was 0.07 milliseconds, but the maximum was far higher, about 2.9 seconds. This is because the rows are unevenly distributed among the tables, which we can see by comparing the fairness stddev against the avg.
Once we are satisfied, we can drop the tables and get ready for the next test.
sysbench bulk_insert cleanup --threads=20
OLTP (Online Transaction Processing) tests try to simulate transaction-oriented loads in the database. Sysbench does this by running several kinds of queries inside a transaction.
Let’s start simple: we want to create 10 tables, each with 10K rows, for a total of 100K rows.
# Prepare 10 tables, each with 10K rows
sysbench oltp_read_only prepare --tables=10 --table_size=10000
It’s usually a good idea to prewarm the database, i.e. load the tables into the buffer pool, so we can more accurately simulate steady-state performance.
sysbench oltp_read_only prewarm --tables=10 --threads=10
Run the benchmark with 10K events, with a ratio of 2 threads per table:
-> sysbench oltp_read_only run --tables=10 --threads=20 --events=10000 --time=0
sysbench 1.0.15 (using system LuaJIT 2.0.5)

Running the test with following options:
Number of threads: 20
Initializing random number generator from current time

Initializing worker threads...

Threads started!

SQL statistics:
    queries performed:
        read:                            140000
        write:                           0
        other:                           20000
        total:                           160000
    transactions:                        10000  (1077.22 per sec.)
    queries:                             160000 (17235.59 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          9.2813s
    total number of events:              10000

Latency (ms):
         min:                                    4.37
         avg:                                   18.54
         max:                                   69.91
         95th percentile:                       29.19
         sum:                               185396.12

Threads fairness:
    events (avg/stddev):           500.0000/4.46
    execution time (avg/stddev):   9.2698/0.01
So, this is interesting: why did we get 140K reads when we asked for 10K events?
Sysbench by default runs the following statements per event:
- 10 x point selects:
select c from table where id=i
- 1 x simple range:
select c from table where id between a and b
- 1 x sum range:
select sum(k) from table where id between a and b
- 1 x order range:
select c from table where id between a and b order by c
- 1 x distinct range:
select distinct c from table where id between a and b order by c
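Adding those up answers the question: each event issues 10 point selects plus 4 range queries, i.e. 14 read statements, and the BEGIN/COMMIT pair wrapping each transaction shows up under “other”:

```python
# Per-event statement mix of oltp_read_only (from the list above).
events = 10_000
point_selects = 10
range_queries = 4  # simple + sum + order + distinct

reads = events * (point_selects + range_queries)  # 140,000 read queries
other = events * 2                                # 20,000 BEGIN/COMMIT
print(reads, other)
```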
When done, delete the tables.
sysbench oltp_read_only cleanup --tables=10
Wasn’t that fun?
This is the write counterpart of the previous test. By now, you know the drill, right?
sysbench oltp_write_only prepare --tables=10
sysbench oltp_write_only prewarm --tables=10 --threads=10
Let’s use a time limit instead:
-> sysbench oltp_write_only run --tables=10 --threads=20 --events=0 --time=30
sysbench 1.0.15 (using system LuaJIT 2.0.5)

Running the test with following options:
Number of threads: 20
Initializing random number generator from current time

Initializing worker threads...

Threads started!

SQL statistics:
    queries performed:
        read:                            0
        write:                           341181
        other:                           170918
        total:                           512099
    transactions:                        85123  (2836.18 per sec.)
    queries:                             512099 (17062.42 per sec.)
    ignored errors:                      672    (22.39 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          30.0117s
    total number of events:              85123

Latency (ms):
         min:                                    0.53
         avg:                                    7.05
         max:                                   67.17
         95th percentile:                       14.21
         sum:                               599892.48

Threads fairness:
    events (avg/stddev):           4256.1500/49.08
    execution time (avg/stddev):   29.9946/0.00
Each write_only event consists of:
- 1 x index_updates:
update table set k=k+1 where id=i
- 1 x non_index_updates:
update table set c=? where id=i
- 1 x delete_inserts:
delete from table where id=i; insert into table (id, k, c, pad) values (...)
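So each event issues 4 write statements (2 UPDATEs, 1 DELETE, 1 INSERT) plus a BEGIN/COMMIT pair. A quick check against the report above: the leftover discrepancies line up with the 672 ignored errors, which I assume are retried duplicate-key conflicts:

```python
# Figures from the oltp_write_only report above.
transactions = 85_123
writes_per_txn = 4        # 2 UPDATEs + 1 DELETE + 1 INSERT
ignored_errors = 672

expected_writes = transactions * writes_per_txn  # 340,492
reported_writes = 341_181                        # slightly higher
# 'other' = one BEGIN + one COMMIT per transaction, plus the ignored errors:
expected_other = transactions * 2 + ignored_errors  # 170,918, matching the report
print(reported_writes - expected_writes, expected_other)
```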
When using a time stop condition, we’ll want to compare rates, such as queries per second or transactions per second.
This test combines the last two in one package: first it runs the same queries as oltp_read_only, then continues with the oltp_write_only ones. We can even reuse the tables from the previous benchmark.
The transaction content for these tests can be tweaked:
--range_selects: Enable/disable all range SELECT queries [on]
--range_size: Range size for range SELECT queries
--simple_ranges: Simple range SELECT queries per transaction
--point_selects: Point SELECT queries per transaction
--order_ranges: SELECT ORDER BY queries per transaction
--distinct_ranges: SELECT DISTINCT queries per transaction
--delete_inserts: DELETE/INSERT combinations per transaction
--index_updates: UPDATE index queries per transaction
--non_index_updates: UPDATE non-index queries per transaction
--sum_ranges: SELECT SUM() queries per transaction
--skip_trx: Don’t start explicit transactions (use AUTOCOMMIT) [off]
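For example, a hypothetical read_write run with a heavier point-select mix and no delete/insert pairs could look like this (connection parameters omitted as usual):

```shell
# Illustrative tweak: double the point selects, drop the delete/insert pairs.
sysbench oltp_read_write run \
  --tables=10 --threads=20 --time=30 --events=0 \
  --point_selects=20 \
  --delete_inserts=0
```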
- sysbench’s GitHub: https://github.com/akopytov/sysbench
These aren’t the only DB tests that ship with sysbench, but the procedure for the others is much the same.
Until next time.