author    Lukas Larsson <lukas@erlang-solutions.com>  2012-05-08 17:28:46 +0200
committer Lukas Larsson <lukas@erlang-solutions.com>  2012-07-19 15:48:58 +0200
commit    671b2bd68fad4f793e485051329438a9e084db0a (patch)
tree      742af072cea1a3e25b0ed846e2175fd8c7f2b740 /HOWTO
parent    82470e1c820e367007338b8e188d7d985aabe77b (diff)
download  erlang-671b2bd68fad4f793e485051329438a9e084db0a.tar.gz
Add framework to ts to run benchmarks
Diffstat (limited to 'HOWTO')
-rw-r--r--  HOWTO/BENCHMARKS.md  73
1 files changed, 73 insertions, 0 deletions
diff --git a/HOWTO/BENCHMARKS.md b/HOWTO/BENCHMARKS.md
new file mode 100644
index 0000000000..361d99256d
--- /dev/null
+++ b/HOWTO/BENCHMARKS.md
@@ -0,0 +1,73 @@
+Benchmarking Erlang/OTP
+=======================
+
+The Erlang/OTP source tree contains a number of benchmarks. The benchmarks are
+run by the same framework as the tests, so in order to run benchmarks you have
+to [release the tests][] just as you normally would.
+
+Note that many of these benchmarks were developed to test a specific feature
+under a specific setting. We strive to keep the benchmarks up to date, but time
+is not an endless resource, so some benchmarks will be outdated or irrelevant.
+
+Running the benchmarks
+----------------------
+
+As with testing, `ts` is used to run the benchmarks. Before running any
+benchmarks you have to [install the tests][]. To get a listing of all
+available benchmarks, call `ts:benchmarks()`.
+
+To run all benchmarks call `ts:bench()`. This will run all benchmarks using
+the emulator which is in your `$PATH` (note that this does not have to be the
+same emulator as the one the benchmarks were built with). The results of the
+benchmarks are put in a folder in `$TESTROOT/test_server/` called
+`YYYY_MO_DDTHH_MI_SS`.
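+A session from the Erlang shell might look as follows (a sketch only; the
+return values are elided and the available benchmarks depend on your source
+tree):
+
+    1> ts:benchmarks().    % list the available benchmarks
+    ...
+    2> ts:bench().         % run them all using the emulator in $PATH
+    ...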
+
+Each benchmark is run multiple times and the data for all runs is collected in
+the files within the benchmark folder. All benchmarks are written so that
+higher values are better.
+
+Writing benchmarks
+------------------
+
+Benchmarks are just normal testcases in Common Test suites. They are marked as
+benchmarks by being included in the file `AppName_bench.spec`, which is located
+in `lib/AppName/test/` for each application that has benchmarks. Note that you
+might want to add a skip clause to `AppName.spec` for the benchmarks if you do
+not want them to be run in the nightly tests; see the sketch below.
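+For a hypothetical application `foo`, the two spec files might contain entries
+like the following (the directory and suite names are made up for
+illustration; the skip tuple uses standard Common Test test specification
+syntax):
+
+    %% lib/foo/test/foo_bench.spec -- mark the suite as a benchmark
+    {suites,"../foo_test",[foo_bench_SUITE]}.
+
+    %% lib/foo/test/foo.spec -- skip the benchmark in the nightly tests
+    {skip_suites,"../foo_test",[foo_bench_SUITE],"Benchmark only"}.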
+
+Results of benchmarks are sent using the `ct_event` mechanism and are
+automatically collected and formatted by `ts`:
+
+    ct_event:notify(
+        #event{name = benchmark_data,
+               data = [{value,TPS}]}).
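+Note that the `#event{}` record used above is defined by Common Test, so a
+suite that reports benchmark data needs the corresponding include:
+
+    -include_lib("common_test/include/ct_event.hrl").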
+
+The application, suite and testcase associated with the value are automatically
+detected. If you want to supply your own you can include `suite` and/or `name`
+with the data, e.g.
+
+    ct_event:notify(
+        #event{name = benchmark_data,
+               data = [{suite,"erts_bench"},
+                       {name,"ets_transactions_per_sec"},
+                       {value,TPS}]}).
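+Putting the pieces together, a minimal benchmark suite might look like the
+sketch below (the module name, the measured operation and the `TPS`
+calculation are made up for illustration):
+
+    -module(foo_bench_SUITE).
+
+    -include_lib("common_test/include/ct_event.hrl").
+
+    -export([all/0, insert_per_sec/1]).
+
+    all() -> [insert_per_sec].
+
+    %% Measure ETS inserts per second and report the raw value.
+    insert_per_sec(_Config) ->
+        N = 100000,
+        Tab = ets:new(bench, []),
+        {Time,ok} = timer:tc(fun() -> insert(Tab, N) end),
+        TPS = N / (Time / 1000000),
+        ct_event:notify(#event{name = benchmark_data,
+                               data = [{value,TPS}]}).
+
+    insert(_Tab, 0) -> ok;
+    insert(Tab, N) ->
+        ets:insert(Tab, {N,N}),
+        insert(Tab, N - 1).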
+
+The reason for using the internal `ct_event` and not `ct` is that the benchmark
+code has to be backwards compatible with at least R14.
+
+The value which is reported should be as raw as possible, i.e. you should not
+do any averaging of the value before reporting it. The tools we use to collect
+the benchmark data over time will compute averages, standard deviations and
+more from the data, so the more data points that are sent using `ct_event`,
+the better.
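+For example, instead of reporting one mean over all runs, a benchmark can emit
+one event per run (a sketch; `run_once/0` stands in for whatever the benchmark
+actually measures):
+
+    report_all_runs(NumRuns) ->
+        [ct_event:notify(#event{name = benchmark_data,
+                                data = [{value,run_once()}]})
+         || _ <- lists:seq(1, NumRuns)].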
+
+Viewing benchmarks
+------------------
+
+At the time of writing this HOWTO, the tool for viewing benchmark results is
+not available as open source. This will hopefully change in the near future.
+
+
+ [release the tests]: README.testing.md#releasing-tests
+ [install the tests]: README.testing.md#configuring-the-test-environment
+