author    Tobrun <tobrun@mapbox.com>  2016-01-06 15:15:19 +0100
committer Tobrun <tobrun@mapbox.com>  2016-01-12 12:40:58 +0100
commit    3e4c1bd5341ec46aedd195c2e2b575e030b30e0a (patch)
tree      5f0121ae9d9cd337198656582ebf58b217259844 /platform/android/tests
parent    71edb38cf10bde4180e5e4c65ea229c80ee680eb (diff)
download  qtlocation-mapboxgl-3e4c1bd5341ec46aedd195c2e2b575e030b30e0a.tar.gz
[android] #3447 - initial commit to verify current progress
Diffstat (limited to 'platform/android/tests')
-rw-r--r--  platform/android/tests/README.md                   9
-rw-r--r--  platform/android/tests/docs/EXERCISER_TESTS.md    23
-rw-r--r--  platform/android/tests/docs/PERFORMANCE_TESTS.md  69
-rw-r--r--  platform/android/tests/docs/UI_TESTS.md          131
-rw-r--r--  platform/android/tests/docs/UNIT_TESTS.md         32
-rw-r--r--  platform/android/tests/scripts/devicefarm.py     132
6 files changed, 396 insertions, 0 deletions
diff --git a/platform/android/tests/README.md b/platform/android/tests/README.md
new file mode 100644
index 0000000000..69e75b693b
--- /dev/null
+++ b/platform/android/tests/README.md
@@ -0,0 +1,9 @@
+# Mapbox GL Android Test documentation
+
+## Testing
+We currently support the following types of testing on the Mapbox Android SDK:
+
+ - [Unit tests](https://github.com/mapbox/mapbox-gl-native/blob/3447-Add-test-documentation/platform/android/tests/docs/UNIT_TESTS.md) using [JUnit](http://developer.android.com/tools/testing-support-library/index.html#AndroidJUnitRunner)
+ - [UI tests](https://github.com/mapbox/mapbox-gl-native/blob/3447-Add-test-documentation/platform/android/tests/docs/UI_TESTS.md) using [Espresso](http://developer.android.com/tools/testing-support-library/index.html#Espresso)
+ - [Exerciser tests](https://github.com/mapbox/mapbox-gl-native/blob/3447-Add-test-documentation/platform/android/tests/docs/EXERCISER_TESTS.md) using [Monkey](http://developer.android.com/tools/help/monkey.html) or the [Built-in Fuzz test](http://docs.aws.amazon.com/devicefarm/latest/developerguide/test-types-built-in-fuzz.html)
+ - [Performance tests](https://github.com/mapbox/mapbox-gl-native/blob/3447-Add-test-documentation/platform/android/tests/docs/PERFORMANCE_TESTS.md) using [Systrace](https://codelabs.developers.google.com/codelabs/android-perf-testing/index.html?index=..%2F..%2Fbabbq-2015&viewga=UA-68632703-1#0)
diff --git a/platform/android/tests/docs/EXERCISER_TESTS.md b/platform/android/tests/docs/EXERCISER_TESTS.md
new file mode 100644
index 0000000000..4de56912a6
--- /dev/null
+++ b/platform/android/tests/docs/EXERCISER_TESTS.md
@@ -0,0 +1,23 @@
+# UI/Application Exerciser Tests
+
+UI/application exerciser tests are stress tests that use randomly generated user events.
+
+## Running Locally
+
+The Android SDK provides a test tool called [Monkey](http://developer.android.com/tools/help/monkey.html),
+"a program that runs on your emulator or device and generates pseudo-random streams of user events
+such as clicks, touches, or gestures, as well as a number of system-level events."
+
+To run Monkey against the test app, install the package on the device (e.g. via Android Studio)
+and then run:
+
+```
+$ adb shell monkey -p com.mapbox.mapboxgl.testapp -v 500
+```
+
+## Running on AWS Device Farm
+
+AWS Device Farm supports a similar tool called the `Built-in Fuzz Test`:
+"The built-in fuzz test randomly sends user interface events to devices and then reports results."
+
+More information is available in the [Built-in Fuzz Test documentation](http://docs.aws.amazon.com/devicefarm/latest/developerguide/test-types-built-in-fuzz.html).
\ No newline at end of file
diff --git a/platform/android/tests/docs/PERFORMANCE_TESTS.md b/platform/android/tests/docs/PERFORMANCE_TESTS.md
new file mode 100644
index 0000000000..a182af826f
--- /dev/null
+++ b/platform/android/tests/docs/PERFORMANCE_TESTS.md
@@ -0,0 +1,69 @@
+# Performance Tests
+
+## What is Systrace?
+
+From the [Android documentation](http://developer.android.com/tools/help/systrace.html):
+"Systrace is a tool that will help you analyze the performance of your application by capturing and displaying execution times of your applications processes and other Android system processes."
+
+## What are we using it for?
+
+We’re using Systrace to look for performance issues in our SDK. When we run Systrace while using our app,
+it collects data from Android and formats it into an HTML file we can view in a web browser.
+Inspecting the Systrace results helps us track down the underlying performance issues in our SDK.
+
+## Run Systrace locally
+
+Executing the following command in your terminal will run Systrace for 10 seconds:
+
+```
+python $ANDROID_HOME/platform-tools/systrace/systrace.py --time=10 -o ~/trace.html gfx view res
+```
+
+This command will output a file called `trace.html`.
+More information on how to interpret the results can be found [here](http://developer.android.com/tools/help/systrace.html).
+
+A hands-on walkthrough of this topic is available in the [Android performance testing codelab](https://codelabs.developers.google.com/codelabs/android-perf-testing/index.html?index=..%2F..%2Fbabbq-2015&viewga=UA-68632703-1#0).
+
+## Automating Systrace with Espresso
+
+The following annotation is used to mark the tests and classes that should be included in the performance tests:
+
+```java
+@PerfTest
+```
+
+You can optionally define extra rules to gather more data during a performance test:
+
+```java
+@Rule
+public EnableTestTracing mEnableTestTracing = new EnableTestTracing();
+
+@Rule
+public EnablePostTestDumpsys mEnablePostTestDumpsys = new EnablePostTestDumpsys();
+
+@Rule
+public EnableLogcatDump mEnableLogcatDump = new EnableLogcatDump();
+
+@Rule
+public EnableNetStatsDump mEnableNetStatsDump = new EnableNetStatsDump();
+```
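+
+Putting it together, a performance test class might look like the sketch below. This is a
+minimal sketch, assuming the `@PerfTest` annotation and the rule classes from the
+android-perf-testing codelab are on the classpath; the class name and `R.id.mapView` are
+hypothetical:
+
+```java
+import static android.support.test.espresso.Espresso.onView;
+import static android.support.test.espresso.action.ViewActions.swipeLeft;
+import static android.support.test.espresso.matcher.ViewMatchers.withId;
+
+import android.support.test.runner.AndroidJUnit4;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+
+@PerfTest
+@RunWith(AndroidJUnit4.class)
+public class MapScrollPerfTest {
+
+    // Collects trace data around each test in this class
+    @Rule
+    public EnableTestTracing mEnableTestTracing = new EnableTestTracing();
+
+    @Test
+    @PerfTest
+    public void scrollingTheMapIsSmooth() {
+        // Hypothetical interaction whose rendering performance we want to trace
+        onView(withId(R.id.mapView)).perform(swipeLeft());
+    }
+}
+```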
+
+## Automating Systrace with MonkeyRunner
+
+An example of such a script can be found [here](https://github.com/googlecodelabs/android-perf-testing/blob/master/run_perf_tests.py).
+The script follows this structure:
+
+- Check environment variables
+- Define functions
+- Clear local data from previous runs
+- Find an Android device
+- Enable and clear graphics info dumpsys
+- Start a systrace thread & test suite thread in parallel
+- Wait for both threads to complete
+- Download files from device
+- Run analysis on downloaded files
+
+## Note
+Testing on a device running Android 6.0 or higher is preferred.
+The tools above work on older versions of Android, but they will collect less data.
\ No newline at end of file
diff --git a/platform/android/tests/docs/UI_TESTS.md b/platform/android/tests/docs/UI_TESTS.md
new file mode 100644
index 0000000000..e74be3ca42
--- /dev/null
+++ b/platform/android/tests/docs/UI_TESTS.md
@@ -0,0 +1,131 @@
+# UI Tests
+## Running Espresso tests locally on a device
+
+This test project comes with all the required Android Testing Support Library dependencies
+in the Gradle file. Tests are under the `app/src/androidTest` folder.
+
+Note that before running your tests, you might want to turn off animations on your test device.
+It's a known issue that leaving system animations enabled on a test device
+(window animation scale, transition animation scale, animator duration scale)
+can cause unexpected results or lead tests to fail.
+
+To create a new run configuration:
+* Click on Run -> Edit Configurations...
+* Click on the plus sign and then on "Android Tests"
+* Give a name to the configuration, e.g. `TestAppTests`
+* Choose the `MapboxGLAndroidSDKTestApp` module
+* Choose `android.support.test.runner.AndroidJUnitRunner` as the instrumentation runner
+* Click OK to save the new configuration
+
+You can now run this configuration from the main toolbar dropdown menu.
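+
+For reference, a minimal Espresso test looks like the sketch below (a sketch only:
+`MainActivity` and `R.id.mapView` stand in for the test app's actual activity and view IDs):
+
+```java
+import static android.support.test.espresso.Espresso.onView;
+import static android.support.test.espresso.assertion.ViewAssertions.matches;
+import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
+import static android.support.test.espresso.matcher.ViewMatchers.withId;
+
+import android.support.test.rule.ActivityTestRule;
+import android.support.test.runner.AndroidJUnit4;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+
+@RunWith(AndroidJUnit4.class)
+public class MainActivityTest {
+
+    // Launches the activity before each test and tears it down afterwards
+    @Rule
+    public ActivityTestRule<MainActivity> mActivityRule =
+            new ActivityTestRule<>(MainActivity.class);
+
+    @Test
+    public void mapViewIsDisplayed() {
+        onView(withId(R.id.mapView)).check(matches(isDisplayed()));
+    }
+}
+```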
+
+## Running Espresso tests manually on AWS Device Farm
+
+In a terminal, within `mapbox-gl-native/android/java`,
+run the tests (`cC` is short for `connectedCheck`):
+
+```
+$ ./gradlew cC -p MapboxGLAndroidSDKTestApp
+```
+
+Then:
+* Go to your AWS Console and choose Device Farm.
+* Create a new project, e.g. `MapboxGLAndroidSDKTestApp`
+* On step 1, upload the APK in `mapbox-gl-native/android/java/MapboxGLAndroidSDKTestApp/build/outputs/apk/MapboxGLAndroidSDKTestApp-debug-unaligned.apk`
+* On step 2, choose Instrumentation, test filter is `com.mapbox.mapboxgl.testapp.MainActivityTest` and upload the APK in `mapbox-gl-native/android/java/MapboxGLAndroidSDKTestApp/build/outputs/apk/MapboxGLAndroidSDKTestApp-debug-androidTest-unaligned.apk`
+* On step 3, choose a device pool. E.g. Top Devices
+* On step 4, customize your device state (if needed)
+* Finally, confirm the configuration and run the tests.
+
+On step 2, you can also list multiple test classes, separated by commas: `com.mapbox.mapboxgl.testapp.MainActivityTest,com.mapbox.mapboxgl.testapp.MainActivityScreenTest`
+
+If you have no tests for your app, or want to exercise some random user behaviour,
+you can just choose "Built-in: Fuzz" in step 2.
+
+## Running Espresso test automatically on AWS Device Farm
+To automatically execute Espresso tests as part of our CI build, we have created a Python [script](https://github.com/mapbox/mapbox-gl-native/blob/aws-devicelab/android/scripts/devicefarm.py).
+
+This script is responsible for:
+ - uploading an APK + test APK
+ - scheduling tests
+ - exiting with a return code:
+   - 0 -> all tests have passed
+   - 1 otherwise
+
+### Requirements
+
+ * [Boto 3](http://boto3.readthedocs.org)
+ * [Requests](http://www.python-requests.org)
+
+### Running the script
+
+ A sample run would be as follows:
+
+ ```
+ $ python devicefarm.py \
+     --project-arn "arn:aws:devicefarm:us-west-2:XXXXX" \
+     --device-pool-arn "arn:aws:devicefarm:us-west-2::devicepool:YYYYY" \
+     --app-apk-path app/build/outputs/apk/app-debug-unaligned.apk \
+     --test-apk-path app/build/outputs/apk/app-debug-androidTest-unaligned.apk
+ ```
+
+ You need to insert your actual project and device pool ARNs. We follow Boto 3
+ conventions to [set up the AWS credentials](https://github.com/boto/boto3#quick-start).
+
+ You can build the `app-debug-androidTest-unaligned.apk` package with Gradle:
+
+ ```
+ ./gradlew assembleAndroidTest
+ ```
+
+ To run tests locally, you can use `./gradlew assemble` to build the app APK, and
+ `./gradlew test --continue` to run unit tests. Finally, `./gradlew connectedAndroidTest`
+ will run the Espresso tests on a local device.
+
+ A sample output would be as follows:
+
+ ```
+ Starting upload: ANDROID_APP
+ Uploading: ../app/build/outputs/apk/app-debug-unaligned.apk
+ Checking if the upload succeeded.
+ Upload not ready (status is INITIALIZED), waiting for 5 seconds.
+ Starting upload: INSTRUMENTATION_TEST_PACKAGE
+ Uploading: ../app/build/outputs/apk/app-debug-androidTest-unaligned.apk
+ Checking if the upload succeeded.
+ Upload not ready (status is INITIALIZED), waiting for 5 seconds.
+ Scheduling a run.
+ Checking if the run succeeded.
+ Run not completed (status is SCHEDULING), waiting for 60 seconds.
+ Run not completed (status is RUNNING), waiting for 60 seconds.
+ Run not completed (status is RUNNING), waiting for 60 seconds.
+ Run not completed (status is RUNNING), waiting for 60 seconds.
+ Run not completed (status is RUNNING), waiting for 60 seconds.
+ Run not completed (status is RUNNING), waiting for 60 seconds.
+ Run completed: PASSED
+ ```
+
+### Available commands
+
+ You can use the `--help` flag to get a list of all available options:
+
+ ```
+ $ python devicefarm.py --help
+ usage: Device Farm Runner [-h] [--project-arn PROJECT_ARN]
+                           [--device-pool-arn DEVICE_POOL_ARN]
+                           [--app-apk-path APP_APK_PATH]
+                           [--test-apk-path TEST_APK_PATH]
+
+ Runs the Espresso tests on AWS Device Farm.
+
+ optional arguments:
+   -h, --help            show this help message and exit
+   --project-arn PROJECT_ARN
+                         The project ARN (Amazon Resource Name) (default: None)
+   --device-pool-arn DEVICE_POOL_ARN
+                         The device pool ARN (Amazon Resource Name) (default:
+                         None)
+   --app-apk-path APP_APK_PATH
+                         Path to the app APK (default: None)
+   --test-apk-path TEST_APK_PATH
+                         Path to the tests APK (default: None)
+ ```
diff --git a/platform/android/tests/docs/UNIT_TESTS.md b/platform/android/tests/docs/UNIT_TESTS.md
new file mode 100644
index 0000000000..ef89f23ced
--- /dev/null
+++ b/platform/android/tests/docs/UNIT_TESTS.md
@@ -0,0 +1,32 @@
+# Unit tests
+Our Unit tests are based on JUnit and are located under `/src/test/java/`.
+We use plain JUnit to test classes that don't call the Android API,
+and Android's JUnit extensions to stub/mock Android components.
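+
+As an illustration, a plain JUnit test of a pure-Java class might look like the sketch
+below (the `LatLng` import path and getter names are assumptions about the SDK's API):
+
+```java
+import static org.junit.Assert.assertEquals;
+
+import com.mapbox.mapboxsdk.geometry.LatLng;
+import org.junit.Test;
+
+public class LatLngTest {
+
+    private static final double DELTA = 1e-7;
+
+    @Test
+    public void constructorStoresCoordinates() {
+        // No Android API involved, so this runs on the local JVM
+        LatLng latLng = new LatLng(52.3702, 4.8952);
+        assertEquals(52.3702, latLng.getLatitude(), DELTA);
+        assertEquals(4.8952, latLng.getLongitude(), DELTA);
+    }
+}
+```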
+
+## Running Unit tests locally
+To run unit tests locally, switch to the Unit Tests build variant, then right-click the corresponding test class or method and select "Run ...".
+
+You can also create a run configuration:
+* Click on Run -> Edit Configurations...
+* Click on "JUnit Tests"
+* Give a name to the configuration, e.g. `JUnit tests`
+* As "Test Kind", choose "All in directory"
+* As folder, choose: `mapbox-gl-native/platform/android/java/MapboxGLAndroidSDKTestApp/src/test/java`
+* Click OK to save the new configuration
+
+You can also run the tests from the command line with:
+
+```
+$ ./gradlew test --continue -p MapboxGLAndroidSDKTestApp
+```
+
+## Running Unit tests on CI
+The unit tests are executed as part of the build process on our CI and are
+automatically run for each new commit pushed to this repo. If a unit test
+fails, it will fail and stop the build.
+
+You can find the corresponding Gradle command in our [buildscript](https://github.com/mapbox/mapbox-gl-native/blob/master/platform/android/bitrise.yml#L48):
+
+```
+$ ./gradlew testReleaseUnitTest --continue
+```
diff --git a/platform/android/tests/scripts/devicefarm.py b/platform/android/tests/scripts/devicefarm.py
new file mode 100644
index 0000000000..6c0a337403
--- /dev/null
+++ b/platform/android/tests/scripts/devicefarm.py
@@ -0,0 +1,132 @@
+'''
+Uploads an APK and a test APK to AWS Device Farm and runs the Espresso tests.
+Exit code is 0 if all tests pass, 1 otherwise. See README.md for details.
+'''
+
+from time import sleep
+import argparse
+import sys
+
+import boto3
+import requests
+
+'''
+Parser
+'''
+
+parser = argparse.ArgumentParser(
+    prog='Device Farm Runner',
+    description='Runs the Espresso tests on AWS Device Farm.',
+    formatter_class=argparse.ArgumentDefaultsHelpFormatter,
+    add_help=True)
+
+parser.add_argument('--project-arn',
+                    type=str, help='The project ARN (Amazon Resource Name)')
+parser.add_argument('--device-pool-arn',
+                    type=str, help='The device pool ARN (Amazon Resource Name)')
+parser.add_argument('--app-apk-path',
+                    type=str, help='Path to the app APK')
+parser.add_argument('--test-apk-path',
+                    type=str, help='Path to the tests APK')
+
+args = vars(parser.parse_args())
+project_arn = args.get('project_arn')
+device_pool_arn = args.get('device_pool_arn')
+app_apk_path = args.get('app_apk_path')
+test_apk_path = args.get('test_apk_path')
+
+# Validate the required arguments
+if not project_arn or not device_pool_arn:
+    raise Exception('You need to set both the project and the device pool ARN.')
+elif not app_apk_path or not test_apk_path:
+    raise Exception('You need to set both the app and test APK path.')
+
+'''
+The AWS Device Farm client
+'''
+
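+# Device Farm is currently only available in the us-west-2 region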
+client = boto3.client('devicefarm', region_name='us-west-2')
+
+'''
+Methods
+'''
+
+def upload_apk(apk_name, apk_path, apk_type):
+    print 'Starting upload: %s' % apk_type
+    result = client.create_upload(
+        projectArn=project_arn, name=apk_name, type=apk_type)
+    presigned_url = result.get('upload').get('url')
+    upload_arn = result.get('upload').get('arn')
+
+    # PUT the file content and wait
+    put_file(apk_path=apk_path, presigned_url=presigned_url)
+    wait_for_upload(upload_arn=upload_arn)
+    return upload_arn
+
+def put_file(apk_path, presigned_url):
+    print 'Uploading: %s' % apk_path
+    with open(apk_path, 'rb') as f:
+        data = f.read()
+    result = requests.put(presigned_url, data=data)
+    if result.status_code != 200:
+        raise Exception(
+            'PUT failed with status code: %s' % result.status_code)
+
+def wait_for_upload(upload_arn):
+    print 'Checking if the upload succeeded.'
+    succeeded = False
+    while not succeeded:
+        result = client.get_upload(arn=upload_arn)
+        status = result.get('upload').get('status')
+        succeeded = (status == 'SUCCEEDED')
+        if status == 'FAILED':
+            raise Exception('Upload failed.')
+        elif not succeeded:
+            print 'Upload is not ready (status is %s), waiting for 5 seconds.' % status
+            sleep(5)
+
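+# An INSTRUMENTATION run executes the Espresso tests in the uploaded test APK
+# on each device in the pool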
+def schedule_run(app_arn, test_arn):
+    print 'Scheduling a run.'
+    result = client.schedule_run(
+        projectArn=project_arn,
+        appArn=app_arn,
+        devicePoolArn=device_pool_arn,
+        name='automated_run',
+        test={'type': 'INSTRUMENTATION', 'testPackageArn': test_arn})
+    run_arn = result.get('run').get('arn')
+    return_code = wait_for_run(run_arn=run_arn)
+    return return_code
+
+def wait_for_run(run_arn):
+    print 'Checking if the run succeeded.'
+    return_code = ''
+    succeeded = False
+    while not succeeded:
+        result = client.get_run(arn=run_arn)
+        status = result.get('run').get('status')
+        return_code = result.get('run').get('result')
+        succeeded = (status == 'COMPLETED')
+        if not succeeded:
+            print 'Run not completed (status is %s), waiting for 60 seconds.' % status
+            sleep(60)
+    return return_code
+
+'''
+Main flow
+'''
+
+# 1. Upload the app APK
+app_arn = upload_apk(
+    apk_name='app-debug-unaligned.apk',
+    apk_path=app_apk_path,
+    apk_type='ANDROID_APP')
+
+# 2. Upload the test APK
+test_arn = upload_apk(
+    apk_name='app-debug-androidTest-unaligned.apk',
+    apk_path=test_apk_path,
+    apk_type='INSTRUMENTATION_TEST_PACKAGE')
+
+# 3. Schedule the run
+return_code = schedule_run(app_arn=app_arn, test_arn=test_arn)
+exit_code = 0 if return_code == 'PASSED' else 1
+print 'Run completed: %s' % return_code
+sys.exit(exit_code)
\ No newline at end of file