@ghosert 2020-10-24T03:19:36.000000Z

Test Automation Framework



Introduction

The goal of this test automation framework is to help developers and QE engineers simplify and reduce the effort of writing automated test cases. In most cases, engineers should focus only on the test cases themselves instead of worrying about these routine jobs:

  1. Making a RESTful API call with headers/request body and validating the response.
  2. Inserting/updating data in databases like MySQL/Cassandra.
  3. Validating data inside the database.
  4. Sending a Kafka message or validating a message in the Kafka broker.

This test automation framework takes care of the routine work above without requiring any Java code in most cases.

How it works

1. Automation Framework Work Flow

(figure: howItWorks.png)

Start → Read test case in Excel → Calculate requests → Execute requests: Http/Kafka/Sql → Assert actual data against expected data → End of the Excel? (no: read the next test case; yes: End)

2. Sample Excel with test case

A sample Excel file with test cases looks like the one below (click for a larger picture):

(figure: excel_sample.png)

| testCaseId | testCaseDescription | vars | kafkaRequest | sqlUpsert | httpRequest | httpResponse | sqlValidation | kafkaValidation | testConfig |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Test_1_1 | Slot abuse test for policy engine decision api (Attempt 1 decision API) | | | | | | | | |

Sample Http Request

A sample value in the httpRequest Excel column, including the API endpoint, HTTP method, headers, and body:

    {
      "endpoint": "${api.base.uri.policyengine}/policy-engine-app/rs/v1/decision",
      "method": "POST",
      "headers": {
        "accept": "application/json",
        "content-type": "application/json",
        "wm_consumer.tenant_id": "0",
        "wm_consumer.vertical_id": "2"
      },
      "body": {
        "tenant": 0,
        "vertical": 2,
        "context": {
          "flowType": "CASPR",
          "flowSubType": "BOOK_SLOT",
          "deviceType": "IOS",
          "customer": {
            "cid": "3ebws2dz-t6cc-9b0s-e6ad-fc3h03aiwqta",
            "isAssociate": false,
            "isGuestSignUp": false
          },
          "deviceInfo": {
            "vtc": "PORqWxdC8AQPYYMKlInF",
            "deviceId": "123232323as",
            "ipV4Address": "10.12.34.56"
          },
          "slotDetail": {
            "slotId": "yjwv",
            "reservationId": "R109",
            "slotDate": 1589742000000,
            "zoneOffset": null,
            "accesspoint": "DELIVERY"
          }
        }
      }
    }

Sample Http Response

A sample value in the httpResponse Excel column, as the expected HTTP response including status code and response body:

    {
      "status": 200,
      "body": {
        "decision": "ALLOW"
      }
    }

Each row in the Excel file is a single test case. It may or may not have dependencies on other test cases, depending on which test cases you have in the file.

The test automation framework reads this Excel file line by line and runs each test case: calling an API and validating the response, manipulating the database, or sending/validating a Kafka message, as the Excel column names indicate.

For example, once the framework reads the HTTP sample listed above, it calls the API with the specified endpoint, method, headers, and body. After getting a response, it automatically compares the actual data with the expected HTTP response, raising an error if they differ or passing the test if they match.
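The comparison step can be pictured as a recursive subset match: every field present in the expected JSON must exist in the actual response with an equal (or recursively matching) value, while extra actual fields are ignored. A minimal sketch of this idea (illustrative only, not the framework's actual implementation), using plain Maps and Lists to stand in for parsed JSON:

```java
import java.util.List;
import java.util.Map;

public class SubsetMatcher {

    // Returns true if every entry in `expected` is present and matching in `actual`.
    public static boolean matches(Object expected, Object actual) {
        if (expected instanceof Map && actual instanceof Map) {
            Map<?, ?> exp = (Map<?, ?>) expected;
            Map<?, ?> act = (Map<?, ?>) actual;
            // Only the expected keys are checked; extra actual fields are ignored.
            return exp.entrySet().stream()
                    .allMatch(e -> act.containsKey(e.getKey())
                            && matches(e.getValue(), act.get(e.getKey())));
        }
        if (expected instanceof List && actual instanceof List) {
            List<?> exp = (List<?>) expected;
            List<?> act = (List<?>) actual;
            if (exp.size() != act.size()) return false;
            for (int i = 0; i < exp.size(); i++) {
                if (!matches(exp.get(i), act.get(i))) return false;
            }
            return true;
        }
        // Leaf values: plain equality (null-safe).
        return expected == null ? actual == null : expected.equals(actual);
    }

    public static void main(String[] args) {
        Map<String, Object> expected = Map.of("status", 200, "body", Map.of("decision", "ALLOW"));
        Map<String, Object> actual = Map.of("status", 200,
                "body", Map.of("decision", "ALLOW", "traceId", "abc123"));
        System.out.println(matches(expected, actual)); // extra actual fields are tolerated
    }
}
```

This subset semantics is what lets an expected cell list only the fields it cares about.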

We will go through the details on this excel file in the following sections.

Since the automation framework takes care of most of the work, engineers can focus only on writing test cases in Excel.

Getting Started

1. Create Automation Git Repo

Create your automation git repository with a structure like below:

    ├── pom.xml
    ├── src
    │   ├── main
    │   │   ├── java
    │   │   └── resources
    │   └── test
    │       ├── java
    │       │   └── com
    │       │       └── walmart
    │       │           └── policy
    │       │               └── engine
    │       │                   └── api
    │       │                       ├── PolicyAdminAppTests.java
    │       │                       └── PolicyEngineDecisionTests.java
    │       └── resources
    │           ├── PolicyAdminAppTests.csv
    │           ├── PolicyAdminAppTests.xls
    │           ├── PolicyEngineDecisionTests.csv
    │           ├── PolicyEngineDecisionTests.xls
    │           ├── application-local.yml
    │           ├── application-qaint.yml
    │           ├── log4j2.xml
    │           └── testng.xml
    └── tools
        ├── pre-commit
        └── xls2txt

The following sections explain the structure.

2. Required dependency in your pom

pom.xml

Add the test automation framework dependency to your pom.xml:

    <dependencies>
      <dependency>
        <groupId>com.walmart.test.automation</groupId>
        <artifactId>test-engine-automation</artifactId>
        <version>1.0.1</version>
      </dependency>
    </dependencies>

3. Test cases inside excel files

PolicyEngineDecisionTests.xls

This is the Excel file with the test cases in it, as mentioned above.

You can view the raw Excel file here; use it as a template to create your own Excel test file, and remember to put it into src/test/resources.

4. An empty Java file

PolicyEngineDecisionTests.java

Each Excel-based test case file needs a Java class to trigger it, so the file PolicyEngineDecisionTests.xls above needs a corresponding Java file following the naming convention PolicyEngineDecisionTests.java:

    package com.walmart.policy.engine.api;

    import com.walmart.test.automation.AutomationBaseTests;

    public class PolicyEngineDecisionTests extends AutomationBaseTests {
        // no java code is required, only excel test data in PolicyEngineDecisionTests.xls
    }

This Java file can be empty like the one above (keeping the focus on the tests in Excel) if you don't have any special cases to handle, but it must extend AutomationBaseTests and be placed in src/test/java.

You can view the raw Java file here.

5. Auto-generated CSV text file

PolicyEngineDecisionTests.csv

Since an Excel file is binary and cannot be diffed as text, this CSV file is auto-converted from the corresponding .xls file and has equivalent content, while users can keep using Excel with richer features like colors and formatting.

Copy the tools folder into the root of your git repo and copy tools/pre-commit into GIT_REPO/.git/hooks/. After that, whenever the Excel files change, the corresponding CSV files will be auto-generated when you commit your changes.

6. Yaml application configuration files

application-qaint.yml

This YAML file holds all the application configurations, such as API base URIs, Cassandra/Kafka/MySQL connection info, and the Jira integration (explained later). Just replace the Cassandra/Kafka/MySQL values below with yours. You can also define your own configurations here. These configurations are not only used internally by the framework; you can also use them in your Java test cases or even in the Excel file.

For example, ${api.base.uri.policyengine} below is referenced by the endpoint field of the HTTP sample above.

    api.base.uri:
      policyengine: https://ws.qaint.policyengine.walmart.com
      policyadmin: https://ws.qaint.policyadmin.walmart.com
      electrode: https://policy.governance.qaint.walmart.com
    cassandra:
      contact-points: cass-491076148-2-515233908.dev.policy-engine-dev.ms-df-cassandra.stg-az-westus-2.prod.us.walmart.net
      local-data-center: westus
      username: app
      password: xxxxxx
      keyspace: policy_engine
      port: 9042
      schema-action: NONE
    kafka:
      bootstrap.servers: kafka-358735030-1-886374088.prod-southcentralus-az.kafka-shared-non-prod.ms-df-messaging.prod-az-southcentralus-1.prod.us.walmart.net:9092
      key.serializer: org.apache.kafka.common.serialization.StringSerializer
      value.serializer: org.apache.kafka.common.serialization.StringSerializer
      key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value.deserializer: org.apache.kafka.common.serialization.StringDeserializer
    mysql.datasource:
      jdbc-url: jdbc:mysql://db.stg.policy-engine-qaint-stg.ms-df-cloudrdbms.glb.us.walmart.net:3306/policy_engine
      username: policy_engine
      password: xxxxxx
      driver-class-name: com.mysql.cj.jdbc.Driver
      max-pool-size: 20
      min-idel: 10
      pool-name: HikariReadWritePool
    jira:
      enable: false
      createForEach: false
      groupPriority: P2
      defaultPriority: P3
      project: CEPPE
      issuetype: Task
      reporter: j0z05z1
      assignee: j0z05z1
      summary: Test case(s) failed in automation suite
      api:
        url: https://jira.walmart.com/rest/api/2/issue
        authentication: GuRwNsXaTdygO0pG4Nw1JVFUMkyr73t5rZbSL+Zzq9rdij6UjgwAkI+FV8bian0Lsk0ekhQagswvdGGOaO
It's also possible to use these configurations in a Java test file, like below:

    package com.walmart.policy.engine.api;

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.core.env.Environment;
    import org.testng.ITestContext;
    import org.testng.annotations.Test;

    public class PolicyEngineDecisionTests extends AutomationBaseTests {

        @Autowired
        private Environment environment;

        @Test
        public void testSample(ITestContext iTestContext) {
            // Resolve the placeholders by the values from application-qaint.yml or excel file, even support list
            System.out.println(this.environment.resolvePlaceholders("hello: ${api.base.uri.policyengine} world"));
        }
    }

The file naming convention indicates which environment a configuration file applies to:

application-qaint.yml is for the qaint environment.
application-local.yml is for the local box.

By default, the framework uses application-qaint.yml to run test cases, unless you specify a system property like below:

    mvn test -Dspring.profiles.active=local

This system property also works when you run test cases in an IDE like IntelliJ.

application-local.yml

    api.base.uri:
      policyengine: http://localhost:8080
      policyadmin: http://localhost:8080
      electrode: http://localhost:8080

Inside application-local.yml, define only the keys/values you want to override, as above. In this case, the framework will switch the endpoints to your local box but still use the rest of the info, such as the Kafka/MySQL/Cassandra settings defined in the default application-qaint.yml.

7. Jira Integration

With the Jira configuration shown below in the YAML file, the automation framework automatically creates a Jira ticket for any test failure, with the specified summary and detailed error information in the Jira description.

    jira:
      enable: false
      createForEach: false
      groupPriority: P2
      defaultPriority: P3
      project: CEPPE
      issuetype: Task
      reporter: j0z05z1
      assignee: j0z05z1
      summary: Test case(s) failed in automation suite
      api:
        url: https://jira.walmart.com/rest/api/2/issue
        authentication: GuRwNsXaTdygO0pG4Nw1JVFUMkyr73t5rZbSL+Zzq9rdij6UjgwAkI+FV8bian0Lsk0ekhQagswvdGGOaO
| key name | value | description |
| --- | --- | --- |
| jira.enable | true/false | turn on/off this feature |
| jira.createForEach | true/false | true: create one ticket for each failed test case; false: create a single ticket for all failed test cases |
| jira.groupPriority | P1/P2/P3/P4/None | the priority to be set when jira.createForEach is false |
| jira.defaultPriority | P1/P2/P3/P4/None | the default priority to be set if no priority is specified when creating the Jira ticket |
| jira.project | Jira project key | fetch the proper project key for your team by calling: https://jira.walmart.com/rest/api/2/project |
| jira.issuetype | Task | |
| jira.reporter | Walmart user id | the reporter of the Jira ticket |
| jira.assignee | Walmart user id | the assignee of the Jira ticket |
| jira.summary | any string | the summary of the Jira ticket |
| jira.api.url | https://jira.walmart.com/rest/api/2/issue | Walmart Jira endpoint |
| jira.api.authentication | Encrypted base64 basic authentication | update your Jira userId/password here, and run the test case to get the encrypted value for this field |
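The jira.api.authentication value carries a Base64 basic-auth credential. Assuming it follows the standard HTTP Basic scheme (Base64 of userId:password — an assumption here, since the framework may apply additional encryption on top, which is why it asks you to run a test case to obtain the final value), generating such a token looks like:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthToken {

    // Standard HTTP Basic credential: Base64("userId:password").
    // Assumption: the framework may wrap this in extra encryption; follow its own flow for the real value.
    public static String encode(String userId, String password) {
        String pair = userId + ":" + password;
        return Base64.getEncoder().encodeToString(pair.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // Would be sent as: Authorization: Basic <token>
        System.out.println(encode("j0z05z1", "secret"));
    }
}
```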

The values defined in the YAML file above are defaults; they can be overridden when starting a test, like below:

    mvn test -Djira.enable=true -Djira.createForEach=true

You can also set an individual Jira priority for each test case that fails when jira.enable=true and jira.createForEach=true. Check the "priority" field in testConfig.

The sample Jira tickets created by automation:
One Jira per test failure:
https://jira.walmart.com/browse/CEPPE-357
A single Jira contains multiple test failures:
https://jira.walmart.com/browse/CEPPE-352

8. Your own Java test case

If the automation doesn't cover all the scenarios you want, you can also add more Java test cases, like below:

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.core.env.Environment;
    import org.testng.ITestContext;
    import org.testng.annotations.Test;

    public class PolicyEngineDecisionTests extends AutomationBaseTests {

        @Autowired
        private Environment environment;

        @Test
        public void testTestCaseId(ITestContext iTestContext) {
            TestData testData = (TestData) iTestContext.getAttribute("Test_1_1");
            System.out.println(testData.getActualHttpResponse());
            // Resolve the placeholders by the values from application-qaint.yml or excel file, even support list
            System.out.println(this.environment.resolvePlaceholders("hello: ${Test_1_1.httpResponse.body.decision}"));
            System.out.println(this.environment.resolvePlaceholders("hello: ${Test_2_1.vars.cid}"));
            TestData testData_2_1 = (TestData) iTestContext.getAttribute("Test_2_1");
            System.out.println(testData_2_1.getHttpRequest());
        }
    }

The framework will run not only the test cases inside the Excel/CSV file but also your own Java test cases like the one above. iTestContext.getAttribute("Test_1_1") returns all the data of that test case, and environment.resolvePlaceholders(...) shows how to resolve a string containing a placeholder. Check the details in PolicyEngineDecisionTests.java.

9. log4j2.xml and testng.xml

Nothing special here; just use them as usual. Don't forget to list your test Java classes in testng.xml, even if they contain no additional test cases, since these Java classes read the corresponding Excel test files to run the test cases.

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
    <suite guice-stage="DEVELOPMENT" name="Policy Tests">
        <test name="Policy Tests">
            <classes>
                <class name="com.walmart.policy.engine.api.PolicyEngineDecisionTests"/>
                <class name="com.walmart.policy.engine.api.PolicyAdminAppTests"/>
            </classes>
        </test>
    </suite>

Run with

$ mvn test

10. Sample Git Repository

The sections above explain everything about the structure; you can always copy/paste samples from Policy Automation to create your own.

Usage: Excel test case file

| testCaseId | testCaseDescription | vars | kafkaRequest | sqlUpsert | httpRequest | httpResponse | sqlValidation | kafkaValidation | testConfig |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Test_1_1 | Slot abuse test for policy engine decision api (Attempt 1 decision API) | | | | | | | | |
| Test_1_2 | Slot abuse test for policy engine decision api (Attempt 2 decision API) | | | | | | | | |

Before we explain the meaning of each column above, you can view these two sample Excel files alongside the following explanations:

PolicyEngineDecisionTests.xls
PolicyAdminAppTests.xls

1. Column testCaseId (required)

You can put any unique string here as a test case id. The format Test_1_1 is suggested: the first number indicates the group id, and the second number is the case index within group 1. The id must be unique, since Java code or other Excel cells may refer to it for data (explained later).

2. Column testCaseDescription (Optional)

Put any description of the purpose of the test case here. It will appear in the log if an assertion fails.

3. Column vars (Optional)

A column where you can define a JSON object of global variables, which can be used in Java code or other Excel cells.

For example, you can have the JSON below in the vars cell of the row Test_2_1:

    {
      "cid": "7m5kemzz-xb7u-4skk-8jbu-bzb6uttrjori",
      "vtc": "K4mNGN445oE4bxELseLT"
    }

Then, in any cell of the Excel file, you can refer to these variables. For example, in the httpRequest cell you can define a request body like below, with ${Test_2_1.vars.cid} as a placeholder:

    {
      "body": {
        "context": {
          "customer": {
            "cid": "${Test_2_1.vars.cid}"
          },
          "deviceInfo": {
            "vtc": "${Test_2_1.vars.vtc}"
          },
          "slotDetail": {
            "slotId": "tybe"
          }
        }
      }
    }

The ${Test_2_1.vars.cid} placeholder here also shows how testCaseId is used.
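To illustrate how a ${testCaseId.column.field} placeholder can be resolved, here is a hypothetical sketch that walks a nested map of test data. The framework actually delegates this to Spring's property resolution, so treat the helper below as illustrative only:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderResolver {

    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)}");

    // Resolves ${a.b.c} against nested maps: root.get("a").get("b").get("c").
    public static String resolve(String template, Map<String, Object> root) {
        Matcher m = PLACEHOLDER.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            Object value = root;
            for (String key : m.group(1).split("\\.")) {
                value = ((Map<?, ?>) value).get(key);
            }
            m.appendReplacement(out, Matcher.quoteReplacement(String.valueOf(value)));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> data =
                Map.of("Test_2_1", Map.of("vars", Map.of("cid", "7m5kemzz-xb7u-4skk-8jbu-bzb6uttrjori")));
        System.out.println(resolve("\"cid\": \"${Test_2_1.vars.cid}\"", data));
    }
}
```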

4. Column kafkaRequest (Optional)

Any JSON data in this column will be sent to the specified Kafka topic by the automation framework.

    {
      "topic": "policy-caspr-feed",
      "key": "[slotId-1, slotId2]",
      "data": {
        "header": {
          "action": "RESERVATION_CANCEL"
        },
        "payload": {
          "reservationId": [
            "slotId-1",
            "slotId-2"
          ]
        }
      }
    }

The JSON should follow the schema like below:

| key name | value type | required | description |
| --- | --- | --- | --- |
| topic | String | yes | the topic you want to send the Kafka message to |
| key | String | no | the message key to send to the specified topic; optional if not required |
| data | Json | yes | the JSON to send to the specified topic with the specified key; supports String serialization only for now |

Kafka connection info should be pre-configured in Yaml configuration.

5. Column sqlUpsert (Optional)

The JSON data in this column tells the automation framework which DML statements to run to insert or update data in the specified databases.

    {
      "cassandra": [
        "INSERT INTO velocity_snapshot (tenant_id, vel_var, vel_key, create_date, ref_id)
         VALUES (0, 'COUNT_ORDERS_BY_CID', 'WZV73LJL-0V0Y-GNY1-LRJ3-1E1A6SJK90GY', toTimeStamp(now()), 'order_509')",
        "INSERT INTO velocity_snapshot (tenant_id, vel_var, vel_key, create_date, ref_id)
         VALUES (0, 'COUNT_ORDERS_BY_VTC', 'K8LGU0TN1IA9GULHNISH', toTimeStamp(now()), 'order_509')"
      ],
      "mysql":
        "update policies set description = 'Customer auth failure more than limit(s)'
         where id = '1458fe0b-28c7-4831-8ea4-e249a138d0bc'"
    }

The JSON should follow the schema like below:

| key name | value type | required | description |
| --- | --- | --- | --- |
| cassandra | List or String | no | specify Cassandra as the target DB; a list (or single string) of DML statements to insert/update data |
| mysql | List or String | no | specify MySQL as the target DB; same as above |

DB connection info should be pre-configured in Yaml configuration.
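Because the cassandra and mysql keys accept either a single DML string or a list of them, processing typically normalizes the value to a list before executing each statement the same way. A sketch of that normalization (toStatements is a hypothetical helper, not a framework API):

```java
import java.util.List;
import java.util.stream.Collectors;

public class DmlNormalizer {

    // The parsed "cassandra"/"mysql" value may be a single String or a List of Strings;
    // normalize to a list so every statement is executed uniformly.
    public static List<String> toStatements(Object cellValue) {
        if (cellValue == null) return List.of();
        if (cellValue instanceof String) return List.of((String) cellValue);
        if (cellValue instanceof List) {
            return ((List<?>) cellValue).stream().map(String::valueOf).collect(Collectors.toList());
        }
        throw new IllegalArgumentException("Expected String or List, got " + cellValue.getClass());
    }

    public static void main(String[] args) {
        System.out.println(toStatements("update policies set state = 'LIVE'"));
        System.out.println(toStatements(List.of("insert ...", "update ...")));
    }
}
```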

6. Column httpRequest (Optional)

The framework will call the API with the specified endpoint/method/headers/body in the httpRequest column.

    {
      "endpoint": "${api.base.uri.policyengine}/policy-engine-app/rs/v1/decision",
      "method": "POST",
      "headers": {
        "accept": "application/json",
        "content-type": "application/json",
        "wm_consumer.tenant_id": "0",
        "wm_consumer.vertical_id": "2"
      },
      "body": {
        "tenant": 0,
        "vertical": 2,
        "context": {
          "flowType": "CASPR",
          "flowSubType": "BOOK_SLOT",
          "deviceType": "IOS",
          "customer": {
            "cid": "3ebws2dz-t6cc-9b0s-e6ad-fc3h03aiwqta",
            "isAssociate": false,
            "isGuestSignUp": false
          },
          "deviceInfo": {
            "vtc": "PORqWxdC8AQPYYMKlInF",
            "deviceId": "123232323as",
            "ipV4Address": "10.12.34.56"
          },
          "slotDetail": {
            "slotId": "yjwv",
            "reservationId": "R109",
            "slotDate": 1589742000000,
            "zoneOffset": null,
            "accesspoint": "DELIVERY"
          }
        }
      }
    }

The cell data should follow the schema like below:

| key name | value type | required | description |
| --- | --- | --- | --- |
| endpoint | String | yes | the target endpoint of the API call |
| method | String | yes | HTTP method, like "POST/PUT/GET/DELETE" |
| headers | Json | no | any HTTP header key/value pairs |
| body | Json | no | the HTTP request body you want to send |

7. Column httpResponse (Optional)

Put the expected response data in this column. After calling the API in the httpRequest column, the framework compares the actual response with the expected one recursively. If the response is a complex JSON, you don't have to list all of its fields; just pick the ones you want to validate.

    {
      "status": 200,
      "body": {
        "decision": "ALLOW"
      }
    }

If the actual status is not 200, or the decision is "DECLINE", the test case fails.

The cell data should follow the schema like below:

| key name | value type | required | description |
| --- | --- | --- | --- |
| status | Number | no | the expected HTTP status code, like 200/400/500 |
| body | Json | no | the expected JSON with only the fields you want to validate; you don't have to list all fields |

8. Column sqlValidation (Optional)

Put a SQL query and the expected result set in this column; the framework will query the DB and compare the actual result set with the expected one.

    {
      "cassandra": [
        {
          "query": "SELECT count(1) FROM velocity_snapshot where vel_var = 'COUNT_ORDERS_BY_CID' AND vel_key = 'WZV73LJL-0V0Y-GNY1-LRJ3-1E1A6SJK90GY' ALLOW FILTERING",
          "result": [{"count": 1}]
        },
        {
          "query": "SELECT count(1) FROM velocity_snapshot where vel_var = 'COUNT_ORDERS_BY_VTC' AND vel_key = 'K8LGU0TN1IA9GULHNISH' ALLOW FILTERING",
          "result": [{"count": 1}]
        }
      ],
      "mysql": {
        "query": "select state, approval_level from policies where id = '${Test_1_1.httpResponse.body.id}'",
        "result": {"state": "PREVIEW", "approval_level": null}
      }
    }

The JSON should follow the schema like below:

| key name | value type | required | description |
| --- | --- | --- | --- |
| cassandra | List or String | no | specify Cassandra as the target DB |
| mysql | List or String | no | specify MySQL as the target DB |
| query | String | yes | the SQL query to fetch data from the target DB |
| result | Json or Json list | yes | a list of JSON objects when the query returns multiple rows; a one-row result can be simplified to a single JSON object. Inside each JSON object, the keys are the column names and the values are the expected values from the target DB |

DB connection info should be pre-configured in Yaml configuration.

9. Column kafkaValidation (Optional)

If this column is populated, the automation framework starts a Kafka listener internally. Once you trigger a Kafka message from somewhere, such as kafkaRequest or httpRequest, this listener expects to receive the message specified below. If the listener fails to receive any message within 45 seconds, or the actual message received doesn't match the expected data below, an assertion error is raised.

    {
      "topic": "policy-caspr-feed",
      "key": "[slotId-1, slotId2]",
      "data": {
        "header": {
          "action": "RESERVATION_CANCEL"
        },
        "payload": {
          "reservationId": [
            "slotId-1",
            "slotId-2"
          ]
        }
      }
    }

The JSON should follow the schema like below:

| key name | value type | required | description |
| --- | --- | --- | --- |
| topic | String | yes | the topic you expect to receive the Kafka message from |
| key | String | no | the expected message key |
| data | Json | yes | the expected JSON data to be received; supports String serialization only for now |

Kafka connection info should be pre-configured in Yaml configuration.
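The 45-second expectation can be sketched as a blocking wait on whatever queue the internal listener feeds. Both the queue and the awaitMatching helper below are illustrative, not the framework's API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class KafkaExpectation {

    // Waits up to `timeoutMs` for the listener to deliver a message (the framework
    // uses a 45-second window); raises an AssertionError on timeout or mismatch.
    public static String awaitMatching(BlockingQueue<String> received, String expected, long timeoutMs) {
        final String actual;
        try {
            actual = received.poll(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new AssertionError("Interrupted while waiting for a Kafka message", e);
        }
        if (actual == null) {
            throw new AssertionError("No Kafka message received within " + timeoutMs + " ms");
        }
        if (!actual.equals(expected)) {
            throw new AssertionError("Expected " + expected + " but received " + actual);
        }
        return actual;
    }

    public static void main(String[] args) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.add("{\"action\":\"RESERVATION_CANCEL\"}");
        System.out.println(awaitMatching(queue, "{\"action\":\"RESERVATION_CANCEL\"}", 1000));
    }
}
```

In the real framework the comparison is against the parsed JSON (with the subset semantics described for httpResponse), not raw string equality.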

10. Column testConfig (Required)

The testConfig column is introduced to avoid duplicating test data all over the Excel file. With the proper overwrite settings on all (default) columns or on specific columns, you can focus only on the data that changes and keep the rest as before, without even mentioning it.

a) new strategy on default columns

For example, suppose the first row of the Excel file looks like below:

testCaseId

Test_1_1

httpRequest

    {
      "endpoint": "${api.base.uri.policyengine}/policy-engine-app/rs/v1/decision",
      "method": "POST",
      "headers": {
        "content-type": "application/json"
      },
      "body": {
        "tenant": 0,
        "vertical": 2,
        "context": {
          "flowType": "CASPR",
          "customer": {
            "cid": "3ebws2dz-t6cc-9b0s-e6ad-fc3h03aiwqta",
            "isAssociate": false,
            "isGuestSignUp": false
          },
          "deviceInfo": {
            "vtc": "PORqWxdC8AQPYYMKlInF"
          },
          "slotDetail": {
            "slotId": "yjwv",
            "slotDate": 1589742000000,
            "accesspoint": "DELIVERY"
          }
        }
      }
    }

httpResponse

    {
      "status": 200,
      "body": {
        "decision": "ALLOW"
      }
    }

testConfig

    {
      "overwrite": {
        "default": "new"
      }
    }

The "default": "new" above indicates that the existing data in all (default) columns is new/base/template/original data. The framework uses it directly, without any changes, for the current case Test_1_1, and all the following rows/cases will generate new cases based on this data, until a later row/case has the same "default": "new" in its testConfig column.

b) merge strategy on default columns

What if, for the second row/test case in the Excel file, the changes are only in some fields of httpRequest, and we expect the same httpResponse? Since we don't want to repeat duplicate test data in Excel, you can write the second row as below:

testCaseId

Test_1_2

httpRequest

    {
      "body": {
        "context": {
          "customer": {
            "cid": "${Test_2_1.vars.cid}"
          },
          "deviceInfo": {
            "vtc": "${Test_2_1.vars.vtc}"
          },
          "slotDetail": {
            "slotId": "tybe"
          }
        }
      }
    }

httpResponse

leave blank

testConfig

    {
      "overwrite": {
        "default": "merge"
      }
    }

With the merge strategy specified above, the framework applies the changes on top of the base data from the previous row Test_1_1 to build a new request, keeping the other fields as before; the default key indicates that the merge happens on all columns, including httpRequest and httpResponse.

This way you don't have to repeat duplicate fields in the request/response when they haven't changed.

In this case, Test_1_2 gets a new context.customer.cid, context.deviceInfo.vtc, and context.slotDetail.slotId before sending the request, while the other info, such as endpoint, method, headers, and body.context.flowType, remains the same as Test_1_1. httpResponse is left blank since we reuse the base/original data from Test_1_1, which is:

    {
      "status": 200,
      "body": {
        "decision": "ALLOW"
      }
    }

This mechanism saves the effort of maintaining duplicate data: update only the base/original/template case, and the changes are reflected in all the cases that use the merge strategy.
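The merge strategy is essentially a recursive map merge: overlay fields replace base fields, and nested objects are merged key by key. A simplified sketch of this idea (list handling and other details may differ in the real framework):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MergeStrategy {

    // Recursively merges `overlay` into a copy of `base`: nested maps are merged
    // key by key, any other overlay value replaces the base value.
    @SuppressWarnings("unchecked")
    public static Map<String, Object> merge(Map<String, Object> base, Map<String, Object> overlay) {
        Map<String, Object> result = new LinkedHashMap<>(base);
        overlay.forEach((key, value) -> {
            Object existing = result.get(key);
            if (existing instanceof Map && value instanceof Map) {
                result.put(key, merge((Map<String, Object>) existing, (Map<String, Object>) value));
            } else {
                result.put(key, value);
            }
        });
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> base = Map.of("method", "POST",
                "body", Map.of("slotId", "yjwv", "vtc", "PORqWxdC8AQPYYMKlInF"));
        Map<String, Object> overlay = Map.of("body", Map.of("slotId", "tybe"));
        // method and vtc survive from the base; slotId is overridden
        System.out.println(merge(base, overlay));
    }
}
```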

Check the sample file PolicyEngineDecisionTests.xls to get a better sense of this.

c) remove and insert strategies on specific columns

Besides the new and merge strategies, there are also remove and insert strategies for more scenarios, and you can apply a strategy to specific columns instead of all of them.

For example, the third row holds a third case that omits context.customer.cid from httpRequest, and we expect a 400 bad request with detailed error messages in httpResponse.

testCaseId

Test_1_3

httpRequest

    {
      "body": {
        "context": {
          "customer": {
            "cid": "3ebws2dz-t6cc-9b0s-e6ad-fc3h03aiwqta"
          }
        }
      }
    }

httpResponse

    {
      "status": 400,
      "body": {
        "message": "API Field validation failed",
        "fieldErrors": [
          {
            "field": "context.customer.cid",
            "message": "must not be blank"
          }
        ]
      }
    }

testConfig

    {
      "overwrite": {
        "default": "merge",
        "httpRequest": "remove",
        "httpResponse": "insert"
      }
    }

"httpRequest": "remove" means context.customer.cid is removed from the base request of Test_1_1, which carries the new strategy and serves as the base/template data.

"httpResponse": "insert" means this httpResponse is totally different from the one in Test_1_1, so the base response is ignored, with no merge, and the entire current httpResponse is used instead. The difference between the new and insert strategies is that insert applies only to the current case and is ignored by the following test cases, while new resets the current data as the base data, so the following test cases respect the new base data.
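The remove strategy can be sketched as the inverse of a merge: every leaf that appears in the overlay marks a key to delete from the base, while nested maps recurse (illustrative semantics only; the value given for a removed leaf is ignored):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RemoveStrategy {

    // Deletes from a copy of `base` every path that appears in `spec`:
    // nested maps recurse, any other spec value marks the key for removal.
    @SuppressWarnings("unchecked")
    public static Map<String, Object> remove(Map<String, Object> base, Map<String, Object> spec) {
        Map<String, Object> result = new LinkedHashMap<>(base);
        spec.forEach((key, value) -> {
            Object existing = result.get(key);
            if (existing instanceof Map && value instanceof Map) {
                result.put(key, remove((Map<String, Object>) existing, (Map<String, Object>) value));
            } else {
                result.remove(key);
            }
        });
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> base = Map.of("body",
                Map.of("customer", Map.of("cid", "3ebws2dz", "isAssociate", false)));
        Map<String, Object> spec = Map.of("body", Map.of("customer", Map.of("cid", "x")));
        System.out.println(remove(base, spec)); // cid is gone, isAssociate remains
    }
}
```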

d) "priority" field in testConfig

    {
      "overwrite": {
        "default": "new"
      },
      "priority": "P2"
    }

The "priority" field accepts values like "P1/P2/P3/P4/None"; its value sets an individual Jira priority for the test case when it fails. If you don't set this field, the jira.defaultPriority defined in the YAML file is used. See details in Jira Integration.

11. RegExp and placeholders in Excel

RegExp

In some cases you want to validate a random ID in httpResponse, for example after calling a create-policy API. Although you can't predict the actual ID in your test case, if you know the ID's format you can do something like below:

testCaseId

Test_1_1

httpResponse column

    {
      "status": 200,
      "body": {
        "id": "#{[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}}",
        ...
      }
    }

To use a RegExp, the expected value must take this form:

    "#{regular expression}"

If the actual value matches the regular expression you defined, the validation passes; otherwise it fails.
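Handling the "#{...}" convention amounts to checking whether the expected value carries the marker and then choosing regex matching or plain equality. A sketch (hypothetical helper, not the framework's code):

```java
import java.util.regex.Pattern;

public class ExpectedValueMatcher {

    // An expected string of the form "#{regex}" is treated as a regular expression;
    // anything else must be exactly equal.
    public static boolean matches(String expected, String actual) {
        if (expected.startsWith("#{") && expected.endsWith("}")) {
            String regex = expected.substring(2, expected.length() - 1);
            return Pattern.matches(regex, actual);
        }
        return expected.equals(actual);
    }

    public static void main(String[] args) {
        String uuidPattern = "#{[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}}";
        System.out.println(matches(uuidPattern, "1458fe0b-28c7-4831-8ea4-e249a138d0bc")); // true
    }
}
```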

Placeholder

1. Refer to any previous cell data in excel

After creating a policy, the next case reuses the new policy id to call the update-policy API, so you need the policy id from the previous case. To do this, check this sample:

testCaseId

Test_1_2

httpRequest column

    {
      "endpoint": "${api.base.uri.policyadmin}/policy-admin-app/rs/v1/policy/${Test_1_1.httpResponse.body.id}",
      "method": "PUT",
      ...
    }

${Test_1_1.httpResponse.body.id} means that in Test_1_2, we use the policy id returned by the previous create-policy API call.

This placeholder works for any column, and recursively for the JSON fields inside:

    ${testCaseId.columnName.JsonField.subJsonField}

2. Refer to a key/value from the YAML configuration as a placeholder in Excel

    "endpoint": "${api.base.uri.policyadmin}/policy-admin-app/rs/v1/policy"

${api.base.uri.policyadmin} comes from src/test/resources/application-qaint.yml or application-local.yml, depending on your predefined environment.