Follow the steps below to create a Jupyter Notebook that extracts the ID from each test result uploaded to SystemLink.
- Ensure that your notebook imports at least the following modules (add more as needed):
import pandas as pd
import scrapbook as sb
import systemlink.clients.nitestmonitor as testmon
from typing import Tuple
- Populate a list with all test results currently uploaded to SystemLink.
- In the code snippet below, the systemlink.clients.nitestmonitor module (imported as testmon above) is used to perform the query. The query filters test results based on the value of the results_filter variable.
- By default, results_filter is set to include only test results that were executed within the last 30 days. This can be changed to any supported filter, such as filtering by sequence file name with results_filter = '(programName == \"Test Sequence.seq\")'.
- The list of results is stored in results_list.
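Before the full query code, it can help to see a few filter strings on their own. The field names below (startedWithin, programName) come from the snippets in this article; combining clauses with `&&` is an assumption about the query syntax, so verify it against your SystemLink version:

```python
# Illustrative filter strings for the results query.
# startedWithin and programName appear in this article's snippets;
# the "&&" combination is an assumption about the query language.
last_30_days = 'startedWithin <= "30.0:0:0"'
by_program = 'programName == "Test Sequence.seq"'

# A hypothetical combined filter: results from the last 30 days
# that were produced by a specific sequence file.
combined = f'({last_30_days}) && ({by_program})'
```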
results_api = testmon.ResultsApi()
steps_api = testmon.StepsApi()
async def query_results(query: testmon.ResultsAdvancedQuery, continuation_token: str) -> Tuple[list, pd.DataFrame, str]:
    """
    Queries for results using a query and a continuation token.

    :param query: The query to execute.
    :param continuation_token: Represents where to continue
        paginating the query.
    :return: A 3-tuple containing the list of result objects, the
        results data in a pandas DataFrame, and a continuation token
        which may be used to paginate the query.
    """
    if continuation_token:
        query.continuation_token = continuation_token
    response = await results_api.query_results_v2(post_body=query)
    for result in response.results:
        result.status = result.status.status_type
    df = pd.DataFrame([result.to_dict() for result in response.results])
    return response.results, df, response.continuation_token
# Change the value of results_filter to modify which results the notebook queries.
# By default, this notebook pulls in all results whose started_at property falls within the last 30 days.
results_filter = 'startedWithin <= "30.0:0:0"'
results_query = testmon.ResultsAdvancedQuery(
results_filter, product_filter='', order_by=testmon.ResultField.STARTED_AT
)
results, df_results, continuation_token = await query_results(results_query, None)
while continuation_token:
    results_batch, df_results_batch, continuation_token = await query_results(results_query, continuation_token)
    if not df_results_batch.empty:
        # extend() adds the batch's items to the list; append() would nest the batch as a single element.
        results.extend(results_batch)
        # DataFrame.append is deprecated; pd.concat is the supported way to combine batches.
        df_results = pd.concat([df_results, df_results_batch], ignore_index=True)
# The list of results is stored in results_list.
results_list = [result.to_dict() for result in results]
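The continuation-token pattern above can be exercised offline with a stubbed API. FakeResultsApi and its pages are hypothetical stand-ins (and synchronous for brevity); the real notebook calls testmon.ResultsApi().query_results_v2:

```python
# Offline sketch of continuation-token pagination with a stubbed API.
class FakeResponse:
    def __init__(self, results, continuation_token):
        self.results = results
        self.continuation_token = continuation_token

class FakeResultsApi:
    """Hypothetical stub that serves pre-built pages of results."""
    def __init__(self, pages):
        self._pages = pages
        self._index = 0

    def query_results(self, continuation_token=None):
        page = self._pages[self._index]
        self._index += 1
        # A token is returned only while more pages remain.
        token = str(self._index) if self._index < len(self._pages) else None
        return FakeResponse(page, token)

api = FakeResultsApi([[{"id": "id-1"}], [{"id": "id-2"}], [{"id": "id-3"}]])
response = api.query_results()
results = list(response.results)
while response.continuation_token:
    response = api.query_results(response.continuation_token)
    results.extend(response.results)
```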
- Populate arrays with the id and program_name associated with each result.
ids = []
names = []
for result in results_list:
ids.append(result["id"])
names.append(result["program_name"])
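The same extraction can be written as list comprehensions. The sample dictionaries below are hypothetical stand-ins for results_list, which the real notebook builds from the query above:

```python
# Hypothetical sample data in place of the queried results_list.
results_list = [
    {"id": "id-1", "program_name": "Seq A"},
    {"id": "id-2", "program_name": "Seq B"},
]

# Equivalent to the append loop above, as list comprehensions.
ids = [result["id"] for result in results_list]
names = [result["program_name"] for result in results_list]
```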
- Create a dictionary to store the ids and names arrays created above.
result_data = {
"Program Name" : names,
"Result ID" : ids
}
- Close the open sessions for the testmon.ResultsApi() and testmon.StepsApi() clients.
await steps_api.api_client.close()
await results_api.api_client.close()
- Convert the result_data dictionary into the required SystemLink output format. This involves:
- Converting the result_data dictionary into a DataFrame using the pandas module.
- Creating another dictionary with "columns" and "values" keys, where "columns" holds the fields from the DataFrame and "values" holds the values from the DataFrame.
- Creating another dictionary with "type", "id" and "data" keys, where "type" is "data_frame", "id" is a unique ID for the output data, and "data" is the dictionary created in the previous step.
- Creating an array called result that consists of the df_output dictionary.
- Using the glue() function from the scrapbook module to output result from the notebook.
#Convert the result_data dictionary into a DataFrame
df_data_dictionary = pd.DataFrame.from_dict(result_data)
#Create a new dictionary with "columns" and "values" keys to store the column names and values for df_data_dictionary
df_dict = {
'columns': pd.io.json.build_table_schema(df_data_dictionary, index=False)['fields'],
'values': df_data_dictionary.values.tolist(),
}
#Create final dictionary to store "type", "id" and "data" values
#NOTE: It is imperative that the "type" and "id" match the outputs defined in the notebook parameters
df_output = {
'type': 'data_frame',
'id': 'result_ids',
'data': df_dict
}
#Create array containing df_output
result = [df_output]
#Glue the result to the notebook
sb.glue('result', result)
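The conversion steps above can be run end to end with sample data. The two rows below are hypothetical stand-ins for the queried results; everything else mirrors the snippet in this article:

```python
import pandas as pd

# Hypothetical sample data in place of the queried ids/names.
result_data = {
    "Program Name": ["Seq A", "Seq B"],
    "Result ID": ["id-1", "id-2"],
}

# Convert the dictionary into a DataFrame.
df = pd.DataFrame.from_dict(result_data)

# Build the "columns"/"values" dictionary from the DataFrame's table schema.
df_dict = {
    "columns": pd.io.json.build_table_schema(df, index=False)["fields"],
    "values": df.values.tolist(),
}

# Wrap it in the "type"/"id"/"data" dictionary expected by SystemLink.
df_output = {"type": "data_frame", "id": "result_ids", "data": df_dict}
result = [df_output]
```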
- Define notebook parameters that match the id and type defined in the df_output dictionary above.
- In a new notebook cell, open the Property Inspector by clicking on the cog icon to the right-hand side.
- Expand the Advanced Tools section.
- In the Cell Metadata section, create a JSON string that resembles the snippet below.
- The most important features of the JSON string are:
- The "outputs" key must define a "display_name", "id" and "type" for each of the DataFrames output by the notebook. The "display_name" can be any string, but the "id" and "type" must exactly match what is defined in the df_output dictionary.
- The "parameters" key is used to define inputs to the notebook. For this example, no inputs are used. To understand how to use inputs, refer to Pass Inputs Into a SystemLink Jupyter Notebook.
{
"papermill": {
"parameters": {}
},
"systemlink": {
"namespaces": [
"ni-testmanagement"
],
"outputs": [
{
"display_name": "Result IDs",
"id": "result_ids",
"type": "data_frame"
}
],
"parameters": [],
"version": 2
},
"tags": [
"parameters"
]
}
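Since a mismatch between the cell metadata and the glued output is an easy mistake to make, a hypothetical sanity check like the one below can catch it early. The df_output stub stands in for the dictionary built earlier in the notebook:

```python
import json

# The cell metadata from this article, as a JSON string.
cell_metadata = json.loads("""
{
  "papermill": {"parameters": {}},
  "systemlink": {
    "namespaces": ["ni-testmanagement"],
    "outputs": [
      {"display_name": "Result IDs", "id": "result_ids", "type": "data_frame"}
    ],
    "parameters": [],
    "version": 2
  },
  "tags": ["parameters"]
}
""")

# Stub for the df_output dictionary glued by the notebook.
df_output = {"type": "data_frame", "id": "result_ids", "data": {}}

# The declared output's "id" and "type" must exactly match df_output.
declared = cell_metadata["systemlink"]["outputs"][0]
assert declared["id"] == df_output["id"]
assert declared["type"] == df_output["type"]
```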