Jupyter Notebooks use the Pandas library to build DataFrames, where a DataFrame is a two-dimensional structure (such as an array or table). The DataFrame is then "glued" (refer to the Glue API) so that it is output by the notebook and is accessible from SystemLink reports, SystemLink dashboards, and Grafana dashboards.
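For context, a minimal single-output sketch of this pattern is shown below. It mirrors the structure built step by step later in this article (a 'type'/'id'/'data' Dictionary glued as a one-element list); the import aliases, the 'voltage' column, and the 'voltages' id are placeholders for illustration only.

import pandas as pd
import scrapbook as sb

# Build a simple one-column DataFrame (placeholder data)
df = pd.DataFrame({"voltage": [1.2, 1.5, 1.1]})

# Wrap the DataFrame in the structure SystemLink expects and glue it
output = {
    'type': 'data_frame',
    'id': 'voltages',  # must match an output id declared in the Notebook Parameters
    'data': {
        'columns': pd.io.json.build_table_schema(df, index=False)['fields'],
        'values': df.values.tolist(),
    }
}
sb.glue('result', [output])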
To configure multiple Jupyter Notebook outputs: 
- Ensure that the Notebook Parameters are configured so that:
    - The number of outputs defined matches the number of DataFrames you wish to output.
    - The id for each output matches the id defined in the final output Dictionary.
    - The type for each output is set to data_frame (scalar is also a valid type, but the data structure must be modified to suit it).
- Below is an example of Notebook Parameters that define two outputs ("Random Data" and "Animal Data"). To configure parameters:
    - Select the Code Cell that defines your input variables.
    - Click the cog on the right-hand side of the notebook to open the Property Inspector.
    - Expand the Advanced Tools section.
    - Populate the Cell Metadata block accordingly.
{
    "papermill": {
        "parameters": {
            "num_elements": 50
        }
    },
    "systemlink": {
        "namespaces": [
            "ni-testmanagement"
        ],
        "outputs": [
            {
                "display_name": "Random Data",
                "id": "randomdata",
                "type": "data_frame"
            },
            {
                "display_name": "Animal Data",
                "id": "animals",
                "type": "data_frame"
            }
        ],
        "parameters": [
            {
                "display_name": "# of elements",
                "id": "num_elements",
                "type": "number"
            }
        ],
        "version": 2
    },
    "tags": [
        "parameters"
    ]
}
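This metadata is attached to the code cell tagged parameters. That cell holds the default parameter values, which papermill overrides when the notebook is executed. A minimal sketch of what that cell might contain (the default of 50 matches the papermill section above):

# Default parameter values; papermill injects new values here when the notebook runs
num_elements = 50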
- When collecting or calculating your output data, ensure that it matches the structure required by SystemLink. It is often simplest to follow these steps for each output:
    - Structure your output data as a Dictionary.
    - Once the Dictionary is populated, convert it to a DataFrame using df = pd.DataFrame.from_dict(<dictionary name>).
    - To prepare the data for output, create a new Dictionary from the DataFrame consisting of 'columns' and 'values' Keys, where:
        - 'columns' is pd.io.json.build_table_schema(<DataFrame name>, index=False)['fields']
        - 'values' is <DataFrame name>.values.tolist().
    - Create a final Dictionary consisting of 'type', 'id' and 'data' Keys, where:
        - 'type' is 'data_frame' (it must match the 'type' defined in the outputs[] section of the parameters shown above).
        - 'id' is a string that identifies your output data (it must match the 'id' defined in the outputs[] section of the parameters shown above).
        - 'data' is the 'columns'/'values' Dictionary created in the previous step.
- Once each output has a Dictionary consisting of 'type', 'id' and 'data' Keys, combine the output Dictionaries into a list using result = [<dictionary1>, <dictionary2>, ..., <dictionaryn>].
- Use sb.glue('result', result) to output your data to SystemLink or Grafana.
- The example code below demonstrates these steps.
import datetime
import random

import pandas as pd
import scrapbook as sb

# num_elements is provided as a notebook parameter (see the Notebook Parameters above)

# Create dictionaries that will store the output data
data_dictionary = {
    "timestamps" : [],
    "values" : []
}
data_dictionary2 = {
    "animals" : ["dog", "cat", "horse", "rabbit"]
}
# Populate the "values" array with random data and the "timestamps" array with corresponding timestamps
current_time = datetime.datetime.now()
for i in range(num_elements):
    data_dictionary["timestamps"].append(current_time - (i*datetime.timedelta(seconds = 30)))
    data_dictionary["values"].append(random.randint(0,100))
    
# Convert the dictionaries into a DataFrame
df_data_dictionary = pd.DataFrame.from_dict(data_dictionary)
df_data_dictionary2 = pd.DataFrame.from_dict(data_dictionary2)
# Create dictionaries from the DataFrames above
df_dict = {
    'columns': pd.io.json.build_table_schema(df_data_dictionary, index=False)['fields'],
    'values': df_data_dictionary.values.tolist(),
}
df_dict2 = {
    'columns': pd.io.json.build_table_schema(df_data_dictionary2, index=False)['fields'],
    'values': df_data_dictionary2.values.tolist(),
}
# Include the df_dict and df_dict2 dictionaries in another dictionary that includes the 'type', 'id' and 'data' keys.
df1 = {
    'type': 'data_frame',
    'id': 'randomdata',
    'data': df_dict
}
df2 = {
    'type': 'data_frame',
    'id': 'animals',
    'data': df_dict2
}
# Collect the output dictionaries in a list
result = [df1, df2]
# Record the result using the Glue API
sb.glue('result', result)
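If you want to check the glued output locally before using it in SystemLink or Grafana, you can read the executed notebook back with scrapbook. The sketch below is illustrative only; the notebook filename is a placeholder.

import scrapbook as sb

# Read the executed notebook and inspect the glued 'result' scrap
nb = sb.read_notebook('executed_notebook.ipynb')
for output in nb.scraps['result'].data:
    print(output['id'], output['type'], len(output['data']['values']), 'rows')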