Solution
The time Python takes to import the modules used by your script directly affects the responsiveness of the TestStand UI. During specific events, TestStand must verify that the configured module can be imported without errors; for Python modules, TestStand performs the import in a new interpreter session to check for issues. Skipping this step would make it impossible to display an error in the step settings when the module fails to load. Importing lightweight modules typically causes no noticeable delay, but if your script imports many modules, or a single module that takes a long time to import, the UI's responsiveness will suffer.
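The check TestStand performs can be approximated with a short script that launches a fresh interpreter, attempts the import, and times it. The `check_import` helper below is purely illustrative and is not a TestStand API:

```python
import subprocess
import sys
import time

def check_import(module_name):
    """Attempt to import a module in a fresh interpreter session and
    time it, similar to the validation TestStand performs."""
    start = time.perf_counter()
    result = subprocess.run(
        [sys.executable, "-c", f"import {module_name}"],
        capture_output=True,
        text=True,
    )
    elapsed = time.perf_counter() - start
    return result.returncode == 0, elapsed

ok, seconds = check_import("json")           # a lightweight module imports quickly
bad, _ = check_import("no_such_module_xyz")  # a failing import is detected
```

Every call pays the full interpreter-startup plus import cost, which is why a slow-importing module makes the UI feel sluggish each time the check runs.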
Measuring Import Times
You can use the following Python command in a terminal to get a report of the import times in your script:
python -X importtime <script.py>
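Each line of the report (written to stderr) has the form `import time: self [us] | cumulative | imported package`, where `self` is the time spent importing that module alone and `cumulative` includes its child imports. The snippet below parses two illustrative lines; the timing values are made up:

```python
# Illustrative -X importtime output; real values vary per machine.
sample = """import time: self [us] | cumulative | imported package
import time:       136 |        136 |   _io
import time:       453 |        589 | io
"""

parsed = []
for line in sample.splitlines()[1:]:  # skip the header line
    fields = line.removeprefix("import time:").split("|")
    self_us, cumulative_us, name = (field.strip() for field in fields)
    parsed.append((name, int(self_us), int(cumulative_us)))
```

Note that `cumulative` for `io` (589 µs) already includes the 136 µs spent on its child `_io`, so summing the cumulative column would double-count nested imports.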
To obtain the total import duration, redirect the output of the importtime command (which is written to stderr, hence the 2>) to a file and analyze it with a Python script. You can generate the file by running the following command.
python -X importtime <script_name>.py 2> C:\temp\<log_file_name>.log
Then use this Python code to parse the file and get the total time in seconds. Note that the self times must be summed, not the cumulative times, because the cumulative column already includes child imports; the header line must also be skipped.

total_time_us = 0
# The log was redirected from stderr; PowerShell writes it as UTF-16.
with open('importtime.log', 'r', encoding='utf-16') as f:
    for line in f:
        if line.startswith('import time:') and '|' in line:
            # Format: "import time: self [us] | cumulative | imported package"
            self_field = line.removeprefix('import time:').split('|')[0].strip()
            if self_field.isdigit():  # skip the header line
                total_time_us += int(self_field)
print(f"Total import time: {total_time_us / 1_000_000:.6f} seconds")
Workaround
You should import the "heavy" modules only when needed. This can be achieved by placing the class instance creation within a function. For instance, the qcodes module takes about 2-3 seconds to load.
def create_instance(aliasName, instrumentAddress):
    from qcodes.instrument_drivers.tektronix.DPO7200xx import TektronixDP0700xx

    class DP05104B(TektronixDP0700xx):
        ...

    return DP05104B(aliasName, instrumentAddress)
Callers must use this function to create an instance instead of instantiating the class directly. qcodes is then imported only the first time the function executes, so the slowdown shifts from the load time of the file to the first execution of the function. Subsequent executions experience no slowdown, because Python caches imported modules in sys.modules.
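The effect is easy to observe with a standard-library module standing in for qcodes; `json` below is just a lightweight stand-in, but the mechanism is identical for heavy packages:

```python
import sys

def create_decoder():
    # The import runs only when the factory is first called; Python
    # then caches the module in sys.modules, so later calls are free.
    import json
    return json.JSONDecoder()

sys.modules.pop("json", None)     # clean slate for the demonstration
assert "json" not in sys.modules  # nothing paid at file-load time
decoder = create_decoder()        # first call pays the import cost
assert "json" in sys.modules      # cached for subsequent calls
```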
Although the Lazy Import approach is the most effective, it is also the most labor-intensive, as it requires implementing the imports in the methods that will actually use them. An alternative is to wrap the imports in a Python function, set the module variables as global, and call it as the very first step of the sequence (within the Setup group).
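A minimal, runnable sketch of this pattern using lightweight standard-library modules in place of the heavy ones (the names `lazy_imports` and `ConfigLoader` here are illustrative):

```python
def lazy_imports():
    # Declaring the names global binds the imports at module level,
    # so code defined anywhere in this file can use them afterwards.
    global json, base64
    import json
    import base64

class ConfigLoader:
    def load(self, text):
        # Works only after lazy_imports() has run, e.g. as the first
        # step of the sequence's Setup group.
        return json.loads(text)

lazy_imports()  # call once, up front
config = ConfigLoader().load('{"retries": 3}')
```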
The following Python code, which imports several heavy modules, implements this approach
def lazy_imports():
    global np, pd, plt, tf, scipy, sklearn, sns, torch, cv2
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    import tensorflow as tf
    import scipy
    import sklearn
    import seaborn as sns
    import torch
    import cv2

# Class definition
class DataProcessor:
    def __init__(self):
        self.data = pd.DataFrame({
            'A': np.random.randn(100),
            'B': np.random.randn(100),
            'C': np.random.randn(100)
        })

    def clean_data(self):
        # Example method to clean data
        self.data.dropna(inplace=True)
        print("Data cleaned")

    def normalize_data(self):
        # Example method to normalize data
        self.data = (self.data - self.data.mean()) / self.data.std()
        print("Data normalized")

    def plot_data(self):
        import matplotlib.pyplot as plt
        # Example method to plot data
        self.data.plot()
        plt.show()
and the TestStand sequence would look like this.
