Storage Performance CS

This test verifies that the performance of the storage device meets the performance requirements.

Test details

Associated requirements

System.Fundamentals.StorageAndBoot.BootPerformance

See the system hardware requirements.

Platforms

Windows RT (ARM-based), Windows 8 (x86), Windows RT 8.1, Windows 8.1 x64, Windows 8.1 x86

Expected run time

~240 minutes

Categories

Certification, Reliability

Type

Automated

 

Running the test

The storage device must be attached to the appropriate controller. The test is non-destructive: no files or partitions are destroyed during testing, although files are written to the drive. Because this is a performance test, it is important to minimize activity on the drive outside the logo test; outside activity may affect the results.

Troubleshooting

  • Check WTT Trace:

    • Go into the Child Job Results of RunJob – Storage Performance Library.

    • View Task Log of Run StorPerf.

    • Open the log file StorPerf.wtl.

    • Check for messages that may solve the issue.

  • Span Size Failure:

    • Error message: “The requested span size, 10737418240 bytes, is larger than the precondition file, 7195066368 bytes. Logo test invalidated.”

    • If the requested span size is larger than the precondition file, an error is logged but the test continues to run. The testzone.tmp file that was created is too small to cover the range the test requires.

    • Either additional space must be freed so that a large enough file can be created, or an undersized testzone.tmp left over from a previous run was not deleted before testing began.

    • Currently, the minimum size required for the preconditioning file is 10 GB. In addition, 20% of the drive's total size must remain free. The preconditioning phase writes a file that fills the free space, leaving free an amount equal to 20% of the total disk size, so the following must hold before testing:

      Total Disk Size * 20% + 10 GB < Free Space
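
      As a worked example (the drive size is chosen for illustration only): on a 64 GB drive, 64 GB * 20% + 10 GB = 22.8 GB, so at least 22.8 GB must be free before testing begins.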

  • Check individual test results:

    • Browse Job Logs of Storage Performance CS / Storage Performance USB3.

    • There are several types of files for each test case run inside the test; these files are copied back to the controller for triage and contain more information than is available in the WTT logs.

    • The .result files are the console output of each process that is launched from this test.

    • The .xml files are generated by the workload that this test launches; they are parsed to obtain the metrics.

    • The .csv file is the aggregate of all of the parsed data for every test case.

    • The .xls file contains the same aggregate as the .csv file of the same name, except that it adds color-coded pass/fail results along with the expected metric bar values.

    • The naming of the .result and .xml files uniquely identifies the test case run, as the hypothetical example after this list illustrates:

      • Scen = Scenario.

      • The long hexadecimal string identifies all of the parameters passed to the workload.

      • The three-digit hexadecimal string is the thread ID.

      • The final digits count the runs of this exact test case on the same workload.
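
      For example, a hypothetical name such as Scen_1A2B3C4D5E6F_0A1_002.result would identify the scenario (Scen), the parameters passed to the workload (1A2B3C4D5E6F), the thread ID (0A1), and the second run of that test case (002). The actual strings vary from run to run.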

    • If an error occurs in the log, the first place to check is the .result file of the same name as the test case where the error occurred. When an error occurs in the workload, the .result file is copied into the .wtl log file for easier access to the content.

      If the test is unable to open a handle to the disk, the drive may not have a partition, or the test may not be running as administrator.

    • If there is a discrepancy about a metric, the measured values are located in the .xml file.

    • If there is a discrepancy about the variance or test case interactions, the .csv/.xls files show the results for all of the tests.

  • Issues with open ETW logs:

    • If the test is closed during execution, an ETW logging session may remain active.

    • The easiest way to reset it is to restart the computer.

    • The logger can also be manually closed:

      • Open an elevated command prompt.

      • Run logman query -ets

      • Run logman stop -ets "Circular BitLocker Logger"

For more troubleshooting information, see Troubleshooting Device.Storage Testing.

More Information

The job takes the device instance ID of the device under test and converts it to a physical drive number or drive letter, depending on the scenario. If the scenario requires it, the job partitions and formats the drive to get it into the configuration needed for testing. The test then runs through a series of test cases, each mapped to items in the requirements. The test cases are self-contained and run sequentially. A list of test cases can be obtained by using the PrintPolicy command-line option with the appropriate device specified. For further testing or debugging, each of these test cases can be run from the command line in standalone mode with a custom policy xml file by using the PolicyXML command-line option.
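
For example, a sketch like the following (the policy file name is hypothetical, and exact option combinations may vary) would first print the test cases for a device and then rerun them standalone with a custom policy, using only options documented under Command syntax below:

  StorPerf.exe /DriveLetter D /DeviceTag CS_Boot /PrintPolicy
  StorPerf.exe /DriveLetter D /DeviceTag CS_Boot /PolicyXML CustomPolicy.xml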

The Storage Performance test stores a policy table that defines, for each type of device, which performance tests are to be run and what the appropriate metrics should be. Once the appropriate items in the table are selected, the test sequentially spawns instances of the specified workload (StorageAssessment, in this case) to test the items specified in the table for that device. When StorageAssessment has finished its testing and created the results, the Storage Performance test parses those values and compares them with the bars defined in the logo requirements in order to print pass/fail logs.

The scenario to test is referenced by the DeviceTag flag on the command line; this flag names a TestcaseGroup in the policy xml. The test has a few built-in scenarios but allows for custom scenarios, if desired.

A scenario is defined by an Order, Workload, Access, Operation, Operation Value, IO Size, Span Size, Runtime, Queue Depth, Precondition Percent, and Precondition MB. Each scenario defined in the table corresponds to one invocation of the workload; if the same scenario is defined multiple times for one device, it still invokes only one test case.
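
For illustration only (the values are hypothetical and this is not the actual policy schema), a scenario entry might read: Order 1, Workload StorageAssessment, Access Random, Operation Read, Operation Value 100, IO Size 4096, Span Size 10737418240, Runtime 60, Queue Depth 32, Precondition Percent 20, Precondition MB 10240.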

A metric is defined by its type and value. Intrinsic to the metric are its units and whether the bar is an upper limit or a lower limit. Several metrics can be specified for each scenario, and all of them are evaluated from the single test case invoked for that scenario.

For each entry in the table, a variance criterion is specified that defines the maximum variance allowed over the last set of runs before the test stops rerunning that test case. For many of the entries it is defined as a minimum of 5 runs and a maximum of 30 runs, with the variance over the last 5 runs required to be under 10% before testing stops. The test case is rerun up to 30 times or until the variance requirement is satisfied. At that point, the metric is evaluated according to its defined properties (minimum, maximum, mean, and so on) over the last set of runs.
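
A minimal sketch of this stopping rule, in Python, assuming "variance" here means the coefficient of variation of the recent results (the test's actual calculation may differ); run_once stands in for one execution of the workload:

    import statistics

    def run_until_stable(run_once, min_runs=5, max_runs=30, window=5, limit=0.10):
        # Rerun the test case until the relative variance over the last
        # `window` results drops below `limit`, or `max_runs` is reached.
        results = []
        for _ in range(max_runs):
            results.append(run_once())
            if len(results) >= min_runs:
                recent = results[-window:]
                cv = statistics.pstdev(recent) / statistics.mean(recent)
                if cv < limit:
                    break
        # The metric (minimum, maximum, mean, and so on) is then
        # evaluated over the last set of runs.
        return results[-window:]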

Although Storage Performance test is not limited to one workload, the majority of the scenarios defined in the policy table use the StorageAssessment workload in order to generate the performance workload and metrics.

Command usage

Command Description

StorPerf.exe /DriveLetter [StorageDriveLetter] /DeviceTag CS_Boot

Runs the CS testing on the specified drive. DeviceTag can also be CS_Boot_HS200 for HS200 capable drives.

 

Command syntax

Command option Description

/DriveNumber <number>

Physical drive number of device under test.

Example: /DriveNumber 0

/DriveLetter <letter>

Drive letter of device under test.

Example: /DriveLetter C

/DeviceTag <value>

Identifies which TestcaseGroup or ComparisonGroup to select as the input from the configuration xml files. This parameter is case-sensitive and is used for indexing both the policy and comparison xml files.

Example: /DeviceTag CS_Boot

/PolicyXML <value>

The policy xml file name. Defines all of the parameters for running the I/O workloads. If no option is given, the default file will be generated.

Example: /PolicyXML CSPolicy.xml

/Compare <value> <value>

The two xml files to compare. These must have been generated from a previous run of this test. The "FinalTestCasesAggregated*.xml" files should be used instead of the "AllTestCasesAggregated*.xml" files, since there is no guarantee that the number of iterations is the same for each test case.

Example: /Compare FinalTestCasesAggregated_42f4.xml FinalTestCasesAggregated_a732.xml

/CompareXML <value>

The comparison xml file name. Defines all of the parameters for running the comparison. If no option is given, the default file will be generated.

Example: /CompareXML CSCompare.xml

/PrintPolicy

Prints the policy table.

 

Note  

For command-line help for this test binary, use the /h option.

 

File list

File Location

StorPerf.exe

<[testbinroot]>\NTTest\driverstest\storage\wdk\

StorageAssessment.exe

<[testbinroot]>\NTTest\driverstest\storage\wdk\StorageAssessment\

ssdtest.dat

<[testbinroot]>\NTTest\driverstest\storage\wdk\StorageAssessment\

 

 

 
