Repository: earlephilhower/ezfio
Branch: master
Commit: bbaff2619d1a
Files: 7
Total size: 152.5 KB
Directory structure:
gitextract_6z78mt2r/
├── COPYING
├── README
├── combine.py
├── ezfio.bat
├── ezfio.ps1
├── ezfio.py
└── original.ods
================================================
FILE CONTENTS
================================================
================================================
FILE: COPYING
================================================
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License.
================================================
FILE: README
================================================
ezFIO V1.0
(C) Copyright 2015-18 HGST
earle.philhower.iii@hgst.com
------------------------------------------------------------------------
ezFIO is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 2 of the License, or
(at your option) any later version.
ezFIO is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with ezFIO. If not, see <https://www.gnu.org/licenses/>.
------------------------------------------------------------------------
This test script is intended to give a block-level based overview of
SSD performance (SATA, SAS, and NVME) under real-world conditions by
focusing on sustained performance at different block sizes and queue
depths. Both text-mode Linux and GUI and text-mode Windows versions
are included.
The results of multiple tests are summarized into a single OpenDoc format
spreadsheet, readable under OpenOffice, LibreOffice, or Microsoft Excel.
FIO is required to perform the actual IO tests. Please ensure the latest
version is installed, either from your operating system's repository or
sources available at https://github.com/axboe/fio or precompiled for
Windows at https://ci.appveyor.com/project/axboe/fio (for the GIT latest)
or from https://www.bluestop.org/fio/ .
(There seems to be an issue with FIO 3.1 under Windows that is not present
under earlier or later builds. In a nutshell, the 1200 second sustained
performance test ends up running, under this version, for over 12 hours!
While the final results are still good and the script continues, it does
waste a large amount of time and so I recommend avoiding the BlueStop 3.1
build. The CI.appveyor.com link above can be used to get current FIO
head builds instead.)
------------------------------------------------------------------------
A new --cluster option allows for running multiple clients in parallel,
to allow testing performance of shared storage systems like SANs or
AFAs.
Start a "fio --server" job on all clients, then on one of them run
./ezfio.py --cluster --drive host1:/dev/dr1,host2:/dev/dr2/... ...
Basically add "--cluster" to the command line before the drive
option, and in the drive option make a comma separated list of
hostname:/path/to/storage .
The first host in the list must be the one you're currently running
ezfio from. ezfio will try using the local system to collect
appropriate system info on the first drive.
In the current implementation, all nodes/drives must be identical in
size. There are no provisions for having volumes of differing sizes.
All other graphs and results should be the aggregate of the entire
cluster, as reported by fio.
ex:
Start up FIO servers on all systems to be tested
(on host 1):
# fio --server &
(on host 2):
# fio --server &
(on host 3):
# fio --server &
Run a benchmark run:
(on host 1)
# ./ezfio.py --cluster --drive host1:/dev/nvme1n1,host2:/dev/nvme1n1,host3:/dev/nvme4n1
------------------------------------------------------------------------
ezFIO got where it is today through the help of many users who filed
bugs when things didn't work, or submitted patches to support new CPUs.
Please feel free to open issues or drop me a line if you have questions.
Special thanks to @coolrecep (Recep Baltaş) who has spent literally days
tracking down Windows issues.
================================================
FILE: combine.py
================================================
#!/usr/bin/python
# ezfio 1.0
# earle.philhower.iii@hgst.com
#
# ------------------------------------------------------------------------
# ezfio is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# ezfio is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with ezfio. If not, see <http://www.gnu.org/licenses/>.
# ------------------------------------------------------------------------
#
# Usage: ./append.py --source <old.ods> --append <new.ods> --suffix <_new> --color <223344> --output <combined.ods>
import argparse
import base64
import datetime
import json
import os
import platform
import pwd
import re
import shutil
import socket
import subprocess
import sys
import threading
import time
import zipfile
def ParseArgs():
"""Parse command line options into globals."""
global sourceODS, appendODS, destODS, suffix, color
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
description="A tool to add a dataset to an existing ezFIO ODS file.",
epilog="")
parser.add_argument("--source", "-s", dest = "sourceODS",
help="First ODS file with 1 or more test runs included", required=True)
parser.add_argument("--append", "-a", dest="appendODS",
help="ODS file with tests to append to source", required=True)
parser.add_argument("--suffix", "-x", dest="suffix",
help="Suffix to append to data tables from appended ODS", required=True)
parser.add_argument("--color", "-c", dest="color",
help="Color to use for graphed data in appended ODS (rrggbb format)", required=True)
parser.add_argument("--output", "-o", dest="destODS",
help="Location where results should be saved", required=True)
args = parser.parse_args()
sourceODS = args.sourceODS
appendODS = args.appendODS
destODS = args.destODS
suffix = args.suffix
color = args.color
def GenerateCombinedODS():
"""Builds a new ODS spreadsheet w/graphs from generated test CSV files."""
def GetContentXMLFromODS( odssrc ):
"""Extract content.xml from an ODS file, where the sheet lives."""
ziparchive = zipfile.ZipFile( odssrc )
content = ziparchive.read("content.xml")
content = content.replace("\n", "")
return content
def CSVtoXMLSheet(sheetName, csvName):
"""Replace a named sheet with the contents of a CSV file."""
newt = '<table:table table:name='
newt += '"' + sheetName + '"' + ' table:style-name="ta1" > '
newt += '<table:table-column table:style-name="co1" '
newt += 'table:default-cell-style-name="Default"/>'
# Insert the rows, one entry at a time
with open(csvName) as f:
for line in f:
line = line.rstrip()
newt += '<table:table-row table:style-name="ro1">'
for val in line.split(','):
try:
cell = '<table:table-cell office:value-type="float" '
cell += 'office:value="' + str(float(val))
cell += '"><text:p>'
cell += str(float(val)) + '</text:p></table:table-cell>'
except: # It's not a float, so let's call it a string
cell = '<table:table-cell office:value-type="string" '
cell += '><text:p>'
cell += str(val) + '</text:p></table:table-cell>'
newt += cell
newt += '</table:table-row>'
f.close()
# Close the tags
newt += '</table:table>'
return newt
def AppendSheetFromCSV(sheetName, csvName, xmltext):
"""Add a new sheet to the XML from the CSV file."""
newt = CSVtoXMLSheet(sheetName, csvName)
# Replace the XML using lazy string matching
searchstr = '<table:named-expressions/>'
return re.sub(searchstr, newt + searchstr, xmltext)
def UpdateContentXMLToODS_text( odssrc, odsdest, xmltext ):
"""Replace content.xml in an ODS w/an in-memory copy and write new.
Replace content.xml in an ODS file with in-memory, modified copy and
write new ODS. Can't just copy source.zip and replace one file, the
output ZIP file is not correct in many cases (opens in Excel but fails
ODF validation and LibreOffice fails to load under Windows).
Also strips out any binary versions of objects and the thumbnail,
since they are no longer valid once we've changed the data in the
sheet.
"""
global suffix
if os.path.exists(odsdest):
os.unlink(odsdest)
# Windows ZipArchive will not use "Store" even with "no compression"
# so we need to have a mimetype.zip file encoded below to match spec:
mimetypezip = """
UEsDBAoAAAAAAOKbNUiFbDmKLgAAAC4AAAAIAAAAbWltZXR5cGVhcHBsaWNhdGlvbi92bmQub2Fz
aXMub3BlbmRvY3VtZW50LnNwcmVhZHNoZWV0UEsBAj8ACgAAAAAA4ps1SIVsOYouAAAALgAAAAgA
JAAAAAAAAACAAAAAAAAAAG1pbWV0eXBlCgAgAAAAAAABABgAAAyCUsVU0QFH/eNMmlTRAUf940ya
VNEBUEsFBgAAAAABAAEAWgAAAFQAAAAAAA==
"""
zipbytes = base64.b64decode( mimetypezip )
with open(odsdest, 'wb') as f:
f.write(zipbytes)
zasrc = zipfile.ZipFile(odssrc, 'r')
zadst = zipfile.ZipFile(odsdest, 'a', zipfile.ZIP_DEFLATED)
for entry in zasrc.namelist():
if entry == "mimetype":
continue
elif entry.endswith('/') or entry.endswith('\\'):
continue
elif entry == "content.xml":
zadst.writestr( "content.xml", xmltext)
elif ("Object" in entry) and ("content.xml" in entry):
# Remove <table:table table:name="local-table"> table
rdbytes = zasrc.read(entry)
outbytes = re.sub('<table:table table:name="local-table">.*</table:table>', "", rdbytes)
# Add in extra chart series following existing format...
searchStr = '<chart:series .*</chart:series>'
match = re.search(searchStr, outbytes);
addl = ""
if match:
fmt = match.group(0)
addl = fmt;
for sheet in [ "Tests", "Timeseries", "Exceedance"]:
addl = re.sub( sheet, sheet+suffix, addl )
# Remove any existing label and add updated one
addl = re.sub("loext:label-string=\".*?\"" , "", addl );
addl = re.sub ("<chart:series ", "<chart:series " + "loext:label-string=\""+suffix+"\" ", addl)
styleMatch = re.search("chart:style-name=\"(.)*?\"", fmt)
if styleMatch:
styleName = re.sub("chart:style-name=\"", "", styleMatch.group(0) )
styleName = re.sub("\".*", "", styleName)
# Change the style requested in new one...
addl = re.sub( "\"" + styleName + "\"", "\"" + styleName + suffix + "\"", addl )
# And patch in the new chart:series entry
outbytes = re.sub ( searchStr, fmt + addl, outbytes )
# Now make the new style...
oldStyleMatch = re.search( "<style:style style:name=\"" + styleName + ".*?</style:style>" , outbytes )
if oldStyleMatch:
oldStyle = oldStyleMatch.group(0)
newStyle = re.sub( "\"" + styleName + "\"", "\"" + styleName + suffix + "\"", oldStyle)
# Change the embedded color:
newStyle = re.sub( "svg:stroke-color=\"#.*?\"", "svg:stroke-color=\"#" + color + "\"", newStyle )
# Add in the new style...
outbytes = re.sub ( oldStyle, oldStyle + newStyle, outbytes )
# Add legend if it doesn't exist
legendMatch = re.search("<chart:legend .*?/>", outbytes)
if not legendMatch:
# Put in hardcoded one...looks like junk, but can be tweaked by user in application
legend = "<chart:legend chart:legend-position=\"bottom\" svg:x=\"0.000cm\" svg:y=\"0.000cm\" style:legend-expansion=\"wide\" chart:style-name=\"ch3\"/>";
outbytes = re.sub ("</chart:title>", "</chart:title>" + legend, outbytes )
zadst.writestr(entry, outbytes)
elif entry == "META-INF/manifest.xml":
# Remove ObjectReplacements from the list
rdbytes = zasrc.read(entry)
outbytes = ""
lines = rdbytes.split("\n")
for line in lines:
if not ( ("ObjectReplacement" in line) or ("Thumbnails" in line) ):
outbytes = outbytes + line + "\n"
zadst.writestr(entry, outbytes)
elif ("Thumbnails" in entry) or ("ObjectReplacement" in entry):
# Skip binary versions
continue
else:
rdbytes = zasrc.read(entry)
zadst.writestr(entry, rdbytes)
zasrc.close()
zadst.close()
global sourceODS, appendODS, destODS
# First rename and append the extra data sheets
xmlsrc = GetContentXMLFromODS( sourceODS )
xmlapp = GetContentXMLFromODS( appendODS )
for tableName in [ "Tests", "Timeseries", "Exceedance" ]:
searchStr = '<table:table table:name="' + tableName + '".*?</table:table>'
sheetMatch = re.search(searchStr, xmlapp);
if sheetMatch:
sheet = sheetMatch.group(0)
# Rename the table
sheet = re.sub( '"' + tableName + '"', '"' + tableName + suffix + '"', sheet);
# Stick it right before the end of the list
searchStr = '<table:named-expressions/>'
xmlsrc = re.sub(searchStr, sheet + searchStr, xmlsrc)
UpdateContentXMLToODS_text( sourceODS, destODS, xmlsrc )
sourceODS = ""
appendODS = ""
destODS = ""
suffix = ""
color = ""
if __name__ == "__main__":
ParseArgs()
GenerateCombinedODS()
================================================
FILE: ezfio.bat
================================================
@echo off
REM Start EZFIO.PS1 in an elevated PowerShell interpreter.
REM Here be dragons.
REM First start a standard powershell and use it's Start-Process cmdlet
REM to start *another* powershell, this one as administrator, to interpret
REM the script. Care must be taken to properly quote the path to the script.
set GO='%cd%\ezfio.ps1'
powershell -Command "$p = new-object System.Diagnostics.ProcessStartInfo 'PowerShell'; $p.Arguments = {-WindowStyle hidden -Command ". %GO%"}; $p.Verb = 'RunAs'; [System.Diagnostics.Process]::Start($p) | out-null;"
================================================
FILE: ezfio.ps1
================================================
# ezfio 1.0
# earle.philhower.iii@hgst.com
#
# ------------------------------------------------------------------------
# ezfio is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# ezfio is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with ezfio. If not, see <http://www.gnu.org/licenses/>.
# ------------------------------------------------------------------------
#
# Usage: ezfio.ps1 -drive {physicaldrive number}
# Example: ezfio.ps1 -drive 3
#
# When no parameters are specified, the script will provide usage info
# as well as a list of attached PhysicalDrives
#
# This script requires Administrator privileges so must be run from
# a PowerShell session started with "Run as Administrator."
#
# If Windows errors with, "...cannot be loaded because running scripts is
# disabled on this system...." you need to run the following line to enable
# execution of local PowerShell scripts:
# Set-ExecutionPolicy -scope CurrentUser RemoteSigned
#
# Please be sure to have FIO installed, or you will be prompted to install
# and re-run the script.
param (
[string]$drive = "none",
[string]$outDir = "none",
[int]$util = 100,
[switch]$help,
[switch]$yes,
[switch]$nullio,
[switch]$fastprecond,
[switch]$quickie
)
Add-Type -Assembly System.IO.Compression
Add-Type -Assembly System.IO.Compression.FileSystem
Add-Type -AssemblyName PresentationFramework, System.Windows.Forms
Add-Type -AssemblyName PresentationCore
Chdir (Split-Path $script:MyInvocation.MyCommand.Path)
function WindowFromXAML( $xaml, $prefix )
{
# Create a WPF window from XAML from DevStudio
$xaml = $xaml -replace 'mc:Ignorable="d"', ''
$xaml = $xaml -replace "x:N", 'N'
$xaml = $xaml -replace '^<Win.*', '<Window'
$xml = [xml]$xaml
$reader = (New-Object System.Xml.XmlNodeReader $xml)
try{ $window = [Windows.Markup.XamlReader]::Load( $reader ) }
catch { Write-Host "Unable to load XAML for window."; return $null; }
# Set the variables locally in the calling function. Please forgive me.
$xml.SelectNodes("//*[@Name]") |
%{Set-Variable -Name "$($prefix)_$($_.Name)" -Value $window.FindName($_.Name) -Scope 1}
return $window
}
function CheckAdmin()
{
# Check that we have root privileges for disk access, abort if not.
if ( -not ([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator") ) {
[System.Windows.Forms.MessageBox]::Show( "Administrator privileges are required for low-level disk access.`nPlease restart this script as Administrator to continue.", "Fatal Error", 0, 48 ) | Out-Null
exit
}
}
function FindFIO()
{
# Try the path to the FIO executable, return path or exit.
if ( -not (Get-Command fio.exe) ) {
$ret = [System.Windows.Forms.MessageBox]::Show( "FIO is required to run IO tests. Would you like to install?", "FIO Not Detected", 4, 32 )
if ($ret -eq "yes" ) {
Start-Process "https://www.bluestop.org/fio/"
}
exit
} else {
$global:fio = (Get-Command fio.exe).Path
}
}
function CheckFIOVersion()
{
# Check we have a version of FIO we can use.
try {
$global:fioVerString = ( . $global:fio "--version" )
$fiov = ( . $global:fio "--version" ).Split('-')[1].Split('.')[0]
if ([int]$fiov -lt 2) {
$err = "ERROR! FIO version " + (. $global:fio "--version") + " is unsupported. Version 2.0 or later is required"
if ($global:testmode -eq "gui") {
[System.Windows.Forms.MessageBox]::Show( $err, "Fatal Error", 0, 48 ) | Out-Null
} else {
Write-Error $err
}
exit 1
}
} catch {
$err = "ERROR! Unable to determine FIO version. Version 2.0 or later is required."
if ($global:testmode -eq "gui") {
[System.Windows.Forms.MessageBox]::Show( $err, "Fatal Error", 0, 48 ) | Out-Null
} else {
Write-Error $err
}
exit 1
}
try {
$out = (. $global:fio "--parse-only" "--output-format=json+")
if ($LastExitCode -eq 0 ) {
$global:fioOutputFormat = "json+"
}
} catch {
# Nothing, we can't make exceedance
}
}
function ParseArgs()
{
# Set the global values to the param() values, so that Parse() can see them
function IntroDialog()
{
# Gets user test parameters if not specified on the command line
$xaml = @'
<Window x:Class="Window2"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Title="ezFIO Drive Selection" ResizeMode="NoResize" Height="281" Width="456">
<Grid>
<Label Content="Drive to test:" HorizontalAlignment="Left" Margin="24,16,0,0" VerticalAlignment="Top"/>
<ComboBox x:Name="driveList" HorizontalAlignment="Left" Margin="115,20,0,0" VerticalAlignment="Top" Width="218"/>
<Button x:Name="startTest" Content="Start Test" HorizontalAlignment="Left" Margin="358,79,0,0" VerticalAlignment="Top" Width="75"/>
<Button x:Name="exit" Content="Exit" HorizontalAlignment="Left" Margin="358,125,0,0" VerticalAlignment="Top" Width="75"/>
<GroupBox Header="Information" HorizontalAlignment="Left" Margin="24,55,0,0" VerticalAlignment="Top" Height="109" Width="309">
<Grid>
<Label Content="Model:" HorizontalAlignment="Left" VerticalAlignment="Top" Margin="2,0,0,0"/>
<Label x:Name="modelName" Content="Happy NVME" HorizontalAlignment="Left" Margin="50,0,0,0" VerticalAlignment="Top"/>
<Label Content="Serial:" HorizontalAlignment="Left" Margin="8,25,0,0" VerticalAlignment="Top"/>
<Label x:Name="serial" Content="NVME001" HorizontalAlignment="Left" Margin="50,25,0,0" VerticalAlignment="Top"/>
<Label Content="Size:" HorizontalAlignment="Left" Margin="15,50,0,0" VerticalAlignment="Top"/>
<Label x:Name="sizeGB" Content="100GB" HorizontalAlignment="Left" Margin="50,50,0,0" VerticalAlignment="Top"/>
</Grid>
</GroupBox>
<GroupBox Header="WARNING! WARNING! WARNING!" HorizontalAlignment="Left" Margin="24,176,0,0" VerticalAlignment="Top" Width="398">
<Label >
<TextBlock TextWrapping="WrapWithOverflow" Width="376" HorizontalAlignment="Center" VerticalAlignment="Center">All data on the selected drive will be destroyed by the test. Please make sure there are no mounted filesystems or data on the drive.</TextBlock>
</Label>
</GroupBox>
</Grid>
</Window>
'@
$intro = WindowFromXAML $xaml 'intro'
$intro.Icon = $global:iconBitmap
$pd = @{}
$intro.add_Loaded( {
$intro.Activate()
$intro_driveList.Focus()
$global:physDrive = $null
# Populate the physicaldrive list
$drives = Get-WmiObject -query "SELECT * from Win32_DiskDrive" | Sort-Object
foreach ( $drive in $drives ) {
$idx = $intro_driveList.Items.Add( $drive.DeviceID )
$pd.Add( $idx, $drive )
}
$intro_driveList.SelectedIndex = 0
$drive = $pd.Get_Item( 0 )
$intro_modelName.Content = $drive.Model.Trim()
if ($drive.SerialNumber -ne $null) { $intro_serial.Content = $drive.SerialNumber.Trim() }
else { $intro_serial.Content = "UNKNOWN" }
$intro_sizeGB.Content = [string]::Format( "{0} GB", [int]($drive.Size/1000000000) )
$intro_driveList.add_SelectionChanged( {
$idx = $intro_driveList.SelectedIndex
$drive = $pd.Get_Item( $idx )
$intro_modelName.Content = $drive.Model.Trim()
if ($drive.SerialNumber -ne $null) { $intro_serial.Content = $drive.SerialNumber.Trim() }
else { $intro_serial.Content = "UNKNOWN" }
$intro_sizeGB.Content = [string]::Format( "{0} GB", [int]($drive.Size/1000000000) )
} )
$intro_startTest.add_Click( {
$idx = $intro_driveList.SelectedIndex
$drive = $pd.Get_Item( $idx )
$global:physDrive = $drive.DeviceID
$intro.dialogResult = $true
$intro.Close()
} )
$intro_exit.add_Click( { $intro.Close() } )
} )
$intro.ShowDialog()
}
# Parse command line options into globals.
function usage()
{
# How to use the script, and some handy info on current drives
$scriptname = split-path $global:scriptName -Leaf
"ezfio, an in-depth IO tester for NVME devices"
"WARNING: All data on any tested device will be destroyed!`n"
"Usage: "
[string]::Format(" .\{0} -drive <PhysicalDiskNumber> [-util <1..100>] [-outDir <path>] [-nullIO]", $scriptname)
[string]::Format("EX: .\{0} -drive 2 -util 100`n", $scriptname)
"PhysDrive is the ID number of the \\PhysicalDrive to test"
"Usage is the percent of total size to test (100%=default)`n"
"`nPhysical disks:"
$drives=Get-WmiObject -query "SELECT * from Win32_DiskDrive" | Sort-Object
foreach ( $drive in $drives ) {
if ($drive.SerialNumber -ne $null) {
[string]::Format( "{0}. {1}, Serial: {2}, Size: {3}GB",
$drive.DeviceID.substring(17), $drive.Model.Trim(),
$drive.SerialNumber.Trim(), [int]($drive.Size/1000000000) )
} else {
[string]::Format( "{0}. {1}, Size: {2}GB",
$drive.DeviceID.substring(17), $drive.Model.Trim(),
[int]($drive.Size/1000000000) )
}
}
exit
}
if ($help) { usage }
if (($util -lt 1) -or ($util -gt 100)) {
"ERROR: Utilization must be between 1 and 100.`n"
usage
} else {
$global:utilization = $util
}
if ( $outDir -eq "none" ) {
$global:outDir = "${PWD}"
} else {
$global:outDir = "$outDir"
}
if ( $drive -ne "none" ) {
$global:testMode = "cli"
if ( $drive -notin (Get-Disk).Number ){
Write-Error "The drive number `"$drive`" you entered does not exist.`n`n"
usage
}
$global:physDrive = "\\.\PhysicalDrive$drive"
} else {
$global:testmode = "gui"
$ok = IntroDialog
if (-not ($ok) ) {
exit
}
}
$global:yes = $yes
if ( -not $nullio ) {
$global:ioengine = "windowsaio"
} else {
$global:ioengine = "null"
}
$global:quickie = $quickie
$global:fastPrecond = $fastprecond
# Do a sanity check that the selected drive does not show as a local drive letter
Get-WMIObject Win32_LogicalDisk | Foreach-Object {
$did = (Get-WmiObject -Query "Associators of {Win32_LogicalDisk.DeviceID='$($_.DeviceID)'} WHERE ResultRole=Antecedent").Path
$dl = $_.DeviceID
if ($did.RelativePath) {
$part = $did.RelativePath.Split('"')[1]
$pd = $part.split(',')[0].split('#')[1]
if ($global:physDrive.ToLower() -eq "\\.\physicaldrive$pd") {
if ($global:testmode -eq "cli") {
Write-Error "ERROR! Drive '$global:physdrive' is mounted as drive '$dl'!"
Write-Error "Aborting run, cannot run on mounted filesystem."
exit
} else {
[System.Windows.Forms.MessageBox]::Show( "ERROR! Drive '$global:physdrive' is mounted as drive '$dl'!`nAborting run, cannot run on mounted filesystem.", "Fatal Error", 0, 48 ) | Out-Null
exit
}
}
}
}
}
function CollectSystemInfo()
{
# Collect some OS and CPU information.
# May want to put a window up while this happens. GWMI is very slow
$procs = [array](Get-WmiObject -class win32_processor) # Single-socket gives object, so coerce into array to match multisocket
$global:cpu = $procs[0].Name.Trim()
$cpuCount = ($procs[0].NumberOfCores).Count
$cpuCores = ($procs[0] | Where DeviceID -eq "CPU0" ).NumberOfLogicalProcessors
$global:cpuCores = $cpuCores * $cpuCount
$global:cpuFreqMHz = ($procs[0] | Where DeviceID -eq "CPU0").MaxClockSpeed
$os = Get-WmiObject Win32_OperatingSystem
$global:uname = $os.Caption.Trim() + " - Build " + $os.BuildNumber.Trim() + " - ServicePack " + $os.ServicePackMajorVersion + "." + $os.ServicePackMinorVersion
# Check if we're running in high-performance mode
$plan = Get-WmiObject -Class win32_powerplan -Namespace root\cimv2\power -Filter "IsActive=True"
if (-not ($plan.InstanceID -like "*8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c*")) {
function SetHighPerformance() {
"Setting High Performance power scheme via POWERCFG"
powercfg /setactive "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"
}
if ($global:yes) {
SetHighPerformance
} elseif ($testmode -ne "gui") {
"-" * 75
"Power mode is not currently set to High Performance."
"This may result in lowered test results."
$cont = Read-Host "Would you like to enable this power setting now? (y/n)"
"-" * 75
if ($cont -ne "") {
if ($cont.Substring(0, 1).ToLower() -eq "y" ) {
SetHighPerformance
}
}
} else {
$ret = [System.Windows.Forms.MessageBox]::Show(
"System power mode not set to High Performance.`nThis may result in lowered test results.`nWould you like to enable High Performance Mode now?",
"Verify Performance Mode", 4, 32)
if ($ret -eq"yes" ) {
SetHighPerformance
}
}
}
}
function VerifyContinue()
{
# User's last chance to abort the test. Exit if they don't agree.
if ( -not $global:yes ) {
if ($testMode -ne "gui") { # text-mode prompt since we're running with command line options
"-" * 75
"WARNING! " * 9
"THIS TEST WILL DESTROY ANY DATA AND FILESYSTEMS ON $global:physDrive"
$cont = Read-Host "Please type the word `"yes`" and hit return to continue, or anything else to abort"
"-" * 75
if ( $cont -ne "yes" ) {
"Performance test aborted, drive is untouched."
exit
}
} else {
# Do it in a messagebox since we're running GUI-wise
$ret = [System.Windows.Forms.MessageBox]::Show(
"Test selected to run on $global:physDrive.`nALL DATA WILL BE ERASED ON THIS DRIVE`nContinue with testing?",
"Verify the device to test", 4)
if ($ret -ne "yes" ) {
"Performance test aborted, drive is untouched."
exit
}
}
}
}
function CollectDriveInfo()
{
# Get important device information, exit if not possible.
# We absolutely need this information
$global:physDriveBase = ([io.fileinfo]$global:physDrive).BaseName
$global:physDriveNo = $global:physDrive.Substring(17)
$global:physDriveBytes=(GET-WMIOBJECT win32_diskdrive | where DeviceID -eq $global:physDrive).Size
$global:physDriveGB=[long]($global:physDriveBytes/(1000*1000*1000))
$global:physDriveGiB=[long]($global:physDriveBytes/(1024*1024*1024))
$global:testcapacity = [long](($global:physDriveGiB * $global:utilization) / 100)
# This is just nice to have
$drive = (Get-Disk | Where-Object { $_.Number -eq $global:physDriveNo })
$global:model = $drive.Model.ToString().Trim()
if ($drive.SerialNumber -ne $null ) { $global:serial = $drive.SerialNumber.ToString().Trim() }
else { $global:serial = "UNKNOWN" }
}
# Set up names for all output/input files, place headers on CSVs.
function CSVInfoHeader {
if ($global:fastPrecond -eq $false) { $prefix = "" }
else { $prefix = "FASTPRECOND-" }
#Headers to the CSV file (ending up in the ODS at the test end)
"Drive,$prefix$global:physDrive"
"Model,$prefix$global:model"
"Serial,$prefix$global:serial"
"AvailCapacity,$prefix$global:physDriveGiB,GiB"
"TestedCapacity,$prefix$global:testcapacity,GiB"
"CPU,$prefix$global:cpu"
"Cores,$prefix$global:cpuCores"
"Frequency,$prefix$global:cpuFreqMHz"
"OS,$prefix$global:uname"
"FIOVersion,$prefix$global:fioVerString"
}
function SetupFiles()
{
# Datestamp for run output files
$global:ds=(Get-Date).ToString("yyyy-MM-dd_HH-mm-ss")
# The unique suffix we generate for all output files
$suffix="${global:physDriveGB}GB_${global:cpuCores}cores_${global:cpuFreqMHz}MHz_${global:physDriveBase}_${env:computername}_${global:ds}"
# Need to worry about normalizing passed in directory names, or else non-absolute output paths will resolve to c:\windows\system32\...
if ( -not ( Test-Path -Path $global:outDir ) ) {
# New-Item used PWD, so we're OK here
New-Item -ItemType directory -Path $global:outDir | Out-Null
}
# Now resolve to c:\... path and put back to global for sanity.
$global:outDir = Resolve-Path $global:outDir
# The "details" directory contains the raw output of each FIO run
$global:details = "${global:outDir}\details_${suffix}"
# The "details" directory contains the raw output of each FIO run
if ( Test-Path -Path $global:details ) {
Remove-Item -Recurse -Force $global:details | Out-Null
}
New-Item -ItemType directory -Path $global:details | Out-Null
# Copy this script into it for posterity
Copy-Item $scriptName $global:details
# Files we're going to generate, encode some system info in the names
# If the output files already exist, erase them
$global:testcsv = "${global:details}\ezfio_tests_${suffix}.csv"
if (Test-Path $global:testcsv) { Remove-Item $global:testcsv }
CSVInfoHeader > $global:testcsv
"Type,Write %,Block Size,Threads,Queue Depth/Thread,IOPS,Bandwidth (MB/s),Read Latency (us),Write Latency (us)" >> $global:testcsv
$global:timeseriescsv ="${global:details}\ezfio_timeseries_${suffix}.csv"
$global:timeseriesclatcsv ="${global:details}\ezfio_timeseriesclat_${suffix}.csv"
$global:timeseriesslatcsv ="${global:details}\ezfio_timeseriesslat_${suffix}.csv"
if (Test-Path $global:timeseriescsv) { Remove-Item $global:timeseriescsv }
if (Test-Path $global:timeseriesclatcsv) { Remove-Item $global:timeseriesclatcsv }
if (Test-Path $global:timeseriesslatcsv) { Remove-Item $global:timeseriesslatcsv }
CSVInfoHeader > $global:timeseriescsv
CSVInfoHeader > $global:timeseriesclatcsv
CSVInfoHeader > $global:timeseriesslatcsv
"IOPS" >> $global:timeseriescsv # Add IOPS header
"CLAT-read,CLAT-write" >> $global:timeseriesclatcsv
"SLAT-read,SLAT-write" >> $global:timeseriesslatcsv
# ODS input and output files
$global:odssrc = "${PWD}\original.ods"
$global:odsdest = "${global:outDir}\ezfio_results_${suffix}.ods"
if (Test-Path $global:odsdest) { Remove-Item $global:odsdest }
}
function TestName ($seqrand, $wmix, $bs, $threads, $iodepth)
{
# Return full path and filename prefix for test of specified params
$testfile = $global:details + "\Test" + $seqrand + "_w" + [string]$wmix
$testfile += "_bs" + [string]$bs + "_threads" + [string]$threads + "_iodepth"
$testfile += [string]$iodepth + "_" + $global:physDriveBase + ".out"
return $testfile
}
# The actual functions that run FIO, in a string so that we can do a Start-Job using it.
$global:jobutils = @'
function TestName ($seqrand, $wmix, $bs, $threads, $iodepth)
{
# Return full path and filename prefix for test of specified params
$testfile = $details + "\Test" + $seqrand + "_w" + [string]$wmix
$testfile += "_bs" + [string]$bs + "_threads" + [string]$threads + "_iodepth"
$testfile += [string]$iodepth + "_" + $physDriveBase + ".out"
return $testfile
}
function SequentialConditioning
{
# Sequentially fill the complete capacity of the drive once.
# Note that we can't use regular test runner because this test needs
# to run for a specified # of bytes, not a specified # of seconds.
if ( $quickie ) {
$size = "1G"
} else {
$size = "${testcapacity}G"
}
. $fio "--name=SeqCond" "--readwrite=write" "--bs=128k" "--ioengine=$ioengine" "--iodepth=64" "--direct=1" "--filename=$physDrive" "--size=$size" "--thread" | Out-Null
if ( $LastExitCode -ne 0 ) {
Write-Output "ERROR" "ERROR" "ERROR"
} else {
Write-Output "DONE" "DONE" "DONE"
}
}
function RandomConditioning
{
# Randomly write entire device for the full capacity
# Note that we can't use regular test runner because this test needs
# to run for a specified # of bytes, not a specified # of seconds.
if ( $quickie ) {
$size = "1G"
} else {
$size = "${testcapacity}G"
}
. $fio "--name=RandCond" "--readwrite=randwrite" "--bs=4k" "--invalidate=1" "--end_fsync=0" "--group_reporting" "--direct=1" "--filename=$physDrive" "--size=$size" "--ioengine=$ioengine" "--iodepth=256" "--norandommap" "--randrepeat=0" "--thread" | Out-Null
if ( $LastExitCode -ne 0 ) {
Write-Output "ERROR" "ERROR" "ERROR"
} else {
Write-Output "DONE" "DONE" "DONE"
}
}
# Taken from fio_latency2csv.py
function plat_idx_to_val( $idx, $FIO_IO_U_PLAT_BITS, $FIO_IO_U_PLAT_VAL )
{
# MSB <= (FIO_IO_U_PLAT_BITS-1), cannot be rounded off. Use
# all bits of the sample as index
if ($idx -lt ($FIO_IO_U_PLAT_VAL -shl 1)) {
return $idx
}
# Find the group and compute the minimum value of that group
$error_bits = ($idx -shr $FIO_IO_U_PLAT_BITS) - 1
$base = 1 -shl ($error_bits + $FIO_IO_U_PLAT_BITS)
# Find its bucket number of the group
$k = $idx % $FIO_IO_U_PLAT_VAL
# Return the mean of the range of the bucket
return ($base + (($k + 0.5) * (1 -shl $error_bits)))
}
function WriteExceedance($j, $rdwr, $outfile)
{
# Generate an exceedance CSV for read or write from JSON output.
if ($fioOutputFormat -eq "json") {
return # This data not present in JSON format, only JSON+
}
$ios = $j.jobs[0].$rdwr.total_ios
if ( $ios -gt 0 ) {
$runttl = 0;
# FIO 2.99 changed this to use saner latency bucketing, no semi-log needed
if ($j.jobs[0].$rdwr.clat_ns) {
# This is very inefficient, but need to convert from object.property's to sorted ints...
$lat_ns = @()
foreach ($n in ((Get-Member -inputObject $j.jobs[0].$rdwr.clat_ns.bins -MemberType Properties).name) ) {
$lat_ns += [long]$n
}
foreach ($b in ($lat_ns | sort-object)) {
$lat_us = [float]($b) / 1000.0
$cnt = [int]$j.jobs[0].$rdwr.clat_ns.bins.$b
$runttl += $cnt
$pctile = 1.0 - [float]$runttl / [float]$ios;
if ( $cnt -gt 0 ) {
"$lat_us,$pctile" >> $outfile
}
}
} else {
$plat_bits = $j.jobs[0].$rdwr.clat.bins.FIO_IO_U_PLAT_BITS
$plat_val = $j.jobs[0].$rdwr.clat.bins.FIO_IO_U_PLAT_VAL
foreach ($b in 0..[int]$j.jobs[0].$rdwr.clat.bins.FIO_IO_U_PLAT_NR) {
$cnt = [int]$j.jobs[0].$rdwr.clat.bins.$b
$runttl += $cnt
$pctile = 1.0 - [float]$runttl / [float]$ios
if ( $cnt -gt 0 ) {
$p2idx = plat_idx_to_val $b $plat_bits $plat_val
"${p2idx},${pctile}" >> $outfile
}
}
}
}
}
function CombineThreadOutputs($suffix, $outcsv, $lat, $runtime, $extra_runtime)
{
# Merge all FIO iops/lat logs across all servers"""
# The lists may be called "iops" but the same works for clat/slat
$testtime = $runtime + $extra_runtime
$iops = New-Object 'float[]' $testtime
# For latencies, need to keep the _w and _r separate
$iops_w = New-Object 'float[]' $testtime
$filecnt = 0
$fileglob = "$testfile$suffix.*log"
Get-ChildItem $fileglob | ForEach-Object {
$filename = $_.FullName
$filecnt++
$csvhdr = 'timestamp', 'value', 'wr', 'ign'
$lines = Import-Csv -Path $filename -Header $csvhdr
$lineidx = 0
# Set time 0 IOPS to first values
$riops = [float]0.0
$wiops = [float]0.0
$nexttime = [float]0.0
for ($x=0; $x -lt $testtime; $x++) {
if ( -not $lat ) {
$iops[$x] = [float]$iops[$x] + [float]$riops + [float]$wiops
} else {
$iops[$x] = [float]$iops[$x] + [float]$riops
$iops_w[$x] = [float]$iops_w[$x] + [float]$wiops
}
while (($lineidx -lt $lines.Count) -and ($nexttime -lt $x)) {
$nexttime = $lines[$lineidx].timestamp / 1000.0
if ( $lines[$lineidx].wr -eq 1 ) {
$wiops = [int]$lines[$lineidx].value
} else {
$riops = [int]$lines[$lineidx].value
}
$lineidx++
}
}
}
# Generate the combined CSV
for ($x=[int]($extra_runtime / 2); $x -lt ($runtime + $extra_runtime); $x++) {
if ( $lat ) {
$a = [float]$iops[$x] / [float]$filecnt
$b = [float]$iops_w[$x] / [float]$filecnt
"{0:f1},{1:f1}" -f $a, $b >> $outcsv
} else {
$a = $iops[$x]
"{0:f0}" -f $a >> $outcsv
}
}
}
function RunTest
{
# Runs the specified test, generates output CSV lines.
# Output file names
$testfile = TestName $seqrand $wmix $bs $threads $iodepth
if ( $seqrand -eq "Seq" ) { $rw = "rw" }
else { $rw = "randrw" }
if ( $iops_log ) {
$extra_runtime = 10
} else {
$extra_runtime = 0
}
$testtime = $runtime + $extra_runtime
$cmd = ("--name=test", "--readwrite=$rw", "--rwmixwrite=$wmix")
$cmd += ("--bs=$bs", "--invalidate=1", "--end_fsync=0")
$cmd += ("--group_reporting", "--direct=1", "--filename=$physDrive")
$cmd += ("--size=${testcapacity}G", "--time_based", "--runtime=$testtime")
$cmd += ("--ioengine=$ioengine", "--numjobs=$threads")
$cmd += ("--iodepth=$iodepth", "--norandommap", "--randrepeat=0")
if ( $iops_log ) {
$cmd += ("--write_iops_log=$testfile")
$cmd += ("--write_lat_log=$testfile")
$cmd += ("--log_avg_msec=1000")
$cmd += ("--log_unix_epoch=0")
}
$cmd += ("--thread", "--output-format=$fioOutputFormat", "--exitall")
$fio + " " + [string]::Join(" ", $cmd) | Out-File $testfile
# Check that the IO size is usable. Some SSDs are only 4K logical sectors
$minblock = (Get-Disk | Where-Object { $_.Number -eq $global:physDriveNo }).LogicalSectorSize
if ( $bs -lt $minblock ) {
"Test not run because block size $bs below minimum size $minblock" | Out-File -Append $testfile
"3;" + "0;" * 100 | Out-File -Append $testfile # Bogus 0-filled result line
"1,1" | Out-File "${testfile}.exc.read.csv"
"1,1" | Out-File "${testfile}.exc.write.csv"
"$seqrand,$wmix,$bs,$threads,$iodepth,0,0,0,0" | Out-File -Append $testcsv
Write-Output "SKIP" "SKIP" "SKIP"
return
}
. $fio @cmd | Out-File -Append $testfile
if ( $LastExitCode -ne 0 ) {
Write-Output "ERROR" "ERROR" "ERROR"
return # Don't process this one, it was error'd out!
}
if ( $iops_log ) {
CombineThreadOutputs '_iops' $timeseriescsv $false $runtime $extra_runtime
CombineThreadOutputs '_clat' $timeseriesclatcsv $true $runtime $extra_runtime
CombineThreadOutputs '_slat' $timeseriesslatcsv $true $runtime $extra_runtime
}
# Thanks to @BryanTuttle. Skip any FIO output before the JSON open-bracket
$LineSkip=0
foreach ($line in Get-Content $testfile) {
if ($line -match '^{') { break }
else {$LineSkip++}
}
$j = ConvertFrom-Json "$(Get-Content $testfile | select -Skip $LineSkip)"
$rdiops = [float]($j.jobs[0].read.iops);
$wriops = [float]($j.jobs[0].write.iops);
$rlat = [float]($j.jobs[0].read.lat_ns.mean) / 1000.0;
if ($rlat -le 0.0001) { $rlat = [float]($j.jobs[0].read.lat.mean); }
$wlat = [float]($j.jobs[0].write.lat_ns.mean) / 1000.0;
if ($wlat -le 0.0001) { $wlat = [float]($j.jobs[0].wlat.lat.mean); }
$iops = "{0:F0}" -f ($rdiops + $wriops)
# Locale output is not wanted here, manually make a decimal string. Ugh
$lat = "{0:F1}" -f ([math]::Max($rlat, $wlat))
$mbpsfloat = (( ($rdiops+$wriops) * $bs ) / ( 1024.0 * 1024.0 ))
"{0:f1}" -f $mbpsfloat | Set-Variable mbps
$lat = "{0:F1}" -f ([math]::Max($rlat, $wlat)) # This is just displayed, use native locale
"$seqrand,$wmix,$bs,$threads,$iodepth,$iops,$mbps,$rlat,$wlat" | Out-File -Append $testcsv
WriteExceedance $j "read" "${testfile}.exc.read.csv"
WriteExceedance $j "write" "${testfile}.exc.write.csv"
Write-Output $iops $mbps $lat
}
'@
function DefineTests {
# Generate the work list for the main worker into OC.
# What we're shmoo-ing across
$bslist = (512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, 131072)
$qdlist = (1, 2, 4, 8, 16, 32, 64, 128, 256)
$threadslist = (1, 2, 4, 8, 16, 32, 64, 128, 256)
$shorttime = 120 # Runtime of point tests
$longtime = 1200 # Runtime of long-running tests
if ( $quickie ) {
$shorttime = [int]($shorttime / 10)
$longtime = [int]($longtime / 10)
}
function AddTest( $name, $seqrand, $writepct, $blocksize, $threads, $qdperthread, $desc, $cmdline ) {
if ($threads -eq "") { $qd = '' } else { $qd = ([int]$threads) * ([int]$qdperthread) }
if ($blocksize -ne "") { if ($blocksize -lt 1024) { $bsstr = "${blocksize}b" } else { $bsstr = "{0:N0}K" -f ([int]$blocksize/1024) } }
if ($writepct -ne "" ) { $writepct = [string]$writepct + "%" }
$dat = New-Object psobject -Property @{ name=$name; seqrand=$seqrand; writepct=$writepct
bs=$bsstr; qd = $qd; qdperthread = $qdperthread; bw = ''; iops= ''; lat = ''; desc = $desc;
cmdline = $cmdline }
$global:oc.Add( $dat )
}
function DoAddTest {
AddTest $testname $seqrand $wmix $bs $threads $iodepth $desc "$global:globals; $global:jobutils; `$iops_log=$iops_log; `$seqrand=`"$seqrand`"; `$wmix=$wmix; `$bs=$bs; `$threads=$threads; `$iodepth=$iodepth; `$runtime=$runtime; RunTest"
}
function AddTestBSShmoo {
AddTest $testname 'Preparation' '' '' '' '' '' "$global:globals; `"$testname`" >> `"$global:testcsv`"; Write-Output ' ' ' ' ' '"
foreach ($bs in $bslist ) { $desc = "$testname, BS=$bs"; DoAddTest }
}
function AddTestQDShmoo {
AddTest $testname 'Preparation' '' '' '' '' '' "$global:globals; `"$testname`" >> `"$global:testcsv`"; Write-Output ' ' ' ' ' '"
foreach ($iodepth in $qdlist ) { $desc = "$testname, QD=$iodepth"; DoAddTest }
}
function AddTestThreadsShmoo {
AddTest $testname 'Preparation' '' '' '' '' '' "$global:globals; `"$testname`" >> `"$global:testcsv`"; Write-Output ' ' ' ' ' '"
foreach ($threads in $threadslist) { $desc = "$testname, Threads=$threads"; DoAddTest }
}
AddTest 'Sequential Preconditioning' 'Seq Pass 1' '100' '131072' '1' '256' 'Sequential Preconditioning' "$global:globals; $global:jobutils; SequentialConditioning;"
if ($global:fastPrecond -ne $true) {
AddTest 'Sequential Preconditioning' 'Seq Pass 2' '100' '131072' '1' '256' 'Sequential Preconditioning' "$global:globals; $global:jobutils; SequentialConditioning;"
}
$testname = "Sustained Multi-Threaded Sequential Read Tests by Block Size"
$seqrand = "Seq"; $wmix=0; $threads=1; $runtime=$shorttime; $iops_log="`$false"; $iodepth=256
AddTestBSShmoo
$testname = "Sustained Multi-Threaded Random Read Tests by Block Size"
$seqrand = "Rand"; $wmix=0; $threads=16; $runtime=$shorttime; $iops_log="`$false"; $iodepth=16
AddTestBSShmoo
$testname = "Sequential Write Tests with Queue Depth=1 by Block Size"
$seqrand = "Seq"; $wmix=100; $threads=1; $runtime=$shorttime; $iops_log="`$false"; $iodepth=1
AddTestBSShmoo
if ($global:fastPrecond -ne $true) {
AddTest 'Random Preconditioning' 'Rand Pass 1' '100' '4096' '1' '256' 'Random Preconditioning' "$global:globals; $global:jobutils; RandomConditioning;"
AddTest 'Random Preconditioning' 'Rand Pass 2' '100' '4096' '1' '256' 'Random Preconditioning' "$global:globals; $global:jobutils; RandomConditioning;"
}
$testname = "Sustained 4KB Random Read Tests by Number of Threads"
$seqrand = "Rand"; $wmix=0; $bs=4096; $runtime=$shorttime; $iops_log="`$false"; $iodepth=1
AddTestThreadsShmoo
$testname = "Sustained 4KB Random mixed 30% Write Tests by Number Threads"
$seqrand = "Rand"; $wmix=30; $bs=4096; $runtime=$shorttime; $iops_log="`$false"; $iodepth=1
AddTestThreadsShmoo
$testname = "Sustained Perf Stability Test - 4KB Random 30% Write for 20 minutes"
$desc = $testname
AddTest $testname 'Preparation' '' '' '' '' '' "$global:globals; `"$testname`" >> `"$global:testcsv`"; Write-Output ' ' ' ' ' '"
$seqrand = "Rand"; $wmix=30; $bs=4096; $runtime=$longtime; $iops_log="`$true"; $iodepth=1; $threads=256
DoAddTest
$testname = "Sustained 4KB Random Write Tests by Number of Threads"
$seqrand = "Rand"; $wmix=100; $bs=4096; $runtime=$shorttime; $iops_log="`$false"; $iodepth=1
AddTestThreadsShmoo
$testname = "Sustained Multi-Threaded Random Write Tests by Block Size"
$seqrand = "Rand"; $wmix=100; $runtime=$shorttime; $iops_log="`$false"; $iodepth=16; $threads=16
AddTestBSShmoo
}
function RunAllTests()
{
# Iterate through the OC work queue and run each job, show progress.
function UpdateView {
# Updates the grid to reflect new data, scrolls to selection
$t_testList.ItemsSource.Refresh()
$t_testList.UpdateLayout()
$t_testList.ScrollIntoView($t_testList.SelectedItem)
}
function NotifyIcon {
# NotifyIcon needs to run as separate Powerhell process because
# a WPF form will block other events (like the notify-clicked) until
# it returns control to PowerShell
# Pass destination into the block through the child's environment
[System.Environment]::SetEnvironmentVariable("ods", $global:odsdest)
$proc = Start-Process -PassThru (Get-Command powershell.exe) -WindowStyle Hidden -ArgumentList ( "-Command", {
Add-Type -AssemblyName PresentationFramework, System.Windows.Forms
echo ([System.Environment]::GetEnvironmentVariable("'ods'"))
# Add a NotifyIcon that, when clicked, will open the results spreadsheet
$global:notify = New-Object System.Windows.Forms.NotifyIcon
$global:notify.Icon = [System.Drawing.SystemIcons]::Information
$global:notify.BalloonTipIcon = "'Info'"
$global:notify.BalloonTipText = "'The ezFIO test series has completed and result spreadsheet may be opened.'"
$global:notify.Text = "'Click to open the ezFIOresult spreadsheet'"
$global:notify.BalloonTipTitle = "'ezFIO Test Completion'"
$global:notify.Visible = $True
# Using the add_BalloonTipClicked() seemed to fault every time
Unregister-Event -SourceIdentifier click_event -ErrorAction SilentlyContinue
Register-ObjectEvent $notify Click -sourceIdentifier click_event -Action {
Invoke-Item ([System.Environment]::GetEnvironmentVariable("'ods'"))
$global:notify.Dispose()
$global:notify = $null
} | Out-Null
Unregister-Event -SourceIdentifier balloonclick_event -ErrorAction SilentlyContinue
Register-ObjectEvent $notify BalloonTipClicked -SourceIdentifier balloonclick_event -Action {
Invoke-Item ([System.Environment]::GetEnvironmentVariable("'ods'"))
$global:notify.Dispose()
$global:notify = $null
} | Out-Null
$notify.ShowBalloonTip(10000)
while ( $global:notify -ne $null ) { sleep 1 }
} )
return $proc
}
$xaml = @'
<Window x:Class="Window3"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Title="ezFIO Test Progress" Height="562.084" Width="682.007">
<Window.Resources>
<Style x:Key="CellRightAlign">
<Setter Property="Control.HorizontalAlignment" Value="Right" />
</Style>
<Style x:Key="CellCenterAlign">
<Setter Property="Control.HorizontalAlignment" Value="Center" />
</Style>
</Window.Resources>
<Grid>
<Label Content="Testing Drive:" HorizontalAlignment="Left" Margin="83,23,0,0" VerticalAlignment="Top"/>
<Label x:Name="testingDrive" Content="\\physicaldrive0" HorizontalAlignment="Left" Margin="167,23,0,0" VerticalAlignment="Top"/>
<Label Content="Current Test Runtime:" HorizontalAlignment="Left" Margin="40,75,0,0" VerticalAlignment="Top"/>
<Label x:Name="testRuntime" Content="00:00:00" HorizontalAlignment="Left" Margin="167,75,0,0" VerticalAlignment="Top"/>
<Label Content="Current Test:" HorizontalAlignment="Left" Margin="88,49,0,0" VerticalAlignment="Top"/>
<Label x:Name="currentTest" Content="BS 4K, QD 32, WR 100%" HorizontalAlignment="Left" Margin="167,49,0,0" VerticalAlignment="Top"/>
<DataGrid x:Name="testList" HorizontalAlignment="Left" Margin="36,146,0,0" VerticalAlignment="Top" Height="321" Width="600" GridLinesVisibility="None" HeadersVisibility="Column">
<DataGrid.GroupStyle>
<GroupStyle>
<GroupStyle.HeaderTemplate>
<DataTemplate>
<TextBlock Text="{Binding Items[0].name}"/>
</DataTemplate>
</GroupStyle.HeaderTemplate>
</GroupStyle>
</DataGrid.GroupStyle>
<DataGrid.Columns>
<DataGridTextColumn Header="Access Pattern" Binding="{Binding seqrand}" CanUserSort="false" CanUserReorder="false" IsReadOnly="true"/>
<DataGridTextColumn Header="Write %" Binding="{Binding writepct}" CanUserSort="false" CanUserReorder="false" IsReadOnly="true" ElementStyle="{StaticResource CellRightAlign}"/>
<DataGridTextColumn Header="Block Size" Binding="{Binding bs}" CanUserSort="false" CanUserReorder="false" IsReadOnly="true" ElementStyle="{StaticResource CellRightAlign}"/>
<DataGridTextColumn Header="Queue Depth" Binding="{Binding qd}" CanUserSort="false" CanUserReorder="false" IsReadOnly="true" ElementStyle="{StaticResource CellRightAlign}"/>
<DataGridTextColumn Header=" " Binding="{Binding blank}" CanUserSort="false" CanUserReorder="false" IsReadOnly="true" ElementStyle="{StaticResource CellRightAlign}"/>
<DataGridTextColumn Header="IOPS" Binding="{Binding iops}" CanUserSort="false" CanUserReorder="false" IsReadOnly="true" ElementStyle="{StaticResource CellRightAlign}"/>
<DataGridTextColumn Header="Bandwidth (MB/s)" Binding="{Binding bw}" CanUserSort="false" CanUserReorder="false" IsReadOnly="true" ElementStyle="{StaticResource CellRightAlign}"/>
<DataGridTextColumn Header="Latency (us)" Binding="{Binding lat}" CanUserSort="false" CanUserReorder="false" IsReadOnly="true" ElementStyle="{StaticResource CellRightAlign}"/>
</DataGrid.Columns>
</DataGrid>
<Button x:Name="openSpreadsheet" Content="Open Graphs Spreadsheet" HorizontalAlignment="Left" Margin="236,486,0,0" Width="200" Height="27" VerticalAlignment="Top"/>
<Label Content="Total Test Runtime:" HorizontalAlignment="Left" Margin="53,102,0,0" VerticalAlignment="Top"/>
<Label x:Name="totalRuntime" Content="00:00:00" HorizontalAlignment="Left" Margin="167,102,0,0" VerticalAlignment="Top"/>
</Grid>
</Window>
'@
# The test window
$t = WindowFromXAML $xaml 't'
$t.Icon = $global:iconBitmap
$t.add_Loaded( {
$t.Activate()
$t_testList.Focus()
$global:step = -1 # Which test we're on
$global:curjob = $null # Which process is running
$global:totalStarttime = Get-Date
# The NotifyIcon process info
$global:notifyProc = $null
$t_testingDrive.Content = [string]::Format("{0}, {1}({2}), {3}GB", $global:physDrive, $global:model, $global:serial, $global:testcapacity )
$t_openSpreadsheet.IsEnabled = $false
$t_currentTest.Content = "Starting up..."
$t_testList.CanUserAddRows = $false
$t_testList.AutoGenerateColumns = $false
$t_testList.ItemsSource = $null
$lview = [System.Windows.Data.ListCollectionView]$global:oc
$lview.GroupDescriptions.Add((new-object System.Windows.Data.PropertyGroupDescription "name"))
$t_testList.ItemsSource = $lview
# Poor man's threading/event driven
$global:timer = new-object System.Windows.Threading.DispatcherTimer
$global:timer.Interval = [TimeSpan]"0:0:1.00"
$global:timer.Add_Tick(
{
# If there's a running job, update the runtime if not done, and capture the results if finished
if ($global:curjob -ne $null)
{
if ( $global:curjob.State -match 'running' )
{
$now = Get-Date
$delta = $now - $global:starttime
$ts = [timespan]::FromTicks($delta.Ticks)
$t_testRuntime.Content = $ts.ToString("hh\:mm\:ss")
$delta = $now - $global:totalstarttime
$ts = [timespan]::FromTicks($delta.Ticks)
$t_totalRuntime.Content = $ts.ToString("hh\:mm\:ss")
} else {
# Job just finished, let's read out answers
$q = Receive-Job $global:curjob
$global:oc[$global:step].iops = $q[0]
$global:oc[$global:step].bw = $q[1]
$global:oc[$global:step].lat = $q[2]
$ign = 0.0
if ([float]::TryParse($global:oc[$global:step].iops, [ref]$ign)) { $global:oc[$global:step].iops = [string]::Format("{0:N0}", [float]$global:oc[$global:step].iops) }
if ([float]::TryParse($global:oc[$global:step].bw, [ref]$ign)) { $global:oc[$global:step].bw = [string]::Format("{0:N1}", [float]$global:oc[$global:step].bw) }
$t_testList.SelectedIndex = $global:step
UpdateView
if ($global:oc[$global:step].iops -eq "ERROR") {
$global:step = 9999 # Skip all the other tests
[System.Windows.Forms.MessageBox]::Show( "ERROR! FIO job did not complete successfully. Aborting further runs.", "Fatal Error", 0, 48 ) | Out-Null
}
}
}
# If there's no running job (last one finished), start a new one
if ( ($global:curjob -eq $null) -or ( -not ($global:curjob.State -match 'running' ) ) ){
$global:step = $global:step + 1
if ($global:step -lt $t_testList.Items.Count) {
$t_testList.SelectedIndex = $global:step
$global:cmdline = $t_testList.Items[$global:step].cmdline
$t_currentTest.Content = $t_testList.Items[$global:step].desc
# Powershell won't have the $globals in the Start-Job context, so expand here
$fullcmd = "Start-Job { $global:cmdline }"
$global:curjob = Invoke-Expression $fullcmd
$global:starttime = Get-Date
$global:oc[$global:step].iops = "Running"
$global:oc[$global:step].bw = "Running"
$global:oc[$global:step].lat = "Running"
$t_testList.SelectedIndex = $global:step
UpdateView
} else {
$global:timer.Stop()
$global:curjob = $null
$t_testList.SelectedIndex = $null
if ($global:step -lt 9999) {
$t_currentTest.Content = "Completed"
GenerateResultODS
$t_openSpreadsheet.IsEnabled = $true
$global:notifyProc = NotifyIcon
} else {
$t_currentTest.Content = "ERROR"
}
}
}
} )
$global:timer.Start()
} )
$t.add_Closing( {
$global:timer.Stop()
} )
$t_openSpreadsheet.add_Click( {
# Just open the file using default application
Invoke-Item $global:odsdest
$t.close()
} )
$t.ShowDialog() | Out-Null
# Clean up the notifyicon process
if ($global:notifyProc -ne $null) {
if (-not ($global:notifyProc.HasExited)) { $global:notifyProc.Kill() }
}
}
function RunAllTestsCLI()
{
# CLI mode will short-circuit, run much simpler path and output only text
# Determine some column widths to make format specifiers for CLI mode outputs
$maxlen = 0
foreach ($o in $global:oc) {
$maxlen = [math]::max($maxlen, $o.desc.length)
}
$descfmt = "{0,-" + [string]$maxlen + "}"
$resfmt = "{1,8} {2,9} {3,8}"
$fmtstr = $descfmt + " " + $resfmt
"*" * [string]::format( $fmtstr, "", "", "", "").length
"ezFio test parameters:"
$fmtinfo="{0,-20}: {1}"
[string]::format( $fmtinfo, "Drive", $global:physDrive )
[string]::format( $fmtinfo, "Model", $global:model )
[string]::format( $fmtinfo, "Serial", $global:serial )
[string]::format( $fmtinfo, "AvailCapacity", [string]$global:physDriveGiB + " GiB")
[string]::format( $fmtinfo, "TestedCapacity", [string]$global:testcapacity + " GiB")
[string]::format( $fmtinfo, "CPU", $global:cpu)
[string]::format( $fmtinfo, "Cores", $global:cpuCores)
[string]::format( $fmtinfo, "Frequency", $global:cpuFreqMHz)
[string]::format( $fmtinfo, "FIO Version", $global:fioVerString)
""
[string]::format( $fmtstr, "Test Description", "BW(MB/s)", "IOPS", "Lat(us)")
[string]::format( $fmtstr, "-"*$maxlen, "-"*8, "-"*9, "-"*8)
foreach ($o in $global:oc) {
if ( $o.desc -eq "" ) {
# This is a header-printing job, don't thread out
[string]::format( $fmtstr, "---" + $o.name + "---", "", "", "")
[Console]::Out.Flush()
Invoke-Expression $o.cmdline | Out-Null
} else {
# This is a real test job, print some stuff, execute, then print output
Write-Host -NoNewline ([string]::format($descfmt, $o.desc))
[Console]::Out.Flush()
$q = Invoke-Expression $o.cmdline
$iops = $q[0]
$mbps = $q[1]
$lat = $q[2]
Write-Host ([string]::format($resfmt, "", $mbps, $iops, $lat))
if ($mbps -eq "ERROR") {
"ERROR! FIO job did not complete successfully. Aborting further runs."
return
}
[Console]::Out.Flush()
}
}
GenerateResultODS
"`nCOMPLETED! Output file: $global:odsdest"
return
}
function GenerateResultODS()
{
# Builds a new ODS spreadsheet w/graphs from generated test CSV files.
function GetContentXMLFromODS( $odssrc )
{
# Extract content.xml from an ODS file, where the sheet lives.
$ziparchive = [System.IO.Compression.ZipFile]::Open( $odssrc, [System.IO.Compression.ZipArchiveMode]::Read )
$zipentry = $ziparchive.GetEntry("content.xml")
$reader = New-Object System.IO.StreamReader( $zipentry.Open() )
$contentobj = $reader.ReadToEnd()
$reader.Close()
$ziparchive.Dispose()
return $contentobj -replace "`n","" -replace "`r",""
}
function ReplaceSheetWithCSV_regex($sheetName, $csvName, $xmltext)
{
# Replace a named sheet with the contents of a CSV file.
$newt = "<table:table table:name="
$newt = $newt + "`"$sheetName`"" + ' table:style-name="ta1" > <table:table-column table:style-name="co1" table:default-cell-style-name="Default"/>'
Get-Content $csvName | ForEach-Object {
$newt = $newt + "<table:table-row table:style-name=`"ro1`">"
foreach ($val in ($_.Split(','))) {
$dbl = 0.0
if ( [System.Double]::TryParse( $val, [ref]$dbl ) ) {
$newt = $newt + "<table:table-cell office:value-type=`"float`" office:value=`"$val`"><text:p>$val</text:p></table:table-cell>"
} else {
$newt = $newt + "<table:table-cell office:value-type=`"string`"><text:p>$val</text:p></table:table-cell>"
}
}
$newt = $newt + "</table:table-row>"
}
$newt = $newt + "</table:table>"
$searchstr = "<table:table table:name=`"$sheetName`".*?</table:table>"
$xmltext -replace $searchstr, $newt
}
function CombineExceedanceCSV( $qdList, $testType, $testWpct, $testBS, $testIOdepth, $suffix )
{
# Merge multiple exceedance CSVs into a single output file.
# Column merge multiple CSV files into a single one. Complicated by
# the fact that the number of columns in each may vary.
$csv = $global:details + "/ezfio_exceedance_" + $suffix + ".csv"
if ( Test-Path -Path $csv ) {
Remove-Item -Recurse -Force $csv | Out-Null
}
CSVInfoHeader > $csv
$line1 = ""
$line2 = ""
foreach ($qd in $qdList) {
$line1 = $line1 + "QD${qd} Read Exceedance,,QD${qd} Write Exceedance,,,"
$line2 = $line2 + "rdusec,rdpct,wrusec,wrpct,,"
}
$line1 >> $csv;
$line2 >> $csv;
$files = @()
foreach ($qd in $qdList) {
$testname = TestName $testType $testWpct $testBS $qd $testIOdepth
if ( Test-Path -Path "${testname}.exc.read.csv") {
$r = [System.IO.File]::OpenText( "${testname}.exc.read.csv" )
} else {
$r = $null
}
if ( Test-Path -Path "${testname}.exc.write.csv") {
$w = [System.IO.File]::OpenText( "${testname}.exc.write.csv" )
} else {
$w = $null
}
$files += , @( $r, $w )
}
do {
$all_empty = $true
$l = ""
foreach ($fset in $files) {
if (($fset[0] -eq $null) -or ($fset[0].EndOfStream)) {
$a = ","
} else {
$a = $fset[0].ReadLine().Trim()
$all_empty = $false
}
if (($fset[1] -eq $null) -or ($fset[1].EndOfStream)) {
$b = ","
} else {
$b = $fset[1].ReadLine().Trim()
$all_empty = $false
}
$l += "${a},${b},,"
}
$l >> $csv
} while (-not $all_empty)
foreach ($fset in $files) {
if ($fset[0] -ne $null) {
$fset[0].Close()
}
if ($fset[1] -ne $null) {
$fset[1].Close()
}
}
return $csv
}
function UpdateContentXMLToODS_text( $odssrc, $odsdest, $xmltext )
{
# Replace content.xml in an ODS file with in-memory, modified copy and
# write new ODS. Can't just copy source.zip and replace one file, the
# output ZIP file is not correct in many cases (opens in Excel but fails
# ODF validation and LibreOffice fails to load under Windows)
if (test-path $odsdest) { Remove-Item $odsdest }
# Windows ZipArchive will not use "Store" even if we select no compression
# so we need to have a mimetype.zip file encoded below to match ODF spec:
$mimetypezip = @'
UEsDBBQAAAgAAICyN0+FbDmKLgAAAC4AAAAIAAAAbWltZXR5cGVhcHBsaWNhdGlvbi92bmQub2Fz
aXMub3BlbmRvY3VtZW50LnNwcmVhZHNoZWV0UEsBAhQAFAAACAAAgLI3T4VsOYouAAAALgAAAAgA
AAAAAAAAAAAAAAAAAAAAAG1pbWV0eXBlUEsFBgAAAAABAAEANgAAAFQAAAAAAA==
'@
$bytes = [System.Convert]::FromBase64String( $mimetypezip )
[io.file]::WriteAllBytes( $odsdest, $bytes )
$zasrc = [System.IO.Compression.ZipFile]::Open( $odssrc, [System.IO.Compression.ZipArchiveMode]::Read )
$zadst = [System.IO.Compression.ZipFile]::Open( $odsdest, [System.IO.Compression.ZipArchiveMode]::Update )
foreach ($entry in $zasrc.Entries) {
if (($entry.FullName -eq "mimetype") -or $entry.FullName.StartsWith("Thumbnails") -or $entry.FullName.StartsWith("ObjectReplacement")) {
# Skip binary versions, and the copied-over mimetype
continue
}
$newentry = $zadst.CreateEntry( $entry )
if ($entry.FullName.EndsWith("/") -or $entry.FullName.EndsWith("\")) {
# Directory, don't copy anything
} elseif ($entry.FullName -eq "content.xml") {
# Copying data for content.xml from new data
$wr = New-Object System.IO.StreamWriter( $newentry.Open() )
$wr.Write( $xmltext )
$wr.Close()
} elseif ($entry.FullName -like "Object */content.xml") {
# Remove <table:table table:name="local-table"> table
$rd = New-Object System.IO.StreamReader( $entry.Open() )
$rdbytes = $rd.ReadToEnd()
$wr = New-Object System.IO.StreamWriter( $newentry.Open() )
$wrbytes = $rdbytes -replace "<table:table table:name=`"local-table`">.*</table:table>", ""
$wr.write( $wrbytes )
$wr.Close()
$rd.Close()
} elseif ($entry.FullName -eq "META-INF/manifest.xml") {
# Remove ObjectReplacements from the list
$rd = New-Object System.IO.StreamReader( $entry.Open() )
$wr = New-Object System.IO.StreamWriter( $newentry.Open() )
$rdbytes = $rd.ReadToEnd()
$lines = $rdbytes.Split("`n")
foreach ($line in $lines) {
if ( -not ( ($line -contains "ObjectReplacement") -or ($line -contains "Thumbnails") ) ) {
$wr.Write($line)
$wr.Write("`n")
}
}
$wr.Close()
$rd.Close()
} else {
# Copying data for from the source ZIP
$wr = New-Object System.IO.StreamWriter( $newentry.Open() )
$rd = New-Object System.IO.StreamReader( $entry.Open() )
$wr.Write( $rd.ReadToEnd() )
$wr.Close()
$rd.Close()
}
}
$zadst.Dispose()
$zasrc.Dispose()
}
# Use text magic and not XML editing as the XML processor doesn't seem to
# escape special characters in the same way that OpenOffice does, leading to
# occasional problems. Also allows same logic to run under Linux w/sed
[string]$xmlsrc = GetContentXMLFromODS $global:odssrc
$xmlsrc = ReplaceSheetWithCSV_regex Timeseries $global:timeseriescsv $xmlsrc
$xmlsrc = ReplaceSheetWithCSV_regex TimeseriesCLAT $global:timeseriesclatcsv $xmlsrc
$xmlsrc = ReplaceSheetWithCSV_regex TimeseriesSLAT $global:timeseriesslatcsv $xmlsrc
$xmlsrc = ReplaceSheetWithCSV_regex Tests $global:testcsv $xmlsrc
# Potentially add exceedance data if we have it
if ($global:fioOutputFormat -eq "json+") {
$csv = CombineExceedanceCSV @(1, 4, 16, 32) "Rand" 30 4096 1 "exceedance30"
$xmlsrc = ReplaceSheetWithCSV_regex Exceedance $csv $xmlsrc
}
# Remove draw:image references to deleted binary previews
$xmlsrc = $xmlsrc -replace "<draw:image.*?/>",""
$xmlsrc = $xmlsrc -replace "_DRIVE",$global:physDrive -replace "_TESTCAP",$global:testcapacity -replace "_MODEL",$global:model -replace "_SERIAL",$global:serial -replace "_OS",$global:os -replace "_FIO",$global:fioVerString
UpdateContentXMLToODS_text $global:odssrc $global:odsdest $xmlsrc
}
function CreateIcon()
{
$iconb64 = @'
AAABAAEAICAAAAEAIABDAgAAFgAAAIlQTkcNChoKAAAADUlIRFIAAAAgAAAAIAgGAAAAc3p6
9AAAAgpJREFUWIXtlz9rVEEUxX+zTqW9gn8Sm6ikErTQJPoB3E4UIogQ3WiiqIh/CGJhsYkG
gmCjxqwaYqO9H0DcqI1gJ2YNSLTKB9BCcmcs4oxv3s7ukpDNFHrgwbwz975z5s5w33vKWgvA
4bH3ReAisBfYQnuwCHwEym9uHnwLoKy19JVn7wAjbRJthJHZW33jquf26yLwap3FAQQ4oI2R
CwnEATYAV7UR2ZfIAECvtkY2JzSwQxtZSqgP2hhJa8CmNmAktYF/vgKrOQOfHp6q47qHZzzf
PTzTNC4LtXvwqV2pgc+PB5aTlQp492JTSgXj/Pyes888F5yBWqXkx7tKlTouyzt0nZlqaTgf
Mzd12nMFK4IV8ULOca1SworQNTAZrOLLk8FgRbVKycc2gtOIcQVjhGwVXJkA5qeHMEai5XT3
7pqfHmpoIK+R5bQxYSvOi+Tnd5683/Q+bqC+3TtOx0rXeeKeH399fikwFitlK8RiHOcPYUf/
BN9eXAu2oKN/4m/CHz7LLa8kbiD2vOxCXF7BiOCu7cfHg339/vJ6Uw4glu/4fK6b23bsrs9R
W4+OrrgPrCXSt+LkL6P/3wPJt8DI0iLt+xVrhQVtjXwAiokMVLURGQOOAKpV9BrjF1BW1lo2
HTp/AxgF9DqKX/5RffBIuV69sefcfuAK0At0tkl4AagC5Z/vJucAfgOSfC+wPSfmJAAAAABJ
RU5ErkJggg==
'@
# Load the icon as a bitmap for user
$iconBitmap = New-Object System.Windows.Media.Imaging.BitmapImage
$iconBitmap.BeginInit()
$iconBitmap.StreamSource = [System.IO.MemoryStream][System.Convert]::FromBase64String($iconb64)
$iconBitmap.EndInit()
$iconBitmap.Freeze()
return $iconBitmap
}
$global:fio = "" # FIO executable
$global:fioVerString = "" # FIO self-reported version
$global:fioOutputFormat = "json" # Can we make exceedance charts using JSON+ output?
$global:physDrive = "" # Device path to test
$global:utilization = "" # Device utilization % 1..100
$global:yes = $false # Skip user verification
$global:nullio = $false # Use the null IO engine, no real transfers done
$global:fastPrecond = $false # Only do one sequential fill, no other preconditioning
$global:ioengine = "windowsaio" # FIO engine to use for simplicity
$global:quickie = $false # Do short shadown test, non-standard
$global:cpu = "" # CPU model
$global:cpuCores = "" # # of cores (including virtual)
$global:cpuFreqMHz = "" # "Nominal" speed of CPU
$global:uname = "" # Kernel name/info
$global:physDriveGiB = "" # Disk size in GiB (2^n)
$global:physDriveGB = "" # Disk size in GB (10^n)
$global:physDriveBase = "" # Basename (ex: nvme0n1)
$global:testcapacity = "" # Total GiB to test
$global:model = "" # Drive model name
$global:serial = "" # Drive serial number
$global:ds = "" # Datestamp to appent to files/directories to uniquify
$global:details = "" # Test details directory
$global:testcsv = "" # Intermediate test output CSV file
$global:timeseriescsv = "" # Intermediate iostat output CSV file
$global:timeseriesclatcsv = "" # Intermediate iostat output CSV file
$global:timeseriesslatcsv = "" # Intermediate iostat output CSV file
$global:odssrc = "" # Original ODS spreadsheet file
$global:odsdest = "" # Generated results ODS spreadsheet file
$global:oc = New-Object System.Collections.ObjectModel.ObservableCollection[Object] # The list of tests to run
$global:iconBitmap = CreateIcon
$global:scriptName = $MyInvocation.MyCommand.Name
CheckAdmin
ParseArgs
FindFIO
CheckFIOVersion
CollectSystemInfo
CollectDriveInfo
VerifyContinue
SetupFiles
# $globals == The "global" variables to pass into the FIO runner script
$global:globals = "`$fio = `"$global:fio`";"
$global:globals += "`$fioOutputFormat = `"$global:fioOutputFormat`";"
$global:globals += "`$physDrive = `"$global:physDrive`";"
$global:globals += "`$testcapacity = `"$global:testcapacity`";"
$global:globals += "`$timeseriescsv = `"$global:timeseriescsv`";"
$global:globals += "`$timeseriesclatcsv = `"$global:timeseriesclatcsv`";"
$global:globals += "`$timeseriesslatcsv = `"$global:timeseriesslatcsv`";"
$global:globals += "`$testcsv = `"$global:testcsv`";"
$global:globals += "`$physDriveBase = `"$global:physDriveBase`";"
$global:globals += "`$physDriveNo = `"$global:physDriveNo`";"
$global:globals += "`$details= `"$global:details`";"
$global:globals += "`$ds = `"$global:ds`";"
$global:globals += "`$ioengine = `"$global:ioengine`";"
if ( $globals:quickie ) {
$global:globals += "`$quickie = 1;"
} else {
$global:globals += "`$quickie = 0;"
}
DefineTests
if ($global:testmode -eq "cli") { RunAllTestsCLI }
else { RunAllTests }
# GenerateResultODS # Done in the RunAllTests function
================================================
FILE: ezfio.py
================================================
#!/usr/bin/python3
"""ezfio 1.9
earlephilhower@yahoo.com
------------------------------------------------------------------------
ezfio is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 2 of the License, or
(at your option) any later version.
ezfio is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with ezfio. If not, see <http://www.gnu.org/licenses/>.
------------------------------------------------------------------------
Usage: ./ezfio.py -d </dev/node> [-u <100..1>]
Example: ./ezfio.py -d /dev/nvme0n1 -u 100
This script requires root privileges so must be run as "root" or
via "sudo ./ezfio.py"
Please be sure to have FIO installed, or you will be prompted to install
and re-run the script."""
from __future__ import print_function
import argparse
import base64
from collections import OrderedDict
import datetime
import glob
import json
import os
import platform
import pwd
import re
import shutil
import socket
import subprocess
import sys
import tempfile
import threading
import time
import zipfile
def AppendFile(text, filename):
"""Equivalent to >> in BASH, append a line to a text file."""
with open(filename, "a") as f:
f.write(text)
f.write("\n")
def Run(cmd):
"""Run a cmd[], return the exit code, stdout, and stderr."""
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
out = proc.stdout.read()
err = proc.stderr.read()
code = proc.wait()
return int(code), out.decode('UTF-8'), err.decode('UTF-8')
def CheckAdmin():
"""Check that we have root privileges for disk access, abort if not."""
if os.geteuid() != 0:
sys.stderr.write("Root privileges are required for low-level disk ")
sys.stderr.write("access.\nPlease restart this script as root ")
sys.stderr.write("(sudo) to continue.\n")
sys.exit(1)
def FindFIO():
"""Try the path and the CWD for a FIO executable, return path or exit."""
# Determine if FIO is in path or CWD
try:
ret, out, err = Run(["fio", "-v"])
if ret == 0:
return "fio"
except:
try:
ret, out, err = Run(['./fio', '-v'])
if ret == 0:
return "./fio"
except:
sys.stderr.write("FIO is required to run IO tests.\n")
sys.stderr.write("The latest versions can be found at ")
sys.stderr.write("https://github.com/axboe/fio.\n")
sys.exit(1)
def CheckFIOVersion():
"""Check that we have a version of FIO installed that we can use."""
global fio, fioVerString, fioOutputFormat
code, out, err = Run([fio, '--version'])
try:
fioVerString = out.split('\n')[0].rstrip()
ver = out.split('\n')[0].rstrip().split('-')[1].split('.')[0]
if int(ver) < 2:
sys.stderr.write("ERROR: FIO version " + ver + " unsupported, ")
sys.stderr.write("version 2.0 or later required. Exiting.\n")
sys.exit(2)
except:
sys.stderr.write("ERROR: Unable to determine version of fio " +
"installed. Exiting.\n")
sys.exit(2)
# Now see if we can make exceedance charts
# Can't just try --output-format=json+ because the FIO in Ubuntu 16.04
# repo doesn't understand it and *silently ignores ir*. Instead, use
# the help output to see if "json+" exists at all...
try:
code, out, err = Run([fio, '--help'])
if (code == 0) and ("json+" in out):
fioOutputFormat = "json+"
except:
pass
def CheckAIOLimits():
"""Ensure kernel AIO max transactions is large enough to run test."""
global aioNeeded
# If anything fails, silently continue. FIO will give error if it
# can't run due to the AIO setting later on.
try:
code, out, err = Run(['cat', '/proc/sys/fs/aio-max-nr'])
if code == 0:
aiomaxnr = int(out.split("\n")[0].rstrip())
if aiomaxnr < int(aioNeeded):
sys.stderr.write(
"ERROR: The kernel's maximum outstanding async IO" +
"setting (aio-max-nr) is too\n")
sys.stderr.write(" low to complete the test run. Required value is " + str(
aioNeeded) + ", current is " + str(aiomaxnr) + "\n")
sys.stderr.write(
" To fix this temporarially, please execute the following command:\n")
sys.stderr.write(
" sudo sysctl -w fs.aio-max-nr=" + str(aioNeeded) + "\n")
sys.stderr.write("Unable to continue. Exiting.\n")
sys.exit(2)
except:
pass
def ParseArgs():
"""Parse command line options into globals."""
global physDrive, physDriveDict, physDriveTxt, utilization, nullio, isFile
global outputDest, offset, cluster, yes, quickie, verify, fastPrecond
global readOnly, compressPct
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
description="A tool to easily run FIO to benchmark sustained "
"performance of NVME\nand other types of SSD.",
epilog="""
Requirements:\n
* Root access (log in as root, or sudo {prog})
* No filesytems or data on target device
* FIO IO tester (available https://github.com/axboe/fio)
* sdparm to identify the NVME device and serial number
WARNING: All data on the target device will be DESTROYED by this test.""")
parser.add_argument("--cluster", dest="cluster", action='store_true',
help="Run the test on a cluster (--drive in "+
"host1:/dev/p1,host2:/dev/ps,...)", required=False)
parser.add_argument("--verify", dest="verify", action='store_true',
help="Have FIO perform data verifications on reads."+
" May impact performance", required=False)
parser.add_argument("--drive", "-d", dest="physDrive",
help="Device to test (ex: /dev/nvme0n1)", required=True)
parser.add_argument("--utilization", "-u", dest="utilization",
help="Amount of drive to test (in percent), 1...100",
default="100", type=int, required=False)
parser.add_argument("--offset", "-s", dest="offset",
help="offset from start (in percent), 0...99", default="0",
type=int, required=False)
parser.add_argument("--output", "-o", dest="outputDest",
help="Location where results should be saved", required=False)
parser.add_argument("--yes", dest="yes", action='store_true',
help="Skip the final warning prompt (for scripted tests)",
required=False)
parser.add_argument("--fast-precondition", dest='fastpre', action='store_true',
help="Only do a single sequential write to precondition drive",
required=False)
parser.add_argument("--quickie", dest="quickie", help=argparse.SUPPRESS,
action='store_true', required=False)
parser.add_argument("--file", dest="file", help="Test using a regular file, not a device",
action='store_true', required=False)
parser.add_argument("--nullio", dest="nullio", help=argparse.SUPPRESS,
action='store_true', required=False)
parser.add_argument("--readonly", dest="readonly", help="Only run read-only tests, don't write to device",
action='store_true', required=False)
parser.add_argument("--compress_percentage", dest="compresspct", help="Set the target data compressibility",
default="100", type=int, required=False)
args = parser.parse_args()
physDrive = args.physDrive
physDriveTxt = physDrive
utilization = args.utilization
outputDest = args.outputDest
offset = args.offset
yes = args.yes
quickie = args.quickie
nullio = args.nullio
verify = args.verify
fastPrecond = args.fastpre
cluster = args.cluster
isFile = args.file
readOnly = args.readonly
compressPct = args.compresspct
# For cluster mode, we add a new physDriveList dict and fake physDrive
if cluster:
nodes = physDrive.split(",")
for node in nodes:
physDriveDict[node.split(":")[0]] = node.split(":")[1]
physDrive = nodes[0].split(":")[1]
if (utilization < 1) or (utilization > 100):
print("ERROR: Utilization must be between 1...100")
parser.print_help()
sys.exit(1)
if (offset < 0) or (offset > 99) or (offset+utilization > 100):
print("ERROR: offset must be between 0...99 while offset + utilization <= 100")
parser.print_help()
sys.exit(1)
# Sanity check that the selected drive is not mounted by parsing mounts
# This is not guaranteed to catch all as there's just too many different
# naming conventions out there. Let's cover simple HDD/SSD/NVME patterns
pdispart = (re.match('.*p?[1-9][0-9]*$', physDrive) and
not re.match('.*/nvme[0-9]+n[1-9][0-9]*$', physDrive))
hit = ""
with open("/proc/mounts", "r") as f:
mounts = f.readlines()
for l in mounts:
dev = l.split()[0]
mnt = l.split()[1]
if dev == physDrive:
hit = dev + " on " + mnt # Obvious exact match
if pdispart:
chkdev = dev
else:
# /dev/sdp# is special case, don't remove the "p"
if re.match('^/dev/sdp.*$', dev):
chkdev = re.sub('[1-9][0-9]*$', '', dev)
else:
# Need to see if mounted partition is on a raw device being tested
chkdev = re.sub('p?[1-9][0-9]*$', '', dev)
if chkdev == physDrive:
hit = dev + " on " + mnt
if hit != "":
print("ERROR: Mounted volume '" + str(hit) + "' is on same device" +
"as tested device '" + str(physDrive) + "'. ABORTING.")
sys.exit(2)
def grep(inlist, regex):
"""Implement grep in a non-Pythonic way to make it comprehensible to humans"""
out = []
for i in inlist:
if re.search(regex, i):
out = out + [i]
return out
def CollectSystemInfo():
"""Collect some OS and CPU information."""
global cpu, cpuCores, cpuFreqMHz, uname
uname = " ".join(platform.uname())
code, cpuinfo, err = Run(['cat', '/proc/cpuinfo'])
cpuinfo = cpuinfo.split("\n")
if 'aarch64' in uname:
code, cpuinfo, err = Run(['lscpu'])
cpuinfo = cpuinfo.split("\n")
cpu = grep(cpuinfo, r'Model name')[0].split(':')[1].lstrip()
cpuCores = grep(cpuinfo, r'CPU')[1].split(':')[1].lstrip()
try:
code, dmidecode, err = Run(['dmidecode', '--type', 'processor'])
cpuFreqMHz = int(round(float(grep(dmidecode.split("\n"), r'Current Speed')[0].rstrip().lstrip().split(" ")[2])))
except:
cpuFreqMHz = grep(cpuinfo, r'max')[0].split(':')[1].lstrip()
elif 'ppc64' in uname:
# Implement grep and sed in Python...
cpu = grep(cpuinfo, r'model')[0].split(': ')[1].replace('(R)', '').replace('(TM)', '')
cpuCores = len(grep(cpuinfo, r'processor'))
try:
code, dmidecode, err = Run(['dmidecode', '--type', 'processor'])
cpuFreqMHz = int(round(float(grep(dmidecode.split("\n"), r'Current Speed')[0].rstrip().lstrip().split(" ")[2])))
except:
cpuFreqMHz = int(round(float(grep(cpuinfo, r'clock')[0].split(': ')[1][:-3])))
else:
model_names = grep(cpuinfo, r'model name')
cpu = model_names[0].split(': ')[1].replace('(R)', '').replace('(TM)', '')
cpuCores = len(model_names)
try:
code, dmidecode, err = Run(['dmidecode', '--type', 'processor'])
cpuFreqMHz = int(round(float(grep(dmidecode.split("\n"), r'Current Speed')[0].rstrip().lstrip().split(" ")[2])))
except:
cpuFreqMHz = int(round(float(grep(cpuinfo, r'cpu MHz')[0].split(': ')[1])))
def VerifyContinue():
"""User's last chance to abort the test. Exit if they don't agree."""
if not yes:
print("-" * 75)
print("WARNING! " * 9)
print("THIS TEST WILL DESTROY ANY DATA AND FILESYSTEMS ON " + physDrive)
cont = input("Please type the word \"yes\" and hit return to " +
"continue, or anything else to abort.")
print("-" * 75 + "\n")
if cont != "yes":
print("Performance test aborted, drive is untouched.")
sys.exit(1)
def CollectDriveInfo():
"""Get important device information, exit if not possible."""
global physDriveGiB, physDriveGB, physDriveBase, testcapacity, testoffset
global model, serial, physDrive, isFile
# We absolutely need this information
pd = physDrive.split(',')[0]
try:
if isFile:
physDriveBase = os.path.basename(pd)
physDriveBytes = str(os.stat(pd).st_size) + "\n"
else:
physDriveBase = os.path.basename(pd)
code, physDriveBytes, err = Run(['blockdev', '--getsize64', pd])
if code != 0:
raise Exception("Can't get drive size for " + pd)
physDriveBytes = physDriveBytes.split('\n')[0]
physDriveBytes = int(physDriveBytes)
physDriveGB = int(physDriveBytes / (1000 * 1000 * 1000))
physDriveGiB = int(physDriveBytes / (1024 * 1024 * 1024))
testcapacity = int((physDriveGiB * utilization) / 100)
testoffset = int((physDriveGiB * offset) / 100)
except:
print("ERROR: Can't get '" + pd + "' size. Incorrect device name?")
sys.exit(1)
# These are nice to have, but we can run without it
model = "UNKNOWN"
serial = "UNKNOWN"
try:
nvmeclicmd = ['nvme', 'list', '--output-format=json']
code, nvmecli, err = Run(nvmeclicmd)
if code == 0:
j = json.loads(nvmecli)
for drive in j['Devices']:
if drive['DevicePath'] == pd:
model = drive['ModelNumber']
serial = drive['SerialNumber']
return
except:
pass # An error in nvme is not a problem
try:
sdparmcmd = ['sdparm', '--page', 'sn', '--inquiry', '--long', pd]
code, sdparm, err = Run(sdparmcmd)
lines = sdparm.split("\n")
if len(lines) == 4:
model = re.sub(
r'\s+', " ", lines[0].split(":")[1].lstrip().rstrip())
serial = re.sub(r'\s+', " ", lines[2].lstrip().rstrip())
else:
print("Unable to identify drive using sdparm. Continuing.")
except:
print("Install sdparm to allow model/serial extraction. Continuing.")
def CSVInfoHeader(f):
"""Headers to the CSV file (ending up in the ODS at the test end)."""
global physDriveTxt, model, serial, physDriveGiB, testcapacity, testoffset
global cpu, cpuCores, cpuFreqMHz, uname, quickie, fastPrecond
if quickie:
prefix = "QUICKIE-INVALID-RESULTS-"
else:
prefix = ""
if fastPrecond:
prefix = "FASTPRECOND-" + prefix
AppendFile("Drive," + prefix + str(physDriveTxt).replace(",", " "), f)
AppendFile("Model," + prefix + str(model), f)
AppendFile("Serial," + prefix + str(serial), f)
AppendFile("AvailCapacity," + prefix + str(physDriveGiB) + ",GiB", f)
if offset == 0:
testcap = str(testcapacity)
else:
testcap = str(testcapacity) + " @ " + str(testoffset)
AppendFile("TestedCapacity," + prefix + str(testcap) + ",GiB", f)
AppendFile("CPU," + prefix + str(cpu), f)
AppendFile("Cores," + prefix + str(cpuCores), f)
AppendFile("Frequency," + prefix + str(cpuFreqMHz), f)
AppendFile("OS," + prefix + str(uname), f)
AppendFile("FIOVersion," + prefix + str(fioVerString), f)
def SetupFiles():
"""Set up names for all output/input files, place headers on CSVs."""
global ds, details, testcsv, timeseriescsv, odssrc, odsdest
global physDriveBase, fioVerString, outputDest, timeseriesclatcsv
global timeseriesslatcsv
# Datestamp for run output files
ds = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
# The unique suffix we generate for all output files
suffix = str(physDriveGB) + "GB_" + str(cpuCores) + "cores_"
suffix += str(cpuFreqMHz) + "MHz_" + physDriveBase + "_"
suffix += socket.gethostname() + "_" + ds
if not outputDest:
outputDest = os.getcwd()
# The "details" directory contains the raw output of each FIO run
details = outputDest + "/details_" + suffix
if os.path.exists(details):
shutil.rmtree(details)
os.makedirs(details)
# Copy this script into it for posterity
shutil.copyfile(__file__, details + "/" + os.path.basename(__file__))
# Files we're going to generate, encode some system info in the names
# If the output files already exist, erase them
testcsv = details + "/ezfio_tests_"+suffix+".csv"
if os.path.exists(testcsv):
os.unlink(testcsv)
CSVInfoHeader(testcsv)
AppendFile("Type,Write %,Block Size,Threads,Queue Depth/Thread,IOPS," +
"Bandwidth (MB/s),Read Latency (us),Write Latency (us)," +
"System CPU,User CPU", testcsv)
timeseriescsv = details + "/ezfio_timeseries_"+suffix+".csv"
timeseriesclatcsv = details + "/ezfio_timeseriesclat_"+suffix+".csv"
timeseriesslatcsv = details + "/ezfio_timeseriesslat_"+suffix+".csv"
for f in [timeseriescsv, timeseriesclatcsv, timeseriesslatcsv]:
if os.path.exists(f):
os.unlink(f)
CSVInfoHeader(f)
AppendFile(",".join(["IOPS"] + list(physDriveDict.keys())),
timeseriescsv) # Add IOPS header
hdr = ""
for host in physDriveDict.keys():
hdr = hdr + ',' + host + "-read"
hdr = hdr + ',' + host + "-write"
AppendFile('CLAT-read,CLAT-write' + hdr,
timeseriesclatcsv) # Add IOPS header
AppendFile('SLAT-read,SLAT-write' + hdr,
timeseriesslatcsv) # Add IOPS header
# ODS input and output files
odssrc = os.path.dirname(os.path.realpath(__file__)) + "/original.ods"
if not os.path.exists(odssrc):
print("ERROR: Can't find original ODS spreadsheet '" + odssrc + "'.")
sys.exit(1)
odsdest = outputDest + "/ezfio_results_"+suffix+".ods"
if os.path.exists(odsdest):
os.unlink(odsdest)
class FIOError(Exception):
"""Exception generated when FIO returns a non-success value
Attributes:
cmdline -- The FIO command that was executed
code -- Error code FIO returned
stderr -- STDERR output from FIO
stdout -- STDOUT output from FIO
"""
def __init__(self, cmdline, code, stderr, stdout):
super(FIOError, self).__init__()
self.cmdline = cmdline
self.code = code
self.stderr = stderr
self.stdout = stdout
def TestName(seqrand, wmix, bs, threads, iodepth):
"""Return full path and filename prefix for test of specified params"""
global details, physDriveBase
testfile = str(details) + "/Test" + str(seqrand) + "_w" + str(wmix)
testfile += "_bs" + str(bs) + "_threads" + str(threads) + "_iodepth"
testfile += str(iodepth) + "_" + str(physDriveBase) + ".out"
return testfile
def SequentialConditioning():
"""Sequentially fill the complete capacity of the drive once."""
global quickie, fastPrecond, nullio, readOnly, compressPct
def GenerateJobfile(drive, testcapacity, testoffset):
"""Write the sequential jobfile for a single server"""
jobfile = tempfile.NamedTemporaryFile(delete=False, mode='w')
for dr in drive.split(','):
jobfile.write("[SeqCond-" + dr + "]\n")
# Note that we can't use regular test runner because this test needs
# to run for a specified # of bytes, not a specified # of seconds.
jobfile.write("readwrite=write\n")
jobfile.write("bs=128k\n")
if nullio:
jobfile.write("ioengine=null\n")
else:
jobfile.write("ioengine=libaio\n")
jobfile.write("iodepth=64\n")
jobfile.write("direct=1\n")
jobfile.write("filename=" + str(dr) + "\n")
if quickie:
jobfile.write("size=1G\n")
else:
jobfile.write("size=" + str(testcapacity) + "G\n")
jobfile.write("thread=1\n")
jobfile.write("offset=" + str(testoffset) + "G\n")
if compressPct != 100:
jobfile.write("buffer_compress_percentage=" + str(compressPct) + "\n")
jobfile.close()
return jobfile
cmdline = [fio]
if not cluster:
jobfile = GenerateJobfile(physDrive, testcapacity, testoffset)
cmdline = cmdline + [jobfile.name]
else:
jobfile = []
for host in physDriveDict.keys():
newjob = GenerateJobfile(
physDriveDict[host], testcapacity, testoffset)
cmdline = cmdline + ['--client=' + str(host), str(newjob.name)]
jobfile = jobfile + [newjob]
cmdline = cmdline + ['--output-format=' + str(fioOutputFormat)]
if not readOnly:
code, out, err = Run(cmdline)
else:
code = 0
if cluster:
for job in jobfile:
os.unlink(job.name)
else:
os.unlink(jobfile.name)
if code != 0:
raise FIOError(" ".join(cmdline), code, err, out)
else:
return "DONE", "DONE", "DONE"
def RandomConditioning():
"""Randomly write entire device for the full capacity"""
global quickie, nullio, readOnly, compressPct
def GenerateJobfile(drive, testcapacity, testoffset):
"""Write the random jobfile"""
jobfile = tempfile.NamedTemporaryFile(delete=False, mode='w')
for dr in drive.split(','):
jobfile.write("[RandCond-" + dr + "]\n")
# Note that we can't use regular test runner because this test needs
# to run for a specified # of bytes, not a specified # of seconds.
jobfile.write("readwrite=randwrite\n")
jobfile.write("bs=4k\n")
jobfile.write("invalidate=1\n")
jobfile.write("end_fsync=0\n")
jobfile.write("group_reporting=1\n")
jobfile.write("direct=1\n")
jobfile.write("filename=" + str(dr) + "\n")
if quickie:
jobfile.write("size=1G\n")
else:
jobfile.write("size=" + str(testcapacity) + "G\n")
if nullio:
jobfile.write("ioengine=null\n")
else:
jobfile.write("ioengine=libaio\n")
jobfile.write("iodepth=256\n")
jobfile.write("norandommap\n")
jobfile.write("randrepeat=0\n")
jobfile.write("thread=1\n")
jobfile.write("offset=" + str(testoffset) + "G\n")
if compressPct != 100:
jobfile.write("buffer_compress_percentage=" + str(compressPct) + "\n")
jobfile.close()
return jobfile
cmdline = [fio]
if not cluster:
jobfile = GenerateJobfile(physDrive, testcapacity, testoffset)
cmdline = cmdline + [jobfile.name]
else:
jobfile = []
for host in physDriveDict.keys():
newjob = GenerateJobfile(
physDriveDict[host], testcapacity, testoffset)
cmdline = cmdline + ['--client=' + str(host), str(newjob.name)]
jobfile = jobfile + [newjob]
cmdline = cmdline + ['--output-format=' + str(fioOutputFormat)]
if not readOnly:
code, out, err = Run(cmdline)
else:
code = 0
if cluster:
for job in jobfile:
os.unlink(job.name)
else:
os.unlink(jobfile.name)
if code != 0:
raise FIOError(" ".join(cmdline), code, err, out)
else:
return "DONE", "DONE", "DONE"
def RunTest(iops_log, seqrand, wmix, bs, threads, iodepth, runtime):
"""Runs the specified test, generates output CSV lines."""
global cluster, physDriveDict, compressPct
# Taken from fio_latency2csv.py - needed to convert funky semi-log to normal latencies
def plat_idx_to_val(idx, FIO_IO_U_PLAT_BITS=6, FIO_IO_U_PLAT_VAL=64):
"""Convert from lat bucket to real value, for obsolete FIO revisions"""
# MSB <= (FIO_IO_U_PLAT_BITS-1), cannot be rounded off. Use
# all bits of the sample as index
if idx < (FIO_IO_U_PLAT_VAL << 1):
return idx
# Find the group and compute the minimum value of that group
error_bits = (idx >> FIO_IO_U_PLAT_BITS) - 1
base = 1 << (error_bits + FIO_IO_U_PLAT_BITS)
# Find its bucket number of the group
k = idx % FIO_IO_U_PLAT_VAL
# Return the mean of the range of the bucket
return base + ((k + 0.5) * (1 << error_bits))
def WriteExceedance(j, rdwr, outfile):
"""Generate an exceedance CSV for read or write from JSON output."""
global fioOutputFormat
if fioOutputFormat == "json":
return # This data not present in JSON format, only JSON+
# Generate a dict of combined bins, either for jobs[0] or client_stats[]
bins = {}
ios = 0
try:
# Non-cluster case will have jobs, only a single one needed
ios = j['jobs'][0][rdwr]['total_ios']
if ('N' in j['jobs'][0][rdwr]['clat_ns']) and (j['jobs'][0][rdwr]['clat_ns']['N'] > 0):
bins = j['jobs'][0][rdwr]['clat_ns']['bins']
else:
bins = {}
except:
# Cluster case will have client_stats to combine
for client_stats in j['client_stats']:
if client_stats['jobname'] == 'All clients':
# Don't bother looking at combined, bins doesn't exist there
continue
if client_stats[rdwr]['total_ios']:
ios = ios + client_stats[rdwr]['total_ios']
for k in client_stats[rdwr]['clat_ns']['bins'].keys():
try:
bins[k] = bins[k] + client_stats[rdwr]['clat_ns']['bins'][k]
except:
bins[k] = client_stats[rdwr]['clat_ns']['bins'][k]
#ios = client[rdwr]['total_ios']
#bins = client[rdwr]['clat_ns']['bins']
if ios:
runttl = 0
# This was changed in 2.99 to be in nanoseconds and to discard the crazy _bits magic
if float(fioVerString.split('-')[1]) >= 2.99:
lat_ns = []
# JSON dict has keys of type string, need a sorted integer list for our work...
for entry in bins:
lat_ns.append(int(entry))
for entry in sorted(lat_ns):
lat_us = float(entry) / 1000.0
cnt = int(bins[str(entry)])
runttl += cnt
pctile = 1.0 - float(runttl) / float(ios)
if cnt > 0:
AppendFile(
",".join((str(lat_us), str(pctile))), outfile)
else:
plat_bits = client[rdwr]['clat']['bins']['FIO_IO_U_PLAT_BITS']
plat_val = client[rdwr]['clat']['bins']['FIO_IO_U_PLAT_VAL']
for b in range(0, int(client[rdwr]['clat']['bins']['FIO_IO_U_PLAT_NR'])):
cnt = int(client[rdwr]['clat']['bins'][str(b)])
runttl += cnt
pctile = 1.0 - float(runttl) / float(ios)
if cnt > 0:
AppendFile(
",".join((str(plat_idx_to_val(b, plat_bits, plat_val)),
str(pctile))), outfile)
def GenerateJobfile(rw, wmix, bs, drive, testcapacity, runtime, threads, iodepth, testoffset):
"""Make a jobfile for the specified test parameters"""
global verify, nullio
jobfile = tempfile.NamedTemporaryFile(delete=False, mode='w')
for dr in drive.split(","):
jobfile.write("[test-" + dr + "]\n")
jobfile.write("readwrite=" + str(rw) + "\n")
jobfile.write("rwmixwrite=" + str(wmix) + "\n")
jobfile.write("bs=" + str(bs) + "\n")
jobfile.write("invalidate=1\n")
jobfile.write("end_fsync=0\n")
jobfile.write("group_reporting=1\n")
jobfile.write("direct=1\n")
jobfile.write("filename=" + str(dr) + "\n")
jobfile.write("size=" + str(testcapacity) + "G\n")
jobfile.write("time_based=1\n")
jobfile.write("runtime=" + str(runtime) + "\n")
if nullio:
jobfile.write("ioengine=null\n")
else:
jobfile.write("ioengine=libaio\n")
jobfile.write("numjobs=" + str(threads) + "\n")
jobfile.write("iodepth=" + str(iodepth) + "\n")
jobfile.write("norandommap=1\n")
jobfile.write("randrepeat=0\n")
jobfile.write("thread=1\n")
jobfile.write("exitall=1\n")
if verify:
jobfile.write("verify=crc32c\n")
jobfile.write("random_generator=lfsr\n")
jobfile.write("offset=" + str(testoffset) + "G\n")
if compressPct != 100:
jobfile.write("buffer_compress_percentage=" + str(compressPct) + "\n")
jobfile.close()
return jobfile
def CombineThreadOutputs(suffix, outcsv, lat):
"""Merge all FIO iops/lat logs across all servers"""
# The lists may be called "iops" but the same works for clat/slat
iops = [0] * (runtime + extra_runtime)
# For latencies, need to keep the _w and _r separate
iops_w = [0] * (runtime + extra_runtime)
host_iops = OrderedDict()
host_iops_w = OrderedDict()
filecnt = 0
if not cluster:
pdd = OrderedDict()
pdd['localhost'] = 1 # Just the single host, faked here
else:
pdd = physDriveDict
for host in pdd.keys():
host_iops[host] = [0] * (runtime + extra_runtime)
host_iops_w[host] = [0] * (runtime + extra_runtime)
if not cluster:
fileglob = testfile + str(suffix) + '.*log'
else:
fileglob = testfile + str(suffix) + '.*.log.' + host
for filename in glob.glob(fileglob):
filecnt = filecnt + 1
catcmdline = ['cat', filename]
catcode, catout, caterr = Run(catcmdline)
if catcode != 0:
AppendFile("ERROR", testcsv)
raise FIOError(" ".join(catcmdline),
catcode, caterr, catout)
lines = catout.split("\n")
# Set time 0 IOPS to first values
riops = 0
wiops = 0
nexttime = 0
for x in range(0, runtime + extra_runtime):
if not lat:
iops[x] = iops[x] + riops + wiops
host_iops[host][x] = host_iops[host][x] + riops + wiops
else:
iops[x] = iops[x] + riops
iops_w[x] = iops_w[x] + wiops
host_iops[host][x] = host_iops[host][x] + riops
host_iops_w[host][x] = host_iops_w[host][x] + wiops
while len(lines) > 1 and (nexttime < x):
parts = lines[0].split(",")
nexttime = float(parts[0]) / 1000.0
if int(lines[0].split(",")[2]) == 1:
wiops = int(parts[1])
else:
riops = int(parts[1])
lines = lines[1:]
# Generate the combined CSV
with open(outcsv, 'a') as f:
for cnt in range(int(extra_runtime/2), runtime + extra_runtime):
if filecnt > 0 and lat:
line = str(float(iops[cnt])/float(filecnt))
line = line + ',' + str(float(iops_w[cnt])/float(filecnt))
else:
line = str(iops[cnt])
if len(pdd.keys()) > 1:
for host in pdd.keys():
if filecnt > 0 and lat:
line = line + ',' + \
str(float(host_iops[host][cnt])/float(filecnt))
line = line + ',' + \
str(float(host_iops_w[host]
[cnt])/float(filecnt))
else:
line = line + "," + str(host_iops[host][cnt])
f.write(line + "\n")
# Output file names
testfile = TestName(seqrand, wmix, bs, threads, iodepth)
if seqrand == "Seq":
rw = "rw"
else:
rw = "randrw"
if iops_log:
extra_runtime = 10
else:
extra_runtime = 0
cmdline = [fio]
if not cluster:
jobfile = GenerateJobfile(rw, wmix, bs, physDrive, testcapacity,
runtime + extra_runtime, threads, iodepth, testoffset)
cmdline = cmdline + [jobfile.name]
AppendFile("[JOBFILE]", testfile)
with open(jobfile.name, 'r') as of:
txt = of.read()
AppendFile(txt, testfile)
if iops_log:
AppendFile("write_iops_log=" + testfile, jobfile.name)
AppendFile("write_lat_log=" + testfile, jobfile.name)
AppendFile("log_avg_msec=1000", jobfile.name)
AppendFile("log_unix_epoch=0", jobfile.name)
else:
jobfile = []
for host in physDriveDict.keys():
newjob = GenerateJobfile(rw, wmix, bs, physDriveDict[host], testcapacity,
runtime + extra_runtime, threads, iodepth, testoffset)
cmdline = cmdline + ['--client=' + str(host), str(newjob.name)]
AppendFile('[JOBFILE-' + str(host) + "]", testfile)
with open(newjob.name, 'r') as of:
txt = of.read()
AppendFile(txt, testfile)
jobfile = jobfile + [newjob]
if iops_log:
AppendFile("write_iops_log=" + testfile, newjob.name)
AppendFile("write_lat_log=" + testfile, newjob.name)
AppendFile("log_avg_msec=1000", newjob.name)
AppendFile("log_unix_epoch=0", newjob.name)
cmdline = cmdline + ['--output-format=' + str(fioOutputFormat)]
# There are some NVME drives with 4k physical and logical out there.
# Check that we can actually do this size IO, OTW return 0 for all
skiptest = False
code, out, err = Run(['blockdev', '--getpbsz', str(physDrive.split(',')[0])])
if code == 0:
iomin = int(out.split("\n")[0])
if int(bs) < iomin:
skiptest = True
if readOnly and wmix != 0:
skiptest = True
# Silently ignore failure to return min block size, FIO will fail and
# we'll catch that a little later.
if skiptest:
code = 0
out = "Test not run because block size " + str(bs)
out += " below iominsize " + str(iomin) + "\n"
out += "3;" + "0;" * 100 + "\n" # Bogus 0-filled resulte line
err = ""
else:
code, out, err = Run(cmdline)
AppendFile("[STDOUT]", testfile)
AppendFile(out, testfile)
AppendFile("[STDERR]", testfile)
AppendFile(err, testfile)
if cluster:
for job in jobfile:
os.unlink(job.name)
else:
os.unlink(jobfile.name)
# Make sure we had successful completion, else note and abort run
if code != 0:
AppendFile("ERROR", testcsv)
raise FIOError(" ".join(cmdline), code, err, out)
if iops_log:
CombineThreadOutputs('_iops', timeseriescsv, False)
CombineThreadOutputs('_clat', timeseriesclatcsv, True)
CombineThreadOutputs('_slat', timeseriesslatcsv, True)
rdiops = 0
wriops = 0
rlat = 0
wlat = 0
syscpu = 0
usrcpu = 0
if not skiptest:
# Chomp anything before the json.
for i in range(0, len(out)):
if out[i] == '{':
out = out[i:]
break
j = json.loads(out)
if cluster and len(physDriveDict.keys()) == 1:
client = j['client_stats'][0]
elif cluster:
for res in j['client_stats']:
if res['jobname'] == "All clients":
client = res
break
else:
client = j['jobs'][0]
syscpu = float(client['sys_cpu'])
usrcpu = float(client['usr_cpu'])
rdiops = float(client['read']['iops'])
wriops = float(client['write']['iops'])
# 'lat' goes to 'lat_ns' in newest FIO JSON formats...ugh
try:
rlat = float(client['read']['lat_ns']['mean']) / 1000 # ns->us
except:
rlat = float(client['read']['lat']['mean'])
try:
wlat = float(client['write']['lat_ns']['mean']) / 1000 # ns->us
except:
wlat = float(client['write']['lat']['mean'])
iops = "{0:0.0f}".format(rdiops + wriops)
mbps = "{0:0.2f}".format((float((rdiops+wriops) * bs) /
(1024.0 * 1024.0)))
lat = "{0:0.1f}".format(max(rlat, wlat))
AppendFile(",".join((str(seqrand), str(wmix), str(bs), str(threads),
str(iodepth), str(iops), str(mbps), str(rlat),
str(wlat), str(syscpu), str(usrcpu))), testcsv)
if skiptest:
AppendFile("1,1\n", testfile + ".exc.read.csv")
AppendFile("1,1\n", testfile + ".exc.write.csv")
else:
WriteExceedance(j, 'read', testfile + ".exc.read.csv")
WriteExceedance(j, 'write', testfile + ".exc.write.csv")
return iops, mbps, lat
def DefineTests():
"""Generate the work list for the main worker into OC."""
global oc, quickie, fastPrecond
# What we're shmoo-ing across
bslist = (512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, 131072)
qdlist = (1, 2, 4, 8, 16, 32, 64, 128, 256)
threadslist = (1, 2, 4, 8, 16, 32, 64, 128, 256)
shorttime = 120 # Runtime of point tests
longtime = 1200 # Runtime of long-running tests
if quickie:
shorttime = int(shorttime / 10)
longtime = int(longtime / 10)
def AddTest(name, seqrand, writepct, blocksize, threads, qdperthread,
iops_log, runtime, desc, cmdline):
"""Bare usage add a test to the list to execute"""
if threads != "":
qd = int(threads) * int(qdperthread)
else:
qd = 0
dat = {}
dat['name'] = name
dat['seqrand'] = seqrand
dat['wmix'] = writepct
dat['bs'] = blocksize
dat['qd'] = qd
dat['qdperthread'] = qdperthread
dat['threads'] = threads
dat['bw'] = ''
dat['iops'] = ''
dat['lat'] = ''
dat['desc'] = desc
dat['iops_log'] = iops_log
dat['runtime'] = runtime
dat['cmdline'] = cmdline
oc.append(dat)
def DoAddTest(testname, seqrand, wmix, bs, threads, iodepth, desc,
iops_log, runtime):
"""Add an individual run to the list of tests to execute"""
AddTest(testname, seqrand, wmix, bs, threads, iodepth, iops_log,
runtime, desc, lambda o: {RunTest(o['iops_log'],
o['seqrand'], o['wmix'],
o['bs'], o['threads'],
o['qdperthread'],
o['runtime'])})
def AddTestBSShmoo():
"""Add a sequence of tests varying the block size"""
AddTest(testname, 'Preparation', '', '', '', '', '', '', '',
lambda o: {AppendFile(o['name'], testcsv)})
for bs in bslist:
desc = testname + ", BS=" + str(bs)
DoAddTest(testname, seqrand, wmix, bs, threads, iodepth, desc,
iops_log, runtime)
def AddTestQDShmoo():
"""Add a sequence of tests varying the queue depth"""
AddTest(testname, 'Preparation', '', '', '', '', '', '', '',
lambda o: {AppendFile(o['name'], testcsv)})
for iodepth in qdlist:
desc = testname + ", QD=" + str(iodepth)
DoAddTest(testname, seqrand, wmix, bs, threads, iodepth, desc,
iops_log, runtime)
def AddTestThreadsShmoo():
"""Add a sequence of tests varying the number of threads"""
AddTest(testname, 'Preparation', '', '', '', '', '', '', '',
lambda o: {AppendFile(o['name'], testcsv)})
for threads in threadslist:
desc = testname + ", Threads=" + str(threads)
DoAddTest(testname, seqrand, wmix, bs, threads, iodepth, desc,
iops_log, runtime)
AddTest('Sequential Preconditioning', 'Preparation', '', '', '', '', '',
'', '', lambda o: {}) # Only for display on-screen
AddTest('Sequential Preconditioning', 'Seq Pass 1', '100', '131072', '1',
'256', False, '', 'Sequential Preconditioning Pass 1',
lambda o: {SequentialConditioning()})
if not fastPrecond:
AddTest('Sequential Preconditioning', 'Seq Pass 2', '100', '131072', '1',
'256', False, '', 'Sequential Preconditioning Pass 2',
lambda o: {SequentialConditioning()})
testname = "Sustained Multi-Threaded Sequential Read Tests by Block Size"
seqrand = "Seq"
wmix = 0
threads = 1
runtime = shorttime
iops_log = False
iodepth = 256
AddTestBSShmoo()
testname = "Sustained Multi-Threaded Random Read Tests by Block Size"
seqrand = "Rand"
wmix = 0
threads = 16
runtime = shorttime
iops_log = False
iodepth = 16
AddTestBSShmoo()
testname = "Sequential Write Tests with Queue Depth=1 by Block Size"
seqrand = "Seq"
wmix = 100
threads = 1
runtime = shorttime
iops_log = False
iodepth = 1
AddTestBSShmoo()
if not fastPrecond:
AddTest('Random Preconditioning', 'Preparation', '', '', '', '', '', '',
'', lambda o: {}) # Only for display on-screen
AddTest('Random Preconditioning', 'Rand Pass 1', '100', '4096', '1',
'256', False, '', 'Random Preconditioning',
lambda o: {RandomConditioning()})
AddTest('Random Preconditioning', 'Rand Pass 2', '100', '4096', '1',
'256', False, '', 'Random Preconditioning',
lambda o: {RandomConditioning()})
testname = "Sustained 4KB Random Read Tests by Number of Threads"
seqrand = "Rand"
wmix = 0
bs = 4096
runtime = shorttime
iops_log = False
iodepth = 1
AddTestThreadsShmoo()
testname = "Sustained 4KB Random mixed 30% Write Tests by Threads"
seqrand = "Rand"
wmix = 30
bs = 4096
runtime = shorttime
iops_log = False
iodepth = 1
AddTestThreadsShmoo()
testname = "Sustained Perf Stability Test - 4KB Random 30% Write"
AddTest(testname, 'Preparation', '', '', '', '', '', '', '',
lambda o: {AppendFile(o['name'], testcsv)})
seqrand = "Rand"
wmix = 30
bs = 4096
runtime = longtime
iops_log = True
iodepth = 1
threads = 256
DoAddTest(testname, seqrand, wmix, bs, threads, iodepth, testname,
iops_log, runtime)
testname = "Sustained 4KB Random Write Tests by Number of Threads"
seqrand = "Rand"
wmix = 100
bs = 4096
runtime = shorttime
iops_log = False
iodepth = 1
AddTestThreadsShmoo()
testname = "Sustained Multi-Threaded Random Write Tests by Block Size"
seqrand = "Rand"
wmix = 100
runtime = shorttime
iops_log = False
iodepth = 16
threads = 16
AddTestBSShmoo()
def RunAllTests():
"""Iterate through the OC work queue and run each job, show progress."""
global ret_iops, ret_mbps, ret_lat, fioVerString
# Determine some column widths to make format specifiers
maxlen = 0
for o in oc:
maxlen = max(maxlen, len(o['desc']))
descfmt = "{0:" + str(maxlen) + "}"
resfmt = "{1: >8} {2: >9} {3: >8}"
fmtstr = descfmt + " " + resfmt
def JobWrapper(**kwargs):
"""Thread wrapper to store return values for parent to read later."""
global ret_iops, ret_mbps, ret_lat, oc
# Until we know it's succeeded, we're in error
ret_iops = "ERROR"
ret_mbps = "ERROR"
ret_lat = "ERROR"
try:
val = o['cmdline'](o)
ret_iops = list(val)[0][0]
ret_mbps = list(val)[0][1]
ret_lat = list(val)[0][2]
except FIOError as e:
print("\nFIO Error!\n" + e.cmdline + "\nSTDOUT:\n" + e.stdout)
print("STDERR:\n" + e.stderr)
raise
except:
print("\nUnexpected error while running FIO job.")
raise
print("*" * len(fmtstr.format("", "", "", "")))
print("ezFio test parameters:\n")
fmtinfo = "{0: >20}: {1}"
print(fmtinfo.format("Drive", str(physDriveTxt)))
print(fmtinfo.format("Model", str(model)))
print(fmtinfo.format("Serial", str(serial)))
print(fmtinfo.format("AvailCapacity", str(physDriveGiB) + " GiB"))
print(fmtinfo.format("TestedCapacity", str(testcapacity) + " GiB"))
print(fmtinfo.format("TestedOffset", str(testoffset) + " GiB"))
print(fmtinfo.format("CPU", str(cpu)))
print(fmtinfo.format("Cores", str(cpuCores)))
print(fmtinfo.format("Frequency", str(cpuFreqMHz)))
print(fmtinfo.format("FIO Version", str(fioVerString)))
print("\n")
print(fmtstr.format("Test Description", "BW(MB/s)", "IOPS", "Lat(us)"))
print(fmtstr.format("-"*maxlen, "-"*8, "-"*9, "-"*8))
for o in oc:
if o['desc'] == "":
# This is a header-printing job, don't thread out
print("\n" + fmtstr.format("---"+o['name']+"---", "", "", ""))
sys.stdout.flush()
o['cmdline'](o)
else:
# This is a real test job, run it in a thread
if sys.stdout.isatty():
print(fmtstr.format(o['desc'], "Runtime", "00:00:00", "..."), end='\r')
else:
print(descfmt.format(o['desc']), end='')
sys.stdout.flush()
starttime = datetime.datetime.now()
job = threading.Thread(target=JobWrapper, kwargs=(o))
job.start()
while job.is_alive():
now = datetime.datetime.now()
delta = now - starttime
dstr = "{0:02}:{1:02}:{2:02}".format(int(delta.seconds / 3600),
int((delta.seconds % 3600)/60),
int(delta.seconds % 60))
if sys.stdout.isatty():
# Blink runtime to make it obvious stuff is happening
if (delta.seconds % 2) != 0:
print(fmtstr.format(o['desc'], "Runtime", dstr, "..."), end='\r')
else:
print(fmtstr.format(o['desc'], "", dstr, ""), end='\r')
sys.stdout.flush()
time.sleep(1)
job.join()
# Pretty-print with grouping, if possible
try:
ret_iops = "{:,}".format(int(ret_iops))
ret_mbps = "{:0,.2f}".format(float(ret_mbps))
except:
pass
if sys.stdout.isatty():
print(fmtstr.format(o['desc'], ret_mbps, ret_iops, ret_lat))
else:
print(" " + resfmt.format(o['desc'],
ret_mbps, ret_iops, ret_lat))
sys.stdout.flush()
# On any error abort the test, all future results could be invalid
if ret_mbps == "ERROR":
print("ERROR DETECTED, ABORTING TEST RUN.")
sys.exit(2)
def GenerateResultODS():
"""Builds a new ODS spreadsheet w/graphs from generated test CSV files."""
def GetContentXMLFromODS(odssrc):
"""Extract content.xml from an ODS file, where the sheet lives."""
ziparchive = zipfile.ZipFile(odssrc)
content = ziparchive.read("content.xml").decode('UTF-8')
content = content.replace("\n", "")
return content
def CSVtoXMLSheet(sheetName, csvName):
"""Replace a named sheet with the contents of a CSV file."""
newt = '<table:table table:name='
newt += '"' + sheetName + '"' + ' table:style-name="ta1" > '
newt += '<table:table-column table:style-name="co1" '
newt += 'table:default-cell-style-name="Default"/>'
# Insert the rows, one entry at a time
with open(csvName, 'r') as f:
for line in f:
line = line.rstrip()
newt += '<table:table-row table:style-name="ro1">'
for val in line.split(','):
try:
cell = '<table:table-cell office:value-type="float" '
cell += 'office:value="' + str(float(val))
cell += '"><text:p>'
cell += str(float(val)) + \
'</text:p></table:table-cell>'
except: # It's not a float, so let's call it a string
cell = '<table:table-cell office:value-type="string" '
cell += '><text:p>'
cell += str(val) + '</text:p></table:table-cell>'
newt += cell
newt += '</table:table-row>'
f.close()
# Close the tags
newt += '</table:table>'
return newt
def ReplaceSheetWithCSV_regex(sheetName, csvName, xmltext):
"""Replace a named sheet with the contents of a CSV file."""
newt = CSVtoXMLSheet(sheetName, csvName)
# Replace the XML using lazy string matching
searchstr = '<table:table table:name="' + sheetName
searchstr += '".*?</table:table>'
return re.sub(searchstr, newt, xmltext, flags=re.DOTALL)
def AppendSheetFromCSV(sheetName, csvName, xmltext):
"""Add a new sheet to the XML from the CSV file."""
newt = CSVtoXMLSheet(sheetName, csvName)
# Replace the XML using lazy string matching
searchstr = '<table:named-expressions/>'
return re.sub(searchstr, newt + searchstr, xmltext, flags=re.DOTALL)
def UpdateContentXMLToODS_text(odssrc, odsdest, xmltext):
"""Replace content.xml in an ODS w/an in-memory copy and write new.
Replace content.xml in an ODS file with in-memory, modified copy and
write new ODS. Can't just copy source.zip and replace one file, the
output ZIP file is not correct in many cases (opens in Excel but fails
ODF validation and LibreOffice fails to load under Windows).
Also strips out any binary versions of objects and the thumbnail,
since they are no longer valid once we've changed the data in the
sheet.
"""
if os.path.exists(odsdest):
os.unlink(odsdest)
# Windows ZipArchive will not use "Store" even with "no compression"
# so we need to have a mimetype.zip file encoded below to match spec:
mimetypezip = """
UEsDBBQAAAgAAICyN0+FbDmKLgAAAC4AAAAIAAAAbWltZXR5cGVhcHBsaWNhdGlvbi92bmQub2Fz
aXMub3BlbmRvY3VtZW50LnNwcmVhZHNoZWV0UEsBAhQAFAAACAAAgLI3T4VsOYouAAAALgAAAAgA
AAAAAAAAAAAAAAAAAAAAAG1pbWV0eXBlUEsFBgAAAAABAAEANgAAAFQAAAAAAA==
"""
zipbytes = base64.b64decode(mimetypezip)
with open(odsdest, 'wb') as f:
f.write(zipbytes)
zasrc = zipfile.ZipFile(odssrc, 'r')
zadst = zipfile.ZipFile(odsdest, 'a', zipfile.ZIP_DEFLATED)
for entry in zasrc.namelist():
if entry == "mimetype":
continue
elif entry.endswith('/') or entry.endswith('\\'):
continue
elif entry == "content.xml":
zadst.writestr("content.xml", xmltext)
elif ("Object" in entry) and ("content.xml" in entry):
# Remove <table:table table:name="local-table"> table
rdbytes = zasrc.read(entry).decode('UTF-8')
outbytes = re.sub(
'<table:table table:name="local-table">.*</table:table>', "", rdbytes, flags=re.DOTALL)
zadst.writestr(entry, outbytes)
elif entry == "META-INF/manifest.xml":
# Remove ObjectReplacements from the list
rdbytes = zasrc.read(entry).decode('UTF-8')
outbytes = ""
lines = rdbytes.split("\n")
for line in lines:
if not (("ObjectReplacement" in line) or ("Thumbnails" in line)):
outbytes = outbytes + line + "\n"
zadst.writestr(entry, outbytes)
elif ("Thumbnails" in entry) or ("ObjectReplacement" in entry):
# Skip binary versions
continue
else:
rdbytes = zasrc.read(entry)
zadst.writestr(entry, rdbytes)
zasrc.close()
zadst.close()
def CombineExceedanceCSV(qdList, testType, testWpct, testBS, testIOdepth, suffix):
"""Merge multiple exceedance CSVs into a single output file.
Column merge multiple CSV files into a single one. Complicated by
the fact that the number of columns in each may vary.
"""
csv = details + "/ezfio_exceedance_"+suffix+".csv"
if os.path.exists(csv):
os.unlink(csv)
CSVInfoHeader(csv)
line1 = ""
line2 = ""
for qd in qdList:
line1 = line1 + \
("QD%d Read Exceedance,,QD%d Write Exceedance,,," % (qd, qd))
line2 = line2 + "rdusec,rdpct,wrusec,wrpct,,"
AppendFile(line1, csv)
AppendFile(line2, csv)
files = []
for qd in qdList:
try:
r = open(TestName(testType, testWpct, testBS,
qd, testIOdepth) + ".exc.read.csv")
except:
r = None
try:
w = open(TestName(testType, testWpct, testBS,
qd, testIOdepth) + ".exc.write.csv")
except:
w = None
files.append([r, w])
while True:
all_empty = True
l = ""
for fset in files:
if fset[0] is None:
a = ""
else:
a = fset[0].readline().strip()
if fset[1] is None:
b = ""
else:
b = fset[1].readline().strip()
l += (a + ",", ",,")[not a]
l += (b + ",", ",,")[not b]
l += ','
all_empty = all_empty and (not a) and (not b)
AppendFile(l, csv)
if all_empty:
break
return csv
global odssrc, timeseriescsv, testcsv, physDrive, testcapacity, model, testoffset
global serial, uname, fioVerString, odsdest, timeseriesclatcsv, timeseriesslatcsv
xmlsrc = GetContentXMLFromODS(odssrc)
xmlsrc = ReplaceSheetWithCSV_regex("Timeseries", timeseriescsv, xmlsrc)
xmlsrc = ReplaceSheetWithCSV_regex(
"TimeseriesCLAT", timeseriesclatcsv, xmlsrc)
xmlsrc = ReplaceSheetWithCSV_regex(
"TimeseriesSLAT", timeseriesslatcsv, xmlsrc)
xmlsrc = ReplaceSheetWithCSV_regex("Tests", testcsv, xmlsrc)
# Potentially add exceedance data if we have it
if fioOutputFormat == "json+":
csv = CombineExceedanceCSV(
[1, 4, 16, 32], "Rand", 30, 4096, 1, "exceedance30")
xmlsrc = ReplaceSheetWithCSV_regex("Exceedance", csv, xmlsrc)
# Remove draw:image references to deleted binary previews
xmlsrc = re.sub("<draw:image.*?/>", "", xmlsrc, flags=re.DOTALL)
# OpenOffice doesn't recalculate these cells on load?!
xmlsrc = xmlsrc.replace("_DRIVE", str(physDrive))
xmlsrc = xmlsrc.replace("_TESTCAP", str(testcapacity))
xmlsrc = xmlsrc.replace("_MODEL", str(model))
xmlsrc = xmlsrc.replace("_SERIAL", str(serial))
xmlsrc = xmlsrc.replace("_OS", str(uname))
xmlsrc = xmlsrc.replace("_FIO", str(fioVerString))
UpdateContentXMLToODS_text(odssrc, odsdest, xmlsrc)
fio = "" # FIO executable
fioVerString = "" # FIO self-reported version
fioOutputFormat = "json" # Can we make exceedance charts using JSON+ output?
cluster = False # Running multiple jobs in a cluster using fio --server
physDrive = "" # Device path to test
physDriveTxt = "" # Unadulterated drive line
physDriveDict = OrderedDict() # Device path to test
utilization = "" # Device utilization % 1..100
offset = "" # Test region offset % 0..99
yes = False # Skip user verification
quickie = False # Flag to indicate short runs, only for ezfio debugging!
nullio = False # Flag to do no IO at all, use nullio instead
fastPrecond = False # Only do 1x sequential write for preconditioning (no random)
verify = False # Use built-in FIO data verification
readOnly = False # Only run read-only tests
cpu = "" # CPU model
cpuCores = "" # # of cores (including virtual)
cpuFreqMHz = "" # "Nominal" speed of CPU
uname = "" # Kernel name/info
physDriveGiB = "" # Disk size in GiB (2^n)
physDriveGB = "" # Disk size in GB (10^n)
physDriveBase = "" # Basename (ex: nvme0n1)
testcapacity = "" # Total GiB to test
testoffset = "" # test region offset in GiB
model = "" # Drive model name
serial = "" # Drive serial number
ds = "" # Datestamp to appent to files/directories to uniquify
pwd = "" # $CWD
details = "" # Test details directory
testcsv = "" # Intermediate test output CSV file
timeseriescsv = "" # Intermediate iostat output CSV file
timeseriesclatcsv = "" # Intermediate iostat output CSV file
timeseriesslatcsv = "" # Intermediate iostat output CSV file
exceedancecsv = "" # Intermediate exceedance output CSV
odssrc = "" # Original ODS spreadsheet file
odsdest = "" # Generated results ODS spreadsheet file
oc = [] # The list of tests to run
aioNeeded = 4096 # Minimum AIO kernel setting to run all tests
# These globals are used to return the output results of the test thread
# Required because it's difficult to pass back values from a threading.().
ret_iops = 0 # Last test IOPS
ret_mbps = 0 # Last test MBPs
ret_lat = 0 # Last test in microseconds
if __name__ == "__main__":
ParseArgs()
CheckAdmin()
fio = FindFIO()
CheckFIOVersion()
CheckAIOLimits()
CollectSystemInfo()
CollectDriveInfo()
VerifyContinue()
SetupFiles()
DefineTests()
RunAllTests()
GenerateResultODS()
print("\nCOMPLETED!\nSpreadsheet file: " + odsdest)
gitextract_6z78mt2r/ ├── COPYING ├── README ├── combine.py ├── ezfio.bat ├── ezfio.ps1 ├── ezfio.py └── original.ods
SYMBOL INDEX (24 symbols across 2 files)
FILE: combine.py
function ParseArgs (line 40) | def ParseArgs():
function GenerateCombinedODS (line 64) | def GenerateCombinedODS():
FILE: ezfio.py
function AppendFile (line 51) | def AppendFile(text, filename):
function Run (line 58) | def Run(cmd):
function CheckAdmin (line 68) | def CheckAdmin():
function FindFIO (line 77) | def FindFIO():
function CheckFIOVersion (line 96) | def CheckFIOVersion():
function CheckAIOLimits (line 123) | def CheckAIOLimits():
function ParseArgs (line 148) | def ParseArgs():
function grep (line 261) | def grep(inlist, regex):
function CollectSystemInfo (line 270) | def CollectSystemInfo():
function VerifyContinue (line 306) | def VerifyContinue():
function CollectDriveInfo (line 320) | def CollectDriveInfo():
function CSVInfoHeader (line 373) | def CSVInfoHeader(f):
function SetupFiles (line 399) | def SetupFiles():
class FIOError (line 460) | class FIOError(Exception):
method __init__ (line 470) | def __init__(self, cmdline, code, stderr, stdout):
function TestName (line 478) | def TestName(seqrand, wmix, bs, threads, iodepth):
function SequentialConditioning (line 487) | def SequentialConditioning():
function RandomConditioning (line 548) | def RandomConditioning():
function RunTest (line 614) | def RunTest(iops_log, seqrand, wmix, bs, threads, iodepth, runtime):
function DefineTests (line 947) | def DefineTests():
function RunAllTests (line 1119) | def RunAllTests():
function GenerateResultODS (line 1218) | def GenerateResultODS():
Condensed preview — 7 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (164K chars).
[
{
"path": "COPYING",
"chars": 18093,
"preview": "\n GNU GENERAL PUBLIC LICENSE\n Version 2, June 1991\n\n Copyright (C) 1989, 1991 F"
},
{
"path": "README",
"chars": 3716,
"preview": "ezFIO V1.0\r\n(C) Copyright 2015-18 HGST\r\nearle.philhower.iii@hgst.com\r\n\r\n------------------------------------------------"
},
{
"path": "combine.py",
"chars": 10723,
"preview": "#!/usr/bin/python\n\n# ezfio 1.0\n# earle.philhower.iii@hgst.com\n#\n# ------------------------------------------------------"
},
{
"path": "ezfio.bat",
"chars": 564,
"preview": "@echo off\r\nREM Start EZFIO.PS1 in an elevated PowerShell interpreter.\r\nREM Here be dragons.\r\nREM First start a standard "
},
{
"path": "ezfio.ps1",
"chars": 63066,
"preview": "# ezfio 1.0\r\n# earle.philhower.iii@hgst.com\r\n#\r\n# ----------------------------------------------------------------------"
},
{
"path": "ezfio.py",
"chars": 60043,
"preview": "#!/usr/bin/python3\n\n\"\"\"ezfio 1.9\nearlephilhower@yahoo.com\n\n-------------------------------------------------------------"
}
]
// ... and 1 more files (download for full content)
About this extraction
This page contains the full source code of the earlephilhower/ezfio GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 7 files (152.5 KB), approximately 40.3k tokens, and a symbol index with 24 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.
Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.