Repository: oliverlloyd/jmeter-ec2 Branch: master Commit: ac52e7b228e4 Files: 11 Total size: 153.6 KB

Directory structure:
gitextract_jj01_dno/
├── .gitignore
├── LICENSE
├── README.md
├── Vagrantfile
├── jmeter
├── jmeter-ec2.properties
├── jmeter-ec2.properties.vagrant
├── jmeter-ec2.sh
├── jmeter.properties
├── system.properties
└── verify.sh

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
.DS_Store

================================================
FILE: LICENSE
================================================
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007

Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

Preamble

The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. 
To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. 
States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. 
An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. 
However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. 
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. 
b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. 
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. 
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. 
But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. 
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. 
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. 
Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. 
The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. 
"Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. 
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. 
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. 
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: Copyright (C) This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. 
The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see . The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read . ================================================ FILE: README.md ================================================ # JMeter ec2 Script This shell script lets you run your local JMeter jmx files either using Amazon's EC2 service or against a simple, comma-delimited list of hosts that you provide. Summary results are printed to the console as the script runs, and all result data is then downloaded and concatenated into one file when the test completes, ready for more detailed analysis offline. By default it will launch the required hardware using Amazon EC2. Using AWS makes it much easier and cheaper to scale your test over multiple slaves, but if you need to you can also pass in a list of pre-prepared hostnames and the test load will be distributed over these instead. Using your own servers can be useful when the target server to be tested cannot be easily accessed from a location external to your test network, or when you want to repeat a test iteratively.
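The host-list alternative works by splitting a comma-delimited string into one entry per host. A minimal sketch of that splitting, using hypothetical hostnames (the same `tr`-based approach appears in jmeter-ec2.sh):

```shell
#!/bin/bash
# REMOTE_HOSTS as it might appear in jmeter-ec2.properties (hostnames are made up)
REMOTE_HOSTS="host1.example.com,host2.example.com"

# Split the comma-delimited list into an array, one element per host
hosts=(`echo $REMOTE_HOSTS | tr "," "\n" | tr -d ' '`)

echo "test load will be distributed over ${#hosts[@]} host(s)"
```

When `REMOTE_HOSTS` is set, the script uses every host in the list and ignores the instance count passed on the command line.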
The script does not use JMeter's Distributed Mode, so you do not need to adjust the test parameters to ensure even distribution of the load; the script will automatically adjust the thread counts based on how many hosts are in use. As the test is running it will collate the results from each host in real time and display the output of the Generate Summary Results listener on the screen (showing both results host by host and an aggregated view for the entire run). Once execution is complete it will download each host's jtl file and collate them all together to give a single jtl file that can be viewed using the usual JMeter listeners. (Screenshots: jmeter-ec2-screenshot-1, jmeter-ec2-screenshot-2.) ## Getting Started ### Prerequisites * An Amazon ec2 account is required (unless valid hosts are specified using the REMOTE_HOSTS property). * [AWS CLI](https://aws.amazon.com/cli/) must be installed. See the [userguide](http://docs.aws.amazon.com/cli/latest/userguide/) for setup information. * Testplans must contain a [Generate Summary Results Listener](https://jmeter.apache.org/usermanual/component_reference.html#Generate_Summary_Results). No other listeners are required. ### Setup 1. Create a project directory on your machine. For example: `~/Documents/WHERETOPUTMYSTUFF/`. This is where you store your testplan and any associated files. 2. Download or clone all files from this repo into a suitable directory (e.g. `/usr/local/`). 3. Extract the file `example-project.zip` into `~/Documents/WHERETOPUTMYSTUFF/`. You now have a template / example directory structure for your project. 4. Edit the file jmeter-ec2.properties as below: `INSTANCE_SECURITYGROUP_IDS="sg-123456"` The ID of your security group (or groups) created under your Amazon account. It must allow inbound access on port 22 from the local machine running this script. `PEM_FILE="euwest1.pem"` Your Amazon key file. `PEM_PATH="/Users/oliver/.ec2"` The directory (not the full filepath) where the Amazon PEM file is located. **Important**: No trailing '/'! 5.
Copy your JMeter jmx file into the /jmx directory under your root project directory (i.e. myproject) and rename it to the same name as the directory. For example, if you created the directory `/testing/myproject` then you should name the jmx file `myproject.jmx`. 6. Copy any data files that are required by your testplan to the /data sub directory. 7. Copy any jar files that are required by your testplan to the /plugins sub directory. 8. Open a terminal window and cd to the project directory you created (e.g. `cd /home/username/someproject`). 9. Type: `count="1" ./path/to/jmeter-ec2.sh` Where '1' is the number of instances you wish to spread the test over. If you have provided a list of hosts using `REMOTE_HOSTS` then this value is ignored and all hosts in the list will be used. ### Advanced Usage `percent=20 count="3" terminate="TRUE" setup="TRUE" env="UAT" release="3.23" comment="my notes" ./jmeter-ec2.sh` [count] - optional, default=1 [percent] - optional, default=100. Should be in the format 1-100 where 20 => 20% of threads will be run by the script. [setup] - optional, default=TRUE. Set to "FALSE" if a pre-defined host is being used that has already been set up (had files copied to it, jmeter installed, etc.) [terminate] - optional, default=TRUE. Set to "FALSE" if the instances created should not be terminated. [price] - optional, if specified spot instances will be requested at this price ### Advanced Properties `AMI_ID="[A Linux-based AMI]"` Recommended AMIs are provided in the jmeter-ec2.properties file. Both Java and JMeter are installed by the script dynamically if not present. `INSTANCE_TYPE="m3.medium"` `micro` type instances do work and are good for developing but they are not recommended for important test runs. Performance can be slow and you risk affecting test results. Note: Older generation instance types require a different type of AMI (paravirtual vs. hvm). `USER="ubuntu"` Different AMIs start with different basic users.
This value could be 'ec2-user', 'root', 'admin' etc. `SUBNET_ID=""` The id of the subnet that the instance will belong to. So long as a [default VPC](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html) exists for your account you do not need to set this. `RUNNINGTOTAL_INTERVAL="3"` How often running totals are printed to the screen. Based on a count of the summariser.interval property. (If the Generate Summary Results listener is set to wait 10 seconds then every 30 (3 * 10) seconds an extra row showing an aggregated summary will be printed.) The summariser.interval property in the standard jmeter.properties file defaults to 180 seconds - in the file included with this project it is set to 15 seconds, so by default an aggregated summary is printed every 45 seconds. `REMOTE_HOSTS=""` If you do not wish to use ec2 you can provide a comma-separated list of pre-defined hosts. `REMOTE_PORT=""` Specify the port sshd is running on for `REMOTE_HOSTS` or ec2. Defaults to 22. `ELASTIC_IPS=""` If using ec2, then you can also provide a comma-separated list of pre-defined elastic IPs. This is useful if your test needs to pass through a firewall. `JMETER_VERSION="apache-jmeter-2.13"` Allows the version to be chosen dynamically. ### Limitations: * JMeter V3 is not tested with this script. * There are [limits imposed by Amazon](http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ec2) on how many instances can be run in a new account - the default is 20 instances as of Oct 2011. * You cannot have jmeter variables in the testplan field `Thread Count`; this value must be numeric. * Testplan file paths cannot be dynamic; any jmeter variables in the filepath will be ignored. ### Why am I seeing `copying install.sh to 1 server(s)...lost connection`? This happens when it is not possible for the script to connect over port 22 to the instance that was created by AWS. There are a number of reasons why this can happen.
**First, can you telnet to the instance?** Run the script to create a box but use: `count="1" terminate="FALSE" ./path/to/jmeter-ec2.sh` Then, take the hostname of the instance just created and try: `telnet thehostname.com 22` If you see something like: > Trying thehostname.com... Connected to thehostname.com Escape character is '^]'. SSH-2.0-OpenSSH_6.6p1 Ubuntu-2ubuntu1 Then you **DO** have network access. If you see: > Trying 123.456.789.123... You **DO NOT** have network access. #### Things to try if you **DO** have network access **File permissions on your PEM file** Your .pem files [need to be secure](http://stackoverflow.com/questions/1454629/aws-ssh-access-permission-denied-publickey-issue). Use `chmod 600 yourfile.pem`. **The `USER` property is not correct** Different AMIs and OSs expect you to log in using different users. Make sure this value is set correctly. **Install the latest version of the ec2-api-tools** Check [here](http://aws.amazon.com/developertools/351/) and make sure you have the latest version installed. Use `$ ec2-version` to check. #### Things to try if you **DO NOT** have network access **Your Security Group is not configured properly** The `INSTANCE_SECURITYGROUP_IDS` property needs to reference the exact ids of one or more security groups that exist in the correct region and that contain a rule allowing inbound traffic on port 22 from the machine you are running the script from, or from everywhere if you are running the script remotely or just want to rule this out (be sure to reduce this scope later once you've got things working). **Check local network settings** Often port 22 can be blocked by over-zealous local network security settings. You often see this with poor quality wifi services, the type where you have to fill out a marketing form to get access. You can sometimes get around this by using a VPN, but often they block this too and then your only choice is to put down your flat white and leave.
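If telnet is not installed, the same reachability check can be scripted. This is a minimal sketch (not part of jmeter-ec2.sh) using bash's `/dev/tcp` pseudo-device and coreutils `timeout`; `thehostname.com` is a placeholder as above:

```shell
#!/bin/bash
# Returns 0 if a TCP connection to host:port succeeds within 5 seconds.
# Relies on bash's /dev/tcp redirection, so it must be run with bash, not sh.
check_port() {
  local host="$1" port="$2"
  timeout 5 bash -c "</dev/tcp/${host}/${port}" 2>/dev/null
}

if check_port "thehostname.com" 22 ; then
  echo "port 22 reachable: check PEM file permissions and the USER property next"
else
  echo "port 22 NOT reachable: check your security group and local network"
fi
```

A zero exit status corresponds to the "Connected to" case above; a non-zero status corresponds to the hang at "Trying ...".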
## Spot instances By default this shell script uses on-demand instances. You can use spot instances by requesting an hourly `price` for your EC2 instances. ### Usage: `count="3" price=0.0035 ./jmeter-ec2.sh` > Spot Instances allow you to name your own price for Amazon EC2 computing capacity. You simply bid on spare Amazon EC2 > instances and run them whenever your bid exceeds the current Spot Price, which varies in real-time based on supply > and demand. The Spot Instance pricing model complements the On-Demand and Reserved Instance pricing models, > providing potentially the most cost-effective option for obtaining compute capacity, depending on your application. Read more at http://aws.amazon.com/ec2/purchasing-options/spot-instances/ [price] - optional, if specified spot instances will be requested at this price [count] - optional, default=1 ### Notes If your price is too low spot requests will fail with the status `price-too-low`. To get the price history by instance type, use the `ec2-describe-spot-price-history` command (from the legacy EC2 API tools; the [AWS CLI](http://aws.amazon.com/cli/) equivalent is `aws ec2 describe-spot-price-history`). For example, to get the current price for a t1.micro instance running Linux:

```
ec2-describe-spot-price-history -H --instance-type t1.micro -d Linux/UNIX -s `date +"%Y-%m-%dT%H:%M:%SZ"`
```

## Running locally with Vagrant [Vagrant](http://vagrantup.com) allows you to test your jmeter-ec2 scripts locally before pushing them to ec2. ### Prerequisites * [Vagrant](http://vagrantup.com) ### Usage: Use `jmeter-ec2.properties.vagrant` as a template for local provisioning. This file is set up to use Vagrant's ssh key, ports, etc.
```
# backup your properties files just in case
cp jmeter-ec2.properties jmeter-ec2.properties.bak
# use the vagrant properties file
cp jmeter-ec2.properties.vagrant jmeter-ec2.properties
# start vm and provision defaultjre
vagrant up
# run your project
project="myproject" setup="TRUE" ./jmeter-ec2.sh
```

### Note * You may need to edit the `Vagrantfile` to meet any specific networking needs. See Vagrant's [networking documentation](http://docs.vagrantup.com/v2/getting-started/networking.html) for details. ## General Notes: ### AWS Key Pairs To find your key pairs go to your ec2 dashboard -> Networking and Security -> Key Pairs. Make sure this key pair is in the REGION you also set in the properties file. ### AWS Security Groups To create or check your EC2 security groups go to your ec2 dashboard -> security groups. Create a security group (e.g. called jmeter) that allows inbound access on port 22 from the IP of the machine where you are running the script. ### Using AWS It is not uncommon for an instance to fail to start; this is part of using the Cloud, and for that reason this script will dynamically respond to this event by adjusting the number of instances that are used for the test. For example, if you request 10 instances but 1 fails then the test will be run using only 9 machines. This should not be a problem as the load will still be evenly spread and the end results (the throughput) should be identical. In a similar fashion, should Amazon not provide all the instances you asked for (each account is limited) then the script will also adjust to this scenario. ### Using JMeter Any testplan should always have suitable pacing to regulate throughput. This script distributes load based on threads; it is assumed that these threads are set up with suitable timers. If not, adding more hardware could create unpredictable results.
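The even-split behaviour described above (the `percent` option plus the adjustment when fewer instances start than requested) comes down to simple integer arithmetic. A sketch with made-up numbers, not the script's exact code:

```shell
#!/bin/bash
# Illustration only: how a testplan's Thread Count is spread over live hosts.
total_threads=100   # Thread Count in the jmx (must be numeric, see Limitations)
percent=20          # percent=20 => only 20% of the defined threads are run
started=9           # hosts actually obtained (e.g. 1 of 10 requested failed)

scaled=$(( total_threads * percent / 100 ))  # 20 threads across the whole test
per_host=$(( scaled / started ))             # integer division, rounding ignored here

echo "each of the $started host(s) runs $per_host thread(s)"
```

Because this sketch uses plain integer division, a few threads can be lost to rounding; the point is only that the total load stays roughly constant however many hosts survive.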
## License JMeter-ec2 is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. JMeter-ec2 is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with JMeter-ec2. If not, see . The source repository is at: [https://github.com/oliverlloyd/jmeter-ec2](https://github.com/oliverlloyd/jmeter-ec2) ================================================ FILE: Vagrantfile ================================================ # -*- mode: ruby -*- # vi: set ft=ruby : Vagrant.configure("2") do |config| config.vm.box = "quantal64" config.vm.box_url = "https://github.com/downloads/roderik/VagrantQuantal64Box/quantal64.box" # Forward jmeter's command port for shutdown. config.vm.network :forwarded_port, guest: 4445, host: 4445 # Create a private network, which allows host-only access to the machine # using a specific IP. # config.vm.network :private_network, ip: "192.168.33.10" # Create a public network, which generally matches a bridged network. # Bridged networks make the machine appear as another physical device on # your network. # config.vm.network :public_network # jmeter-ec2.sh requires java installed when using REMOTE_HOSTS # A single inline string is used here: assigning shell.inline twice would # overwrite the first command, so apt-get update would never run. config.vm.provision :shell do |shell| shell.inline = "sudo apt-get update && sudo apt-get -y install default-jre" end end ================================================ FILE: jmeter ================================================ #! /bin/sh ## Licensed to the Apache Software Foundation (ASF) under one or more ## contributor license agreements.
See the NOTICE file distributed with ## this work for additional information regarding copyright ownership. ## The ASF licenses this file to You under the Apache License, Version 2.0 ## (the "License"); you may not use this file except in compliance with ## the License. You may obtain a copy of the License at ## ## http://www.apache.org/licenses/LICENSE-2.0 ## ## Unless required by applicable law or agreed to in writing, software ## distributed under the License is distributed on an "AS IS" BASIS, ## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. ## See the License for the specific language governing permissions and ## limitations under the License. ## ============================================== ## Environment variables: ## JVM_ARGS - optional java args, e.g. -Dprop=val ## ## ============================================== # The following should be reasonably good values for most tests running # on Sun JVMs. Following is the analysis on which it is based. If it's total # gibberish to you, please study my article at # http://www.atg.com/portal/myatg/developer?paf_dm=full&paf_gear_id=1100010&detailArticle=true&id=9606 # # JMeter objects can generally be grouped into three life-length groups: # # - Per-sample objects (results, DOMs,...). An awful lot of those. # Life length of milliseconds to a few seconds. # # - Per-run objects (threads, listener data structures,...). Not that many # of those unless we use the table or tree listeners on heavy runs. # Life length of minutes to several hours, from creation to start of next run. # # - Per-work-session objects (test plans, GUIs,...). # Life length: for the life of the JVM. 
# This is the base heap size -- you may increase or decrease it to fit your # system's memory availability: HEAP="-Xms2048m -Xmx2048m" # There's an awful lot of per-sample objects allocated during test run, so we # need a large eden to avoid too frequent scavenges -- you'll need to tune this # down proportionally if you reduce the HEAP values above: NEW="-XX:NewSize=256m -XX:MaxNewSize=256m" # This ratio and target have been proven OK in tests with a specially high # amount of per-sample objects (the HtmlParserHTMLParser tests): # SURVIVOR="-XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=50%" # Think about it: trying to keep per-run objects in tenuring definitely # represents a cost, but where's the benefit? They won't disappear before # the test is over, and at that point we will no longer care about performance. # # So we will have JMeter do an explicit Full GC before starting a test run, # but then we won't make any effort (or spend any CPU) to keep objects # in tenuring longer than the life of per-sample objects -- which is hopefully # shorter than the period between two scavenges. # TENURING="-XX:MaxTenuringThreshold=2" # This evacuation ratio is OK (see the comments for SURVIVOR) during test # runs -- not so sure about operations that bring a lot of long-lived information into # memory in a short period of time, such as loading tests or listener data files.
# Increase it if you experience OutOfMemory problems during those operations # without having gone through a lot of Full GC-ing just before the OOM: # EVACUATION="-XX:MaxLiveObjectEvacuationRatio=20%" # Prevent the RMI-induced Full GCs from running too frequently -- once every ten minutes # should be more than enough: RMIGC="-Dsun.rmi.dgc.client.gcInterval=600000 -Dsun.rmi.dgc.server.gcInterval=600000" # Increase MaxPermSize if you use a lot of Javascript in your Test Plan : PERM="-XX:PermSize=64m -XX:MaxPermSize=128m" # Finally, some tracing to help in case things go astray: #DEBUG="-verbose:gc -XX:+PrintTenuringDistribution" # Always dump on OOM (does not cost anything unless triggered) DUMP="-XX:+HeapDumpOnOutOfMemoryError" SERVER="-server" ARGS="$SERVER $DUMP $HEAP $NEW $SURVIVOR $TENURING $EVACUATION $RMIGC $PERM" # Added to counter ELB DNS caching - see: http://wiki.apache.org/jmeter/JMeterAndAmazon JVM_ARGS="-Dsun.net.inetaddr.ttl=0" java $ARGS $JVM_ARGS -jar `dirname $0`/ApacheJMeter.jar "$@" ================================================ FILE: jmeter-ec2.properties ================================================ #!/bin/bash # This is a Java-style properties file for the jmeter-ec2 shell script # # It is treated like a normal shell script # # See README.txt for more details about each property # # Pre Installed AMIs (Don't forget to use the right AMI for your region.) # # Region OS AMI id Name # eu-west-1 Ubuntu ami-ae72f2dd Ireland # ap-south-1 ubuntu ami-2312664c Mumbai # ap-southeast-1 Ubuntu ami-aacf1ac9 Singapore # ap-northeast-1 Ubuntu ami-f050439e Tokyo # ap-northeast-2 Ubuntu ami-48a07426 Seoul # ap-southeast-2 Ubuntu ami-73735110 Sydney # eu-central-1 Ubuntu ami-7c759413 Frankfurt # sa-east-1 Ubuntu ami-6f038c03 Sao Paulo # us-east-1 Ubuntu ami-90d2c4fa N.
Virginia # us-west-1 Ubuntu ami-10473b70 N.California # us-west-2 Ubuntu ami-caf501aa Oregon # AMI_ID="ami-ae72f2dd" # Should match the AMI # IMPORTANT - t2.micro is not recommended for anything beyond development; it works from a shared resource pool that can fluctuate, skewing test results. INSTANCE_TYPE="t2.micro" # Do not change REMOTE_HOME="/home/ubuntu" # The name OR id of *your* security group in *your* Amazon account - the permissions for this group need to give your local machine ssh access. INSTANCE_SECURITYGROUP_IDS="sg-48a8b32c" # The name of the Amazon Keypair that you want to use. It should exist in *your* AWS account for the region you are using. AMAZON_KEYPAIR_NAME="euwest1" # The full name of the pem file you downloaded from your Amazon account. Usually .pem from AWS but you could generate your own and name it what you want. PEM_FILE="euwest1.pem" # The path to your pem file PEM_PATH=$HOME/.ssh # Should match the AMI USER="ubuntu" # Email to be used when tagging instances EMAIL="" # Specify the region you will be working in REGION="eu-west-1" # How often the script prints running totals to the screen (n * summariser.interval seconds) RUNNINGTOTAL_INTERVAL="3" # A list of static IPs that can be assigned to each ec2 host. Ignored if not set ELASTIC_IPS="" # The port number sshd is running on REMOTE_PORT="22" # The version of JMeter to be used. Must be the full name used in the dir structure. Does not work for versions prior to 2.5.1 JMETER_VERSION="apache-jmeter-2.13" # # EC2-VPC Usage # # jmeter-ec2 can configure EC2-VPC instances. You must: # - set SUBNET_ID to the id of the subnet that the instance will belong to # - make sure that the AMI is compatible with EC2-VPC # - enable DNS Resolution in the VPC # - enable DNS hostnames in the VPC # SUBNET_ID="" # The subnet that the instance will belong to.
# Remote hosts # # If this is set then the script will ignore INSTANCE_COUNT passed in at the command line and read in this list of hostnames to run the test over # instead. If it is not set then n number of hosts will be requested from Amazon. # # Must be a comma-separated list, like this: # REMOTE_HOSTS="ec2-46-51-135-180.eu-west-1.compute.amazonaws.com,ec2-176-34-204-10.eu-west-1.compute.amazonaws.com" # or: # REMOTE_HOSTS="myhost.com,anotherhost.com" # or: # REMOTE_HOSTS="blahblah.corp.synergy:2020,10.213.45.6" ================================================ FILE: jmeter-ec2.properties.vagrant ================================================ # #!/bin/bash # This is a Java-style properties file for the jmeter-ec2 shell script # # It is treated like a normal shell script # # See README.txt for more details about each property # LOCAL_HOME="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" # The root for this script - all files should be put here as per the README REMOTE_HOME="/tmp" # This can be left as /tmp - it is a temporary working location PEM_FILE="insecure_private_key" # The full name of the pem file you downloaded from your Amazon account. Usually .pem from AWS but you could generate your own and name it what you want. PEM_PATH="$HOME/.vagrant.d" # The path to your pem file USER="vagrant" # Should match the AMI REMOTE_PORT="2222" # The port number sshd is running on RUNNINGTOTAL_INTERVAL="3" # How often the script prints running totals to the screen (n * summariser.interval seconds) JMETER_VERSION="apache-jmeter-2.7" # The version of JMeter to be used. Must be the full name used in the dir structure. Does not work for versions prior to 2.5.1.
RUNNINGTOTAL_INTERVAL="3" # How often the script prints running totals to the screen (n * summariser.interval seconds) # REMOTE_HOSTS REMOTE_HOSTS="127.0.0.1" ================================================ FILE: jmeter-ec2.sh ================================================ #!/bin/bash # ======================================================================================== # jmeter-ec2.sh # https://github.com/oliverlloyd/jmeter-ec2 # ======================================================================================== # # Copyright 2012 - Oliver Lloyd - GNU GENERAL PUBLIC LICENSE # # JMeter-ec2 is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # JMeter-ec2 is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with JMeter-ec2. If not, see . 
# DATETIME=$(date "+%s") # First make sure we have the required params and if not print out an instructive message #if [ -z "$project" ] ; then if [ "$1" == "-h" ] ; then echo 'usage: project="abc" percent=20 setup="TRUE" terminate="TRUE" count="3" ./jmeter-ec2.sh' echo echo "[project] - required, directory and jmx name" echo "[count] - optional, default=1" echo "[percent] - optional, default=100" echo "[setup] - optional, default='TRUE'" echo "[terminate] - optional, default='TRUE'" echo "[price] - optional" echo exit fi # default to 100 if percent is not specified if [ -z "$percent" ] ; then percent=100 ; fi # default to TRUE if setup is not specified if [ -z "$setup" ] ; then setup="TRUE" ; fi # default to TRUE if terminate is not specified if [ -z "$terminate" ] ; then terminate="TRUE" ; fi # move count to instance_count if [ -z "$count" ] ; then count=1 ; fi instance_count=$count LOCAL_HOME="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" # Execute the jmeter-ec2.properties file, establishing these constants. . $LOCAL_HOME/jmeter-ec2.properties if [ -z "$project" ] ; then project=$(basename `pwd`) fi project_home=`pwd` # If it exists, run a local version of the properties file to allow project customisations. if [ -f "$project_home/jmeter-ec2.properties" ] ; then . $project_home/jmeter-ec2.properties fi cd $EC2_HOME # check project directory exists if [ ! -d "$project_home" ] ; then echo "The directory $project_home does not exist." echo echo "Script exiting."
exit fi # The test has not started yet (used to decide what to do when the script stops) teststarted=0 # do some basic checks to prevent problems later function check_prereqs() { # If there is a custom jmeter.properties, check for: # - jmeter.save.saveservice.output_format=csv # - jmeter.save.saveservice.thread_counts=true if [ -r $LOCAL_HOME/jmeter.properties ] ; then has_csv_output=$(grep -c "^\s*jmeter.save.saveservice.output_format=csv" $LOCAL_HOME/jmeter.properties) has_thread_counts=$(grep -c "^\s*jmeter.save.saveservice.thread_counts=true" $LOCAL_HOME/jmeter.properties) if [ $has_csv_output -eq "0" ] ; then echo "WARN: Please ensure the jmeter.properties file has 'jmeter.save.saveservice.output_format=csv'. Could not find it!" fi if [ $has_thread_counts -eq "0" ] ; then echo "WARN: Please ensure the jmeter.properties file has 'jmeter.save.saveservice.thread_counts=true'. Could not find it!" fi else echo "WARN: Did not see a custom jmeter.properties file. Please ensure the remote hosts have the required settings 'jmeter.save.saveservice.output_format=csv' and 'jmeter.save.saveservice.thread_counts=true'" fi # Check that the test plan exists if [ -f "$project_home/jmx/$project.jmx" ] ; then # Check that the jmx plan has a Generate Summary Results listener (testclass="Summariser") summariser_count=$(grep -c "/dev/null ; then echo "ERROR: awscli does not appear to be installed or accessible from command line (tried aws)." exit fi } function runsetup() { # if REMOTE_HOSTS is not set then no hosts have been specified to run the test on so we will request them from Amazon if [ -z "$REMOTE_HOSTS" ] ; then # check if ELASTIC_IPS is set, if it is we need to make sure we have enough of them if [ !
-z "$ELASTIC_IPS" ] ; then # Not Null - same as -n elasticips=(`echo $ELASTIC_IPS | tr "," "\n" | tr -d ' '`) elasticips_count=${#elasticips[@]} if [ "$instance_count" -gt "$elasticips_count" ] ; then echo echo "You are trying to launch $instance_count instance(s) but you have only specified $elasticips_count elastic IPs." echo "If you wish to use static IPs for each test instance then you must increase the list of values given for ELASTIC_IPS in the properties file." echo echo "Alternatively, if you set the ELASTIC_IPS property to \"\" or do not specify it at all then the test will run without trying to assign static IPs." echo echo "Script exiting..." echo exit fi fi # default to 1 instance if a count is not specified if [ -z "$instance_count" ] ; then instance_count=1; fi echo echo " -------------------------------------------------------------------------------------" echo " jmeter-ec2 Automation Script - Running $project.jmx over $instance_count AWS Instance(s)" echo " -------------------------------------------------------------------------------------" echo echo vpcsettings="" spot_launch_specification="{ \"KeyName\": \"$AMAZON_KEYPAIR_NAME\", \"ImageId\": \"$AMI_ID\", \"InstanceType\": \"$INSTANCE_TYPE\" , \"SecurityGroupIds\": [\"$INSTANCE_SECURITYGROUP_IDS\"] }" # if subnet is specified if [ -n "$SUBNET_ID" ] ; then vpcsettings="--subnet-id $SUBNET_ID --associate-public-ip-address" spot_launch_specification="{ \"KeyName\": \"$AMAZON_KEYPAIR_NAME\", \"ImageId\": \"$AMI_ID\", \"InstanceType\": \"$INSTANCE_TYPE\" , \"SecurityGroupIds\": [\"$INSTANCE_SECURITYGROUP_IDS\"], \"SubnetId\": \"$SUBNET_ID\" }" fi # create the instance(s) and capture the instance id(s) if [ -z "$price" ] ; then echo -n "Requesting $instance_count instance(s)..."
        attempted_instanceids=(`aws ec2 run-instances \
            --key-name "$AMAZON_KEYPAIR_NAME" \
            --instance-type "$INSTANCE_TYPE" \
            --security-group-ids "$INSTANCE_SECURITYGROUP_IDS" \
            --count 1:$instance_count \
            $vpcsettings \
            --image-id $AMI_ID \
            --region $REGION \
            --output text --query 'Instances[].InstanceId'`)
    else
        echo "Using Spot instances..."
        # create the spot instance request(s) and capture the request id(s)
        echo "Requesting $instance_count instance(s)..."
        spot_instance_request_id=(`aws ec2 request-spot-instances \
            --spot-price $price \
            --instance-count $instance_count \
            --region $REGION \
            --launch-specification "$spot_launch_specification" \
            --output text --query 'SpotInstanceRequests[].[SpotInstanceRequestId]'`)
        echo "Spot Instance request submitted, number of requests is: ${#spot_instance_request_id[@]}"
        status_check_count=0
        status_check_limit=60
        spot_request_fulfilled_count=0
        spot_request_error_count=0
        echo "Waiting for Spot instance requests to fulfill (may take a few minutes)"
        while [ "$spot_request_fulfilled_count" -ne "$instance_count" ] && [ $status_check_count -lt $status_check_limit ]
        do
            spot_request_statuses=(`aws ec2 describe-spot-instance-requests \
                --spot-instance-request-ids ${spot_instance_request_id[@]} \
                --region $REGION \
                --output text --query 'SpotInstanceRequests[].[Status.Code]'`)
            spot_request_fulfilled_count=$(echo ${spot_request_statuses[@]} | tr ' ' '\n' | grep -c fulfilled)
            # if all spot requests failed, exit before status_check_limit is reached
            spot_request_errors=(canceled-before-fulfillment capacity-not-available capacity-oversubscribed price-too-low)
            spot_request_error_count=0 # reset each poll so errors are not double-counted across iterations
            for x in "${spot_request_statuses[@]}" ; do
                for i in "${spot_request_errors[@]}" ; do
                    if [[ "$i" = "$x" ]] ; then
                        spot_request_error_count=$(( $spot_request_error_count + 1 ))
                        break
                    fi
                done
            done
            if [[ "$spot_request_error_count" = "${#spot_instance_request_id[@]}" ]] ; then
                echo
                echo "All Spot requests failed, exiting. Statuses were:"
                for x in "${spot_request_statuses[@]}" ; do
                    echo "  $x"
                done
                aws ec2 cancel-spot-instance-requests --spot-instance-request-ids $(printf " %s" "${spot_instance_request_id[@]}") --region $REGION
                exit
            fi
            echo -n "."
            status_check_count=$(( $status_check_count + 1 ))
            sleep 5
        done

        # create a filter for the describe-instances command, to get the instances associated with the spot requests
        spot_id_filter_values=""
        for x in "${spot_instance_request_id[@]}" ; do
            spot_id_filter_values+="${x},"
        done
        # append values to the filter variable and trim the last comma off the end of the string
        spot_id_filter="Name=spot-instance-request-id,Values=${spot_id_filter_values::${#spot_id_filter_values}-1}"
        echo "Will be using this Spot ID filter to find new instances: $spot_id_filter"

        # Instances might not be found immediately, wait a few seconds if necessary
        status_check_count=0
        status_check_limit=60
        instances_ready=false
        while true ; do
            instance_describe=`aws ec2 describe-instances --filters $spot_id_filter --region $REGION`
            if [[ $instance_describe != *"Client.InvalidInstanceID.NotFound"* ]] ; then
                instances_ready=true
            fi
            status_check_count=$(( $status_check_count + 1 ))
            echo "."
            sleep 1
            if [ $instances_ready = true ] || [ $status_check_count -gt $status_check_limit ] ; then
                break
            fi
        done
        attempted_instanceids=(`aws ec2 describe-instances \
            --filters $spot_id_filter \
            --region $REGION \
            --output text \
            --query 'Reservations[].Instances[].InstanceId'`)
    fi

    # check to see if Amazon returned the desired number of instances; a limit is placed restricting this and we need to handle the case where
    # fewer than the expected number are given without failing the test.
    countof_instanceids=${#attempted_instanceids[@]}
    if [ "$countof_instanceids" = 0 ] ; then
        echo
        echo "Amazon did not supply any instances, exiting"
        echo
        exit
    fi
    if [ $countof_instanceids != $instance_count ] ; then
        echo "$countof_instanceids instance(s) were given by Amazon, the test will continue using only these instance(s)."
        instance_count=$countof_instanceids
    else
        echo "success"
    fi
    echo

    # wait for each instance to be fully operational
    status_check_count=0
    status_check_limit=270
    status_check_limit=`echo "$status_check_limit + $countof_instanceids" | bc` # increase wait time based on instance count
    echo "waiting for instance status checks to pass (this can take several minutes)..."
    count_passed=0
    while [ "$count_passed" -ne "$instance_count" ] && [ $status_check_count -lt $status_check_limit ]
    do
        # Update progress bar
        progressBar $countof_instanceids $count_passed
        status_check_count=$(( $status_check_count + 1 ))
        count_passed=(`aws ec2 describe-instance-status --instance-ids ${attempted_instanceids[@]} \
            --region $REGION \
            --output json \
            --query 'InstanceStatuses[].InstanceStatus.Details[].Status' | grep -c passed`)
        sleep 1
    done
    progressBar $countof_instanceids $count_passed true
    echo

    if [ $status_check_count -lt $status_check_limit ] ; then
        # all hosts started ok because count_passed == instance_count
        # set the instanceids array to use from now on - attempted = actual
        for key in "${!attempted_instanceids[@]}"
        do
            instanceids["$key"]="${attempted_instanceids["$key"]}"
        done
        # set hosts array
        hosts=(`aws ec2 describe-instances --instance-ids ${attempted_instanceids[@]} \
            --region $REGION \
            --output text \
            --query 'Reservations[].Instances[].PublicIpAddress'`)
        # echo "all hosts ready"
    else
        # Amazon probably failed to start a host [*** NOTE this is fairly common ***] so show a msg - TO DO. Could try to replace it with a new one?
        original_count=$countof_instanceids
        # filter requested instances for only those that started well
        healthy_instanceids=(`aws ec2 describe-instance-status --instance-ids ${attempted_instanceids[@]} \
            --filters Name=instance-status.reachability,Values=passed \
                      Name=system-status.reachability,Values=passed \
            --region $REGION \
            --output text \
            --query 'InstanceStatuses[].InstanceId'`)
        hosts=(`aws ec2 describe-instances --instance-ids ${healthy_instanceids[@]} \
            --region $REGION \
            --output text \
            --query 'Reservations[].Instances[].PublicIpAddress'`)
        if [ "${#healthy_instanceids[@]}" -eq 0 ] ; then
            countof_instanceids=0
            echo "no instances successfully initialised, exiting"
            if [ "$terminate" = "TRUE" ] ; then
                echo
                echo
                # attempt to terminate any running instances - just to be sure
                echo "terminating instance(s)..."
                # We use attempted_instanceids here to make sure that there are no orphan instances left lying around
                aws ec2 terminate-instances --instance-ids ${attempted_instanceids[@]} \
                    --region $REGION \
                    --output text \
                    --query 'TerminatingInstances[].InstanceId'
                echo
            fi
            exit
        else
            countof_instanceids=${#healthy_instanceids[@]}
        fi
        # if we still see failed instances then write a message
        countof_failedinstances=`echo "$original_count - $countof_instanceids"|bc`
        if [ "$countof_failedinstances" -gt 0 ] ; then
            echo "$countof_failedinstances instance(s) failed to start, only $countof_instanceids machine(s) will be used in the test"
            instance_count=$countof_instanceids
        fi
        # set the array of instance ids based on only those that succeeded
        for key in "${!healthy_instanceids[@]}" # make sure you include the quotes there
        do
            instanceids["$key"]="${healthy_instanceids["$key"]}"
        done
    fi
    echo

    # assign a name tag to each instance
    echo "assigning tags..."
    (aws ec2 create-tags --resources ${attempted_instanceids[@]} --tags Key=Name,Value="jmeter-ec2-$project" --region $REGION)
    wait
    echo "complete"
    echo

    # if provided, assign elastic IPs to each instance
    if [ ! -z "$ELASTIC_IPS" ] ; then # Not Null - same as -n
        echo "assigning elastic ips..."
        for x in "${!instanceids[@]}" ; do
            (aws ec2 associate-address --instance-id ${instanceids[x]} --public-ip ${elasticips[x]} --region $REGION)
            hosts[x]=${elasticips[x]}
        done
        wait
        echo "complete"
        echo
        echo -n "checking elastic ips..."
        for x in "${!instanceids[@]}" ; do
            # wait for ssh connectivity on the new address
            while ! ssh -o StrictHostKeyChecking=no -q -i $PEM_PATH/$PEM_FILE \
                $USER@${hosts[x]} -p $REMOTE_PORT true ; \
                do echo -n .; sleep 1; done
            # Note. If any IP is already in use on an instance that is still running then the ssh check above will return
            # a false positive. If this scenario is common you should put a sleep statement here.
        done
        wait
        echo "complete"
        echo
    fi
else
    # the property REMOTE_HOSTS is set so we will use this list of predefined hosts instead
    hosts=(`echo $REMOTE_HOSTS | tr "," "\n" | tr -d ' '`)
    instance_count=${#hosts[@]}
    echo
    echo "   -------------------------------------------------------------------------------------"
    echo "       jmeter-ec2 Automation Script - Running $project.jmx over $instance_count predefined host(s)"
    echo "   -------------------------------------------------------------------------------------"
    echo
    echo
    # Check if remote hosts are up
    for host in ${hosts[@]} ; do
        if [ ! "$(ssh -q \
            -o StrictHostKeyChecking=no \
            -o "BatchMode=yes" \
            -o "ConnectTimeout=15" \
            -i "$PEM_PATH/$PEM_FILE" \
            -p $REMOTE_PORT \
            $USER@$host echo up)" == "up" ] ; then
            echo "Host $host is not responding, script exiting..."
            echo
            exit
        fi
    done
fi

# scp verify.sh
if [ "$setup" = "TRUE" ] ; then
    echo "copying verify.sh to $instance_count server(s)..."
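As an aside, the comma-separated list parsing used above for REMOTE_HOSTS (and earlier for ELASTIC_IPS) can be exercised on its own. This is a minimal sketch with a made-up host list, not a value read from jmeter-ec2.properties:

```shell
#!/bin/bash
# Hypothetical REMOTE_HOSTS value for illustration only
REMOTE_HOSTS="10.0.0.1, 10.0.0.2 ,10.0.0.3"

# Same pattern as the script: split on commas, strip stray spaces, load into an array
hosts=($(echo $REMOTE_HOSTS | tr "," "\n" | tr -d ' '))

echo "${#hosts[@]} host(s): ${hosts[@]}"
```

The `tr -d ' '` step is what makes the property tolerant of spaces around the commas, which is easy to introduce when hand-editing the properties file.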
    for host in ${hosts[@]} ; do
        (scp -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
            -i "$PEM_PATH/$PEM_FILE" \
            -P $REMOTE_PORT \
            $LOCAL_HOME/verify.sh \
            $LOCAL_HOME/jmeter-ec2.properties \
            $USER@$host:$REMOTE_HOME \
            && echo "done" > $project_home/$DATETIME-$host-scpverify.out) &
    done
    # check to see if the scp call is complete (could just use the wait command here...)
    res=0
    while [ "$res" != "$instance_count" ] ; do
        # Update progress bar
        progressBar $instance_count $res
        # Count how many out files we have for the copy (if the file exists the copy completed)
        # Note. We send stderr to /dev/null in the ls cmd below to prevent file not found errors filling the screen
        # and the sed command here trims whitespace
        res=$(ls -l $project_home/$DATETIME*scpverify.out 2>/dev/null | wc -l | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')
        sleep 1
    done
    progressBar $instance_count $res true
    echo
    echo

    # Install test software
    echo "running verify.sh on $instance_count server(s)..."
    for host in ${hosts[@]} ; do
        (ssh -nq -o StrictHostKeyChecking=no \
            -i "$PEM_PATH/$PEM_FILE" $USER@$host -p $REMOTE_PORT \
            "$REMOTE_HOME/verify.sh $JMETER_VERSION 2>&1" \
            > $project_home/$DATETIME-$host-verify.out) &
    done
    # check to see if the verify script is complete
    res=0
    while [ "$res" != "$instance_count" ] ; do
        # Installation not complete (count of matches for 'software installed' not equal to count of hosts running the test)
        # Update progress bar
        progressBar $instance_count $res
        res=$(grep -c "software installed" $project_home/$DATETIME*verify.out \
            | awk -F: '{ s+=$NF } END { print s }') # the awk command here sums up the output if multiple matches were found
        sleep 1
    done
    progressBar $instance_count $res true
    echo
    echo
fi

# Create a working jmx file and edit it to adjust thread counts and filepaths (leave the original jmx intact!)
cp $project_home/jmx/$project.jmx $project_home/working
working_jmx="$project_home/working"
temp_jmx="$project_home/temp"

# first filepaths (this will help with things like csv files)
# edit any 'stringProp filename=' references to use $REMOTE_DIR in place of whatever local path was being used
# we assume that the required dat file is copied into the local /data directory
filepaths=$(awk 'BEGIN { FS = ">" } ; /[^<]*<\/stringProp>/ {print $2}' $working_jmx | cut -d'<' -f1) # pull out filepath
i=1
while read filepath ; do
    if [ -n "$filepath" ] ; then # this entry is not blank
        # extract the filename from the filepath using '/' separator
        filename=$( echo $filepath | awk -F"/" '{print $NF}' )
        endresult="$REMOTE_HOME"/data/"$filename"
        if [[ $filepath =~ .*\$.* ]] ; then
            echo "The path $filepath contains a $ char, this currently fails the awk sub command."
            echo "You'll have to remove these from all filepaths. Sorry."
            echo
            echo "Script exiting"
            exit
        fi
        awk '/[^<]*<\/stringProp>/{c++;if(c=='"$i"') \
            {sub("filename\">'"$filepath"'<","filename\">'"$endresult"'<")}}1' \
            $working_jmx > $temp_jmx
        rm $working_jmx
        mv $temp_jmx $working_jmx
    fi
    # increment i
    i=$((i+1))
done <<<"$filepaths"

# now we use the same working file to edit thread counts
# to cope with the problem of trying to spread 10 threads over 3 hosts (10/3 has a remainder) the script creates a unique jmx for each host
# and then passes out threads to them on a round robin basis
# as part of this we begin here by creating a working jmx file for each separate host using _$y to isolate
for y in "${!hosts[@]}" ; do
    # for each host create a working copy of the jmx file
    cp "$working_jmx" "$working_jmx"_"$y"
done

# loop through each threadgroup and then use a nested loop within that to edit the file for each host
# pull out the current values for each thread group
threadgroup_threadcounts=(`awk 'BEGIN { FS = ">" } ; /ThreadGroup\.num_threads\">[^<]*<\/stringProp>/ {print $2}' $working_jmx | cut -d'<' -f1`)
orig_threadcounts=("${threadgroup_threadcounts[@]}")
for i in "${!threadgroup_threadcounts[@]}" ; do
    # pass this thread group's threads out to the hosts on a round robin basis
    for (( t=0; t<${orig_threadcounts[$i]}; t++ )) ; do
        threads[$(( t % instance_count ))]=$(( ${threads[$(( t % instance_count ))]:-0} + 1 ))
    done
    for y in "${!hosts[@]}" ; do
        findstr="threads\">"${orig_threadcounts[$i]}
        replacestr="threads\">"${threads[$y]}
        awk -v "findthis=$findstr" -v "replacewiththis=$replacestr" \
            'BEGIN{c=0} \
            /ThreadGroup\.num_threads\">[^<]*<\/stringProp>/ \
            {c++;if(c=='"$(( i + 1 ))"'){sub(findthis,replacewiththis)}}1' \
            "$working_jmx"_"$y" > "$temp_jmx"_"$y"
        # using awk requires the use of a temp file to save the results of the command, update the working file with this file
        rm "$working_jmx"_"$y"
        mv "$temp_jmx"_"$y" "$working_jmx"_"$y"
    done
    # write update to screen - removed 23/04/2012
    # echo "...$i) ${threadgroup_names[$i]} has ${threadgroup_threadcounts[$i]} thread(s), to be distributed over $instance_count instance(s)"
    unset threads
done
echo
echo "thread counts updated"
echo

# scp the test files onto each host
echo -n "copying test files to $instance_count server(s)..."

# scp jmx dir
echo -n "jmx files.."
for y in "${!hosts[@]}" ; do
    (scp -q -C -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -r \
        -i "$PEM_PATH/$PEM_FILE" -P $REMOTE_PORT \
        $project_home/working_$y \
        $USER@${hosts[$y]}:$REMOTE_HOME/execute.jmx) &
done
wait
echo -n "done...."

# scp data dir
if [ "$setup" = "TRUE" ] ; then
    if [ -r $project_home/data ] ; then # don't try to upload this optional dir if it is not present
        echo -n "data dir.."
        for host in ${hosts[@]} ; do
            (scp -q -C -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -r \
                -i "$PEM_PATH/$PEM_FILE" -P $REMOTE_PORT \
                $project_home/data \
                $USER@$host:$REMOTE_HOME/) &
        done
        wait
        echo -n "done...."
    fi
    # scp jmeter.properties
    if [ -r $LOCAL_HOME/jmeter.properties ] ; then # don't try to upload this optional file if it is not present
        echo -n "jmeter.properties.."
        for host in ${hosts[@]} ; do
            (scp -q -C -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
                -i "$PEM_PATH/$PEM_FILE" -P $REMOTE_PORT \
                $LOCAL_HOME/jmeter.properties \
                $USER@$host:$REMOTE_HOME/$JMETER_VERSION/bin/) &
        done
        wait
        echo -n "done...."
    fi
    # scp system.properties
    if [ -r $LOCAL_HOME/system.properties ] ; then # don't try to upload this optional file if it is not present
        echo -n "system.properties.."
        for host in ${hosts[@]} ; do
            (scp -q -C -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
                -i "$PEM_PATH/$PEM_FILE" -P $REMOTE_PORT \
                $LOCAL_HOME/system.properties \
                $USER@$host:$REMOTE_HOME/$JMETER_VERSION/bin/) &
        done
        wait
        echo -n "done...."
    fi
    # scp keystore
    if [ -r $LOCAL_HOME/keystore.jks ] ; then # don't try to upload this optional file if it is not present
        echo -n "keystore.jks.."
        for host in ${hosts[@]} ; do
            (scp -q -C -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
                -i "$PEM_PATH/$PEM_FILE" -P $REMOTE_PORT \
                $LOCAL_HOME/keystore.jks \
                $USER@$host:$REMOTE_HOME) &
        done
        wait
        echo -n "done...."
    fi
    # scp jmeter execution file
    if [ -r $LOCAL_HOME/jmeter ] ; then # don't try to upload this optional file if it is not present
        echo -n "jmeter execution file..."
        for host in ${hosts[@]} ; do
            (scp -q -C -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
                -i "$PEM_PATH/$PEM_FILE" -P $REMOTE_PORT \
                $LOCAL_HOME/jmeter \
                $USER@$host:$REMOTE_HOME/$JMETER_VERSION/bin/) &
        done
        wait
        echo -n "done...."
    fi
    # scp any custom jar files
    if [ -r $LOCAL_HOME/plugins ] ; then # don't try to upload this optional dir if it is not present
        echo -n "custom jar file(s)..."
        for host in ${hosts[@]} ; do
            (scp -q -C -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
                -i "$PEM_PATH/$PEM_FILE" -P $REMOTE_PORT \
                $LOCAL_HOME/plugins/*.jar \
                $USER@$host:$REMOTE_HOME/$JMETER_VERSION/lib/ext/) &
        done
        wait
        echo -n "done...."
    fi
    # scp any project specific custom jar files
    if [ -r $project_home/plugins ] ; then # don't try to upload this optional dir if it is not present
        echo -n "project specific jar file(s)..."
        for host in ${hosts[@]} ; do
            (scp -q -C -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
                -i "$PEM_PATH/$PEM_FILE" -P $REMOTE_PORT \
                $project_home/plugins/*.jar \
                $USER@$host:$REMOTE_HOME/$JMETER_VERSION/lib/ext/) &
        done
        wait
        echo -n "done...."
    fi
    echo "all files uploaded"
    echo
fi

# Start JMeter
echo "starting jmeter on:"
for host in ${hosts[@]} ; do
    echo $host
done
for counter in ${!hosts[@]} ; do
    ( ssh -nq -o StrictHostKeyChecking=no \
        -p $REMOTE_PORT \
        -i "$PEM_PATH/$PEM_FILE" $USER@${hosts[$counter]} \
        $REMOTE_HOME/$JMETER_VERSION/bin/jmeter.sh -n \
        -t $REMOTE_HOME/execute.jmx \
        -l $REMOTE_HOME/$project-$DATETIME-$counter.jtl \
        >> $project_home/$DATETIME-${hosts[$counter]}-jmeter.out ) &
done
echo
echo
}

function runtest() {
    # sleep_interval - how often we poll the jmeter output for results
    # this value should be the same as the Generate Summary Results interval set in jmeter.properties
    # to be certain, we read the value in here and adjust the wait to match (this prevents lots of duplicates being written to the screen)
    sleep_interval=$(awk 'BEGIN { FS = "=" } ; /summariser.interval/ {print $2}' $LOCAL_HOME/jmeter.properties)
    runningtotal_seconds=$(echo "$RUNNINGTOTAL_INTERVAL * $sleep_interval" | bc)

    # $epoch is used when importing to mysql (if enabled) because we want unix timestamps, not datetime, as this works better when graphing.
    epoch_seconds=$(date +%s)
    epoch_milliseconds=$(echo "$epoch_seconds * 1000" | bc) # milliseconds since Mick Jagger became famous
    start_date=$(date) # warning, epoch and start_date do not (absolutely) equal each other!
    echo "JMeter started at $start_date"
    echo "====================== START OF JMETER-EC2 TEST =================================="
    echo "> [updates: every $sleep_interval seconds | running total: every $runningtotal_seconds seconds]"
    echo ">"
    echo "> waiting for the test to start...to stop the test while it is running, press CTRL-C"
    teststarted=1

    # TO DO: Are these required?
    count_total=0
    avg_total=0
    count_overallhosts=0
    avg_overallhosts=0
    tps_overallhosts=0
    errors_overallhosts=0
    i=1
    firstmodmatch="TRUE"

    res=0
    while [ $res != $instance_count ] ; do # test not complete (count of matches for 'end of run' not equal to count of hosts running the test)
        # gather results data and write to screen for each host
        #while read host ; do
        for host in ${hosts[@]} ; do
            check=$(tail -10 $project_home/$DATETIME-$host-jmeter.out | grep "Results =" | tail -1 | awk '{print $1}') # make sure the test has really started to write results to the file
            if [[ -n "$check" ]] ; then # not null
                if [ $check == "Generate" ] ; then # test has begun
                    screenupdate=$(tail -10 $project_home/$DATETIME-$host-jmeter.out | grep "Results +" | tail -1)
                    echo "> $(date +%T): $screenupdate | host: $host" # write results to screen

                    # get the latest values
                    count=$(tail -10 $project_home/$DATETIME-$host-jmeter.out | grep "Results +" | tail -1 | awk '{print $5}') # pull out the current count
                    avg=$(tail -10 $project_home/$DATETIME-$host-jmeter.out | grep "Results +" | tail -1 | awk '{print $11}') # pull out current avg
                    tps_raw=$(tail -10 $project_home/$DATETIME-$host-jmeter.out | grep "Results +" | tail -1 | awk '{print $9}') # pull out current tps
                    errors_raw=$(tail -10 $project_home/$DATETIME-$host-jmeter.out | grep "Results +" | tail -1 | awk '{print $17}') # pull out current errors
                    tps=${tps_raw%/s} # remove the trailing '/s'

                    # get the latest summary values
                    count_total=$(tail -10 $project_home/$DATETIME-$host-jmeter.out | grep "Results =" | tail -1 | awk '{print $5}')
                    avg_total=$(tail -10 $project_home/$DATETIME-$host-jmeter.out | grep "Results =" | tail -1 | awk '{print $11}')
                    tps_total_raw=$(tail -10 $project_home/$DATETIME-$host-jmeter.out | grep "Results =" | tail -1 | awk '{print $9}')
                    tps_recent_raw=$(tail -10 $project_home/$DATETIME-$host-jmeter.out | grep "Results +" | tail -1 | awk '{print $9}')
                    tps_total=${tps_total_raw%/s} # remove the trailing '/s'
                    tps_recent=${tps_recent_raw%/s} # remove the trailing '/s'
                    errors_total=$(tail -10 $project_home/$DATETIME-$host-jmeter.out | grep "Results =" | tail -1 | awk '{print $17}')

                    count_overallhosts=$(echo "$count_overallhosts+$count_total" | bc) # add the value from this host to the values from other hosts
                    avg_overallhosts=$(echo "$avg_overallhosts+$avg" | bc)
                    tps_overallhosts=$(echo "$tps_overallhosts+$tps_total" | bc)
                    tps_recent_overallhosts=$(echo "$tps_recent_overallhosts+$tps_recent" | bc)
                    errors_overallhosts=$(echo "$errors_overallhosts+$errors_total" | bc) # add the value from this host to the values from other hosts
                fi
            fi
        done #<<<"${hosts_str}" # next host

        # calculate the average response time over all hosts
        avg_overallhosts=$(echo "$avg_overallhosts/$instance_count" | bc)

        # every RUNNINGTOTAL_INTERVAL loops print a running summary (if each host is running)
        mod=$(echo "$i % $RUNNINGTOTAL_INTERVAL"|bc)
        if [ $mod == 0 ] ; then
            if [ $firstmodmatch == "TRUE" ] ; then # don't write summary results the first time (because it's not useful)
                firstmodmatch="FALSE"
            else
                # first check the results files to make sure data is available
                wait=0
                for host in ${hosts[@]} ; do
                    result_count=$(grep -c "Results =" $project_home/$DATETIME-$host-jmeter.out)
                    if [ $result_count = 0 ] ; then
                        wait=1
                    fi
                done
                # now write out the data to the screen
                if [ $wait == 0 ] ; then # each file is ready to summarise
                    for host in ${hosts[@]} ; do
                        screenupdate=$(tail -10 $project_home/$DATETIME-$host-jmeter.out | grep "Results =" | tail -1)
                        echo "> $(date +%T): $screenupdate | host: $host" # write results to screen
                    done
                    echo ">"
                    echo "> $(date +%T): [RUNNING TOTALS] total count: $count_overallhosts, current avg: $avg_overallhosts (ms), average tps: $tps_overallhosts (p/sec), recent tps: $tps_recent_overallhosts (p/sec), total errors: $errors_overallhosts"
                    echo ">"
                fi
            fi
        fi
        i=$(( $i + 1 ))
        sleep $sleep_interval

        # we rely on JM to keep track of overall test totals (via Results =) so we only need to keep count of values over multiple instances
        # there's no need for a running total outside of this loop so we reinitialise the vars here.
        count_total=0
        avg_total=0
        count_overallhosts=0
        avg_overallhosts=0
        tps_overallhosts=0
        tps_recent_overallhosts=0
        errors_overallhosts=0

        # check to see if the test is complete
        res=$(grep -c "end of run" $project_home/$DATETIME*jmeter.out | awk -F: '{ s+=$NF } END { print s }')
    done # test complete

    # now the test is complete calculate a final summary and write to the screen
    for host in ${hosts[@]} ; do
        # get the final summary values
        count_total=$(tail -10 $project_home/$DATETIME-$host-jmeter.out | grep "Results =" | tail -1 | awk '{print $5}')
        avg_total=$(tail -10 $project_home/$DATETIME-$host-jmeter.out | grep "Results =" | tail -1 | awk '{print $11}')
        tps_total_raw=$(tail -10 $project_home/$DATETIME-$host-jmeter.out | grep "Results =" | tail -1 | awk '{print $9}')
        tps_total=${tps_total_raw%/s} # remove the trailing '/s'
        tps_recent_raw=$(tail -10 $project_home/$DATETIME-$host-jmeter.out | grep "Results +" | tail -1 | awk '{print $9}')
        tps_recent=${tps_recent_raw%/s} # remove the trailing '/s'
        errors_total=$(tail -10 $project_home/$DATETIME-$host-jmeter.out | grep "Results =" | tail -1 | awk '{print $17}')

        # running totals
        count_overallhosts=$(echo "$count_overallhosts+$count_total" | bc) # add the value from this host to the values from other hosts
        avg_overallhosts=$(echo "$avg_overallhosts+$avg_total" | bc)
        tps_overallhosts=$(echo "$tps_overallhosts+$tps_total" | bc) # add the value from this host to the values from other hosts
        tps_recent_overallhosts=$(echo "$tps_recent_overallhosts+$tps_recent" | bc)
        errors_overallhosts=$(echo "$errors_overallhosts+$errors_total" | bc) # add the value from this host to the values from other hosts
    done

    # calculate averages over all hosts
    avg_overallhosts=$(echo "$avg_overallhosts/$instance_count" | bc)
}

function runcleanup() {
    # Turn off the CTRL-C trap now that we are already in the runcleanup function
    trap - INT

    if [ "$teststarted" -eq 1 ] ; then
        # display final results
        echo ">"
        echo ">"
        echo "> $(date +%T): [FINAL RESULTS] total count: $count_overallhosts, overall avg: $avg_overallhosts (ms), overall tps: $tps_overallhosts (p/sec), recent tps: $tps_recent_overallhosts (p/sec), errors: $errors_overallhosts"
        echo ">"
        echo "===================================================================== END OF JMETER-EC2 TEST =================================================================================="
        echo
        echo

        # download the results
        for i in ${!hosts[@]} ; do
            echo -n "downloading results from ${hosts[$i]}..."
            scp -q -C -o UserKnownHostsFile=/dev/null \
                -o StrictHostKeyChecking=no \
                -i "$PEM_PATH/$PEM_FILE" \
                -P $REMOTE_PORT \
                $USER@${hosts[$i]}:$REMOTE_HOME/$project-*.jtl \
                $project_home/
            # Append the hostname
            sed "s/$/,"${hosts[$i]}"/" $project_home/$project-$DATETIME-$i.jtl >> $project_home/$project-$DATETIME-$i-appended.jtl
            rm $project_home/$project-$DATETIME-$i.jtl
            echo "$project_home/$project-$DATETIME-$i.jtl complete"
        done
        echo

        # process the files into one jtl results file
        echo -n "processing results..."
        for (( i=0; i<$instance_count; i++ )) ; do
            cat $project_home/$project-$DATETIME-$i-appended.jtl >> $project_home/$project-$DATETIME-grouped.jtl
            rm $project_home/$project-$DATETIME-$i-appended.jtl # removes the individual results files (from each host) - might be useful to some people to keep these files?
        done
        # Sort File
        sort $project_home/$project-$DATETIME-grouped.jtl >> $project_home/$project-$DATETIME-sorted.jtl
        # Remove blank lines
        sed '/^$/d' $project_home/$project-$DATETIME-sorted.jtl >> $project_home/$project-$DATETIME-noblanks.jtl
        # Remove any lines containing "0,0,Error:" - which seems to be an intermittent bug in JM where the getTimestamp call fails with a nullpointer
        sed '/^0,0,Error:/d' $project_home/$project-$DATETIME-noblanks.jtl >> $project_home/$project-$DATETIME-complete.jtl
        # Calculate test duration
        start_time=$(head -1 $project_home/$project-$DATETIME-complete.jtl | cut -d',' -f1)
        end_time=$(tail -1 $project_home/$project-$DATETIME-complete.jtl | cut -d',' -f1)
        duration=$(echo "$end_time-$start_time" | bc)
        if ! [ "$duration" -gt 0 ] ; then duration=0; fi
    fi

    # terminate any running instances created
    if [ -z "$REMOTE_HOSTS" ]; then
        if [ "$terminate" = "TRUE" ] ; then
            echo
            echo
            echo "terminating instance(s)..."
            # We use attempted_instanceids here to make sure that there are no orphan instances left lying around
            aws ec2 terminate-instances --instance-ids ${attempted_instanceids[@]} \
                --region $REGION \
                --output text \
                --query 'TerminatingInstances[].InstanceId'
            echo
        fi
    fi

    # Tidy up
    if [ -e "$project_home/$project-$DATETIME-grouped.jtl" ] ; then rm $project_home/$project-$DATETIME-grouped.jtl ; fi
    if [ -e "$project_home/$project-$DATETIME-sorted.jtl" ] ; then rm $project_home/$project-$DATETIME-sorted.jtl ; fi
    if [ -e "$project_home/$project-$DATETIME-noblanks.jtl" ] ; then rm $project_home/$project-$DATETIME-noblanks.jtl ; fi
    if [ -e "$project_home/$project-$DATETIME-complete.jtl" ] ; then
        mkdir -p $project_home/results/
        mv $project_home/$project-$DATETIME-complete.jtl $project_home/results/
    fi
    # tidy up working files
    # for debugging purposes you could comment out these lines
    rm $project_home/$DATETIME*.out
    rm $project_home/working*

    echo
    echo "   -------------------------------------------------------------------------------------"
    echo "       jmeter-ec2 Automation Script - COMPLETE"
    echo
    if [ "$teststarted" -eq 1 ] ; then
        echo "   Test Results: $project_home/results/$project-$DATETIME-complete.jtl"
    fi
    echo "   -------------------------------------------------------------------------------------"
    echo
}

progressBarWidth=50
spinnerIndex=1
sp="/-\|"

# Function to draw progress bar
progressBar() {
    taskCount=$1
    tasksDone=$2
    progressDone=$3

    # Calculate number of fill/empty slots in the bar
    progress=$(echo "$progressBarWidth/$taskCount*$tasksDone" | bc -l)
    fill=$(printf "%.0f\n" $progress)
    if [ $fill -gt $progressBarWidth ]; then
        fill=$progressBarWidth
    fi
    empty=$(($progressBarWidth-$fill))

    # Percentage Calculation
    progressPercent=$(echo "100/$taskCount*$tasksDone" | bc -l)
    progressPercent=$(printf "%0.2f\n" $progressPercent)
    if [[ -n "${progressPercent}" && $(echo "$progressPercent>100" | bc) -gt 0 ]]; then
        progressPercent="100.00"
    fi

    # Output to screen
    printf "\r["
    printf "%${fill}s" '' | tr ' ' \#
    printf "%${empty}s" '' | tr ' ' " "
    printf "] $progressPercent%% - ($tasksDone of $taskCount) "
    if [ $progressDone ] ; then
        printf " - Done."
    else
        printf " \b${sp:spinnerIndex++%${#sp}:1} "
    fi
}

function control_c(){
    # Turn off the CTRL-C trap now that it has been invoked once already
    trap - INT
    if [ "$teststarted" -eq 1 ] ; then
        # Stop the running test on each host
        echo
        echo "> Stopping test..."
        for f in ${!hosts[@]} ; do
            ( ssh -nq -o StrictHostKeyChecking=no \
                -i "$PEM_PATH/$PEM_FILE" $USER@${hosts[$f]} -p $REMOTE_PORT \
                $REMOTE_HOME/$JMETER_VERSION/bin/stoptest.sh ) &
        done
        wait
        echo ">"
    fi
    runcleanup
    exit
}

# trap keyboard interrupt (control-c)
trap control_c SIGINT

check_prereqs
runsetup
runtest
runcleanup


================================================
FILE: jmeter.properties
================================================
################################################################################
# Apache JMeter Property file
################################################################################

## Licensed to the Apache Software Foundation (ASF) under one or more
## contributor license agreements. See the NOTICE file distributed with
## this work for additional information regarding copyright ownership.
## The ASF licenses this file to You under the Apache License, Version 2.0
## (the "License"); you may not use this file except in compliance with
## the License. You may obtain a copy of the License at
##
##   http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.

################################################################################
#
#                  THIS FILE SHOULD NOT BE MODIFIED
#
# This avoids having to re-apply the modifications when upgrading JMeter
# Instead only user.properties should be modified:
# 1/ copy the property you want to modify to user.properties from jmeter.properties
# 2/ Change its value there
#
################################################################################

#Preferred GUI language. Comment out to use the JVM default locale's language.
#language=en # Additional locale(s) to add to the displayed list. # The current default list is: en, fr, de, no, es, tr, ja, zh_CN, zh_TW, pl, pt_BR # [see JMeterMenuBar#makeLanguageMenu()] # The entries are a comma-separated list of language names #locales.add=zu # Netscape HTTP Cookie file cookies=cookies #--------------------------------------------------------------------------- # File format configuration for JMX and JTL files #--------------------------------------------------------------------------- # Properties: # file_format - affects both JMX and JTL files # file_format.testplan - affects JMX files only # file_format.testlog - affects JTL files only # # Possible values are: # 2.1 - initial format using XStream # 2.2 - updated format using XStream, with shorter names # N.B. format 2.0 (Avalon) is no longer supported #--------------------------------------------------------------------------- # XML Parser #--------------------------------------------------------------------------- # XML Reader(Parser) - Must implement SAX 2 specs xml.parser=org.apache.xerces.parsers.SAXParser # Path to a Properties file containing Namespace mapping in the form # prefix=Namespace # Example: # ns=http://biz.aol.com/schema/2006-12-18 #xpath.namespace.config= #--------------------------------------------------------------------------- # SSL configuration #--------------------------------------------------------------------------- ## SSL System properties are now in system.properties # JMeter no longer converts javax.xxx property entries in this file into System properties. # These must now be defined in the system.properties file or on the command-line. # The system.properties file gives more flexibility. # By default, SSL session contexts are now created per-thread, rather than being shared. 
# The original behaviour can be enabled by setting the JMeter property: #https.sessioncontext.shared=true # Default HTTPS protocol level: #https.default.protocol=TLS # This may need to be changed here (or in user.properties) to: #https.default.protocol=SSLv3 # List of protocols to enable. You may have to select only a subset if you find issues with target server. # This is needed when server does not support Socket version negotiation, this can lead to: # javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated # java.net.SocketException: Connection reset # see https://issues.apache.org/bugzilla/show_bug.cgi?id=54759 #https.socket.protocols=SSLv2Hello SSLv3 TLSv1 # Control if we allow reuse of cached SSL context between iterations # set the value to 'false' to reset the SSL context each iteration #https.use.cached.ssl.context=true # Start and end index to be used with keystores with many entries # The default is to use entry 0, i.e. the first #https.keyStoreStartIndex=0 #https.keyStoreEndIndex=0 #--------------------------------------------------------------------------- # Look and Feel configuration #--------------------------------------------------------------------------- #Classname of the Swing default UI # # The LAF classnames that are available are now displayed as ToolTip text # when hovering over the Options/Look and Feel selection list. 
#
# You can either use a full class name, as shown above,
# or one of the strings "System" or "CrossPlatform", which means
# JMeter will use the corresponding string returned by UIManager.getLookAndFeelClassName()

# LAF can be overridden by os.name (lowercased, spaces replaced by '_')
# Sample os.name LAF:
#jmeter.laf.windows_xp=javax.swing.plaf.metal.MetalLookAndFeel

# Failing that, the OS family = os.name, but only up to the first space:
# Sample OS family LAF:
#jmeter.laf.windows=com.sun.java.swing.plaf.windows.WindowsLookAndFeel

# Mac apparently looks better with the System LAF
jmeter.laf.mac=System

# Failing that, the JMeter default LAF can be defined:
#jmeter.laf=System

# If none of the above jmeter.laf properties are defined, JMeter uses the CrossPlatform LAF.
# This is because the CrossPlatform LAF generally looks better than the System LAF.
# See https://issues.apache.org/bugzilla/show_bug.cgi?id=52026 for details
# N.B. the LAF can be defined in user.properties.

# LoggerPanel display
# default to false
#jmeter.loggerpanel.display=false

# Enable the LogViewer Panel to receive log events even if closed
# Enabled since 2.12
# Note this has some impact on performance, but as GUI mode must
# not be used for load testing it is acceptable
#jmeter.loggerpanel.enable_when_closed=true

# Error/Fatal Log count display
# defaults to true
#jmeter.errorscounter.display=true

# Max characters kept in the LoggerPanel, defaults to 80000 chars
# 0 means no limit
#jmeter.loggerpanel.maxlength=80000

# Toolbar display
# default:
#jmeter.toolbar.display=true
# Toolbar icon definitions
#jmeter.toolbar.icons=org/apache/jmeter/images/toolbar/icons-toolbar.properties
# Toolbar list
#jmeter.toolbar=new,open,close,save,save_as_testplan,|,cut,copy,paste,|,expand,collapse,toggle,|,test_start,test_stop,test_shutdown,|,test_start_remote_all,test_stop_remote_all,test_shutdown_remote_all,|,test_clear,test_clear_all,|,search,search_reset,|,function_helper,help
# Toolbar icons default size: 22x22.
# Available sizes are: 22x22, 32x32, 48x48
#jmeter.toolbar.icons.size=22x22

# Icon definitions
# default:
#jmeter.icons=org/apache/jmeter/images/icon.properties
# alternate:
#jmeter.icons=org/apache/jmeter/images/icon_1.properties

# Components not to be displayed in the JMeter GUI (GUI class name or static label)
# These elements are deprecated: HTML Parameter Mask, HTTP User Parameter Modifier, Webservice (SOAP) Request
not_in_menu=org.apache.jmeter.protocol.http.modifier.gui.ParamModifierGui, HTTP User Parameter Modifier, org.apache.jmeter.protocol.http.control.gui.WebServiceSamplerGui

# Number of items in undo history
# Feature is disabled by default (0)
# Set it to a number > 0 (25 can be a good default)
# The bigger it is, the more memory it consumes
#undo.history.size=0

#---------------------------------------------------------------------------
# Remote hosts and RMI configuration
#---------------------------------------------------------------------------

# Remote Hosts - comma delimited
remote_hosts=127.0.0.1
#remote_hosts=localhost:1099,localhost:2010
# RMI port to be used by the server (must start rmiregistry with the same port)
#server_port=1099

# To change the port to (say) 1234:
# On the server(s)
# - set server_port=1234
# - start rmiregistry with port 1234
# On Windows this can be done by:
# SET SERVER_PORT=1234
# JMETER-SERVER
#
# On Unix:
# SERVER_PORT=1234 jmeter-server
#
# On the client:
# - set remote_hosts=server:1234

# Parameter that controls the RMI port used by the RemoteSampleListenerImpl (the Controller)
# Default value is 0, which means the port is randomly assigned
# You may need to open a firewall port on the Controller machine
#client.rmi.localport=0

# When a distributed test is starting, there may be several attempts to initialize
# the remote engines. By default, only a single try is made.
# Increase the following property
# to make it retry additional times
#client.tries=1

# If initialization retries are made, the following property sets the delay between attempts
#client.retries_delay=5000

# When all initialization tries have been made, the test will fail if some remote engines failed
# Set the following property to true to ignore failed nodes and proceed with the test
#client.continue_on_fail=false

# To change the default port (1099) used to access the server:
#server.rmi.port=1234

# To use a specific port for the JMeter server engine, define
# the following property before starting the server:
#server.rmi.localport=4000

# From JMeter 2.3.1, the jmeter server creates the RMI registry as part of the server process.
# To stop the server creating the RMI registry:
#server.rmi.create=false

# From JMeter 2.3.1, define the following property to cause JMeter to exit after the first test
#server.exitaftertest=true

# Prefix used by IncludeController when building the file name
#includecontroller.prefix=

#---------------------------------------------------------------------------
# Logging Configuration
#---------------------------------------------------------------------------

# Note: JMeter uses Avalon (Excalibur) LogKit

# Logging Format
# see http://excalibur.apache.org/apidocs/org/apache/log/format/PatternFormatter.html
#
# Default format:
#log_format=%{time:yyyy/MM/dd HH:mm:ss} %5.5{priority} - %{category}: %{message} %{throwable}
# \n is automatically added to the end of the string
#
# Predefined formats in the JMeter LoggingManager:
#log_format_type=default
#log_format_type=thread_prefix
#log_format_type=thread_suffix
# default is as above
# thread_prefix adds the thread name as a prefix to the category
# thread_suffix adds the thread name as a suffix to the category
# Note that the thread name is not included by default, as it requires extra processing.
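# For example (illustrative, not a shipped default): to tag each log line's
# category with the name of the thread that produced it, while keeping the
# default pattern, one could uncomment:
#log_format_type=thread_prefix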
# # To change the logging format, define either log_format_type or log_format # If both are defined, the type takes precedence # Note that these properties cannot be defined using the -J or -D JMeter # command-line flags, as the format will have already been determined by then # However, they can be defined as JVM properties #Logging levels for the logging categories in JMeter. Correct values are FATAL_ERROR, ERROR, WARN, INFO, and DEBUG # To set the log level for a package or individual class, use: # log_level.[package_name].[classname]=[PRIORITY_LEVEL] # But omit "org.apache" from the package name. The classname is optional. Further examples below. log_level.jmeter=INFO log_level.jmeter.junit=DEBUG #log_level.jmeter.control=DEBUG #log_level.jmeter.testbeans=DEBUG #log_level.jmeter.engine=DEBUG #log_level.jmeter.threads=DEBUG #log_level.jmeter.gui=WARN #log_level.jmeter.testelement=DEBUG #log_level.jmeter.util=WARN #log_level.jmeter.protocol.http=DEBUG # For CookieManager, AuthManager etc: #log_level.jmeter.protocol.http.control=DEBUG #log_level.jmeter.protocol.ftp=WARN #log_level.jmeter.protocol.jdbc=DEBUG #log_level.jmeter.protocol.java=WARN #log_level.jmeter.testelements.property=DEBUG log_level.jorphan=INFO #Log file for log messages. # You can specify a different log file for different categories via: # log_file.[category]=[filename] # category is equivalent to the package/class names described above # Combined log file (for jmeter and jorphan) #log_file=jmeter.log # To redirect logging to standard output, try the following: # (it will probably report an error, but output will be to stdout) #log_file= # Or define separate logs if required: #log_file.jorphan=jorphan.log #log_file.jmeter=jmeter.log # If the filename contains paired single-quotes, then the name is processed # as a SimpleDateFormat format applied to the current date, for example: #log_file='jmeter_'yyyyMMddHHmmss'.tmp' # N.B. 
# When JMeter starts, it sets the system property:
#    org.apache.commons.logging.Log
# to
#    org.apache.commons.logging.impl.LogKitLogger
# if it is not already set. This causes Apache and Commons HttpClient to use the same logging as JMeter.

# Further logging configuration
# Excalibur logging provides the facility to configure logging using
# configuration files written in XML. This allows for such features as
# log file rotation, which are not supported directly by JMeter.
#
# If such a file is specified, it will be applied to the current logging
# hierarchy when that has been created.
#
#log_config=logkit.xml

#---------------------------------------------------------------------------
# HTTP Java configuration
#---------------------------------------------------------------------------

# Number of connection retries performed by the HTTP Java sampler before giving up
#http.java.sampler.retries=10
# 0 now means don't retry the connection (in 2.3 and before it meant no tries at all!)

#---------------------------------------------------------------------------
# Commons HTTPClient configuration
#---------------------------------------------------------------------------

# define a properties file for overriding Commons HttpClient parameters
# See: http://hc.apache.org/httpclient-3.x/preference-api.html
# Uncomment this line if you put anything in the httpclient.parameters file
#httpclient.parameters.file=httpclient.parameters

# define a properties file for overriding Apache HttpClient parameters
# See: TBA
# Uncomment this line if you put anything in the hc.parameters file
#hc.parameters.file=hc.parameters

# The following properties apply to both Commons and Apache HttpClient

# set the socket timeout (or use the parameter http.socket.timeout)
# for the AJP Sampler and the HttpClient3 implementation.
# Note that for the HttpClient3 implementation it is better to use the GUI to set the timeout,
# or to use http.socket.timeout in httpclient.parameters
# Value is in milliseconds
#httpclient.timeout=0
# 0 == no timeout

# Set the http version (defaults to 1.1)
#httpclient.version=1.0
# (or use the parameter http.protocol.version)

# Define characters per second > 0 to emulate slow connections
#httpclient.socket.http.cps=0
#httpclient.socket.https.cps=0

# Enable loopback protocol
#httpclient.loopback=true

# Define the local host address to be used for multi-homed hosts
#httpclient.localaddress=1.2.3.4

# AuthManager Kerberos configuration
# Name of the application module used in jaas.conf
#kerberos_jaas_application=JMeter

# Should ports be stripped from URLs before constructing SPNs
# for SPNEGO authentication
#kerberos.spnego.strip_port=true

# Sample logging levels for Commons HttpClient
#
# Commons HttpClient logging information can be found at:
# http://hc.apache.org/httpclient-3.x/logging.html
# Note that full category names are used, i.e. they must include the org.apache.
# Info level produces no output: #log_level.org.apache.commons.httpclient=debug # Might be useful: #log_level.org.apache.commons.httpclient.Authenticator=trace # Show headers only #log_level.httpclient.wire.header=debug # Full wire debug produces a lot of output; consider using separate file: #log_level.httpclient.wire=debug #log_file.httpclient=httpclient.log # Apache Commons HttpClient logging examples # # Enable header wire + context logging - Best for Debugging #log_level.org.apache.http=DEBUG #log_level.org.apache.http.wire=ERROR # Enable full wire + context logging #log_level.org.apache.http=DEBUG # Enable context logging for connection management #log_level.org.apache.http.impl.conn=DEBUG # Enable context logging for connection management / request execution #log_level.org.apache.http.impl.conn=DEBUG #log_level.org.apache.http.impl.client=DEBUG #log_level.org.apache.http.client=DEBUG #--------------------------------------------------------------------------- # Apache HttpComponents HTTPClient configuration (HTTPClient4) #--------------------------------------------------------------------------- # Number of retries to attempt (default 0) #httpclient4.retrycount=0 # Idle connection timeout (ms) to apply if the server does not send Keep-Alive headers #httpclient4.idletimeout=0 # Note: this is currently an experimental fix #--------------------------------------------------------------------------- # Apache HttpComponents HTTPClient configuration (HTTPClient 3.1) #--------------------------------------------------------------------------- # Number of retries to attempt (default 0) #httpclient3.retrycount=0 #--------------------------------------------------------------------------- # HTTP Cache Manager configuration #--------------------------------------------------------------------------- # # Space or comma separated list of methods that can be cached #cacheable_methods=GET # N.B. 
# This property is currently a temporary solution for Bug 56162.

# Since 2.12, JMeter no longer creates a Sample Result with a 204 response
# code for a resource found in the cache, which is in line with what browsers do.
#cache_manager.cached_resource_mode=RETURN_NO_SAMPLE
# You can choose between 3 modes:
# RETURN_NO_SAMPLE (default)
# RETURN_200_CACHE
# RETURN_CUSTOM_STATUS
# These modes have the following behaviours:
# RETURN_NO_SAMPLE : this mode returns no Sample Result; it has no additional configuration
# RETURN_200_CACHE : this mode will return a Sample Result with the response code set to 200 and the response message set to "(ex cache)"; you can modify the response message by setting
# RETURN_200_CACHE.message=(ex cache)
# RETURN_CUSTOM_STATUS : this mode lets you select what response code and message you want to return; if you use this mode you need to set these properties
# RETURN_CUSTOM_STATUS.code=
# RETURN_CUSTOM_STATUS.message=

#---------------------------------------------------------------------------
# Results file configuration
#---------------------------------------------------------------------------

# This section helps determine how result data will be saved.
# The commented out values are the defaults.

# legitimate values: xml, csv, db. Only xml and csv are currently supported.
jmeter.save.saveservice.output_format=csv # true when field should be saved; false otherwise # assertion_results_failure_message only affects CSV output #jmeter.save.saveservice.assertion_results_failure_message=false # # legitimate values: none, first, all #jmeter.save.saveservice.assertion_results=none # #jmeter.save.saveservice.data_type=true #jmeter.save.saveservice.label=true #jmeter.save.saveservice.response_code=true # response_data is not currently supported for CSV output #jmeter.save.saveservice.response_data=false # Save ResponseData for failed samples #jmeter.save.saveservice.response_data.on_error=false #jmeter.save.saveservice.response_message=true #jmeter.save.saveservice.successful=true #jmeter.save.saveservice.thread_name=true #jmeter.save.saveservice.time=true #jmeter.save.saveservice.subresults=true #jmeter.save.saveservice.assertions=true #jmeter.save.saveservice.latency=true #jmeter.save.saveservice.connect_time=false #jmeter.save.saveservice.samplerData=false #jmeter.save.saveservice.responseHeaders=false #jmeter.save.saveservice.requestHeaders=false #jmeter.save.saveservice.encoding=false #jmeter.save.saveservice.bytes=true #jmeter.save.saveservice.url=false #jmeter.save.saveservice.filename=false jmeter.save.saveservice.hostname=false jmeter.save.saveservice.thread_counts=true #jmeter.save.saveservice.sample_count=false #jmeter.save.saveservice.idle_time=false # Timestamp format - this only affects CSV output files # legitimate values: none, ms, or a format suitable for SimpleDateFormat #jmeter.save.saveservice.timestamp_format=ms #jmeter.save.saveservice.timestamp_format=yyyy/MM/dd HH:mm:ss.SSS # For use with Comma-separated value (CSV) files or other formats # where the fields' values are separated by specified delimiters. 
# Default:
#jmeter.save.saveservice.default_delimiter=,
# For TAB, since JMeter 2.3 one can use:
#jmeter.save.saveservice.default_delimiter=\t

# Only applies to CSV format files:
jmeter.save.saveservice.print_field_names=false

# Optional list of JMeter variable names whose values are to be saved in the result data files.
# Use commas to separate the names. For example:
#sample_variables=SESSION_ID,REFERENCE
# N.B. The current implementation saves the values in XML as attributes,
# so the names must be valid XML names.
# Versions of JMeter after 2.3.2 send the variables to all servers
# to ensure that the correct data is available at the client.

# Optional xml processing instruction for line 2 of the file:
#jmeter.save.saveservice.xml_pi=

# Prefix used to identify filenames that are relative to the current base
#jmeter.save.saveservice.base_prefix=~/

# AutoFlush on each line written to XML or CSV output
# Setting this to true will result in less test result data loss in the event of a crash,
# but with an impact on performance, particularly for intensive tests (low or no pauses)
# Since JMeter 2.10, this is false by default
#jmeter.save.saveservice.autoflush=false

#---------------------------------------------------------------------------
# Settings that affect SampleResults
#---------------------------------------------------------------------------

# Save the start time stamp instead of the end
# This also affects the timestamp stored in result files
sampleresult.timestamp.start=true

# Whether to use System.nanoTime() - otherwise only use System.currentTimeMillis()
#sampleresult.useNanoTime=true

# Use a background thread to calculate the nanoTime offset
# Set this to <= 0 to disable the background thread
#sampleresult.nanoThreadSleep=5000

#---------------------------------------------------------------------------
# Upgrade property
#---------------------------------------------------------------------------

# File that holds a record of name changes for backward
compatibility issues upgrade_properties=/bin/upgrade.properties #--------------------------------------------------------------------------- # JMeter Test Script recorder configuration # # N.B. The element was originally called the Proxy recorder, which is why the # properties have the prefix "proxy". #--------------------------------------------------------------------------- # If the recorder detects a gap of at least 5s (default) between HTTP requests, # it assumes that the user has clicked a new URL #proxy.pause=5000 # Add numeric prefix to Sampler names (default true) #proxy.number.requests=true # List of URL patterns that will be added to URL Patterns to exclude # Separate multiple lines with ; #proxy.excludes.suggested=.*\\.(bmp|css|js|gif|ico|jpe?g|png|swf|woff) # Change the default HTTP Sampler (currently HttpClient4) # Java: #jmeter.httpsampler=HTTPSampler #or #jmeter.httpsampler=Java # # Apache HTTPClient: #jmeter.httpsampler=HTTPSampler2 #or #jmeter.httpsampler=HttpClient3.1 # # HttpClient4.x #jmeter.httpsampler=HttpClient4 # By default JMeter tries to be more lenient with RFC2616 redirects and allows # relative paths. 
# If you want to test strict conformance, set this value to true # When the property is true, JMeter follows http://tools.ietf.org/html/rfc3986#section-5.2 #jmeter.httpclient.strict_rfc2616=false # Default content-type include filter to use #proxy.content_type_include=text/html|text/plain|text/xml # Default content-type exclude filter to use #proxy.content_type_exclude=image/.*|text/css|application/.* # Default headers to remove from Header Manager elements # (Cookie and Authorization are always removed) #proxy.headers.remove=If-Modified-Since,If-None-Match,Host # Binary content-type handling # These content-types will be handled by saving the request in a file: #proxy.binary.types=application/x-amf,application/x-java-serialized-object # The files will be saved in this directory: #proxy.binary.directory=user.dir # The files will be created with this file filesuffix: #proxy.binary.filesuffix=.binary #--------------------------------------------------------------------------- # Test Script Recorder certificate configuration #--------------------------------------------------------------------------- #proxy.cert.directory= #proxy.cert.file=proxyserver.jks #proxy.cert.type=JKS #proxy.cert.keystorepass=password #proxy.cert.keypassword=password #proxy.cert.factory=SunX509 # define this property if you wish to use your own keystore #proxy.cert.alias= # The default validity for certificates created by JMeter #proxy.cert.validity=7 # Use dynamic key generation (if supported by JMeter/JVM) # If false, will revert to using a single key with no certificate #proxy.cert.dynamic_keys=true #--------------------------------------------------------------------------- # Test Script Recorder miscellaneous configuration #--------------------------------------------------------------------------- # Whether to attempt disabling of samples that resulted from redirects # where the generated samples use auto-redirection #proxy.redirect.disabling=true # SSL configuration 
#proxy.ssl.protocol=TLS #--------------------------------------------------------------------------- # JMeter Proxy configuration #--------------------------------------------------------------------------- # use command-line flags for user-name and password #http.proxyDomain=NTLM domain, if required by HTTPClient sampler #--------------------------------------------------------------------------- # HTTPSampleResponse Parser configuration #--------------------------------------------------------------------------- # Space-separated list of parser groups HTTPResponse.parsers=htmlParser wmlParser # for each parser, there should be a parser.types and a parser.className property #--------------------------------------------------------------------------- # HTML Parser configuration #--------------------------------------------------------------------------- # Define the HTML parser to be used. # Default parser: # This new parser (since 2.10) should perform better than all others # see https://issues.apache.org/bugzilla/show_bug.cgi?id=55632 #htmlParser.className=org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser # Other parsers: # Default parser before 2.10 #htmlParser.className=org.apache.jmeter.protocol.http.parser.HtmlParserHTMLParser #htmlParser.className=org.apache.jmeter.protocol.http.parser.JTidyHTMLParser # Note that Regexp extractor may detect references that have been commented out. # In many cases it will work OK, but you should be aware that it may generate # additional references. 
#htmlParser.className=org.apache.jmeter.protocol.http.parser.RegexpHTMLParser
# This parser is based on JSoup; it should be the most accurate, but is less performant
# than LagartoBasedHtmlParser
#htmlParser.className=org.apache.jmeter.protocol.http.parser.JsoupBasedHtmlParser

# Used by HTTPSamplerBase to associate htmlParser with the content types below
htmlParser.types=text/html application/xhtml+xml application/xml text/xml

#---------------------------------------------------------------------------
# WML Parser configuration
#---------------------------------------------------------------------------

wmlParser.className=org.apache.jmeter.protocol.http.parser.RegexpHTMLParser

# Used by HTTPSamplerBase to associate wmlParser with the content types below
wmlParser.types=text/vnd.wap.wml

#---------------------------------------------------------------------------
# Remote batching configuration
#---------------------------------------------------------------------------
# How Sample sender implementations are configured:
# - true (default) means the client configuration will be used
# - false means the server configuration will be used
#sample_sender_client_configured=true

# Remote batching support
# Since JMeter 2.9, the default is MODE_STRIPPED_BATCH, which returns samples in
# batch mode (every 100 samples or every minute by default)
# Note also that MODE_STRIPPED_BATCH strips response data from the SampleResult, so if you need it, change to
# another mode
# Hold retains samples until the end of the test (may need lots of memory)
# Batch returns samples in batches
# Statistical returns sample summary statistics
# hold_samples was originally defined as a separate property,
# but can now also be defined using mode=Hold
# mode can also be the class name of an implementation of org.apache.jmeter.samplers.SampleSender
#mode=Standard
#mode=Batch
#mode=Hold
#mode=Statistical
# Set to true to key statistical samples on threadName rather than threadGroup
#key_on_threadname=false
#mode=Stripped
#mode=StrippedBatch #mode=org.example.load.MySampleSender # #num_sample_threshold=100 # Value is in milliseconds #time_threshold=60000 # # Asynchronous sender; uses a queue and background worker process to return the samples #mode=Asynch # default queue size #asynch.batch.queue.size=100 # Same as Asynch but strips response data from SampleResult #mode=StrippedAsynch # # DiskStore: as for Hold mode, but serialises the samples to disk, rather than saving in memory #mode=DiskStore # Same as DiskStore but strips response data from SampleResult #mode=StrippedDiskStore # Note: the mode is currently resolved on the client; # other properties (e.g. time_threshold) are resolved on the server. # To set the Monitor Health Visualiser buffer size, enter the desired value # monitor.buffer.size=800 #--------------------------------------------------------------------------- # JDBC Request configuration #--------------------------------------------------------------------------- # Max number of PreparedStatements per Connection for PreparedStatement cache #jdbcsampler.maxopenpreparedstatements=100 # String used to indicate a null value #jdbcsampler.nullmarker=]NULL[ #--------------------------------------------------------------------------- # OS Process Sampler configuration #--------------------------------------------------------------------------- # Polling to see if process has finished its work, used when a timeout is configured on sampler #os_sampler.poll_for_timeout=100 #--------------------------------------------------------------------------- # TCP Sampler configuration #--------------------------------------------------------------------------- # The default handler class #tcp.handler=TCPClientImpl # # eolByte = byte value for end of line # set this to a value outside the range -128 to +127 to skip eol checking #tcp.eolByte=1000 # # TCP Charset, used by org.apache.jmeter.protocol.tcp.sampler.TCPClientImpl # default to Platform defaults charset as returned by 
# Charset.defaultCharset().name()
#tcp.charset=
#
# status.prefix and suffix = strings that enclose the status response code
#tcp.status.prefix=Status=
#tcp.status.suffix=.
#
# status.properties = property file to convert codes to messages
#tcp.status.properties=mytestfiles/tcpstatus.properties

# The length prefix used by the LengthPrefixedBinaryTCPClientImpl implementation
# defaults to 2 bytes.
#tcp.binarylength.prefix.length=2

#---------------------------------------------------------------------------
# Summariser - Generate Summary Results - configuration (mainly applies to non-GUI mode)
#---------------------------------------------------------------------------
#
# Define the following property to automatically start a summariser with that name
# (applies to non-GUI mode only)
#summariser.name=summary
#
# interval between summaries (in seconds); the JMeter default is 30 seconds
summariser.interval=15
#
# Write messages to log file
#summariser.log=true
#
# Write messages to System.out
#summariser.out=true

#---------------------------------------------------------------------------
# Aggregate Report and Aggregate Graph - configuration
#---------------------------------------------------------------------------
#
# Percentiles to display in reports
# Can be a float value between 0 and 100
# First percentile to display, defaults to 90%
#aggregate_rpt_pct1=90
# Second percentile to display, defaults to 95%
#aggregate_rpt_pct2=95
# Third percentile to display, defaults to 99%
#aggregate_rpt_pct3=99

#---------------------------------------------------------------------------
# Backend metrics - configuration
#---------------------------------------------------------------------------
#
# Backend metrics sliding window size for Percentiles, Min and Max
#backend_metrics_window=100

#---------------------------------------------------------------------------
# BeanShell configuration
#---------------------------------------------------------------------------

# BeanShell
# Server properties
#
# Define the port number as non-zero to start the http server on that port
#beanshell.server.port=9000
# The telnet server will be started on the next port
#
# Define the server initialisation file
beanshell.server.file=../extras/startup.bsh
#
# Define a file to be processed at startup
# This is processed using its own interpreter.
#beanshell.init.file=
#
# Define the initialisation files for the BeanShell Sampler, Function and other BeanShell elements
# N.B. BeanShell test elements do not share interpreters.
#      Each element in each thread has its own interpreter.
#      This is retained between samples.
#beanshell.sampler.init=BeanShellSampler.bshrc
#beanshell.function.init=BeanShellFunction.bshrc
#beanshell.assertion.init=BeanShellAssertion.bshrc
#beanshell.listener.init=etc
#beanshell.postprocessor.init=etc
#beanshell.preprocessor.init=etc
#beanshell.timer.init=etc

# The file BeanShellListeners.bshrc contains sample definitions
# of Test and Thread Listeners.

#---------------------------------------------------------------------------
# MailerModel configuration
#---------------------------------------------------------------------------

# Number of successful samples before a message is sent
#mailer.successlimit=2
#
# Number of failed samples before a message is sent
#mailer.failurelimit=2

#---------------------------------------------------------------------------
# CSVRead configuration
#---------------------------------------------------------------------------

# CSVRead delimiter setting (default ",")
# Make sure that there are no trailing spaces or tabs after the delimiter
# characters, or these will be included in the list of valid delimiters
#csvread.delimiter=,
#csvread.delimiter=;
#csvread.delimiter=!
#csvread.delimiter=~
# The following line has a tab after the =
#csvread.delimiter=	

#---------------------------------------------------------------------------
# __time() function configuration
#
# The properties below can be used to redefine the default formats
#---------------------------------------------------------------------------
#time.YMD=yyyyMMdd
#time.HMS=HHmmss
#time.YMDHMS=yyyyMMdd-HHmmss
#time.USER1=
#time.USER2=

#---------------------------------------------------------------------------
# CSV DataSet configuration
#---------------------------------------------------------------------------
# String to return at EOF (if recycle not used)
#csvdataset.eofstring=

#---------------------------------------------------------------------------
# LDAP Sampler configuration
#---------------------------------------------------------------------------
# Maximum number of search results returned by a search that will be sorted
# to guarantee a stable ordering (if more results than this limit are returned
# then no sorting is done). Set to 0 to turn off all sorting, in which case
# "Equals" response assertions will be very likely to fail against search results.
#
#ldapsampler.max_sorted_results=1000

# Number of characters to log for each of three sections (the starting matching section, the diff section,
# and the ending matching section; not all sections will appear for all diffs) of the diff display when an Equals
# assertion fails. So a value of 100 means a maximum of 300 characters of diff text will be displayed
# (+ a number of extra characters like "..." and "[[["/"]]]" which are used to decorate it).
#assertion.equals_section_diff_len=100
# text written out to the log to signify the start/end of the diff delta
#assertion.equals_diff_delta_start=[[[
#assertion.equals_diff_delta_end=]]]

#---------------------------------------------------------------------------
# Miscellaneous configuration
#---------------------------------------------------------------------------

# If defined, then start the mirror server on the port
#mirror.server.port=8081

# ORO PatternCacheLRU size
#oro.patterncache.size=1000

# TestBeanGui
#
#propertyEditorSearchPath=null

# Turn expert mode on/off: expert mode will show expert-mode beans and properties
#jmeter.expertMode=true

# Maximum redirects to follow in a single sequence (default 5)
#httpsampler.max_redirects=5
# Maximum frame/iframe nesting depth (default 5)
#httpsampler.max_frame_depth=5
# Maximum await termination timeout (secs) when concurrently downloading embedded resources (default 60)
#httpsampler.await_termination_timeout=60

# Revert to BUG 51939 behaviour (no separate container for embedded resources) by setting the following false:
#httpsampler.separate.container=true

# If downloading embedded resources fails due to missing resources or other reasons, and this property is true,
# the parent sample will not be marked as failed
httpsampler.ignore_failed_embedded_resources=true

# The encoding to be used if none is provided (default ISO-8859-1)
#sampleresult.default.encoding=ISO-8859-1

# Network response size calculation method
# Use real size: number of bytes for the response body returned by the webserver
# (i.e. the network bytes received for the response)
# If set to false, the (uncompressed) response data size will be used (the default before 2.5)
# Include headers: add the headers size to the real size
#sampleresult.getbytes.body_real_size=true
#sampleresult.getbytes.headers_size=true

# CookieManager behaviour - should cookies with null/empty values be deleted?
# Default is true.
# Use false to revert to original behaviour
#CookieManager.delete_null_cookies=true
# CookieManager behaviour - should variable cookies be allowed?
# Default is true. Use false to revert to original behaviour
#CookieManager.allow_variable_cookies=true
# CookieManager behaviour - should Cookies be stored as variables?
# Default is false
#CookieManager.save.cookies=false
# CookieManager behaviour - prefix to add to cookie name before storing it as a variable
# Default is COOKIE_; to remove the prefix, define it as one or more spaces
#CookieManager.name.prefix=
# CookieManager behaviour - check received cookies are valid before storing them?
# Default is true. Use false to revert to previous behaviour
#CookieManager.check.cookies=true
# (2.0.3) JMeterThread behaviour has been changed to set the started flag before
# the controllers are initialised. This is so controllers can access variables earlier.
# In case this causes problems, the previous behaviour can be restored by uncommenting
# the following line.
#jmeterthread.startearlier=false
# (2.2.1) JMeterThread behaviour has changed so that PostProcessors are run in forward order
# (as they appear in the test plan) rather than reverse order as previously.
# Uncomment the following line to revert to the original behaviour
#jmeterthread.reversePostProcessors=true
# (2.2) StandardJMeterEngine behaviour has been changed to notify the listeners after
# the running version is enabled. This is so they can access variables.
# In case this causes problems, the previous behaviour can be restored by uncommenting
# the following line.
#jmeterengine.startlistenerslater=false
# Number of milliseconds to wait for a thread to stop
#jmeterengine.threadstop.wait=5000
# Whether to invoke System.exit(0) in server exit code after stopping RMI
#jmeterengine.remote.system.exit=false
# Whether to call System.exit(1) on failure to stop threads in non-GUI mode.
# This only takes effect if the test was explicitly requested to stop.
# If this is disabled, it may be necessary to kill the JVM externally
#jmeterengine.stopfail.system.exit=true
# Whether to force call System.exit(0) at end of test in non-GUI mode, even if
# there were no failures and the test was not explicitly asked to stop.
# Without this, the JVM may never exit if there are other threads spawned by
# the test which never exit.
#jmeterengine.force.system.exit=false
# How long to pause (in ms) in the daemon thread before reporting that the JVM has failed to exit.
# If the value is <= 0, JMeter does not start the daemon thread
#jmeter.exit.check.pause=2000
# If running non-GUI, then JMeter listens on the following port for a shutdown message.
# To disable, set the port to 1000 or less.
#jmeterengine.nongui.port=4445
#
# If the initial port is busy, keep trying until this port is reached
# (to disable searching, set the value less than or equal to the .port property)
#jmeterengine.nongui.maxport=4455
# How often to check for shutdown during ramp-up (milliseconds)
#jmeterthread.rampup.granularity=1000
# Should JMeter expand the tree when loading a test plan?
# Default value is false since JMeter 2.7
#onload.expandtree=false
# JSyntaxTextArea configuration
#jsyntaxtextarea.wrapstyleword=true
#jsyntaxtextarea.linewrap=true
#jsyntaxtextarea.codefolding=true
# Set 0 to disable undo feature in JSyntaxTextArea
#jsyntaxtextarea.maxundos=50
# Set this to false to disable the use of JSyntaxTextArea for the Console Logger panel
#loggerpanel.usejsyntaxtext=true
# Maximum size of HTML page that can be displayed; default=200 * 1024
# Set to 0 to disable the size check and display the whole response
#view.results.tree.max_size=204800
# Order of Renderers in View Results Tree
# Note full class names should be used for non jmeter core renderers
# For JMeter core renderers, class names start with .
# and are automatically prefixed with org.apache.jmeter.visualizers
view.results.tree.renderers_order=.RenderAsText,.RenderAsRegexp,.RenderAsCssJQuery,.RenderAsXPath,.RenderAsHTML,.RenderAsHTMLWithEmbedded,.RenderAsDocument,.RenderAsJSON,.RenderAsXML
# Maximum size of Document that can be parsed by Tika engine; default=10 * 1024 * 1024 (10MB)
# Set to 0 to disable the size check
#document.max_size=0
# JMS options
# Enable the following property to stop the JMS Point-to-Point Sampler from using
# the properties java.naming.security.[principal|credentials] when creating the queue connection
#JMSSampler.useSecurity.properties=false
# Set the following value to true in order to skip the delete confirmation dialogue
#confirm.delete.skip=false
# Used by Webservice Sampler (SOAP)
# Size of Document Cache
#soap.document_cache=50
# Used by JSR223 elements
# Size of compiled scripts cache
#jsr223.compiled_scripts_cache_size=100
#---------------------------------------------------------------------------
# Classpath configuration
#---------------------------------------------------------------------------
# List of paths (separated by ;) to search for additional JMeter plugin classes,
# for example new GUI elements and samplers.
# A path item can either be a jar file or a directory.
# Any jar file in such a directory will be automatically included;
# jar files in sub directories are ignored.
# The given value is in addition to any jars found in the lib/ext directory.
# Do not use this for utility or plugin dependency jars.
#search_paths=/app1/lib;/app2/lib
# List of paths that JMeter will search for utility and plugin dependency classes.
# Use your platform path separator to separate multiple paths.
# A path item can either be a jar file or a directory.
# Any jar file in such a directory will be automatically included;
# jar files in sub directories are ignored.
# The given value is in addition to any jars found in the lib directory.
# All entries will be added to the class path of the system class loader
# and also to the path of the JMeter internal loader.
# Paths with spaces may cause problems for the JVM
#user.classpath=../classes;../lib;../app1/jar1.jar;../app2/jar2.jar
# List of paths (separated by ;) that JMeter will search for utility
# and plugin dependency classes.
# A path item can either be a jar file or a directory.
# Any jar file in such a directory will be automatically included;
# jar files in sub directories are ignored.
# The given value is in addition to any jars found in the lib directory
# or given by the user.classpath property.
# All entries will be added to the path of the JMeter internal loader only.
# For plugin dependencies, plugin_dependency_paths should be preferred over
# user.classpath.
#plugin_dependency_paths=../dependencies/lib;../app1/jar1.jar;../app2/jar2.jar
# Classpath finder
# ================
# The classpath finder currently needs to load every single JMeter class to find
# the classes it needs.
# For non-GUI mode, it's only necessary to scan for Function classes, but all classes
# are still loaded.
# All current Function classes include ".functions." in their name,
# and none include ".gui." in the name, so the number of unwanted classes loaded can be
# reduced by checking for these. However, if a valid function class name does not match
# these restrictions, it will not be loaded. If problems are encountered, then comment
# or change the following properties:
classfinder.functions.contain=.functions.
classfinder.functions.notContain=.gui.
#---------------------------------------------------------------------------
# Additional property files to load
#---------------------------------------------------------------------------
# Should JMeter automatically load additional JMeter properties?
# File name to look for (comment to disable)
user.properties=user.properties
# Should JMeter automatically load additional system properties?
# File name to look for (comment to disable)
system.properties=system.properties
# Comma separated list of files that contain references to templates and their description
# Path must be relative to the jmeter root folder
#template.files=/bin/templates/templates.xml


================================================
FILE: system.properties
================================================
# Sample system.properties file
#
## Licensed to the Apache Software Foundation (ASF) under one or more
## contributor license agreements. See the NOTICE file distributed with
## this work for additional information regarding copyright ownership.
## The ASF licenses this file to You under the Apache License, Version 2.0
## (the "License"); you may not use this file except in compliance with
## the License. You may obtain a copy of the License at
##
##   http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.

# Commons Logging properties
# Used by HttpComponents 4.x, see:
# http://hc.apache.org/httpcomponents-client-4.3.x/logging.html
#
# By default, Commons Logging is configured by JMeter to use the same logging system
# as the main JMeter code; to configure it please see jmeter.properties.
#
# Uncomment to enable debugging of Commons Logging setup; may be useful if
# the implementation cannot be instantiated:
#org.apache.commons.logging.diagnostics.dest=STDERR
#
# Uncomment to enable Commons Logging to use standard output
#org.apache.commons.logging.Log=org.apache.commons.logging.impl.SimpleLog
#org.apache.commons.logging.simplelog.showdatetime=true
#
# Uncomment the following two lines to generate basic debug logging for HC4.x
#org.apache.commons.logging.simplelog.log.org.apache.http=DEBUG
#org.apache.commons.logging.simplelog.log.org.apache.http.wire=ERROR

# Java networking-related properties
#
# For details of Oracle Java network properties, see for example:
# http://download.oracle.com/javase/1.5.0/docs/guide/net/properties.html
#
#java.net.preferIPv4Stack=false
#java.net.preferIPv6Addresses=false
#networkaddress.cache.ttl=-1
#networkaddress.cache.negative.ttl=10


# SSL properties (moved from jmeter.properties)
#
# See http://download.oracle.com/javase/1.5.0/docs/guide/security/jsse/JSSERefGuide.html#Customization
# for information on the javax.ssl system properties

# Truststore properties (trusted certificates)
#javax.net.ssl.trustStore
#javax.net.ssl.trustStorePassword
#javax.net.ssl.trustStoreProvider
#javax.net.ssl.trustStoreType [default = KeyStore.getDefaultType()]

# Keystore properties (client certificates)
# Location
#javax.net.ssl.keyStore
#
# The password to your keystore
#javax.net.ssl.keyStorePassword
#
#javax.net.ssl.keyStoreProvider
#javax.net.ssl.keyStoreType [default = KeyStore.getDefaultType()]

# SSL debugging:
# See http://download.oracle.com/javase/1.5.0/docs/guide/security/jsse/JSSERefGuide.html#Debug
#
# javax.net.debug=help - generates the list below:
#   all  turn on all debugging
#   ssl  turn on ssl debugging
#
# The following can be used with ssl:
#   record        enable per-record tracing
#   handshake     print each handshake message
#   keygen        print key generation data
#   session       print session activity
#   defaultctx    print default SSL initialization
#   sslctx        print SSLContext tracing
#   sessioncache  print session cache tracing
#   keymanager    print key manager tracing
#   trustmanager  print trust manager tracing
#
# handshake debugging can be widened with:
#   data     hex dump of each handshake message
#   verbose  verbose handshake message printing
#
# record debugging can be widened with:
#   plaintext  hex dump of record plaintext
#
# Examples:
#javax.net.debug=ssl
#javax.net.debug=sslctx,session,sessioncache


# We enable the following property to allow headers such as "Host" to be passed through.
# See http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6996110
sun.net.http.allowRestrictedHeaders=true

# Uncomment for Kerberos authentication and edit the 2 config files to match your domains
# With the following configuration krb5.conf and jaas.conf must be located in the bin folder
# You can modify these file paths to use an absolute location
#java.security.krb5.conf=krb5.conf
#java.security.auth.login.config=jaas.conf

# Location of the keytool application
# This property can be defined if JMeter cannot find the application automatically
# It should not be necessary in most cases.
#keytool.directory=/bin


================================================
FILE: verify.sh
================================================
#!/bin/bash
#
# jmeter-ec2 - Install Script (Runs on remote ec2 server)
#

function install_jmeter_plugins() {
    echo "Installing plugins..."
    wget -q -O ~/JMeterPlugins-Extras.jar https://s3.amazonaws.com/jmeter-ec2/JMeterPlugins-Extras.jar
    wget -q -O ~/JMeterPlugins-Standard.jar https://s3.amazonaws.com/jmeter-ec2/JMeterPlugins-Standard.jar
    mv ~/JMeterPlugins*.jar ~/$JMETER_VERSION/lib/ext/
}

function install_java() {
    echo "Updating apt-get..."
    sudo apt-get -qqy update
    echo "Installing java..."
    sudo DEBIAN_FRONTEND=noninteractive apt-get -qqy install openjdk-7-jre
    echo "Java installed"
}

function install_jmeter() {
    # ------------------------------------------------
    # Decide where to download jmeter from
    #
    # Order of preference:
    #   1. S3, if we have a copy of the file
    #   2. Mirror, if the desired version is current
    #   3. Archive, as a backup
    # ------------------------------------------------
    if [ $(curl -sI https://s3.amazonaws.com/jmeter-ec2/$JMETER_VERSION.tgz | grep -c "403 Forbidden") -eq "0" ] ; then
        # We have a copy on S3 so use that
        echo "Downloading jmeter from S3..."
        wget -q -O ~/$JMETER_VERSION.tgz https://s3.amazonaws.com/jmeter-ec2/$JMETER_VERSION.tgz
    elif [ $(echo $(curl -s 'http://www.apache.org/dist/jmeter/binaries/') | grep -c "$JMETER_VERSION") -gt "0" ] ; then
        # Nothing found on S3, but this is the current version of jmeter,
        # so use the preferred mirror to download
        echo "Downloading jmeter from a mirror..."
        wget -q -O ~/$JMETER_VERSION.tgz "http://www.apache.org/dyn/closer.cgi?filename=jmeter/binaries/$JMETER_VERSION.tgz&action=download"
    else
        # Fall back to the archive server
        echo "Downloading jmeter from Apache Archive..."
        wget -q -O ~/$JMETER_VERSION.tgz http://archive.apache.org/dist/jmeter/binaries/$JMETER_VERSION.tgz
    fi

    # Untar downloaded file
    echo "Unpacking jmeter..."
    tar -xf ~/$JMETER_VERSION.tgz

    # Install jmeter-plugins [http://code.google.com/p/jmeter-plugins/]
    install_jmeter_plugins
    echo "Jmeter installed"
}

JMETER_VERSION=$1
cd ~

# Java
if java -version 2>&1 >/dev/null | grep -q "java version" ; then
    echo "Java is already installed"
else
    install_java
fi

# JMeter
if [ ! -d "$JMETER_VERSION" ] ; then
    # install jmeter
    install_jmeter
else
    echo "JMeter is already installed"
fi

# Done
echo "software installed"
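A note on the Java check in verify.sh: `java -version` writes to stderr, not stdout, so the redirection order `2>&1 >/dev/null` is deliberate. It first duplicates stderr onto the current stdout (the pipe), then sends stdout to /dev/null, so only stderr reaches `grep`. The following minimal sketch (using a hypothetical `emit` function, not part of the repository) demonstrates the same ordering:

```shell
# emit is a stand-in for `java -version`: one line to stdout, one to stderr.
emit() { echo "to stdout"; echo "to stderr" >&2; }

# 2>&1 duplicates fd2 onto the pipe (where fd1 currently points),
# then >/dev/null re-points fd1 away - so grep only sees the stderr line.
emit 2>&1 >/dev/null | grep -c "stderr"
# prints 1
```

Reversing the order (`>/dev/null 2>&1`) would discard both streams and the `grep -q "java version"` test in the script would always fail, causing an unnecessary reinstall.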