Repository: concurrencylabs/aws-pricing-tools
Branch: master
Commit: f03a296e0b69
Files: 41
Total size: 260.4 KB
Directory structure:
gitextract_2lg3vbht/
├── .gitignore
├── LICENSE.md
├── MANIFEST.in
├── README.md
├── awspricecalculator/
│ ├── __init__.py
│ ├── awslambda/
│ │ ├── __init__.py
│ │ └── pricing.py
│ ├── common/
│ │ ├── __init__.py
│ │ ├── consts.py
│ │ ├── errors.py
│ │ ├── models.py
│ │ └── phelper.py
│ ├── datatransfer/
│ │ ├── __init__.py
│ │ └── pricing.py
│ ├── dynamodb/
│ │ ├── __init__.py
│ │ └── pricing.py
│ ├── ec2/
│ │ ├── __init__.py
│ │ └── pricing.py
│ ├── emr/
│ │ ├── __init__.py
│ │ └── pricing.py
│ ├── kinesis/
│ │ ├── __init__.py
│ │ └── pricing.py
│ ├── rds/
│ │ ├── __init__.py
│ │ └── pricing.py
│ └── redshift/
│ ├── __init__.py
│ └── pricing.py
├── cloudformation/
│ ├── function-plus-schedule.json
│ └── lambda-metric-filters.yml
├── functions/
│ └── calculate-near-realtime.py
├── install.sh
├── requirements-dev.txt
├── requirements.txt
├── scripts/
│ ├── README.md
│ ├── emr-pricing.py
│ ├── get-latest-index.py
│ ├── lambda-optimization.py
│ └── redshift-pricing.py
├── serverless.env.yml
├── serverless.yml
├── setup.py
└── test/
└── events/
└── constant-tag.json
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
# Don't include anything installed into the virtualenv by pip
.Python
bin
lib
include
pip-selfcheck.json
# We don't need compiled Python artifacts in the repo
__pycache__
*egg-info
*.pyc
*.pyo
.idea/*
.serverless/*
awspricecalculator/data/*
# ignore vendored files
vendored
#temporarily out
awspricecalculator/s3*
awspricecalculator/common/utils.py
scripts/ec2-pricing.py
scripts/rds-pricing.py
scripts/s3-pricing.py
scripts/lambda-pricing.py
scripts/dynamodb-pricing.py
scripts/kinesis-pricing.py
scripts/context.py
scripts/propagate-lambda-code.py
================================================
FILE: LICENSE.md
================================================
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc.
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for software and other kinds of works.
The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.
Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based on the Program.
To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work.
A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.
The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified it, and giving a relevant date.
b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices".
c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:
a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.
"Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or authors of the material; or
e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.
All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.
A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.
Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
================================================
FILE: MANIFEST.in
================================================
recursive-include awspricecalculator/data *.csv *.json
================================================
FILE: README.md
================================================
## Concurrency Labs - AWS Price Calculator tool
This repository uses the AWS Price List API to implement price calculation utilities.
Supported services:
* EC2
* ELB
* EBS
* RDS
* Lambda
* Dynamo DB
* Kinesis
Visit the following URLs for more details:
https://www.concurrencylabs.com/blog/aws-pricing-lambda-realtime-calculation-function/
https://www.concurrencylabs.com/blog/aws-lambda-cost-optimization-tools/
https://www.concurrencylabs.com/blog/calculate-near-realtime-pricing-serverless-applications/
The code is structured in the following way:
**awspricecalculator**. The modules in this package search data within the AWS Price List API index files.
They take price dimension parameters as inputs and return results in JSON format. This package
is called by Lambda functions or other Python scripts.
**functions**. This is where our Lambda functions live. Functions are packaged using the Serverless framework.
**scripts**. Here are some Python scripts to help with management and price optimizations. See README.md in the scripts
folder for more details.
## Available Lambda functions:
### calculate-near-realtime
This function is called by a schedule configured using CloudWatch Events.
The function receives a JSON object configured in the schedule. The JSON object supports the following format:
Tag-based: ```{"tag":{"key":"mykey","value":"myvalue"}}```.
The function finds resources with the corresponding tag, gets current usage using CloudWatch metrics,
projects usage into a longer time period (a month), calls pricecalculator to calculate price
and puts results in CloudWatch metrics under the namespace ```ConcurrencyLabs/Pricing/NearRealTimeForecast```.
Supported services are EC2, EBS, ELB, RDS and Lambda. Not all price dimensions are supported for all services, though.
You can configure as many CloudWatch Events as you want, each one with a different tag.
**Rules:**
* The function only considers for price calculation those resources that are tagged. For example, if there is an untagged ELB
with tagged EC2 instances, the function will only consider the EC2 instances for the calculation.
If there is a tagged ELB with untagged EC2 instances, the function will only calculate price
for the ELB.
* The behavior described above is intended for simplicity, otherwise the function would have to
cover a number of combinations that might or might not be suitable to all users of the function.
* To keep it simple, if you want a resource to be included in the calculation, then tag it. Otherwise
leave it untagged.
**Limitations**
The function doesn't support cost estimations for the following:
* EC2 data transfer for instances not registered to an ELB
* EC2 Reserved Instances
* EBS Snapshots
* RDS data trasfer
* Lambda data transfer
* Kinesis PUT Payload Units are partially calculated based on CloudWatch metrics (there's no 100% accuracy for this price dimension)
* Dynamo DB storage
* Dynamo DB data transfer
## Install - using CloudFormation (recommended)
I created a CloudFormation template that deploys the Lambda function, as well as the CloudWatch Events
schedule. All you have to do is specify the tag key and value you want to calculate pricing for.
For example: TagKey:stack, TagValue:mywebapp
Click here to get started:
### Metrics
The function publishes a metric named `EstimatedCharges` to CloudWatch, under namespace `ConcurrencyLabs/Pricing/NearRealTimeForecast` and it uses
the following dimensions:
* Currency: USD
* ForecastPeriod: monthly
* ServiceName: ec2, rds, lambda, kinesis, dynamodb
* Tag: mykey=myvalue
### Updating to the latest version using CloudFormation
This function will be updated regularly in order to fix bugs, update AWS Price List data and also to add more functionality.
This means you will likely have to update the function at some point. I recommend installing
the function using the CloudFormation template, since it will simplify the update process.
To update the function, just go to the CloudFormation console, select the stack you've created
and click on Actions -> Update Stack:

Then select "Specify an Amazon S3 template URL" and enter the following value:
```
http://concurrencylabs-cfn-templates.s3.amazonaws.com/lambda-near-realtime-pricing/function-plus-schedule.json
```

And that's it. CloudFormation will update the function with the latest code.
## Install Locally (if you want to modify it) - Manual steps
If you only want to install the Lambda function, you don't need to follow the steps below, just follow
the instructions in the "Install - Using CloudFormation" section above.
If you want to setup a dev environment, run a local copy, make some modifications and then install in your AWS account, then keep reading...
### Clone the repo
```
git clone https://github.com/concurrencylabs/aws-pricing-tools aws-pricing-tools
```
### (Optional, but recommended) Create an isolated Python environment using virtualenv
It's always a good practice to create an isolated environment so we have greater control over
the dependencies in our project, including the Python runtime.
If you don't have virtualenv installed, run:
```
pip install virtualenv
```
For more details on virtualenv, click here.
Now, create a Python 2.7 virtual environment in the location where you cloned the repo into. If you want to name your project
aws-pricing-tools, then run (one level up from the dir, use the same local name you used when you cloned
the repo):
```
virtualenv aws-pricing-tools -p python2.7
```
After your environment is created, it's time to activate it. Go to the recently created
folder of your project (i.e. aws-pricing-tools) and from there run:
```
source bin/activate
```
### Install Requirements
From your project root folder, run:
```
./install.sh
```
This will install the following dependencies to the ```vendored``` directory:
* **tinydb** - The code in this repo queries the Price List API csv records using the tinydb library.
* **numpy** - Used for statistics in the lambda optimization script
... and the following dependencies in your default site-packages location:
* **python-local-lambda** - lets me test my Lambda functions locally using test events in my workstation.
* **boto3** - AWS Python SDK to call AWS APIs.
### Install the Serverless Framework

Since the pricing tool runs on AWS Lambda, I decided to use the Serverless Framework.
This framework enormously simplifies the development, configuration and deployment of Function as a Service (a.k.a. FaaS, or "serverless")
code into AWS Lambda.
You should follow the instructions described here,
which can be summarized in the following steps:
1. Make sure you have Node.js installed in your workstation.
```
node --version
```
2. Install the Serverless Framework
```
npm install -g serverless
```
3. Confirm Serverless has been installed
```
serverless --version
```
The steps in this post were tested using version ```1.6.1```
4. Serverless needs access to your AWS account, so it can create and update AWS Lambda
functions, among other operations. Therefore, you have to make sure Serverless can access
a set of IAM User credentials. Follow these instructions.
In the long term, you should make sure these credentials are limited to only the API operations
Serverless requires - avoid Administrator access, which is a bad security and operational practice.
5. Checkout the code from this repo into your virtualenv folder.
### Set environment variables
```
export AWS_DEFAULT_PROFILE=
export AWS_DEFAULT_REGION=
```
### How to test the function locally
**Download the latest AWS Price List API Index file**
The code needs a local copy of the the AWS Price List API index file.
The GitHub repo doesn't come with the index file, therefore you have to
download it the first time you run your code and every time AWS publishes a new
Price List API index.
Also, this index file is constantly updated by AWS. I recommend subscribing to the AWS Price List API
change notifications.
In order to download the latest index file, go to the ```scripts```` folder and run:
```
python get-latest-index.py --service=all
```
The script takes a few seconds to execute since some index files are a little heavy (like the EC2 one).
**Run a test**
Once you have the virtualenv activated, all dependencies installed, environment
variables set and the latest AWS Price List index file, it's time to run a test.
Update ```test/events/constant-tag.json``` with a tag key/value pair that exists in your AWS account.
Then run, from the **root** location in the local repo and replace and with actual values:
```
python-lambda-local functions/calculate-near-realtime.py test/events/constant-tag.json -l lib/ -l . -f handler -t 30 -a arn:aws:lambda:::function:calculate-near-realtime
```
### Deploy the Serverless Project
From your project root folder, run:
```
serverless deploy
```
================================================
FILE: awspricecalculator/__init__.py
================================================
import os, sys
__location__ = os.path.dirname(os.path.realpath(__file__))
sys.path.append(os.path.join(__location__, "../"))
sys.path.append(os.path.join(__location__, "../vendored"))
================================================
FILE: awspricecalculator/awslambda/__init__.py
================================================
================================================
FILE: awspricecalculator/awslambda/pricing.py
================================================
import json
import logging
from ..common import consts, phelper
from ..common.models import PricingResult
import tinydb
log = logging.getLogger()
regiondbs = {}
indexMetadata = {}
def calculate(pdim):
log.info("Calculating Lambda pricing with the following inputs: {}".format(str(pdim.__dict__)))
global regiondbs
global indexMetadata
ts = phelper.Timestamp()
ts.start('totalCalculationAwsLambda')
#Load On-Demand DB
dbs = regiondbs.get(consts.SERVICE_LAMBDA+pdim.region+pdim.termType,{})
if not dbs:
dbs, indexMetadata = phelper.loadDBs(consts.SERVICE_LAMBDA, phelper.get_partition_keys(consts.SERVICE_LAMBDA, pdim.region, consts.SCRIPT_TERM_TYPE_ON_DEMAND))
regiondbs[consts.SERVICE_LAMBDA+pdim.region+pdim.termType]=dbs
cost = 0
pricing_records = []
awsPriceListApiVersion = indexMetadata['Version']
priceQuery = tinydb.Query()
#TODO: add support to include/ignore free-tier (include a flag)
serverlessDb = dbs[phelper.create_file_key([consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_SERVERLESS])]
#Requests
if pdim.requestCount:
query = ((priceQuery['Group'] == 'AWS-Lambda-Requests'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_LAMBDA, serverlessDb, query, pdim.requestCount, pricing_records, cost)
#GB-s (aka compute time)
if pdim.avgDurationMs:
query = ((priceQuery['Group'] == 'AWS-Lambda-Duration'))
usageUnits = pdim.GBs
pricing_records, cost = phelper.calculate_price(consts.SERVICE_LAMBDA, serverlessDb, query, usageUnits, pricing_records, cost)
#Data Transfer
dataTransferDb = dbs[phelper.create_file_key([consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_DATA_TRANSFER])]
#To internet
if pdim.dataTransferOutInternetGb:
query = ((priceQuery['To Location'] == 'External') & (priceQuery['Transfer Type'] == 'AWS Outbound'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_LAMBDA, dataTransferDb, query, pdim.dataTransferOutInternetGb, pricing_records, cost)
#Intra-regional data transfer - in/out/between EC2 AZs or using IPs or ELB
if pdim.dataTransferOutIntraRegionGb:
query = ((priceQuery['Transfer Type'] == 'IntraRegion'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_LAMBDA, dataTransferDb, query, pdim.dataTransferOutIntraRegionGb, pricing_records, cost)
#Inter-regional data transfer - out to other AWS regions
if pdim.dataTransferOutInterRegionGb:
query = ((priceQuery['Transfer Type'] == 'InterRegion Outbound') & (priceQuery['To Location'] == consts.REGION_MAP[pdim.toRegion]))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_LAMBDA, dataTransferDb, query, pdim.dataTransferOutInterRegionGb, pricing_records, cost)
extraargs = {'priceDimensions':pdim}
pricing_result = PricingResult(awsPriceListApiVersion, pdim.region, cost, pricing_records, **extraargs)
log.debug(json.dumps(vars(pricing_result),sort_keys=False,indent=4))
log.debug("Total time to compute: [{}]".format(ts.finish('totalCalculationAwsLambda')))
return pricing_result.__dict__
================================================
FILE: awspricecalculator/common/__init__.py
================================================
================================================
FILE: awspricecalculator/common/consts.py
================================================
import os, logging
# COMMON
#_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
AWS_PRICE_CALCULATOR_VERSION = "v2.0"
LOG_LEVEL = os.environ.get('LOG_LEVEL',logging.INFO)
DEFAULT_CURRENCY = "USD"
FORECAST_PERIOD_MONTHLY = "monthly"
FORECAST_PERIOD_YEARLY = "yearly"
HOURS_IN_MONTH = 720
SERVICE_CODE_AWS_DATA_TRANSFER = 'AWSDataTransfer'
REGION_MAP = {'us-east-1':'US East (N. Virginia)',
'us-east-2':'US East (Ohio)',
'us-west-1':'US West (N. California)',
'us-west-2':'US West (Oregon)',
'ca-central-1':'Canada (Central)',
'eu-west-1':'EU (Ireland)',
'eu-west-2':'EU (London)',
'eu-west-3':'EU (Paris)',
'eu-north-1':'EU (Stockholm)',
'eu-central-1':'EU (Frankfurt)',
'ap-northeast-1':'Asia Pacific (Tokyo)',
'ap-northeast-2':'Asia Pacific (Seoul)',
'ap-northeast-3':'Asia Pacific (Osaka-Local)',
'ap-southeast-1':'Asia Pacific (Singapore)',
'ap-southeast-2':'Asia Pacific (Sydney)',
'sa-east-1':'South America (Sao Paulo)',
'ap-south-1':'Asia Pacific (Mumbai)',
'cn-northwest-1':'China (Ningxia)',
'ap-east-1':'Asia Pacific (Hong Kong)'
}
#TODO: update for China region
REGION_PREFIX_MAP = {'us-east-1':'',
'us-east-2':'USE2-',
'us-west-1':'USW1-',
'us-west-2':'USW2-',
'ca-central-1':'CAN1-',
'eu-west-1':'EU-',
'eu-west-2':'EUW2-',
'eu-west-3':'EUW3-',
'eu-north-1':'EUN1-',
'eu-central-1':'EUC1-',
'ap-east-1':'APE1-' ,
'ap-northeast-1':'APN1-',
'ap-northeast-2':'APN2-',
'ap-northeast-3':'APN3-',
'ap-southeast-1':'APS1-',
'ap-southeast-2':'APS2-',
'sa-east-1':'SAE1-',
'ap-south-1':'APS3-',
'cn-northwest-1':'',
'US East (N. Virginia)':'',
'US East (Ohio)':'USE2-',
'US West (N. California)':'USW1-',
'US West (Oregon)':'USW2-',
'Canada (Central)':'CAN1-',
'EU (Ireland)':'EU-',
'EU (London)':'EUW2-',
'EU (Paris)':'EUW3-',
'EU (Stockholm)':'EUN1-',
'EU (Frankfurt)':'EUC1-',
'Asia Pacific (Tokyo)':'APN1-',
'Asia Pacific (Seoul)':'APN2-',
'Asia Pacific (Singapore)':'APS1-',
'Asia Pacific (Sydney)':'APS2-',
'South America (Sao Paulo)':'SAE1-',
'Asia Pacific (Mumbai)':'APS3-',
'AWS GovCloud (US)':'UGW1-',
'External':'',
'Any': ''
}
REGION_REPORT_MAP = {'us-east-1':'N. Virginia',
'us-east-2':'Ohio',
'us-west-1':'N. California',
'us-west-2':'Oregon',
'ca-central-1':'Canada',
'eu-west-1':'Ireland',
'eu-west-2':'London',
'eu-north-1':'Stockholm',
'eu-central-1':'Frankfurt',
'ap-east-1':'Hong Kong',
'ap-northeast-1':'Tokyo',
'ap-northeast-2':'Seoul',
'ap-northeast-3':'Osaka',
'ap-southeast-1':'Singapore',
'ap-southeast-2':'Sydney',
'sa-east-1':'Sao Paulo',
'ap-south-1':'Mumbai',
'cn-northwest-1':'Ningxia',
'eu-west-3':'Paris'
}
SERVICE_EC2 = 'ec2'
SERVICE_ELB = 'elb'
SERVICE_EBS = 'ebs'
SERVICE_S3 = 's3'
SERVICE_RDS = 'rds'
SERVICE_LAMBDA = 'lambda'
SERVICE_DYNAMODB= 'dynamodb'
SERVICE_KINESIS = 'kinesis'
SERVICE_DATA_TRANSFER = 'datatransfer'
SERVICE_EMR = 'emr'
SERVICE_REDSHIFT = 'redshift'
SERVICE_ALL = 'all'
NOT_APPLICABLE = 'NA'
SUPPORTED_SERVICES = (SERVICE_S3, SERVICE_EC2, SERVICE_RDS, SERVICE_LAMBDA, SERVICE_DYNAMODB, SERVICE_KINESIS,
SERVICE_EMR, SERVICE_REDSHIFT)
SUPPORTED_REGIONS = ('us-east-1','us-east-2', 'us-west-1', 'us-west-2','ca-central-1', 'eu-west-1','eu-west-2',
'eu-central-1', 'ap-east-1', 'ap-northeast-1', 'ap-northeast-2', 'ap-northeast-3', 'ap-southeast-1', 'ap-southeast-2',
'sa-east-1','ap-south-1', 'eu-west-3', 'eu-north-1'
)
SUPPORTED_EC2_INSTANCE_TYPES = ('a1.2xlarge','a1.4xlarge','a1.large','a1.medium','a1.xlarge','c1.medium','c1.xlarge','c3.2xlarge',
'c3.4xlarge','c3.8xlarge','c3.large','c3.xlarge','c4.2xlarge','c4.4xlarge','c4.8xlarge','c4.large',
'c4.xlarge','c5.18xlarge','c5.2xlarge','c5.4xlarge','c5.9xlarge','c5.large','c5.xlarge','c5d.18xlarge',
'c5d.2xlarge','c5d.4xlarge','c5d.9xlarge','c5d.large','c5d.xlarge','c5n.18xlarge','c5n.2xlarge',
'c5n.4xlarge','c5n.9xlarge','c5n.large','c5n.xlarge','cc2.8xlarge','cr1.8xlarge','d2.2xlarge',
'd2.4xlarge','d2.8xlarge','d2.xlarge','f1.16xlarge','f1.2xlarge','f1.4xlarge','g2.2xlarge',
'g2.8xlarge','g3.16xlarge','g3.4xlarge','g3.8xlarge','g3s.xlarge','h1.16xlarge','h1.2xlarge',
'h1.4xlarge','h1.8xlarge','hs1.8xlarge','i2.2xlarge','i2.4xlarge','i2.8xlarge','i2.xlarge',
'i3.16xlarge','i3.2xlarge','i3.4xlarge','i3.8xlarge','i3.large','i3.xlarge','m1.large',
'm1.medium','m1.small','m1.xlarge','m2.2xlarge','m2.4xlarge','m2.xlarge','m3.2xlarge',
'm3.large','m3.medium','m3.xlarge','m4.10xlarge','m4.16xlarge','m4.2xlarge','m4.4xlarge',
'm4.large','m4.xlarge','m5.12xlarge','m5.24xlarge','m5.2xlarge','m5.4xlarge','m5.large',
'm5.metal','m5.xlarge','m5a.12xlarge','m5a.24xlarge','m5a.2xlarge','m5a.4xlarge','m5a.large',
'm5a.xlarge','m5d.12xlarge','m5d.24xlarge','m5d.2xlarge','m5d.4xlarge','m5d.large','m5d.metal',
'm5d.xlarge','p2.16xlarge','p2.8xlarge','p2.xlarge','p3.16xlarge','p3.2xlarge','p3.8xlarge',
'p3dn.24xlarge','r3.2xlarge','r3.4xlarge','r3.8xlarge','r3.large','r3.xlarge','r4.16xlarge',
'r4.2xlarge','r4.4xlarge','r4.8xlarge','r4.large','r4.xlarge',
'r5.12xlarge','r5.24xlarge', 'r5.8xlarge',
'r5.2xlarge','r5.4xlarge','r5.large','r5.xlarge','r5a.12xlarge','r5a.24xlarge','r5a.2xlarge',
'r5a.4xlarge','r5a.large','r5a.xlarge','r5d.12xlarge','r5d.24xlarge','r5d.2xlarge','r5d.4xlarge',
'r5d.large','r5d.xlarge','t1.micro','t2.2xlarge','t2.large','t2.medium','t2.micro','t2.nano',
't2.small','t2.xlarge',
't3.2xlarge','t3.large','t3.medium','t3.micro','t3.nano','t3.small','t3.xlarge',
't3a.nano', 't3a.micro','t3a.small','t3a.medium','t3a.large','t3a.xlarge','t3a.2xlarge',
'x1.16xlarge','x1.32xlarge','x1e.16xlarge','x1e.2xlarge','x1e.32xlarge','x1e.4xlarge',
'x1e.8xlarge','x1e.xlarge','z1d.12xlarge','z1d.2xlarge','z1d.3xlarge','z1d.6xlarge','z1d.large','z1d.xlarge')
SUPPORTED_EMR_INSTANCE_TYPES = ('c1.medium','c1.xlarge','c3.2xlarge','c3.4xlarge','c3.8xlarge','c3.large','c3.xlarge','c4.2xlarge',
'c4.4xlarge','c4.8xlarge','c4.large','c4.xlarge','c5.18xlarge','c5.2xlarge','c5.4xlarge',
'c5.9xlarge','c5.xlarge','c5d.18xlarge','c5d.2xlarge','c5d.4xlarge','c5d.9xlarge','c5d.xlarge',
'c5n.18xlarge','c5n.2xlarge','c5n.4xlarge','c5n.9xlarge','c5n.xlarge',
'cc2.8xlarge',
'cr1.8xlarge','d2.2xlarge','d2.4xlarge','d2.8xlarge','d2.xlarge','g2.2xlarge','g3.16xlarge',
'g3.4xlarge','g3.8xlarge','g3s.xlarge','h1.16xlarge','h1.2xlarge','h1.4xlarge','h1.8xlarge',
'hs1.8xlarge','i2.2xlarge','i2.4xlarge','i2.8xlarge','i2.xlarge','i3.16xlarge',
'i3.2xlarge','i3.4xlarge','i3.8xlarge','i3.xlarge','m1.large','m1.medium','m1.small','m1.xlarge',
'm2.2xlarge','m2.4xlarge','m2.xlarge','m3.2xlarge','m3.large','m3.medium','m3.xlarge','m4.10xlarge',
'm4.16xlarge','m4.2xlarge','m4.4xlarge','m4.large','m4.xlarge','m5.12xlarge','m5.24xlarge',
'm5.2xlarge','m5.4xlarge','m5.xlarge','m5a.12xlarge','m5a.24xlarge','m5a.2xlarge','m5a.4xlarge',
'm5a.xlarge',
'm5d.12xlarge','m5d.24xlarge','m5d.2xlarge','m5d.4xlarge','m5d.xlarge','p2.16xlarge','p2.8xlarge',
'p2.xlarge','p3.16xlarge','p3.2xlarge','p3.8xlarge','r3.2xlarge','r3.4xlarge','r3.8xlarge',
'r3.xlarge','r4.16xlarge','r4.2xlarge','r4.4xlarge','r4.8xlarge','r4.large','r4.xlarge',
'r5.12xlarge','r5.24xlarge','r5.2xlarge','r5.4xlarge','r5.xlarge','r5a.12xlarge','r5a.24xlarge',
'r5a.2xlarge','r5a.4xlarge','r5a.xlarge',
'r5d.2xlarge','r5d.4xlarge','r5d.xlarge','z1d.12xlarge','z1d.2xlarge','z1d.3xlarge',
'z1d.6xlarge','z1d.xlarge')
SUPPORTED_REDSHIFT_INSTANCE_TYPES = ('ds1.xlarge','dc1.8xlarge','dc1.large','ds2.8xlarge',
'ds1.8xlarge','ds2.xlarge','dc2.8xlarge','dc2.large')
SUPPORTED_INSTANCE_TYPES_MAP = {SERVICE_EC2:SUPPORTED_EC2_INSTANCE_TYPES, SERVICE_EMR:SUPPORTED_EMR_INSTANCE_TYPES ,
SERVICE_REDSHIFT:SUPPORTED_REDSHIFT_INSTANCE_TYPES}
SERVICE_INDEX_MAP = {SERVICE_S3:'AmazonS3', SERVICE_EC2:'AmazonEC2', SERVICE_RDS:'AmazonRDS',
SERVICE_LAMBDA:'AWSLambda', SERVICE_DYNAMODB:'AmazonDynamoDB',
SERVICE_KINESIS:'AmazonKinesis', SERVICE_EMR:'ElasticMapReduce', SERVICE_REDSHIFT:'AmazonRedshift',
SERVICE_DATA_TRANSFER:'AWSDataTransfer'}
SCRIPT_TERM_TYPE_ON_DEMAND = 'on-demand'
SCRIPT_TERM_TYPE_RESERVED = 'reserved'
TERM_TYPE_RESERVED = 'Reserved'
TERM_TYPE_ON_DEMAND = 'OnDemand'
SUPPORTED_TERM_TYPES = (SCRIPT_TERM_TYPE_ON_DEMAND, SCRIPT_TERM_TYPE_RESERVED)
TERM_TYPE_MAP = {SCRIPT_TERM_TYPE_ON_DEMAND:'OnDemand', SCRIPT_TERM_TYPE_RESERVED:'Reserved'}
PRODUCT_FAMILY_COMPUTE_INSTANCE = 'Compute Instance'
PRODUCT_FAMILY_DATABASE_INSTANCE = 'Database Instance'
PRODUCT_FAMILY_DATA_TRANSFER = 'Data Transfer'
PRODUCT_FAMILY_FEE = 'Fee'
PRODUCT_FAMILY_API_REQUEST = 'API Request'
PRODUCT_FAMILY_STORAGE = 'Storage'
PRODUCT_FAMILY_SYSTEM_OPERATION = 'System Operation'
PRODUCT_FAMILY_LOAD_BALANCER = 'Load Balancer'
PRODUCT_FAMILY_APPLICATION_LOAD_BALANCER = 'Load Balancer-Application'
PRODUCT_FAMILY_NETWORK_LOAD_BALANCER = 'Load Balancer-Network'
PRODUCT_FAMILY_SNAPSHOT = "Storage Snapshot"
PRODUCT_FAMILY_SERVERLESS = "Serverless"
PRODUCT_FAMILY_DB_STORAGE = "Database Storage"
PRODUCT_FAMILY_DB_PIOPS = "Provisioned IOPS"
PRODUCT_FAMILY_KINESIS_STREAMS = "Kinesis Streams"
PRODUCT_FAMILY_EMR_INSTANCE = "Elastic Map Reduce Instance"
PRODUCT_FAMILIY_BUNDLE = 'Bundle'
PRODUCT_FAMILIY_REDSHIFT_CONCURRENCY_SCALING = 'Redshift Concurrency Scaling'
PRODUCT_FAMILIY_REDSHIFT_DATA_SCAN = 'Redshift Data Scan'
PRODUCT_FAMILIY_STORAGE_SNAPSHOT = 'Storage Snapshot'
SUPPORTED_PRODUCT_FAMILIES = (PRODUCT_FAMILY_COMPUTE_INSTANCE, PRODUCT_FAMILY_DATABASE_INSTANCE,
PRODUCT_FAMILY_DATA_TRANSFER,PRODUCT_FAMILY_FEE, PRODUCT_FAMILY_API_REQUEST,
PRODUCT_FAMILY_STORAGE, PRODUCT_FAMILY_SYSTEM_OPERATION, PRODUCT_FAMILY_LOAD_BALANCER,
PRODUCT_FAMILY_APPLICATION_LOAD_BALANCER, PRODUCT_FAMILY_NETWORK_LOAD_BALANCER,
PRODUCT_FAMILY_SNAPSHOT,PRODUCT_FAMILY_SERVERLESS,PRODUCT_FAMILY_DB_STORAGE,
PRODUCT_FAMILY_DB_PIOPS,PRODUCT_FAMILY_KINESIS_STREAMS, PRODUCT_FAMILY_EMR_INSTANCE,
PRODUCT_FAMILIY_BUNDLE, PRODUCT_FAMILIY_REDSHIFT_CONCURRENCY_SCALING, PRODUCT_FAMILIY_REDSHIFT_DATA_SCAN,
PRODUCT_FAMILIY_STORAGE_SNAPSHOT
)
SUPPORTED_RESERVED_PRODUCT_FAMILIES = (PRODUCT_FAMILY_COMPUTE_INSTANCE, PRODUCT_FAMILY_DATABASE_INSTANCE)
SUPPORTED_PRODUCT_FAMILIES_BY_SERVICE_DICT = {
SERVICE_EC2:[PRODUCT_FAMILY_COMPUTE_INSTANCE,PRODUCT_FAMILY_DATA_TRANSFER, PRODUCT_FAMILY_FEE,
PRODUCT_FAMILY_STORAGE,PRODUCT_FAMILY_SYSTEM_OPERATION,PRODUCT_FAMILY_LOAD_BALANCER,
PRODUCT_FAMILY_APPLICATION_LOAD_BALANCER,PRODUCT_FAMILY_NETWORK_LOAD_BALANCER,
PRODUCT_FAMILY_SNAPSHOT],
SERVICE_RDS:[PRODUCT_FAMILY_DATABASE_INSTANCE, PRODUCT_FAMILY_DATA_TRANSFER,PRODUCT_FAMILY_FEE,
PRODUCT_FAMILY_DB_STORAGE,PRODUCT_FAMILY_DB_PIOPS,PRODUCT_FAMILY_SNAPSHOT ],
SERVICE_S3:[PRODUCT_FAMILY_STORAGE, PRODUCT_FAMILY_FEE,PRODUCT_FAMILY_API_REQUEST,PRODUCT_FAMILY_SYSTEM_OPERATION, PRODUCT_FAMILY_DATA_TRANSFER ],
SERVICE_LAMBDA:[PRODUCT_FAMILY_SERVERLESS, PRODUCT_FAMILY_DATA_TRANSFER, PRODUCT_FAMILY_FEE,
PRODUCT_FAMILY_API_REQUEST],
SERVICE_KINESIS:[PRODUCT_FAMILY_KINESIS_STREAMS],
SERVICE_DYNAMODB:[PRODUCT_FAMILY_DB_STORAGE, PRODUCT_FAMILY_DB_PIOPS, PRODUCT_FAMILY_FEE ],
SERVICE_EMR:[PRODUCT_FAMILY_EMR_INSTANCE],
SERVICE_REDSHIFT:[PRODUCT_FAMILY_COMPUTE_INSTANCE, PRODUCT_FAMILIY_BUNDLE, PRODUCT_FAMILIY_REDSHIFT_CONCURRENCY_SCALING,
PRODUCT_FAMILIY_REDSHIFT_DATA_SCAN, PRODUCT_FAMILIY_STORAGE_SNAPSHOT],
SERVICE_DATA_TRANSFER:[PRODUCT_FAMILY_DATA_TRANSFER]
}
INFINITY = 'Inf'
SORT_CRITERIA_REGION = 'region'
SORT_CRITERIA_INSTANCE_TYPE = 'instance-type'
SORT_CRITERIA_OS = 'os'
SORT_CRITERIA_DB_INSTANCE_CLASS = 'db-instance-class'
SORT_CRITERIA_DB_ENGINE = 'engine'
SORT_CRITERIA_S3_STORAGE_CLASS = 'storage-class'
SORT_CRITERIA_S3_STORAGE_SIZE_GB = 'storage-size-gb'
SORT_CRITERIA_S3_DATA_RETRIEVAL_GB = 'data-retrieval-gb'
SORT_CRITERIA_S3_STORAGE_CLASS_DATA_RETRIEVAL_GB = 'storage-class-data-retrieval-gb'
SORT_CRITERIA_TO_REGION = 'to-region'
SORT_CRITERIA_LAMBDA_MEMORY = 'memory'
SORT_CRITERIA_TERM_TYPE = 'term-type'
SORT_CRITERIA_TERM_TYPE_REGION = 'term-type-region'
SORT_CRITERIA_VALUE_SEPARATOR = ','
#_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
#EC2
EC2_OPERATING_SYSTEM_LINUX = 'Linux'
EC2_OPERATING_SYSTEM_BYOL = 'Windows BYOL'
EC2_OPERATING_SYSTEM_WINDOWS = 'Windows'
EC2_OPERATING_SYSTEM_SUSE = 'Suse'
#EC2_OPERATING_SYSTEM_SQL_WEB = 'SQL Web'
EC2_OPERATING_SYSTEM_RHEL = 'RHEL'
SCRIPT_EC2_TENANCY_SHARED = 'shared'
SCRIPT_EC2_TENANCY_DEDICATED = 'dedicated'
SCRIPT_EC2_TENANCY_HOST = 'host'
EC2_TENANCY_SHARED = 'Shared'
EC2_TENANCY_DEDICATED = 'Dedicated'
EC2_TENANCY_HOST = 'Host'
EC2_TENANCY_MAP = {SCRIPT_EC2_TENANCY_SHARED:EC2_TENANCY_SHARED,
SCRIPT_EC2_TENANCY_DEDICATED:EC2_TENANCY_DEDICATED,
SCRIPT_EC2_TENANCY_HOST:EC2_TENANCY_HOST}
SCRIPT_EC2_CAPACITY_RESERVATION_STATUS_USED = 'used'
SCRIPT_EC2_CAPACITY_RESERVATION_STATUS_UNUSED = 'unused'
SCRIPT_EC2_CAPACITY_RESERVATION_STATUS_ALLOCATED = 'allocated'
EC2_CAPACITY_RESERVATION_STATUS_USED = 'Used'
EC2_CAPACITY_RESERVATION_STATUS_UNUSED = 'UnusedCapacityReservation'
EC2_CAPACITY_RESERVATION_STATUS_ALLOCATED = 'AllocatedCapacityReservation'
EC2_CAPACITY_RESERVATION_STATUS_MAP = {SCRIPT_EC2_CAPACITY_RESERVATION_STATUS_USED: EC2_CAPACITY_RESERVATION_STATUS_USED,
SCRIPT_EC2_CAPACITY_RESERVATION_STATUS_UNUSED: EC2_CAPACITY_RESERVATION_STATUS_UNUSED,
SCRIPT_EC2_CAPACITY_RESERVATION_STATUS_ALLOCATED: EC2_CAPACITY_RESERVATION_STATUS_ALLOCATED}
STORAGE_MEDIA_SSD = "SSD-backed"
STORAGE_MEDIA_HDD = "HDD-backed"
STORAGE_MEDIA_S3 = "AmazonS3"
EBS_VOLUME_TYPE_MAGNETIC = "Magnetic"
EBS_VOLUME_TYPE_GENERAL_PURPOSE = "General Purpose"
EBS_VOLUME_TYPE_PIOPS = "Provisioned IOPS"
EBS_VOLUME_TYPE_THROUGHPUT_OPTIMIZED = "Throughput Optimized HDD"
EBS_VOLUME_TYPE_COLD_HDD = "Cold HDD"
#Values that are valid in the calling script (which could be a Lambda function or any Python module)
#OS
SCRIPT_OPERATING_SYSTEM_LINUX = 'linux'
SCRIPT_OPERATING_SYSTEM_WINDOWS_BYOL = 'windowsbyol'
SCRIPT_OPERATING_SYSTEM_WINDOWS = 'windows'
SCRIPT_OPERATING_SYSTEM_SUSE = 'suse'
#SCRIPT_OPERATING_SYSTEM_SQL_WEB = 'sqlweb'
SCRIPT_OPERATING_SYSTEM_RHEL = 'rhel'
#License Model
SCRIPT_EC2_LICENSE_MODEL_BYOL = 'byol'
SCRIPT_EC2_LICENSE_MODEL_INCLUDED = 'included'
SCRIPT_EC2_LICENSE_MODEL_NONE_REQUIRED = 'none-required'
#EBS
SCRIPT_EBS_VOLUME_TYPE_STANDARD = 'standard'
SCRIPT_EBS_VOLUME_TYPE_IO1 = 'io1'
SCRIPT_EBS_VOLUME_TYPE_GP2 = 'gp2'
SCRIPT_EBS_VOLUME_TYPE_SC1 = 'sc1'
SCRIPT_EBS_VOLUME_TYPE_ST1 = 'st1'
#Reserved Instances
SCRIPT_EC2_OFFERING_CLASS_STANDARD = 'standard'
SCRIPT_EC2_OFFERING_CLASS_CONVERTIBLE = 'convertible'
EC2_OFFERING_CLASS_STANDARD = 'standard'
EC2_OFFERING_CLASS_CONVERTIBLE = 'convertible'
SUPPORTED_EC2_OFFERING_CLASSES = [SCRIPT_EC2_OFFERING_CLASS_STANDARD, SCRIPT_EC2_OFFERING_CLASS_CONVERTIBLE]
SUPPORTED_RDS_OFFERING_CLASSES = [SCRIPT_EC2_OFFERING_CLASS_STANDARD]
SUPPORTED_EMR_OFFERING_CLASSES = [SCRIPT_EC2_OFFERING_CLASS_STANDARD, SCRIPT_EC2_OFFERING_CLASS_CONVERTIBLE]
SUPPORTED_REDSHIFT_OFFERING_CLASSES = [SCRIPT_EC2_OFFERING_CLASS_STANDARD]
SUPPORTED_OFFERING_CLASSES_MAP = {SERVICE_EC2:SUPPORTED_EC2_OFFERING_CLASSES, SERVICE_RDS: SUPPORTED_RDS_OFFERING_CLASSES,
SERVICE_EMR:SUPPORTED_EMR_OFFERING_CLASSES,
SERVICE_REDSHIFT: SUPPORTED_REDSHIFT_OFFERING_CLASSES }
EC2_OFFERING_CLASS_MAP = {SCRIPT_EC2_OFFERING_CLASS_STANDARD:EC2_OFFERING_CLASS_STANDARD,
SCRIPT_EC2_OFFERING_CLASS_CONVERTIBLE: EC2_OFFERING_CLASS_CONVERTIBLE}
SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT = 'partial-upfront'
SCRIPT_EC2_PURCHASE_OPTION_ALL_UPFRONT = 'all-upfront'
SCRIPT_EC2_PURCHASE_OPTION_NO_UPFRONT = 'no-upfront'
EC2_PURCHASE_OPTION_PARTIAL_UPFRONT = 'Partial Upfront'
EC2_PURCHASE_OPTION_ALL_UPFRONT = 'All Upfront'
EC2_PURCHASE_OPTION_NO_UPFRONT = 'No Upfront'
SCRIPT_EC2_RESERVED_YEARS_1 = '1'
SCRIPT_EC2_RESERVED_YEARS_3 = '3'
EC2_SUPPORTED_RESERVED_YEARS = (SCRIPT_EC2_RESERVED_YEARS_1, SCRIPT_EC2_RESERVED_YEARS_3)
EC2_RESERVED_YEAR_MAP = {SCRIPT_EC2_RESERVED_YEARS_1:'1yr', SCRIPT_EC2_RESERVED_YEARS_3:'3yr'}
EC2_SUPPORTED_PURCHASE_OPTIONS = (SCRIPT_EC2_PURCHASE_OPTION_ALL_UPFRONT, SCRIPT_EC2_PURCHASE_OPTION_NO_UPFRONT, SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT)
EC2_PURCHASE_OPTION_MAP = {SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT:EC2_PURCHASE_OPTION_PARTIAL_UPFRONT,
SCRIPT_EC2_PURCHASE_OPTION_ALL_UPFRONT: EC2_PURCHASE_OPTION_ALL_UPFRONT, SCRIPT_EC2_PURCHASE_OPTION_NO_UPFRONT: EC2_PURCHASE_OPTION_NO_UPFRONT
}
SUPPORTED_EC2_OPERATING_SYSTEMS = (SCRIPT_OPERATING_SYSTEM_LINUX,
SCRIPT_OPERATING_SYSTEM_WINDOWS,
SCRIPT_OPERATING_SYSTEM_WINDOWS_BYOL,
SCRIPT_OPERATING_SYSTEM_SUSE,
#SCRIPT_OPERATING_SYSTEM_SQL_WEB,
SCRIPT_OPERATING_SYSTEM_RHEL)
SUPPORTED_EC2_LICENSE_MODELS = (SCRIPT_EC2_LICENSE_MODEL_BYOL, SCRIPT_EC2_LICENSE_MODEL_INCLUDED, SCRIPT_EC2_LICENSE_MODEL_NONE_REQUIRED)
EC2_LICENSE_MODEL_MAP = {SCRIPT_EC2_LICENSE_MODEL_BYOL: 'Bring your own license',
SCRIPT_EC2_LICENSE_MODEL_INCLUDED: 'License Included',
SCRIPT_EC2_LICENSE_MODEL_NONE_REQUIRED: 'No License required'
}
EC2_OPERATING_SYSTEMS_MAP = {SCRIPT_OPERATING_SYSTEM_LINUX:'Linux',
SCRIPT_OPERATING_SYSTEM_WINDOWS_BYOL:'Windows',
SCRIPT_OPERATING_SYSTEM_WINDOWS:'Windows',
SCRIPT_OPERATING_SYSTEM_SUSE:'SUSE',
#SCRIPT_OPERATING_SYSTEM_SQL_WEB:'SQL Web',
SCRIPT_OPERATING_SYSTEM_RHEL:'RHEL'}
SUPPORTED_EBS_VOLUME_TYPES = (SCRIPT_EBS_VOLUME_TYPE_STANDARD,
SCRIPT_EBS_VOLUME_TYPE_IO1,
SCRIPT_EBS_VOLUME_TYPE_GP2,
SCRIPT_EBS_VOLUME_TYPE_SC1,
SCRIPT_EBS_VOLUME_TYPE_ST1
)
EBS_VOLUME_TYPES_MAP = {
SCRIPT_EBS_VOLUME_TYPE_STANDARD : {'storageMedia':STORAGE_MEDIA_HDD , 'volumeType':EBS_VOLUME_TYPE_MAGNETIC},
SCRIPT_EBS_VOLUME_TYPE_IO1 : {'storageMedia':STORAGE_MEDIA_SSD , 'volumeType':EBS_VOLUME_TYPE_PIOPS},
SCRIPT_EBS_VOLUME_TYPE_GP2 : {'storageMedia':STORAGE_MEDIA_SSD , 'volumeType':EBS_VOLUME_TYPE_GENERAL_PURPOSE},
SCRIPT_EBS_VOLUME_TYPE_SC1 : {'storageMedia':STORAGE_MEDIA_HDD , 'volumeType':EBS_VOLUME_TYPE_COLD_HDD},
SCRIPT_EBS_VOLUME_TYPE_ST1 : {'storageMedia':STORAGE_MEDIA_HDD , 'volumeType':EBS_VOLUME_TYPE_THROUGHPUT_OPTIMIZED}
}
#_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
#RDS
SUPPORTED_RDS_INSTANCE_CLASSES = ('db.t1.micro', 'db.m1.small', 'db.m1.medium', 'db.m1.large', 'db.m1.xlarge',
'db.m2.xlarge', 'db.m2.2xlarge', 'db.m2.4xlarge',
'db.m3.medium', 'db.m3.large', 'db.m3.xlarge', 'db.m3.2xlarge',
'db.m4.large', 'db.m4.xlarge', 'db.m4.2xlarge', 'db.m4.4xlarge', 'db.m4.10xlarge', 'db.m4.16xlarge',
'db.m5.large', 'db.m5.xlarge', 'db.m5.2xlarge', 'db.m5.4xlarge', 'db.m5.12xlarge', 'db.m5.24xlarge',
'db.r3.large', 'db.r3.xlarge', 'db.r3.2xlarge', 'db.r3.4xlarge', 'db.r3.8xlarge',
'db.r4.large', 'db.r4.xlarge', 'db.r4.2xlarge', 'db.r4.4xlarge', 'db.r4.8xlarge', 'db.r4.16xlarge',
'db.r5.large', 'db.r5.xlarge', 'db.r5.2xlarge', 'db.r5.4xlarge', 'db.r5.12xlarge', 'db.r5.24xlarge',
'db.t2.micro', 'db.t2.small', 'db.t2.2xlarge', 'db.t2.large', 'db.t2.xlarge', 'db.t2.medium',
'db.t3.micro', 'db.t3.small', 'db.t3.medium', 'db.t3.large', 'db.t3.xlarge', 'db.t3.2xlarge',
'db.x1.16xlarge', 'db.x1.32xlarge', 'db.x1e.16xlarge', 'db.x1e.2xlarge', 'db.x1e.32xlarge', 'db.x1e.4xlarge', 'db.x1e.8xlarge', 'db.x1e.xlarge'
)
SCRIPT_RDS_STORAGE_TYPE_STANDARD = 'standard'
SCRIPT_RDS_STORAGE_TYPE_AURORA = 'aurora' #Aurora has its own type of storage, which is billed by IO operations and size
SCRIPT_RDS_STORAGE_TYPE_GP2 = 'gp2'
SCRIPT_RDS_STORAGE_TYPE_IO1 = 'io1'
RDS_VOLUME_TYPE_MAGNETIC = 'Magnetic'
RDS_VOLUME_TYPE_AURORA = 'General Purpose-Aurora'
RDS_VOLUME_TYPE_GP2 = 'General Purpose'
RDS_VOLUME_TYPE_IO1 = 'Provisioned IOPS'
RDS_VOLUME_TYPES_MAP = {
SCRIPT_RDS_STORAGE_TYPE_STANDARD : RDS_VOLUME_TYPE_MAGNETIC,
SCRIPT_RDS_STORAGE_TYPE_AURORA : RDS_VOLUME_TYPE_AURORA,
SCRIPT_RDS_STORAGE_TYPE_GP2 : RDS_VOLUME_TYPE_GP2,
SCRIPT_RDS_STORAGE_TYPE_IO1 : RDS_VOLUME_TYPE_IO1
}
SUPPORTED_RDS_STORAGE_TYPES = (SCRIPT_RDS_STORAGE_TYPE_STANDARD, SCRIPT_RDS_STORAGE_TYPE_AURORA, SCRIPT_RDS_STORAGE_TYPE_GP2, SCRIPT_RDS_STORAGE_TYPE_IO1)
RDS_DEPLOYMENT_OPTION_SINGLE_AZ = 'Single-AZ'
RDS_DEPLOYMENT_OPTION_MULTI_AZ = 'Multi-AZ'
RDS_DEPLOYMENT_OPTION_MULTI_AZ_MIRROR = 'Multi-AZ (SQL Server Mirror)'
RDS_DB_ENGINE_MYSQL = 'MySQL'
RDS_DB_ENGINE_MARIADB = 'MariaDB'
RDS_DB_ENGINE_ORACLE = 'Oracle'
RDS_DB_ENGINE_SQL_SERVER = 'SQL Server'
RDS_DB_ENGINE_POSTGRESQL = 'PostgreSQL'
RDS_DB_ENGINE_AURORA_MYSQL = 'Aurora MySQL'
RDS_DB_ENGINE_AURORA_POSTGRESQL = 'Aurora PostgreSQL'
RDS_DB_EDITION_ENTERPRISE = 'Enterprise'
RDS_DB_EDITION_STANDARD = 'Standard'
RDS_DB_EDITION_STANDARD_ONE = 'Standard One'
RDS_DB_EDITION_STANDARD_TWO = 'Standard Two'
RDS_DB_EDITION_EXPRESS = 'Express'
RDS_DB_EDITION_WEB = 'Web'
SCRIPT_RDS_DATABASE_ENGINE_MYSQL = 'mysql'
SCRIPT_RDS_DATABASE_ENGINE_MARIADB = 'mariadb'
SCRIPT_RDS_DATABASE_ENGINE_ORACLE_STANDARD = 'oracle-se'
SCRIPT_RDS_DATABASE_ENGINE_ORACLE_STANDARD_ONE = 'oracle-se1'
SCRIPT_RDS_DATABASE_ENGINE_ORACLE_STANDARD_TWO = 'oracle-se2'
SCRIPT_RDS_DATABASE_ENGINE_ORACLE_ENTERPRISE = 'oracle-ee'
SCRIPT_RDS_DATABASE_ENGINE_SQL_SERVER_ENTERPRISE = 'sqlserver-ee'
SCRIPT_RDS_DATABASE_ENGINE_SQL_SERVER_STANDARD = 'sqlserver-se'
SCRIPT_RDS_DATABASE_ENGINE_SQL_SERVER_EXPRESS = 'sqlserver-ex'
SCRIPT_RDS_DATABASE_ENGINE_SQL_SERVER_WEB = 'sqlserver-web'
SCRIPT_RDS_DATABASE_ENGINE_POSTGRESQL = 'postgres' #to be consistent with RDS API - https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html
SCRIPT_RDS_DATABASE_ENGINE_AURORA_MYSQL = 'aurora'
SCRIPT_RDS_DATABASE_ENGINE_AURORA_MYSQL_LONG = 'aurora-mysql' #some items in the RDS API now return aurora-mysql as a valid engine (instead of just aurora)
SCRIPT_RDS_DATABASE_ENGINE_AURORA_POSTGRESQL = 'aurora-postgresql'
RDS_SUPPORTED_DB_ENGINES = (SCRIPT_RDS_DATABASE_ENGINE_MYSQL,SCRIPT_RDS_DATABASE_ENGINE_MARIADB,
SCRIPT_RDS_DATABASE_ENGINE_ORACLE_STANDARD, SCRIPT_RDS_DATABASE_ENGINE_ORACLE_STANDARD_ONE,
SCRIPT_RDS_DATABASE_ENGINE_ORACLE_STANDARD_TWO,SCRIPT_RDS_DATABASE_ENGINE_ORACLE_ENTERPRISE,
SCRIPT_RDS_DATABASE_ENGINE_SQL_SERVER_ENTERPRISE, SCRIPT_RDS_DATABASE_ENGINE_SQL_SERVER_STANDARD,
SCRIPT_RDS_DATABASE_ENGINE_SQL_SERVER_EXPRESS, SCRIPT_RDS_DATABASE_ENGINE_SQL_SERVER_WEB,
SCRIPT_RDS_DATABASE_ENGINE_POSTGRESQL, SCRIPT_RDS_DATABASE_ENGINE_AURORA_POSTGRESQL,
SCRIPT_RDS_DATABASE_ENGINE_AURORA_MYSQL, SCRIPT_RDS_DATABASE_ENGINE_AURORA_MYSQL_LONG
)
SCRIPT_RDS_LICENSE_MODEL_INCLUDED = 'license-included'
SCRIPT_RDS_LICENSE_MODEL_BYOL = 'bring-your-own-license'
SCRIPT_RDS_LICENSE_MODEL_PUBLIC = 'general-public-license'
RDS_SUPPORTED_LICENSE_MODELS = (SCRIPT_RDS_LICENSE_MODEL_INCLUDED, SCRIPT_RDS_LICENSE_MODEL_BYOL, SCRIPT_RDS_LICENSE_MODEL_PUBLIC)
RDS_LICENSE_MODEL_MAP = {SCRIPT_RDS_LICENSE_MODEL_INCLUDED:'License included',
SCRIPT_RDS_LICENSE_MODEL_BYOL:'Bring your own license',
SCRIPT_RDS_LICENSE_MODEL_PUBLIC:'No license required'}
RDS_ENGINE_MAP = {SCRIPT_RDS_DATABASE_ENGINE_MYSQL:{'engine':RDS_DB_ENGINE_MYSQL,'edition':''},
SCRIPT_RDS_DATABASE_ENGINE_MARIADB:{'engine':RDS_DB_ENGINE_MARIADB ,'edition':''},
SCRIPT_RDS_DATABASE_ENGINE_ORACLE_STANDARD:{'engine':RDS_DB_ENGINE_ORACLE ,'edition':RDS_DB_EDITION_STANDARD},
SCRIPT_RDS_DATABASE_ENGINE_ORACLE_STANDARD_ONE:{'engine':RDS_DB_ENGINE_ORACLE ,'edition':RDS_DB_EDITION_STANDARD_ONE},
SCRIPT_RDS_DATABASE_ENGINE_ORACLE_STANDARD_TWO:{'engine':RDS_DB_ENGINE_ORACLE ,'edition':RDS_DB_EDITION_STANDARD_TWO},
SCRIPT_RDS_DATABASE_ENGINE_ORACLE_ENTERPRISE:{'engine':RDS_DB_ENGINE_ORACLE ,'edition':RDS_DB_EDITION_ENTERPRISE},
SCRIPT_RDS_DATABASE_ENGINE_SQL_SERVER_ENTERPRISE:{'engine':RDS_DB_ENGINE_SQL_SERVER ,'edition':RDS_DB_EDITION_ENTERPRISE},
SCRIPT_RDS_DATABASE_ENGINE_SQL_SERVER_STANDARD:{'engine':RDS_DB_ENGINE_SQL_SERVER ,'edition':RDS_DB_EDITION_STANDARD},
SCRIPT_RDS_DATABASE_ENGINE_SQL_SERVER_EXPRESS:{'engine':RDS_DB_ENGINE_SQL_SERVER ,'edition':RDS_DB_EDITION_EXPRESS},
SCRIPT_RDS_DATABASE_ENGINE_SQL_SERVER_WEB:{'engine':RDS_DB_ENGINE_SQL_SERVER ,'edition':RDS_DB_EDITION_WEB},
SCRIPT_RDS_DATABASE_ENGINE_POSTGRESQL:{'engine':RDS_DB_ENGINE_POSTGRESQL ,'edition':''},
SCRIPT_RDS_DATABASE_ENGINE_AURORA_MYSQL:{'engine':RDS_DB_ENGINE_AURORA_MYSQL ,'edition':''},
SCRIPT_RDS_DATABASE_ENGINE_AURORA_MYSQL_LONG:{'engine':RDS_DB_ENGINE_AURORA_MYSQL ,'edition':''},
SCRIPT_RDS_DATABASE_ENGINE_AURORA_POSTGRESQL:{'engine':RDS_DB_ENGINE_AURORA_POSTGRESQL ,'edition':''}
}
#_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
#S3
S3_USAGE_GROUP_REQUESTS_TIER_1 = 'S3-API-Tier1'
S3_USAGE_GROUP_REQUESTS_TIER_2 = 'S3-API-Tier2'
S3_USAGE_GROUP_REQUESTS_TIER_3 = 'S3-API-Tier3'
S3_USAGE_GROUP_REQUESTS_SIA_TIER1 = 'S3-API-SIA-Tier1'
S3_USAGE_GROUP_REQUESTS_SIA_TIER2 = 'S3-API-SIA-Tier2'
S3_USAGE_GROUP_REQUESTS_SIA_RETRIEVAL = 'S3-API-SIA-Retrieval'
S3_USAGE_GROUP_REQUESTS_ZIA_TIER1 = 'S3-API-ZIA-Tier1'
S3_USAGE_GROUP_REQUESTS_ZIA_TIER2 = 'S3-API-ZIA-Tier2'
S3_USAGE_GROUP_REQUESTS_ZIA_RETRIEVAL = 'S3-API-ZIA-Retrieval'
S3_STORAGE_CLASS_STANDARD = 'General Purpose'
S3_STORAGE_CLASS_SIA = 'Infrequent Access'
S3_STORAGE_CLASS_ZIA = 'Infrequent Access'
S3_STORAGE_CLASS_GLACIER = 'Archive'
S3_STORAGE_CLASS_REDUCED_REDUNDANCY = 'Non-Critical Data'
SUPPORTED_REQUEST_TYPES = ('PUT','COPY','POST','LIST','GET')
SCRIPT_STORAGE_CLASS_INFREQUENT_ACCESS = 'STANDARD_IA'
SCRIPT_STORAGE_CLASS_ONE_ZONE_INFREQUENT_ACCESS = 'ONEZONE_IA'
SCRIPT_STORAGE_CLASS_STANDARD = 'STANDARD'
SCRIPT_STORAGE_CLASS_GLACIER = 'GLACIER'
SCRIPT_STORAGE_CLASS_REDUCED_REDUNDANCY = 'REDUCED_REDUNDANCY'
SUPPORTED_S3_STORAGE_CLASSES = (SCRIPT_STORAGE_CLASS_STANDARD,
SCRIPT_STORAGE_CLASS_INFREQUENT_ACCESS,
SCRIPT_STORAGE_CLASS_ONE_ZONE_INFREQUENT_ACCESS,
SCRIPT_STORAGE_CLASS_GLACIER,
SCRIPT_STORAGE_CLASS_REDUCED_REDUNDANCY)
S3_STORAGE_CLASS_MAP = {SCRIPT_STORAGE_CLASS_INFREQUENT_ACCESS:S3_STORAGE_CLASS_SIA,
SCRIPT_STORAGE_CLASS_ONE_ZONE_INFREQUENT_ACCESS:S3_STORAGE_CLASS_ZIA,
SCRIPT_STORAGE_CLASS_STANDARD:S3_STORAGE_CLASS_STANDARD,
SCRIPT_STORAGE_CLASS_GLACIER:S3_STORAGE_CLASS_GLACIER,
SCRIPT_STORAGE_CLASS_REDUCED_REDUNDANCY:S3_STORAGE_CLASS_REDUCED_REDUNDANCY}
S3_USAGE_TYPE_DICT = {
SCRIPT_STORAGE_CLASS_STANDARD:'TimedStorage-ByteHrs',
SCRIPT_STORAGE_CLASS_INFREQUENT_ACCESS:'TimedStorage-SIA-ByteHrs',
SCRIPT_STORAGE_CLASS_ONE_ZONE_INFREQUENT_ACCESS:'TimedStorage-ZIA-ByteHrs',
SCRIPT_STORAGE_CLASS_GLACIER:'TimedStorage-GlacierByteHrs',
SCRIPT_STORAGE_CLASS_REDUCED_REDUNDANCY:'TimedStorage-RRS-ByteHrs'
}
S3_VOLUME_TYPE_DICT = {
SCRIPT_STORAGE_CLASS_STANDARD:'Standard',
SCRIPT_STORAGE_CLASS_INFREQUENT_ACCESS:'Standard - Infrequent Access',
SCRIPT_STORAGE_CLASS_ONE_ZONE_INFREQUENT_ACCESS:'One Zone - Infrequent Access',
SCRIPT_STORAGE_CLASS_GLACIER:'Amazon Glacier',
SCRIPT_STORAGE_CLASS_REDUCED_REDUNDANCY:'Reduced Redundancy'
}
#_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
#LAMBDA
LAMBDA_MEM_SIZES = [64,128,192,256,320,384,448,512,576,640,704,768,832,896,960,1024,1088,1152,1216,1280,1344,1408,
1472,1536,1600,1664,1728,1792,1856,1920,1984,2048,2112,2176,2240,2304,2368,2432,2496,2560,2624,2688,
2752,2816,2880,2944,3008]
================================================
FILE: awspricecalculator/common/errors.py
================================================
import json
import os
class ValidationError(Exception):
"""Exception raised for errors in the input.
Attributes:
message -- explanation of the error
"""
def __init__(self, message):
self.message = message
class NoDataFoundError(Exception):
"""Exception raised when no data could be found for a particular set of inputs
Attributes:
message -- explanation of the error
"""
def __init__(self, message):
self.message = message
================================================
FILE: awspricecalculator/common/models.py
================================================
import math, logging
import os, sys
from . import consts
from .errors import ValidationError
log = logging.getLogger()
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
site_pkgs = os.path.abspath(os.path.join(__location__, os.pardir, os.pardir,"lib", "python3.7", "site-packages" ))
sys.path.append(site_pkgs)
from tabulate import tabulate
class ElbPriceDimension():
def __init__(self, hours, dataProcessedGb):
self.hours = hours
self.dataProcessedGb = dataProcessedGb
class S3PriceDimension():
def __init__(self, **kwargs):
self.region = ''
self.termType = consts.SCRIPT_TERM_TYPE_ON_DEMAND
self.storageClass = ''
self.storageSizeGb = 0
#TODO:Implement requestType and requestNumber as Count, such that a single call to the price calculator can account for multiple request types
self.requestType = ''
self.requestNumber = 0
self.dataRetrievalGb = 0
self.dataTransferOutInternetGb = 0
self.region = kwargs.get('region','')
self.storageClass = kwargs.get('storageClass','')
self.storageSizeGb = int(kwargs.get('storageSizeGb',0))
self.requestType = kwargs.get('requestType','')
self.requestNumber = int(kwargs.get('requestNumber',0))
self.dataRetrievalGb = int(kwargs.get('dataRetrievalGb',0))
self.dataTransferOutInternetGb = int(kwargs.get('dataTransferOutInternetGb',0))
self.validate()
def validate(self):
validation_ok = True
validation_message = ""
if not self.storageClass:
validation_message += "Storage class cannot be empty\n"
validation_ok = False
if self.storageClass and self.storageClass not in consts.SUPPORTED_S3_STORAGE_CLASSES:
validation_message += "Invalid storage class:[{}]\n".format(self.storageClass)
validation_ok = False
if self.region not in consts.SUPPORTED_REGIONS:
validation_message += "Invalid region:[{}]\n".format(self.region)
validation_ok = False
if self.requestNumber and not self.requestType:
validation_message += "requestType cannot be empty if you specity requestNumber\n"
validation_ok = False
if self.requestType and self.requestType not in consts.SUPPORTED_REQUEST_TYPES:
validation_message += "Invalid request type:[{}]\n".format(self.requestType)
validation_ok = False
if not validation_ok:
raise ValidationError(validation_message)
return validation_ok
class Ec2PriceDimension():
def __init__(self, **kargs):
self.region = kargs['region']
self.termType = kargs.get('termType',consts.SCRIPT_TERM_TYPE_ON_DEMAND)
self.instanceType = kargs.get('instanceType','')
self.operatingSystem = kargs.get('operatingSystem',consts.SCRIPT_OPERATING_SYSTEM_LINUX)
self.instanceHours = int(kargs.get('instanceHours',0))
#TODO: Add support for pre-installed software (i.e. SQL Web in Windows instances)
self.preInstalledSoftware = 'NA'
self.licenseModel = consts.SCRIPT_EC2_LICENSE_MODEL_NONE_REQUIRED
if self.operatingSystem == consts.SCRIPT_OPERATING_SYSTEM_WINDOWS:
self.licenseModel = consts.SCRIPT_EC2_LICENSE_MODEL_NONE_REQUIRED
if self.operatingSystem == consts.SCRIPT_OPERATING_SYSTEM_WINDOWS_BYOL:
self.licenseModel = consts.SCRIPT_EC2_LICENSE_MODEL_BYOL
#Capacity Reservations
#TODO: add support for allocated and unused Capacity Reservations
self.capacityReservationStatus = consts.SCRIPT_EC2_CAPACITY_RESERVATION_STATUS_USED
#Reserved Instances
self.offeringClass = kargs.get('offeringClass',consts.SCRIPT_EC2_OFFERING_CLASS_STANDARD)
if not self.offeringClass: self.offeringClass = consts.SCRIPT_EC2_OFFERING_CLASS_STANDARD
self.instanceCount = int(kargs.get('instanceCount',0))
self.offeringType = kargs.get('offeringType','')
self.years = int(kargs.get('years',1))
#offeringType doesn't apply for on-demand
if self.termType == consts.SCRIPT_TERM_TYPE_ON_DEMAND: self.offeringType = ""
#Data Transfer
self.dataTransferOutInternetGb = int(kargs.get('dataTransferOutInternetGb',0))
self.dataTransferOutIntraRegionGb = int(kargs.get('dataTransferOutIntraRegionGb',0))
self.dataTransferOutInterRegionGb = int(kargs.get('dataTransferOutInterRegionGb',0))
self.toRegion = kargs.get('toRegion','')
#Storage
self.pIops = int(kargs.get('pIops',0))
self.storageMedia = ''
self.ebsVolumeType = kargs.get('ebsVolumeType','')
if self.ebsVolumeType in consts.EBS_VOLUME_TYPES_MAP: self.storageMedia = consts.EBS_VOLUME_TYPES_MAP[self.ebsVolumeType]['storageMedia']
self.volumeType = ''
if self.ebsVolumeType in consts.EBS_VOLUME_TYPES_MAP: self.volumeType = consts.EBS_VOLUME_TYPES_MAP[self.ebsVolumeType]['volumeType']
if not self.volumeType: self.volumeType = consts.SCRIPT_EBS_VOLUME_TYPE_GP2
self.ebsStorageGbMonth = int(kargs.get('ebsStorageGbMonth',0))
self.ebsSnapshotGbMonth = int(kargs.get('ebsSnapshotGbMonth',0))
#Elastic Load Balancer (classic)
self.elbHours = int(kargs.get('elbHours',0))
self.elbDataProcessedGb = int(kargs.get('elbDataProcessedGb',0))
#Application Load Balancer
self.albHours = int(kargs.get('albHours',0))
self.albLcus= int(kargs.get('albLcus',0))
#TODO: add support for Network Load Balancer
#TODO: Add support for dedicated tenancies
self.tenancy = kargs.get('tenancy',consts.SCRIPT_EC2_TENANCY_SHARED)
self.validate()
def validate(self):
validation_message = ""
if self.instanceType and self.instanceType not in consts.SUPPORTED_EC2_INSTANCE_TYPES:
validation_message += "instance-type is "+self.instanceType+", must be one of the following values:"+str(consts.SUPPORTED_EC2_INSTANCE_TYPES)
if self.region not in consts.SUPPORTED_REGIONS:
validation_message += "region is "+self.region+", must be one of the following values:"+str(consts.SUPPORTED_REGIONS)
if not self.operatingSystem:
validation_message += "operating-system cannot be empty\n"
if self.operatingSystem and self.operatingSystem not in consts.SUPPORTED_EC2_OPERATING_SYSTEMS:
validation_message += "operating-system is "+self.operatingSystem+", must be one of the following values:"+str(consts.SUPPORTED_EC2_OPERATING_SYSTEMS)
if self.ebsVolumeType and self.ebsVolumeType not in consts.SUPPORTED_EBS_VOLUME_TYPES:
validation_message += "ebs-volume-type is "+self.ebsVolumeType+", must be one of the following values:"+str(consts.SUPPORTED_EBS_VOLUME_TYPES)
if self.dataTransferOutInterRegionGb > 0 and not self.toRegion:
validation_message += "Must specify a to-region if you specify data-transfer-out-interregion-gb\n"
if self.dataTransferOutInterRegionGb and self.toRegion not in consts.SUPPORTED_REGIONS:
validation_message += "to-region is "+self.toRegion+", must be one of the following values:"+str(consts.SUPPORTED_REGIONS)
if self.dataTransferOutInterRegionGb and self.region == self.toRegion:
validation_message += "source and destination regions must be different for inter-regional data transfers\n"
if self.termType not in consts.SUPPORTED_TERM_TYPES:
validation_message += "term-type is "+self.termType+", must be one of the following values:[{}]".format(consts.SUPPORTED_TERM_TYPES)
if self.termType == consts.SCRIPT_TERM_TYPE_RESERVED:
if not self.offeringClass:
validation_message += "offering-class must be specified for Reserved instances\n"
if self.offeringClass and self.offeringClass not in (consts.SUPPORTED_EC2_OFFERING_CLASSES):
validation_message += "offering-class is "+self.offeringClass+", must be one of the following values:"+str(consts.SUPPORTED_EC2_OFFERING_CLASSES)
if not self.offeringType:
validation_message += "offering-type must be specified\n"
if self.offeringType and self.offeringType not in (consts.EC2_SUPPORTED_PURCHASE_OPTIONS):
validation_message += "offering-type is "+self.offeringType+", must be one of the following values:"+str(consts.EC2_SUPPORTED_PURCHASE_OPTIONS)
if self.offeringType == consts.SCRIPT_EC2_PURCHASE_OPTION_ALL_UPFRONT and self.instanceHours:
validation_message += "instance-hours cannot be set if term-type=reserved and offering-type=all-upfront\n"
if self.offeringType == consts.SCRIPT_EC2_PURCHASE_OPTION_ALL_UPFRONT and not self.instanceCount:
validation_message += "instance-count is mandatory if term-type=reserved and offering-type=all-upfront\n"
if not self.years:
validation_message += "years cannot be empty for Reserved instances"
#TODO: add validation for max number of IOPS
#TODO: add validation for negative numbers
validation_ok = True
if validation_message:
raise ValidationError(validation_message)
return validation_ok
class RdsPriceDimension():
def __init__(self, **kargs):
self.region = kargs.get('region','')
#DB Instance
self.dbInstanceClass = kargs.get('dbInstanceClass','')
if kargs.get('engine',''): self.engine = kargs['engine']
else: self.engine = consts.SCRIPT_RDS_DATABASE_ENGINE_MYSQL
self.licenseModel = kargs.get('licenseModel')
if self.engine in (consts.SCRIPT_RDS_DATABASE_ENGINE_MYSQL, consts.SCRIPT_RDS_DATABASE_ENGINE_POSTGRESQL,
consts.SCRIPT_RDS_DATABASE_ENGINE_AURORA_MYSQL, consts.SCRIPT_RDS_DATABASE_ENGINE_AURORA_MYSQL_LONG,
consts.SCRIPT_RDS_DATABASE_ENGINE_AURORA_POSTGRESQL,
consts.SCRIPT_RDS_DATABASE_ENGINE_MARIADB):
self.licenseModel = consts.SCRIPT_RDS_LICENSE_MODEL_PUBLIC
if self.engine in (consts.SCRIPT_RDS_DATABASE_ENGINE_SQL_SERVER_STANDARD, consts.SCRIPT_RDS_DATABASE_ENGINE_SQL_SERVER_ENTERPRISE,
consts.SCRIPT_RDS_DATABASE_ENGINE_SQL_SERVER_WEB, consts.SCRIPT_RDS_DATABASE_ENGINE_SQL_SERVER_EXPRESS,
consts.SCRIPT_RDS_DATABASE_ENGINE_ORACLE_ENTERPRISE, consts.SCRIPT_RDS_DATABASE_ENGINE_ORACLE_STANDARD,
consts.SCRIPT_RDS_DATABASE_ENGINE_ORACLE_STANDARD_ONE, consts.SCRIPT_RDS_DATABASE_ENGINE_ORACLE_STANDARD_TWO) \
and not self.licenseModel:
self.licenseModel = consts.SCRIPT_RDS_LICENSE_MODEL_INCLUDED
self.instanceHours = int(kargs.get('instanceHours',0))
tmpmultiaz = str(kargs.get('multiAz','false')).lower()
self.deploymentOption = ''
if tmpmultiaz == 'true': self.deploymentOption = consts.RDS_DEPLOYMENT_OPTION_MULTI_AZ
if tmpmultiaz == 'false': self.deploymentOption = consts.RDS_DEPLOYMENT_OPTION_SINGLE_AZ
#OnDemand vs. Reserved
self.termType = kargs.get('termType',consts.SCRIPT_TERM_TYPE_ON_DEMAND)
self.offeringClass = consts.SCRIPT_EC2_OFFERING_CLASS_STANDARD #TODO: add support for others, besides 'standard'
if not self.offeringClass: self.offeringClass = consts.SCRIPT_EC2_OFFERING_CLASS_STANDARD
self.offeringType = kargs.get('offeringType','')
self.instanceCount = int(kargs.get('instanceCount',0))
self.years = int(kargs.get('years',1))
#offeringType doesn't apply for on-demand
if self.termType == consts.SCRIPT_TERM_TYPE_ON_DEMAND: self.offeringType = ""
#TODO - create a separate model for DataTransfer
#Data Transfer
self.dataTransferOutInternetGb = kargs.get('dataTransferOutInternetGb',0)
self.dataTransferOutIntraRegionGb = kargs.get('dataTransferOutIntraRegionGb',0)
self.dataTransferOutInterRegionGb = kargs.get('dataTransferOutInterRegionGb',0)
self.toRegion = kargs.get('toRegion','')
#Storage
self.storageGbMonth = int(kargs.get('storageGbMonth',0))
self.storageType = kargs.get('storageType','')
self.iops= int(kargs.get('iops',0))
self.ioRequests= int(kargs.get('ioRequests',0))
self.backupStorageGbMonth = int(kargs.get('backupStorageGbMonth',0))
self.validate()
if self.engine in (consts.SCRIPT_RDS_DATABASE_ENGINE_AURORA_MYSQL, consts.SCRIPT_RDS_DATABASE_ENGINE_AURORA_MYSQL):
self.storageType = consts.SCRIPT_RDS_STORAGE_TYPE_AURORA
self.volumeType = self.calculate_volume_type()
def calculate_volume_type(self):
#TODO:add condition for Aurora
if self.storageType in consts.RDS_VOLUME_TYPES_MAP:
return consts.RDS_VOLUME_TYPES_MAP[self.storageType]
def validate(self):
#TODO: add validations for data transfer
#TODO: add validations for different combinations of engine, edition and license
#TODO: add validations for multiAz (deploymentOption cannot be empty)
validation_ok = True
validation_message = ""
valid_engine = True
if self.dbInstanceClass and self.dbInstanceClass not in consts.SUPPORTED_RDS_INSTANCE_CLASSES:
validation_message = "\n" + "db-instance-class must be one of the following values:"+str(consts.SUPPORTED_RDS_INSTANCE_CLASSES)
if self.region not in consts.SUPPORTED_REGIONS:
validation_message += "\n" + "region must be one of the following values:"+str(consts.SUPPORTED_REGIONS)
if self.engine and self.engine not in consts.RDS_SUPPORTED_DB_ENGINES:
validation_message += "\n" + "engine must be one of the following values:"+str(consts.RDS_SUPPORTED_DB_ENGINES)
valid_engine = False
if valid_engine and self.engine in (consts.SCRIPT_RDS_DATABASE_ENGINE_MYSQL,consts.SCRIPT_RDS_DATABASE_ENGINE_POSTGRESQL,
consts.SCRIPT_RDS_DATABASE_ENGINE_AURORA_MYSQL, consts.SCRIPT_RDS_DATABASE_ENGINE_AURORA_POSTGRESQL):
if self.licenseModel not in (consts.RDS_SUPPORTED_LICENSE_MODELS):
validation_message += "\n" + "you have specified license model [{}] - license-model must be one of the following values:{}".format(self.licenseModel, consts.RDS_SUPPORTED_LICENSE_MODELS)
if self.engine in (consts.SCRIPT_RDS_DATABASE_ENGINE_AURORA_MYSQL, consts.SCRIPT_RDS_DATABASE_ENGINE_AURORA_POSTGRESQL):
if self.storageType in (consts.SCRIPT_RDS_STORAGE_TYPE_STANDARD, consts.SCRIPT_RDS_STORAGE_TYPE_GP2, consts.SCRIPT_RDS_STORAGE_TYPE_IO1):
validation_message += "\nyou have specified {} storage type, which is invalid for DB engine {}".format(self.storageType, self.engine)
if self.storageType:
if self.storageType not in consts.SUPPORTED_RDS_STORAGE_TYPES:
validation_message += "\n" + "storage-type must be one of the following values:"+str(consts.SUPPORTED_RDS_STORAGE_TYPES)
if self.storageType == consts.SCRIPT_RDS_STORAGE_TYPE_IO1 and not self.iops:
validation_message += "\n" + "you must specify an iops value for storage type io1"
if self.storageType == consts.SCRIPT_RDS_STORAGE_TYPE_IO1 and self.storageGbMonth and self.storageGbMonth < 100 :
validation_message += "\nyou have specified {}GB of storage. You must specify at least 100GB of storage for io1".format(self.storageGbMonth)
if self.termType not in consts.SUPPORTED_TERM_TYPES:
validation_message += "term-type is "+self.termType+", must be one of the following values:[{}]".format(consts.SUPPORTED_TERM_TYPES)
#TODO: move to a common place for all reserved pricing (EC2, RDS, etc.)
if self.termType == consts.SCRIPT_TERM_TYPE_RESERVED:
if not self.offeringClass:
validation_message += "offering-class must be specified for Reserved instances\n"
if self.offeringClass and self.offeringClass not in (consts.SUPPORTED_EC2_OFFERING_CLASSES):
validation_message += "offering-class is "+self.offeringClass+", must be one of the following values:"+str(consts.SUPPORTED_EC2_OFFERING_CLASSES)
if not self.offeringType:
validation_message += "offering-type must be specified\n"
if self.offeringType and self.offeringType not in (consts.EC2_SUPPORTED_PURCHASE_OPTIONS):
validation_message += "offering-type is "+self.offeringType+", must be one of the following values:"+str(consts.EC2_SUPPORTED_PURCHASE_OPTIONS)
if self.offeringType == consts.SCRIPT_EC2_PURCHASE_OPTION_ALL_UPFRONT and self.instanceHours:
validation_message += "instance-hours cannot be set if term-type=reserved and offering-type=all-upfront\n"
if self.offeringType == consts.SCRIPT_EC2_PURCHASE_OPTION_ALL_UPFRONT and not self.instanceCount:
validation_message += "instance-count is mandatory if term-type=reserved and offering-type=all-upfront\n"
if not self.years:
validation_message += "years cannot be empty for Reserved instances"
if validation_message:
log.error("{}".format(validation_message))
raise ValidationError(validation_message)
return
class EmrPriceDimension():
def __init__(self, **kargs):
self.region = kargs['region']
self.termType = kargs.get('termType',consts.SCRIPT_TERM_TYPE_ON_DEMAND)
self.instanceType = kargs.get('instanceType','')
self.years = int(kargs.get('years',1))
self.instanceCount = int(kargs.get('instanceCount',0))
ec2InstanceHours = 0
if self.termType == consts.SCRIPT_TERM_TYPE_RESERVED:
self.instanceHours = calculate_instance_hours_year(self.instanceCount, self.years)
else:
self.instanceHours = int(kargs.get('instanceHours',0))
ec2InstanceHours = self.instanceHours #only set ec2InstanceHours for OnDemand, otherwise EC2 validation will fail
self.offeringClass = kargs.get('offeringClass',consts.SCRIPT_EC2_OFFERING_CLASS_STANDARD)
if not self.offeringClass: self.offeringClass = consts.SCRIPT_EC2_OFFERING_CLASS_STANDARD
self.offeringType = kargs.get('offeringType', consts.SCRIPT_EC2_PURCHASE_OPTION_ALL_UPFRONT)
#offeringType doesn't apply for on-demand
if self.termType == consts.SCRIPT_TERM_TYPE_ON_DEMAND: self.offeringType = ""
#Data Transfer
self.dataTransferOutInternetGb = kargs.get('dataTransferOutInternetGb',0)
self.dataTransferOutIntraRegionGb = kargs.get('dataTransferOutIntraRegionGb',0)
self.dataTransferOutInterRegionGb = kargs.get('dataTransferOutInterRegionGb',0)
self.toRegion = kargs.get('toRegion','')
ec2Args = {'region':self.region, 'termType':self.termType, 'instanceType':self.instanceType,
'instanceHours':ec2InstanceHours, 'pIops':int(kargs.get('pIops',0)), 'ebsVolumeType':kargs.get('ebsVolumeType',''),
'ebsStorageGbMonth':int(kargs.get('ebsStorageGbMonth',0)), 'instanceCount': kargs.get('instanceCount',0),
'years': self.years, 'offeringType':self.offeringType, 'offeringClass':self.offeringClass}
#self.ec2PriceDims = Ec2PriceDimension(**ec2Args)
self.ec2PriceDims = ec2Args
self.validate()
def validate(self):
validation_message = ""
#TODO: add supported EMR Instance Type validation
if self.region not in consts.SUPPORTED_REGIONS:
validation_message += "region is "+self.region+", must be one of the following values:"+str(consts.SUPPORTED_REGIONS)
if self.termType not in consts.SUPPORTED_TERM_TYPES:
validation_message += "term-type is "+self.termType+", must be one of the following values:[{}]".format(consts.SUPPORTED_TERM_TYPES)
if self.offeringClass not in consts.SUPPORTED_EMR_OFFERING_CLASSES:
validation_message += "offeringClass is "+self.offeringClass+", must be one of the following values:[{}]".format(consts.SUPPORTED_EMR_OFFERING_CLASSES)
validation_ok = True
if validation_message:
raise ValidationError(validation_message)
return validation_ok
class RedshiftPriceDimension():
def __init__(self, **kargs):
self.region = kargs['region']
self.termType = kargs.get('termType',consts.SCRIPT_TERM_TYPE_ON_DEMAND)
self.instanceType = kargs.get('instanceType','')
self.instanceHours = int(kargs.get('instanceHours',0))
#Reserved Instances
self.offeringClass = kargs.get('offeringClass',consts.SCRIPT_EC2_OFFERING_CLASS_STANDARD)
if not self.offeringClass: self.offeringClass = consts.SCRIPT_EC2_OFFERING_CLASS_STANDARD
self.instanceCount = int(kargs.get('instanceCount',0))
self.offeringType = kargs.get('offeringType','')
self.years = int(kargs.get('years',1))
#offeringType doesn't apply for on-demand
if self.termType == consts.SCRIPT_TERM_TYPE_ON_DEMAND: self.offeringType = ""
#TODO: add storage snapshots, data scan, concurrency scaling,
#Data Transfer
self.dataTransferOutInternetGb = kargs.get('dataTransferOutInternetGb',0)
self.dataTransferOutIntraRegionGb = kargs.get('dataTransferOutIntraRegionGb',0)
self.dataTransferOutInterRegionGb = kargs.get('dataTransferOutInterRegionGb',0)
self.toRegion = kargs.get('toRegion','')
self.validate()
def validate(self):
validation_message = ""
if self.region not in consts.SUPPORTED_REGIONS:
validation_message += "region is "+self.region+", must be one of the following values:"+str(consts.SUPPORTED_REGIONS)
if self.termType not in consts.SUPPORTED_TERM_TYPES:
validation_message += "term-type is "+self.termType+", must be one of the following values:[{}]".format(consts.SUPPORTED_TERM_TYPES)
if self.offeringClass not in consts.SUPPORTED_REDSHIFT_OFFERING_CLASSES:
validation_message += "offeringClass is "+self.offeringClass+", must be one of the following values:[{}]".format(consts.SUPPORTED_REDSHIFT_OFFERING_CLASSES)
if self.instanceType not in consts.SUPPORTED_REDSHIFT_INSTANCE_TYPES:
validation_message += "instanceType is "+self.instanceType+", must be one of the following values:[{}]".format(consts.SUPPORTED_REDSHIFT_INSTANCE_TYPES)
validation_ok = True
if validation_message:
raise ValidationError(validation_message)
return validation_ok
class LambdaPriceDimension():
def __init__(self,**kargs):
self.region = kargs.get('region','')
self.termType = consts.SCRIPT_TERM_TYPE_ON_DEMAND
self.requestCount = kargs.get('requestCount',0)
self.avgDurationMs = kargs.get('avgDurationMs',0)
self.memoryMb = kargs.get('memoryMb',0)
self.dataTransferOutInternetGb = kargs.get('dataTransferOutInternetGb',0)
self.dataTransferOutIntraRegionGb = kargs.get('dataTransferOutIntraRegionGb')
self.dataTransferOutInterRegionGb = kargs.get('dataTransferOutInterRegionGb',0)
self.toRegion = kargs.get('toRegion','')
self.validate()
self.GBs = self.requestCount * (float(self.avgDurationMs) / 1000) * (float(self.memoryMb) / 1024)
def validate(self):
validation_message = ""
if not self.region:
validation_message += "Region must be specified\n"
if self.region not in consts.SUPPORTED_REGIONS:
validation_message += "Region is "+self.region+" ,must be one of the following:"+str(consts.SUPPORTED_REGIONS)
if self.requestCount and self.avgDurationMs == 0:
validation_message += "Cannot have value for requestCount and avgDurationMs=0\n"
if self.requestCount and self.memoryMb== 0:
validation_message += "Cannot have value for requestCount and memoryMb=0\n"
if self.requestCount == 0 and (self.avgDurationMs > 0 or self.memoryMb > 0):
validation_message += "Cannot have value for average duration or memory if requestCount is zero\n"
if self.dataTransferOutInterRegionGb > 0 and not self.toRegion:
validation_message += "Must specify a to-region if you specify data-transfer-out-interregion-gb\n"
if validation_message:
log.error("{}".format(validation_message))
raise ValidationError(validation_message)
return
class DynamoDBPriceDimension():
def __init__(self,**kargs):
self.region = kargs.get('region','')
self.termType = consts.SCRIPT_TERM_TYPE_ON_DEMAND
self.readCapacityUnitHours = kargs.get('readCapacityUnitHours',0)
self.writeCapacityUnitHours = kargs.get('writeCapacityUnitHours',0)
self.requestCount = kargs.get('requestCount',0)#used for reads to DDB Streams
self.dataTransferOutGb = kargs.get('dataTransferOutGb',0)
"""
self.dataTransferOutInterRegionGb = kargs.get('dataTransferOutInterRegionGb',0)
self.toRegion = kargs.get('toRegion','')
"""
self.validate()
def validate(self):
validation_message = ""
if not self.region:
validation_message += "Region must be specified\n"
if self.region not in consts.SUPPORTED_REGIONS:
validation_message += "Region is "+self.region+", must be one of the following:"+str(consts.SUPPORTED_REGIONS)
if self.readCapacityUnitHours == 0:
validation_message += "readCapacityUnitHours cannot be 0\n"
if self.writeCapacityUnitHours == 0:
validation_message += "writeCapacityUnitHours cannot be 0\n"
if validation_message:
print("Error: [{}]".format(validation_message))
raise ValidationError(validation_message)
return
"""
Please note the following - from https://aws.amazon.com/kinesis/streams/pricing/:
* Getting records from Amazon Kinesis stream is free.
* Data transfer is free. AWS does not charge for data transfer from your data producers to Amazon Kinesis Streams, or from Amazon Kinesis Streams to your Amazon Kinesis Applications.
"""
class KinesisPriceDimension():
def __init__(self,**kargs):
self.region = kargs.get('region','')
self.termType = consts.SCRIPT_TERM_TYPE_ON_DEMAND
self.shardHours = 0
self.shardHours = int(kargs.get('shardHours',0))
self.putPayloadUnits = int(kargs.get('putPayloadUnits',0))
self.extendedDataRetentionHours = int(kargs.get('extendedDataRetentionHours',0))
self.validate()
def validate(self):
validation_message = ""
if not self.region:
validation_message += "Region must be specified\n"
if self.region not in consts.SUPPORTED_REGIONS:
validation_message += "region is "+self.region+", must be one of the following values:"+str(consts.SUPPORTED_REGIONS)
if self.shardHours == 0:
validation_message += "shardHours cannot be 0\n"
if validation_message:
print("Error: [{}]".format(validation_message))
raise ValidationError(validation_message)
return
"""
This object represents the total price calculation.
It includes an array of PricingRecord objects, which are a breakdown of how the price is calculated
"""
#TODO: update arguments, remove region and include one argument for pdim
class PricingResult():
def __init__(self, awsPriceListApiVersion, region, total_cost, pricing_records, **args):
total_cost = round(total_cost,2)
self.version = consts.AWS_PRICE_CALCULATOR_VERSION
self.awsPriceListApiVersion = awsPriceListApiVersion
self.region = region
self.totalCost = round(total_cost,2)
self.currency = consts.DEFAULT_CURRENCY
self.pricingRecords = pricing_records
tmpPriceDimensions = args.get('priceDimensions',{})
if tmpPriceDimensions: self.priceDimensions = args.get('priceDimensions',{}).__dict__
else: self.priceDimensions = tmpPriceDimensions
class PricingRecord():
def __init__(self, service, amt, desc, pricePerUnit, usgUnits, rateCode):
usgUnits = round(float(usgUnits),2)
amt = round(amt,2)
self.service = service
self.amount = amt
self.description = desc
self.pricePerUnit = pricePerUnit
self.usageUnits = int(usgUnits)
self.rateCode = rateCode
"""
This class is a container for generic price comparisons, with different values for a single parameter.
For example, comparing a specific price calculation for different regions, or EC2 instance types, or OS
"""
class PriceComparison():
def __init__(self, awsPriceListApiVersion, service, sortCriteria):
self.version = "v1.0"
self.awsPriceListApiVersion = awsPriceListApiVersion
self.service = service
self.sortCriteria = sortCriteria
self.currency = consts.DEFAULT_CURRENCY
self.pricingScenarios = []
#TODO: implement get_csv_data and get_tabular_data
class PricingScenario():
def __init__(self, index, id, priceDimensions, priceCalculation, totalCost, sortCriteria):
self.index = index
self.id = id
self.displayName = self.getDisplayName(sortCriteria)
self.priceDimensions = priceDimensions
#Remove redundant information from priceCalculation object.
#This information is already present in PriceComparison or PricingScenario objects
priceCalculation.pop('awsPriceListApiVersion','')
priceCalculation.pop('totalCost','')
priceCalculation.pop('service','')
priceCalculation.pop('currency','')
self.priceCalculation = priceCalculation
self.totalCost = round(totalCost,2)
self.deltaPrevious = 0 #how more expensive is this item, compared to cheapest option - in $
self.deltaCheapest = 0 #how more expensive is this item, compared to cheapest option - in %
self.pctToPrevious = 0 #how more expensive is this item, compared to next lower option - in $
self.pctToCheapest = 0 #how more expensive is this item, compared to next lower option - in %
self.totalCost = round(totalCost,2)
def getDisplayName(self, sortCriteria):
result = ''
if sortCriteria == consts.SORT_CRITERIA_REGION:
result = consts.REGION_REPORT_MAP.get(self.id,'N/A')
#TODO: update for all supported sortCriteria options
else:
result = self.id
return result
"""
This class is a container for price calculations between different term types, such as Demand vs. Reserved
"""
class TermPricingAnalysis():
def __init__(self, awsPriceListApiVersion, regions, service, years):
self.version = "v1.0"
self.awsPriceListApiVersion = awsPriceListApiVersion
self.regions = regions
self.service = service
self.currency = consts.DEFAULT_CURRENCY
self.years = years
self.pricingScenarios = []
self.monthlyBreakdown = []
self.tabularData = ""
def get_pricing_scenario(self, region, termType, offerClass, offerType, years):
#print ("get_pricing_scenario - looking for termType:[{}] - offerClass:[{}] - offerType:[{}] - years:[{}]".format())
for p in self.pricingScenarios:
#print ("get_pricing_scenario - looking for termType:[{}] - offerClass:[{}] - offerType:[{}] - years:[{}] in priceDimensions: [{}]".format(termType, offerClass, offerType, years, p['priceDimensions']))
if p['priceDimensions']['region']==region and p['priceDimensions']['termType']==termType \
and p['priceDimensions'].get('offeringClass',consts.SCRIPT_EC2_OFFERING_CLASS_STANDARD)==offerClass \
and p['priceDimensions'].get('offeringType','')==offerType and str(p['priceDimensions']['years'])==str(years):
return p
if p['priceDimensions']['region']==region and p['priceDimensions']['termType']==termType \
and termType == consts.SCRIPT_TERM_TYPE_ON_DEMAND \
and str(p['priceDimensions']['years'])==str(years):
return p
return False
def calculate_months_to_recover(self):
updatedPricingScenarios = []
for s in self.pricingScenarios:
accumamt = 0
month = 1
while month <= int(self.years)*12:
if month == 1:
#if s['id'] == 'reserved-{}-partial-upfront-{}yr'.format(s['priceDimensions']['offeringClass'],self.years):
if 'reserved-{}-partial-upfront-{}yr'.format(s['priceDimensions']['offeringClass'],self.years) in s['id']:
accumamt = self.getUpfrontFee(self.get_pricing_scenario(s['priceDimensions']['region'], consts.SCRIPT_TERM_TYPE_RESERVED,
s['priceDimensions']['offeringClass'],
consts.SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT, str(self.years)))
#if s['id'] == 'reserved-{}-all-upfront-{}yr'.format(s['priceDimensions']['offeringClass'],self.years):
if 'reserved-{}-all-upfront-{}yr'.format(s['priceDimensions']['offeringClass'],self.years) in s['id']:
accumamt = self.getUpfrontFee(self.get_pricing_scenario(s['priceDimensions']['region'], consts.SCRIPT_TERM_TYPE_RESERVED,
s['priceDimensions']['offeringClass'],
consts.SCRIPT_EC2_PURCHASE_OPTION_ALL_UPFRONT, str(self.years)))
#if s['id'] == 'reserved-{}-partial-upfront-{}yr'.format(s['priceDimensions']['offeringClass'],self.years): accumamt += self.getMonthlyCost(self.get_pricing_scenario(consts.SCRIPT_TERM_TYPE_RESERVED, s['priceDimensions']['offeringClass'], consts.SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT, self.years))
if 'reserved-{}-partial-upfront-{}yr'.format(s['priceDimensions']['offeringClass'],self.years) in s['id']:
accumamt += self.getMonthlyFee(self.get_pricing_scenario(s['priceDimensions']['region'], consts.SCRIPT_TERM_TYPE_RESERVED,
s['priceDimensions']['offeringClass'], consts.SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT, self.years))
#if s['id'] == 'reserved-{}-no-upfront-{}yr'.format(s['priceDimensions']['offeringClass'], self.years): accumamt += self.getMonthlyCost(self.get_pricing_scenario(consts.SCRIPT_TERM_TYPE_RESERVED, s['priceDimensions']['offeringClass'], consts.SCRIPT_EC2_PURCHASE_OPTION_NO_UPFRONT, self.years))
if 'reserved-{}-no-upfront-{}yr'.format(s['priceDimensions']['offeringClass'], self.years) in s['id']:
accumamt += self.getMonthlyFee(self.get_pricing_scenario(s['priceDimensions']['region'], consts.SCRIPT_TERM_TYPE_RESERVED,
s['priceDimensions']['offeringClass'], consts.SCRIPT_EC2_PURCHASE_OPTION_NO_UPFRONT, self.years))
if ((s['onDemandTotalCost']/(int(self.years)*12))*month) >= accumamt:
break
month += 1
s['monthsToRecover'] = month
updatedPricingScenarios.append(s)
self.pricingScenarios = updatedPricingScenarios
def calculate_monthly_breakdown(self):
#TODO: validate that years is either 1 or 3
monthlyScenarios = []
month = 1
while month <= int(self.years) * 12:
monthDict = {}
monthDict['month']=month
for p in self.pricingScenarios: #at this point, scenarios are already sorted
#TODO: confirm if partial upfront also gets the monthly fee applied on the first month, or not.
#If the analysis includes multiple regions, include the region in the scenario id.
if len(self.regions)>1: tmpregionid = "{}-".format(p.get('priceDimensions',{}).get('region',''))
else: tmpregionid = ""
if 'all-upfront' in p['id']:
monthDict['{}reserved-{}-all-upfront-{}yr'.format(tmpregionid, p['priceDimensions']['offeringClass'], self.years)] = \
round(self.getUpfrontFee(self.get_pricing_scenario(p['priceDimensions']['region'], consts.SCRIPT_TERM_TYPE_RESERVED,
p['priceDimensions']['offeringClass'], consts.SCRIPT_EC2_PURCHASE_OPTION_ALL_UPFRONT, str(self.years))) + \
month * self.getMonthlyFee(self.get_pricing_scenario(p['priceDimensions']['region'], consts.SCRIPT_TERM_TYPE_RESERVED,
p['priceDimensions']['offeringClass'], consts.SCRIPT_EC2_PURCHASE_OPTION_ALL_UPFRONT, str(self.years))),2)
if 'partial-upfront' in p['id']:
monthDict['{}reserved-{}-partial-upfront-{}yr'.format(tmpregionid, p['priceDimensions']['offeringClass'], self.years)] = \
round(self.getUpfrontFee(self.get_pricing_scenario(p['priceDimensions']['region'], consts.SCRIPT_TERM_TYPE_RESERVED,
p['priceDimensions']['offeringClass'], consts.SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT, str(self.years))) + \
month * self.getMonthlyFee(self.get_pricing_scenario(p['priceDimensions']['region'], consts.SCRIPT_TERM_TYPE_RESERVED,
p['priceDimensions']['offeringClass'], consts.SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT, str(self.years))),2)
if 'no-upfront' in p['id']:
monthDict['{}reserved-{}-no-upfront-{}yr'.format(tmpregionid,p['priceDimensions']['offeringClass'], self.years)] = \
round(month * self.getMonthlyFee(self.get_pricing_scenario(p['priceDimensions']['region'], consts.SCRIPT_TERM_TYPE_RESERVED,
p['priceDimensions']['offeringClass'], consts.SCRIPT_EC2_PURCHASE_OPTION_NO_UPFRONT, str(self.years))),2)
if 'on-demand' in p['id']:
monthDict['{}on-demand-{}yr'.format(tmpregionid,self.years)] = \
round(month * self.getMonthlyFee(self.get_pricing_scenario(p['priceDimensions']['region'], consts.SCRIPT_TERM_TYPE_ON_DEMAND,
consts.SCRIPT_EC2_OFFERING_CLASS_STANDARD, '', str(self.years))),2)
monthlyScenarios.append(monthDict)
month += 1
self.monthlyBreakdown = monthlyScenarios
return
def get_csv_data(self):
#First, sort keys from high to low total
finalMonth = self.monthlyBreakdown[len(self.monthlyBreakdown)-1]
unsorted = []
sortedscenarios = []
csvtxt = ""
for k in finalMonth.keys():unsorted.append((finalMonth[k],k))
sortedscenarios = sorted(unsorted)
comma = ","
i = 0
#print header row
for s in sortedscenarios:
i += 1
if i == len(sortedscenarios): comma = "" #avoid comma at the end of each row
csvtxt += "{}{}".format(s[1],comma) #print headers
#print monthly scenario rows
for m in self.monthlyBreakdown:
csvtxt += "\n"
comma = ","
i = 0
for s in sortedscenarios:
i += 1
if i == len(sortedscenarios): comma = "" #avoid comma at the end of each row
csvtxt += "{}{}".format(m[s[1]],comma)
self.csvData = csvtxt
return
def get_tabular_data(self):
if not self.csvData: self.get_csv_data()
csvtext = self.csvData
#need to reduce the length of each field, otherwise the table headers overflow and data is not displayed properly
headers = csvtext.split("\n")[0].replace("reserved","rsv").replace("standard","std").\
replace("convertible","conv").replace("upfront","upfr").replace("partial","part").replace("demand","dmd").split(",")
data = []
for r in csvtext.split("\n")[1:]:
data.append(r.split(","))
self.tabularData = tabulate(data, headers=headers, tablefmt='github')
"""
def get_csv_data(self):
#TODO: validate that years is either 1 or 3
def get_csv_dict():
result = {}
for k in get_sorted_keys():
result[k[1]]=0
return result
def get_sorted_keys():
#TODO: add to constants
#TODO: update order in which scenarios get displayed
return sorted(((1,'month'),(2,'on-demand-{}yr'.format(self.years)),(3,'reserved-all-upfront-{}yr'.format(self.years)),
(4,'reserved-partial-upfront-{}yr'.format(self.years)),(4,'reserved-no-upfront-{}yr'.format(self.years))))
def get_sorted_key_separator(k):
result = ','
sortedkeys = get_sorted_keys()
if k == sortedkeys[len(sortedkeys)-1]:result = '' #don't add a comma at the end of the line
return result
month = 1
#TODO: see if this block can be removed
for s in self.pricingScenarios:
if s['id'] == "on-demand-{}yr".format(self.years): onDemand = s['pricingRecords']
if s['id'] == "reserved-no-upfront-{}yr".format(self.years): reserved1YrNoUpfront = s['pricingRecords']
if s['id'] == "reserved-all-upfront-{}yr".format(self.years): reserved1YrAllUpfrontAccum = s['totalCost']
for p in s['pricingRecords']:
if 'upfront' in p['description'].lower():
if s['id'] == "reserved-partial-upfront-{}yr".format(self.years): reserved1YrPartialUpfront = p['amount']
if s['id'] == "reserved-all-upfront-{}yr".format(self.years): reserved1YrAllUpfront = p['amount']
else: reserved1YrPartialUpfrontApplied = p['amount']
accumDict = get_csv_dict()
csvdata = ""
sortedkeys = get_sorted_keys()
while month <= int(self.years) * 12:
accumDict['month']=month
if month == 1:
accumDict['reserved-partial-upfront-{}yr'.format(self.years)] = self.getUpfrontFee(self.get_pricing_scenario(consts.SCRIPT_TERM_TYPE_RESERVED, consts.SCRIPT_EC2_OFFERING_CLASS_STANDARD, consts.SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT, str(self.years)))
accumDict['reserved-all-upfront-{}yr'.format(self.years)] = self.getUpfrontFee(self.get_pricing_scenario(consts.SCRIPT_TERM_TYPE_RESERVED, consts.SCRIPT_EC2_OFFERING_CLASS_STANDARD, consts.SCRIPT_EC2_PURCHASE_OPTION_ALL_UPFRONT, str(self.years)))
accumDict['reserved-partial-upfront-{}yr'.format(self.years)] += self.getMonthlyCost(self.get_pricing_scenario(consts.SCRIPT_TERM_TYPE_RESERVED, consts.SCRIPT_EC2_OFFERING_CLASS_STANDARD, consts.SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT, str(self.years)))
accumDict['on-demand-{}yr'.format(self.years)] += self.getMonthlyCost(self.get_pricing_scenario(consts.SCRIPT_TERM_TYPE_ON_DEMAND, consts.SCRIPT_EC2_OFFERING_CLASS_STANDARD, consts.SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT, str(self.years)))
accumDict['reserved-no-upfront-{}yr'.format(self.years)] += self.getMonthlyCost(self.get_pricing_scenario(consts.SCRIPT_TERM_TYPE_RESERVED, consts.SCRIPT_EC2_OFFERING_CLASS_STANDARD, consts.SCRIPT_EC2_PURCHASE_OPTION_NO_UPFRONT, str(self.years)))
for k in sortedkeys:
amt = 0
if k[1] == 'month': amt = accumDict[k[1]]
else: amt = round(accumDict[k[1]],2)
csvdata += "{}{}".format(amt,get_sorted_key_separator(k))
csvdata += "\n"
month += 1
csvheaders = ""
for k in sortedkeys: csvheaders += k[1]+get_sorted_key_separator(k)
csvheaders += "\n"
self.csvData = csvheaders + csvdata
return
"""
"""
Calculate any upfront fees applied to pricing. The only scenarios applicable for this fee are All Upfront and Partial Upfront
"""
def getUpfrontFee(self, pricingScenarioDict):
result = 0
if pricingScenarioDict:
for p in pricingScenarioDict['pricingRecords']:
if pricingScenarioDict['priceDimensions'].get('offeringType','') in (consts.SCRIPT_EC2_PURCHASE_OPTION_ALL_UPFRONT, consts.SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT) \
and 'upfront' in p['description'].lower():
result = p['amount']
break
return result
"""
Calculate any monthly fee paid in addition to an Upfront Fee - for example, monthly fee after Partial Upfront has been paid,
or monthly fee for No Upfront
"""
def getMonthlyFee(self, pricingScenarioDict):
result = 0
if pricingScenarioDict:
print("getMonthlyCost - scenario:[{}] - totalCost:[{}]".format(pricingScenarioDict['id'], pricingScenarioDict['totalCost']))
for p in pricingScenarioDict['pricingRecords']:
if 'upfront' not in p['description'].lower():
result += p['amount'] / (12 * int(pricingScenarioDict['priceDimensions']['years']))
return result
"""
def getMonthlyFee(self, pricingScenarioDict):
result = 0
if pricingScenarioDict:
print("getMonthlyCost - scenario:[{}] - totalCost:[{}]".format(pricingScenarioDict['id'], pricingScenarioDict['totalCost']))
for p in pricingScenarioDict['pricingRecords']:
tmpamt = p['amount'] / (12 * int(pricingScenarioDict['priceDimensions']['years']))
if pricingScenarioDict['priceDimensions'].get('offeringType','') != consts.SCRIPT_EC2_PURCHASE_OPTION_ALL_UPFRONT \
and 'upfront' not in p['description'].lower():
result += tmpamt
#Cost for EMR scenarios consists of EC2 + EMR fee - on-demand scenarios already have EMR + EC2
if self.service == consts.SERVICE_EMR and p['service'] == consts.SERVICE_EMR and pricingScenarioDict['priceDimensions'].get('termType','') != consts.SCRIPT_TERM_TYPE_ON_DEMAND:
result += tmpamt
return result
"""
class TermPricingScenario():
def __init__(self, id, priceDimensions, pricingRecords, totalCost, onDemandTotalCost):
self.index = None
self.id = id
self.priceDimensions = priceDimensions
self.pricingRecords = pricingRecords
self.totalCost = round(totalCost,2)
self.deltaPrevious = 0 #how more expensive is this item, compared to cheapest option - in $
self.deltaCheapest = 0 #how more expensive is this item, compared to cheapest option - in %
self.pctToPrevious = 0 #how more expensive is this item, compared to next lower option - in $
self.pctToCheapest = 0 #how more expensive is this item, compared to next lower option - in %
self.onDemandTotalCost = round(onDemandTotalCost,2)
self.savingsPctvsOnDemand = 0
self.totalSavingsvsOnDemand = 0
self.monthsToRecover = 0
#TODO: implement onDemandMonthsToSavings
#TODO: implement as a subclass of PricingScenario
def calculateOnDemandSavings(self):
#TODO: confirm if Partial Upfront is 1 upfront + 12 payments and if the initial payment is applied on month 1
if self.onDemandTotalCost:
self.savingsPctvsOnDemand = math.fabs(round((100 * (self.totalCost - self.onDemandTotalCost) / self.onDemandTotalCost),2))
if self.priceDimensions['termType'] == consts.SCRIPT_TERM_TYPE_ON_DEMAND:
self.totalSavingsvsOnDemand = 0
else:
self.totalSavingsvsOnDemand = round((self.onDemandTotalCost - self.totalCost),2)
#This method is duplicate in utils - need to find a way to remove circular dependency and avoid duplication
def calculate_instance_hours_year(instanceCount, years):
return 365 * 24 * int(instanceCount) * int(years)
================================================
FILE: awspricecalculator/common/phelper.py
================================================
from . import consts
import os, sys
import datetime
import logging
import csv, json
from .models import PricingRecord, PricingResult
from .errors import NoDataFoundError
log = logging.getLogger()
log.setLevel(consts.LOG_LEVEL)
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
site_pkgs = os.path.abspath(os.path.join(__location__, os.pardir, os.pardir,"lib", "python3.7", "site-packages" ))
sys.path.append(site_pkgs)
#print "site_pkgs: [{}]".format(site_pkgs)
import tinydb
def get_data_directory(service):
result = os.path.split(__location__)[0] + '/data/' + service + '/'
return result
def getBillableBand(priceDimensions, usageAmount):
billableBand = 0
beginRange = int(priceDimensions['beginRange'])
endRange = priceDimensions['endRange']
pricePerUnit = priceDimensions['pricePerUnit']['USD']
if endRange == consts.INFINITY:
if beginRange < usageAmount:
billableBand = usageAmount - beginRange
else:
endRange = int(endRange)
if endRange >= usageAmount and beginRange < usageAmount:
billableBand = usageAmount - beginRange
if endRange < usageAmount:
billableBand = endRange - beginRange
return billableBand
def getBillableBandCsv(row, usageAmount):
billableBand = 0
pricePerUnit = 0
amt = 0
if not row['StartingRange']:beginRange = 0
else: beginRange = int(row['StartingRange'])
if not row['EndingRange']:endRange = consts.INFINITY
else: endRange = row['EndingRange']
pricePerUnit = float(row['PricePerUnit'])
if endRange == consts.INFINITY:
if beginRange < usageAmount:
billableBand = usageAmount - beginRange
else:
endRange = int(endRange)
if endRange >= usageAmount and beginRange < usageAmount:
billableBand = usageAmount - beginRange
if endRange < usageAmount:
billableBand = endRange - beginRange
if billableBand > 0: amt = pricePerUnit * billableBand
return billableBand, pricePerUnit, amt
#Creates a table with all the SKUs that are part of the total price
def buildSkuTable(evaluated_sku_desc):
result = {}
sorted_descriptions = sorted(evaluated_sku_desc)
result_table_header = "Price | Description | Price Per Unit | Usage | Rate Code"
result_records = ""
total = 0
for s in sorted_descriptions:
result_records = result_records + "$" + str(s[0]) + "|" + str(s[1]) + "|" + str(s[2]) + "|" + str(s[3]) + "|" + s[4]+"\n"
total = total + s[0]
result['header']=result_table_header
result['records']=result_records
result['total']=total
return result
"""
Calculates the keys that will be used to partition big index files into smaller pieces.
If no term is specified, the function will consider On-Demand and Reserved
"""
#TODO: merge all 3 load balancers into a single file (to speed up DB file loading and number of open files
def get_partition_keys(service, region, term, **extraArgs):
result = []
if region:
regions = [consts.REGION_MAP[region]]
else:
regions = consts.REGION_MAP.values()
if term: terms = [consts.TERM_TYPE_MAP[term]]
else: terms = consts.TERM_TYPE_MAP.values()
#productFamilies = consts.SUPPORTED_PRODUCT_FAMILIES
productFamilies = consts.SUPPORTED_PRODUCT_FAMILIES_BY_SERVICE_DICT[service]
#EC2 & RDS Reserved
offeringClasses = extraArgs.get('offeringClasses',consts.EC2_OFFERING_CLASS_MAP.values())
tenancies = extraArgs.get('tenancies',consts.EC2_TENANCY_MAP.values())
purchaseOptions = extraArgs.get('purchaseOptions',consts.EC2_PURCHASE_OPTION_MAP.values())
indexDict = {}
#TODO: filter by service, to speed up file loading and to avoid max open files limit
for r in regions:
for t in terms:
for pf in productFamilies:
#Reserved EC2 & DB instances have more dimensions for index creation
if t == consts.TERM_TYPE_RESERVED:
if pf in consts.SUPPORTED_RESERVED_PRODUCT_FAMILIES:
for oc in offeringClasses:
for ten in tenancies:
for po in purchaseOptions:
result.append(create_file_key((r,t,pf,oc,ten, po)))
else:
#OnDemand EC2 Instances use Tenancy as a dimension for index creation
if service == consts.SERVICE_EC2 and pf == consts.PRODUCT_FAMILY_COMPUTE_INSTANCE:
for ten in tenancies:
result.append(create_file_key((r,t,pf,ten)))
else:
result.append(create_file_key((r,t,pf)))
return result
#Creates a file key that identifies a data partition
def create_file_key(indexDimensions):
result = ""
for d in indexDimensions: result += d
return result.replace(' ','')
def loadDBs(service, indexFiles):
dBs = {}
datadir = get_data_directory(service)
indexMetadata = getIndexMetadata(service)
#Files in Lambda can only be created in the /tmp filesystem - If it doesn't exist, create it.
lambdaFileSystem = '/tmp/'+service+'/data'
if not os.path.exists(lambdaFileSystem):
os.makedirs(lambdaFileSystem)
for i in indexFiles:
db = tinydb.TinyDB(lambdaFileSystem+'/'+i+'.json')
#TODO: remove circular dependency from utils, so I can use the method get_index_file_name
#TODO: initial tests show that is faster (by a few milliseconds) to populate the file from scratch). See if I should load from scratch all the time
#TODO:Create a file that is an index of those files that have been generated, so the code knows which files to look for and avoid creating unnecesary empty .json files
if len(db) == 0:
try:
#with open(datadir+i+'.csv', 'rb') as csvfile:
with open(datadir+i+'.csv', 'r') as csvfile:
pricelist = csv.DictReader(csvfile, delimiter=',', quotechar='"')
db.insert_multiple(pricelist)
#csvfile.close()#avoid " [Errno 24] Too many open files" exception
except IOError:
pass
dBs[i]=db
#db.close()#avoid " [Errno 24] Too many open files" exception
return dBs, indexMetadata
def getIndexMetadata(service):
ts = Timestamp()
ts.start('getIndexMetadata')
result = {}
#datadir = get_data_directory(service)
with open(get_data_directory(service)+"index_metadata.json") as index_metadata:
result = json.load(index_metadata)
index_metadata.close()
ts.finish('getIndexMetadata')
log.debug("Time to load indexMetadata: [{}]".format(ts.elapsed('getIndexMetadata')))
return result
def calculate_price(service, db, query, usageAmount, pricingRecords, cost):
ts = Timestamp()
ts.start('tinyDbSeachCalculatePrice')
resultSet = db.search(query)
ts.finish('tinyDbSeachCalculatePrice')
log.debug("Time to search {} pricing DB for query [{}] : [{}] ".format(service, query, ts.elapsed('tinyDbSeachCalculatePrice')))
if not resultSet: raise NoDataFoundError("Could not find data for service:[{}] - query:[{}]".format(service, query))
#print("resultSet:[{}]".format(json.dumps(resultSet,indent=4)))
for r in resultSet:
billableUsage, pricePerUnit, amt = getBillableBandCsv(r, usageAmount)
cost = cost + amt
if billableUsage:
#TODO: calculate rounding dynamically - don't set to 4 - use description to set the right rounding
pricing_record = PricingRecord(service,round(amt,4),r['PriceDescription'],pricePerUnit,billableUsage,r['RateCode'])
pricingRecords.append(vars(pricing_record))
return pricingRecords, cost
class Timestamp():
def __init__(self):
self.eventdict = {}
def start(self,event):
self.eventdict[event] = {}
self.eventdict[event]['start'] = datetime.datetime.now()
def finish(self, event):
#elapsed = datetime.timedelta(self.eventdict[event]['start']) * 1000 #return milliseconds
elapsed = datetime.datetime.now() - self.eventdict[event]['start']
self.eventdict[event]['elapsed'] = elapsed
return elapsed
def elapsed(self,event):
return self.eventdict[event]['elapsed']
================================================
FILE: awspricecalculator/datatransfer/__init__.py
================================================
================================================
FILE: awspricecalculator/datatransfer/pricing.py
================================================
import json
import logging
from ..common import consts, phelper
from ..common.models import PricingResult
import tinydb
log = logging.getLogger()
regiondbs = {}
indexMetadata = {}
def calculate(pdim):
log.info("Calculating AWSDataTransfer pricing with the following inputs: {}".format(str(pdim.__dict__)))
ts = phelper.Timestamp()
ts.start('totalCalculation')
ts.start('tinyDbLoadOnDemand')
awsPriceListApiVersion = ''
cost = 0
pricing_records = []
priceQuery = tinydb.Query()
global regiondbs
global indexMetadata
#Load On-Demand DBs
indexArgs = {}
tmpDbKey = consts.SERVICE_DATA_TRANSFER+pdim.region+pdim.termType+pdim.tenancy
dbs = regiondbs.get(tmpDbKey,{})
if not dbs:
dbs, indexMetadata = phelper.loadDBs(consts.SERVICE_DATA_TRANSFER, phelper.get_partition_keys(consts.SERVICE_DATA_TRANSFER, pdim.region, consts.SCRIPT_TERM_TYPE_ON_DEMAND, **indexArgs))
regiondbs[tmpDbKey]=dbs
ts.finish('tinyDbLoadOnDemand')
log.debug("Time to load OnDemand DB files: [{}]".format(ts.elapsed('tinyDbLoadOnDemand')))
#Data Transfer
dataTransferDb = dbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_DATA_TRANSFER))]
#Out to the Internet
if pdim.dataTransferOutInternetGb:
ts.start('searchDataTransfer')
query = ((priceQuery['To Location'] == 'External') & (priceQuery['Transfer Type'] == 'AWS Outbound'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_DATA_TRANSFER, dataTransferDb, query, pdim.dataTransferOutInternetGb, pricing_records, cost)
log.debug("Time to search AWSDataTransfer data transfer Out: [{}]".format(ts.finish('searchDataTransfer')))
#Intra-regional data transfer - in/out/between AZs or using EIPs or ELB
if pdim.dataTransferOutIntraRegionGb:
query = ((priceQuery['Transfer Type'] == 'IntraRegion'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_DATA_TRANSFER, dataTransferDb, query, pdim.dataTransferOutIntraRegionGb, pricing_records, cost)
#Inter-regional data transfer - out to other AWS regions
if pdim.dataTransferOutInterRegionGb:
query = ((priceQuery['Transfer Type'] == 'InterRegion Outbound') & (priceQuery['To Location'] == consts.REGION_MAP[pdim.toRegion]))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_DATA_TRANSFER, dataTransferDb, query, pdim.dataTransferOutInterRegionGb, pricing_records, cost)
log.debug("regiondbs:[{}]".format(regiondbs.keys()))
awsPriceListApiVersion = indexMetadata['Version']
extraargs = {'priceDimensions':pdim}
pricing_result = PricingResult(awsPriceListApiVersion, pdim.region, cost, pricing_records, **extraargs)
log.debug(json.dumps(vars(pricing_result),sort_keys=False,indent=4))
log.debug("Total time: [{}]".format(ts.finish('totalCalculation')))
return pricing_result.__dict__
================================================
FILE: awspricecalculator/dynamodb/__init__.py
================================================
================================================
FILE: awspricecalculator/dynamodb/pricing.py
================================================
import json
import logging
from ..common import consts, phelper
from ..common.models import PricingResult
import tinydb
log = logging.getLogger()
regiondbs = {}
indexMetadata = {}
def calculate(pdim):
log.info("Calculating DynamoDB pricing with the following inputs: {}".format(str(pdim.__dict__)))
global regiondbs
global indexMetadata
ts = phelper.Timestamp()
ts.start('totalCalculationDynamoDB')
#Load On-Demand DBs
dbs = regiondbs.get(consts.SERVICE_DYNAMODB+pdim.region+pdim.termType,{})
if not dbs:
dbs, indexMetadata = phelper.loadDBs(consts.SERVICE_DYNAMODB, phelper.get_partition_keys(consts.SERVICE_DYNAMODB, pdim.region,consts.SCRIPT_TERM_TYPE_ON_DEMAND))
regiondbs[consts.SERVICE_DYNAMODB+pdim.region+pdim.termType]=dbs
cost = 0
pricing_records = []
awsPriceListApiVersion = indexMetadata['Version']
priceQuery = tinydb.Query()
#TODO:add support for free-tier flag (include or exclude from calculation)
iopsDb = dbs[phelper.create_file_key([consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_DB_PIOPS])]
#Read Capacity Units
query = ((priceQuery['Group'] == 'DDB-ReadUnits'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_DYNAMODB, iopsDb, query, pdim.readCapacityUnitHours, pricing_records, cost)
#Write Capacity Units
query = ((priceQuery['Group'] == 'DDB-WriteUnits'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_DYNAMODB, iopsDb, query, pdim.writeCapacityUnitHours, pricing_records, cost)
#DB Storage (TODO)
#Data Transfer (TODO)
#there is no additional charge for data transferred between Amazon DynamoDB and other Amazon Web Services within the same Region
#data transferred across Regions (e.g., between Amazon DynamoDB in the US East (Northern Virginia) Region and Amazon EC2 in the EU (Ireland) Region), will be charged on both sides of the transfer.
#API Requests (only applies for DDB Streams)(TODO)
extraargs = {'priceDimensions':pdim}
pricing_result = PricingResult(awsPriceListApiVersion, pdim.region, cost, pricing_records, **extraargs)
log.debug(json.dumps(vars(pricing_result),sort_keys=False,indent=4))
log.debug("Total time to compute: [{}]".format(ts.finish('totalCalculationDynamoDB')))
return pricing_result.__dict__
================================================
FILE: awspricecalculator/ec2/__init__.py
================================================
================================================
FILE: awspricecalculator/ec2/pricing.py
================================================
import json
import logging
from ..common import consts, phelper, utils
from ..common.models import PricingResult
#import psutil
import tinydb
log = logging.getLogger()
regiondbs = {}
indexMetadata = {}
def calculate(pdim):
log.info("Calculating EC2 pricing with the following inputs: {}".format(str(pdim.__dict__)))
ts = phelper.Timestamp()
ts.start('totalCalculation')
ts.start('tinyDbLoadOnDemand')
ts.start('tinyDbLoadReserved')
awsPriceListApiVersion = ''
cost = 0
pricing_records = []
priceQuery = tinydb.Query()
global regiondbs
global indexMetadata
#DBs for Data Transfer
tmpDtDbKey = consts.SERVICE_DATA_TRANSFER+pdim.region+pdim.termType
dtdbs = regiondbs.get(tmpDtDbKey,{})
if not dtdbs:
dtdbs, dtIndexMetadata = phelper.loadDBs(consts.SERVICE_DATA_TRANSFER, phelper.get_partition_keys(consts.SERVICE_DATA_TRANSFER, pdim.region, consts.SCRIPT_TERM_TYPE_ON_DEMAND, **{}))
regiondbs[tmpDtDbKey]=dtdbs
#_/_/_/_/_/ ON-DEMAND PRICING _/_/_/_/_/
if pdim.termType == consts.SCRIPT_TERM_TYPE_ON_DEMAND:
#Load On-Demand DBs
indexArgs = {'tenancies':[consts.EC2_TENANCY_MAP[pdim.tenancy]]}
tmpDbKey = consts.SERVICE_EC2+pdim.region+pdim.termType+pdim.tenancy
dbs = regiondbs.get(tmpDbKey,{})
if not dbs:
dbs, indexMetadata = phelper.loadDBs(consts.SERVICE_EC2, phelper.get_partition_keys(consts.SERVICE_EC2, pdim.region, consts.SCRIPT_TERM_TYPE_ON_DEMAND, **indexArgs))
regiondbs[tmpDbKey]=dbs
ts.finish('tinyDbLoadOnDemand')
log.debug("Time to load OnDemand DB files: [{}]".format(ts.elapsed('tinyDbLoadOnDemand')))
#TODO: Move common operations to a common module, and leave only EC2-specific operations in ec2/pricing.py (create a class)
#TODO: support all tenancy types (Host and Dedicated)
#Compute Instance
if pdim.instanceHours:
dbFileKey = phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType],
consts.PRODUCT_FAMILY_COMPUTE_INSTANCE, consts.EC2_TENANCY_MAP[pdim.tenancy]))
log.debug('DB File key: [{}]'.format(dbFileKey))
computeDb = dbs[dbFileKey]
ts.start('tinyDbSearchComputeFile')
query = ((priceQuery['Instance Type'] == pdim.instanceType) &
(priceQuery['Operating System'] == consts.EC2_OPERATING_SYSTEMS_MAP[pdim.operatingSystem]) &
#(priceQuery['Tenancy'] == consts.EC2_TENANCY_SHARED) & #removed since it's redundant with the file name
(priceQuery['Pre Installed S/W'] == pdim.preInstalledSoftware) &
(priceQuery['CapacityStatus'] == consts.EC2_CAPACITY_RESERVATION_STATUS_MAP[pdim.capacityReservationStatus]) &
(priceQuery['License Model'] == consts.EC2_LICENSE_MODEL_MAP[pdim.licenseModel]))# &
#(priceQuery['OfferingClass'] == pdim.offeringClass) &
#(priceQuery['PurchaseOption'] == purchaseOption ))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_EC2, computeDb, query, pdim.instanceHours, pricing_records, cost)
log.debug("Time to search compute:[{}]".format(ts.finish('tinyDbSearchComputeFile')))
#Data Transfer
dataTransferDb = dtdbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_DATA_TRANSFER))]
#Out to the Internet
if pdim.dataTransferOutInternetGb:
ts.start('searchDataTransfer')
query = ((priceQuery['To Location'] == 'External') & (priceQuery['Transfer Type'] == 'AWS Outbound'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_DATA_TRANSFER, dataTransferDb, query, pdim.dataTransferOutInternetGb, pricing_records, cost)
log.debug("Time to search AWS Data Transfer Out: [{}]".format(ts.finish('searchDataTransfer')))
#Intra-regional data transfer - in/out/between EC2 AZs or using EIPs or ELB
if pdim.dataTransferOutIntraRegionGb:
query = ((priceQuery['Transfer Type'] == 'IntraRegion'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_DATA_TRANSFER, dataTransferDb, query, pdim.dataTransferOutIntraRegionGb, pricing_records, cost)
#Inter-regional data transfer - out to other AWS regions
if pdim.dataTransferOutInterRegionGb:
query = ((priceQuery['Transfer Type'] == 'InterRegion Outbound') & (priceQuery['To Location'] == consts.REGION_MAP[pdim.toRegion]))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_DATA_TRANSFER, dataTransferDb, query, pdim.dataTransferOutInterRegionGb, pricing_records, cost)
#EBS Storage
if pdim.ebsStorageGbMonth:
#storageDb = dbs[phelper.create_file_key(consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_STORAGE)]
storageDb = dbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_STORAGE))]
query = ((priceQuery['Volume Type'] == pdim.volumeType))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_EBS, storageDb, query, pdim.ebsStorageGbMonth, pricing_records, cost)
#System Operation (pIOPS)
if pdim.volumeType == consts.EBS_VOLUME_TYPE_PIOPS and pdim.pIops:
#storageDb = dbs[phelper.create_file_key(consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_SYSTEM_OPERATION)]
storageDb = dbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_SYSTEM_OPERATION))]
query = ((priceQuery['Group'] == 'EBS IOPS'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_EBS, storageDb, query, pdim.pIops, pricing_records, cost)
#Snapshot Storage
if pdim.ebsSnapshotGbMonth:
#snapshotDb = dbs[phelper.create_file_key(consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_SNAPSHOT)]
snapshotDb = dbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_SNAPSHOT))]
query = ((priceQuery['usageType'] == consts.REGION_PREFIX_MAP[pdim.region]+'EBS:SnapshotUsage'))#EBS:SnapshotUsage comes with a prefix in the PriceList API file (i.e. EU-EBS:SnapshotUsage)
pricing_records, cost = phelper.calculate_price(consts.SERVICE_EBS, snapshotDb, query, pdim.ebsSnapshotGbMonth, pricing_records, cost)
#Classic Load Balancer
if pdim.elbHours:
#elbDb = dbs[phelper.create_file_key(consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_LOAD_BALANCER)]
elbDb = dbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_LOAD_BALANCER))]
query = ((priceQuery['usageType'] == consts.REGION_PREFIX_MAP[pdim.region]+'LoadBalancerUsage') & (priceQuery['operation'] == 'LoadBalancing'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_ELB, elbDb, query, pdim.elbHours, pricing_records, cost)
if pdim.elbDataProcessedGb:
#elbDb = dbs[phelper.create_file_key(consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_LOAD_BALANCER)]
elbDb = dbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_LOAD_BALANCER))]
query = ((priceQuery['usageType'] == consts.REGION_PREFIX_MAP[pdim.region]+'DataProcessing-Bytes') & (priceQuery['operation'] == 'LoadBalancing'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_ELB, elbDb, query, pdim.elbDataProcessedGb, pricing_records, cost)
#Application Load Balancer
#TODO: add support for Network Load Balancer
if pdim.albHours:
albDb = dbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_APPLICATION_LOAD_BALANCER))]
query = ((priceQuery['usageType'] == consts.REGION_PREFIX_MAP[pdim.region]+'LoadBalancerUsage') & (priceQuery['operation'] == 'LoadBalancing:Application'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_ELB, albDb, query, pdim.albHours, pricing_records, cost)
if pdim.albLcus:
albDb = dbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_APPLICATION_LOAD_BALANCER))]
query = ((priceQuery['usageType'] == consts.REGION_PREFIX_MAP[pdim.region]+'LCUUsage') & (priceQuery['operation'] == 'LoadBalancing:Application'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_ELB, albDb, query, pdim.albLcus, pricing_records, cost)
#TODO: EIP
#TODO: NAT Gateway
#TODO: Fee
#_/_/_/_/_/ RESERVED PRICING _/_/_/_/_/
#Load Reserved DBs
if pdim.termType == consts.SCRIPT_TERM_TYPE_RESERVED:
indexArgs = {'offeringClasses':[consts.EC2_OFFERING_CLASS_MAP[pdim.offeringClass]],
'tenancies':[consts.EC2_TENANCY_MAP[pdim.tenancy]], 'purchaseOptions':[consts.EC2_PURCHASE_OPTION_MAP[pdim.offeringType]]}
#Load all values for offeringClasses, tenancies and purchaseOptions
#indexArgs = {'offeringClasses':consts.EC2_OFFERING_CLASS_MAP.values(),
# 'tenancies':consts.EC2_TENANCY_MAP.values(), 'purchaseOptions':consts.EC2_PURCHASE_OPTION_MAP.values()}
tmpDbKey = consts.SERVICE_EC2+pdim.region+pdim.termType+pdim.offeringClass+consts.EC2_TENANCY_MAP[pdim.tenancy]+pdim.offeringType
#tmpDbKey = consts.SERVICE_EC2+pdim.region+pdim.termType
dbs = regiondbs.get(tmpDbKey,{})
if not dbs:
dbs, indexMetadata = phelper.loadDBs(consts.SERVICE_EC2, phelper.get_partition_keys(consts.SERVICE_EC2, pdim.region, consts.SCRIPT_TERM_TYPE_RESERVED, **indexArgs))
#regiondbs[consts.SERVICE_EC2+pdim.region+pdim.termType]=dbs
regiondbs[tmpDbKey]=dbs
log.debug("dbs keys:{}".format(dbs.keys()))
ts.finish('tinyDbLoadReserved')
log.debug("Time to load Reserved DB files: [{}]".format(ts.elapsed('tinyDbLoadReserved')))
computeDb = dbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType],
consts.PRODUCT_FAMILY_COMPUTE_INSTANCE, pdim.offeringClass,
consts.EC2_TENANCY_MAP[pdim.tenancy], consts.EC2_PURCHASE_OPTION_MAP[pdim.offeringType]))]
ts.start('tinyDbSearchComputeFileReserved')
query = ((priceQuery['Instance Type'] == pdim.instanceType) &
(priceQuery['Operating System'] == consts.EC2_OPERATING_SYSTEMS_MAP[pdim.operatingSystem]) &
#(priceQuery['Tenancy'] == consts.EC2_TENANCY_SHARED) & #removed since it's redundant with the DB file name
(priceQuery['Pre Installed S/W'] == pdim.preInstalledSoftware) &
(priceQuery['License Model'] == consts.EC2_LICENSE_MODEL_MAP[pdim.licenseModel]) &
#(priceQuery['OfferingClass'] == consts.EC2_OFFERING_CLASS_MAP[pdim.offeringClass]) &
#(priceQuery['PurchaseOption'] == consts.EC2_PURCHASE_OPTION_MAP[pdim.offeringType] ) &
(priceQuery['LeaseContractLength'] == consts.EC2_RESERVED_YEAR_MAP["{}".format(pdim.years)] ))
hrsQuery = query & (priceQuery['Unit'] == 'Hrs' )
qtyQuery = query & (priceQuery['Unit'] == 'Quantity' )
if pdim.offeringType in (consts.SCRIPT_EC2_PURCHASE_OPTION_ALL_UPFRONT, consts.SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT):
pricing_records, cost = phelper.calculate_price(consts.SERVICE_EC2, computeDb, qtyQuery, pdim.instanceCount, pricing_records, cost)
if pdim.offeringType in (consts.SCRIPT_EC2_PURCHASE_OPTION_NO_UPFRONT, consts.SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT):
#reservedInstanceHours = pdim.instanceCount * consts.HOURS_IN_MONTH * 12 * pdim.years
reservedInstanceHours = utils.calculate_instance_hours_year(pdim.instanceCount, pdim.years)
pricing_records, cost = phelper.calculate_price(consts.SERVICE_EC2, computeDb, hrsQuery, reservedInstanceHours, pricing_records, cost)
log.debug("Time to search:[{}]".format(ts.finish('tinyDbSearchComputeFileReserved')))
log.debug("regiondbs:[{}]".format(regiondbs.keys()))
awsPriceListApiVersion = indexMetadata['Version']
extraargs = {'priceDimensions':pdim}
pricing_result = PricingResult(awsPriceListApiVersion, pdim.region, cost, pricing_records, **extraargs)
log.debug(json.dumps(vars(pricing_result),sort_keys=False,indent=4))
#proc = psutil.Process()
#log.debug("open_files: {}".format(proc.open_files()))
log.debug("Total time: [{}]".format(ts.finish('totalCalculation')))
return pricing_result.__dict__
================================================
FILE: awspricecalculator/emr/__init__.py
================================================
================================================
FILE: awspricecalculator/emr/pricing.py
================================================
import json
import logging
from ..common import consts, phelper
from ..common.models import PricingResult
from ..common.models import Ec2PriceDimension
from ..ec2 import pricing as ec2pricing
import tinydb
log = logging.getLogger()
regiondbs = {}
indexMetadata = {}
def calculate(pdim):
log.info("Calculating EMR pricing with the following inputs: {}".format(str(pdim.__dict__)))
ts = phelper.Timestamp()
ts.start('totalCalculation')
ts.start('tinyDbLoadOnDemand')
awsPriceListApiVersion = ''
cost = 0
pricing_records = []
priceQuery = tinydb.Query()
global regiondbs
global indexMetadata
#DBs for Data Transfer
tmpDtDbKey = consts.SERVICE_DATA_TRANSFER+pdim.region+pdim.termType
dtdbs = regiondbs.get(tmpDtDbKey,{})
if not dtdbs:
dtdbs, dtIndexMetadata = phelper.loadDBs(consts.SERVICE_DATA_TRANSFER, phelper.get_partition_keys(consts.SERVICE_DATA_TRANSFER, pdim.region, consts.SCRIPT_TERM_TYPE_ON_DEMAND, **{}))
regiondbs[tmpDtDbKey]=dtdbs
#_/_/_/_/_/ ON-DEMAND PRICING _/_/_/_/_/
#Load On-Demand EMR DBs
dbs = regiondbs.get(consts.SERVICE_EMR+pdim.region+consts.TERM_TYPE_ON_DEMAND,{})
if not dbs:
dbs, indexMetadata = phelper.loadDBs(consts.SERVICE_EMR, phelper.get_partition_keys(consts.SERVICE_EMR, pdim.region, consts.SCRIPT_TERM_TYPE_ON_DEMAND))
regiondbs[consts.SERVICE_EMR+pdim.region+pdim.termType]=dbs
ts.finish('tinyDbLoadOnDemand')
log.debug("Time to load OnDemand DB files: [{}]".format(ts.elapsed('tinyDbLoadOnDemand')))
#EMR Compute Instance
if pdim.instanceHours:
#The EMR component in the calculation always uses OnDemand (Reserved it's not supported yet for EMR)
computeDb = dbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[consts.SCRIPT_TERM_TYPE_ON_DEMAND], consts.PRODUCT_FAMILY_EMR_INSTANCE))]
ts.start('tinyDbSearchComputeFile')
#TODO: add support for Hunk Software Type
query = ((priceQuery['Instance Type'] == pdim.instanceType) & (priceQuery['Software Type'] == 'EMR'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_EMR, computeDb, query, pdim.instanceHours, pricing_records, cost)
log.debug("Time to search compute:[{}]".format(ts.finish('tinyDbSearchComputeFile')))
#EC2 Pricing - the EC2 component takes into consideration either OnDemand or Reserved.
ec2_pricing = ec2pricing.calculate(Ec2PriceDimension(**pdim.ec2PriceDims))
log.info("pdim.ec2PriceDims:[{}]".format(pdim.ec2PriceDims))
log.info("ec2_pricing:[{}]".format(ec2_pricing))
if ec2_pricing.get('pricingRecords',[]): pricing_records.extend(ec2_pricing['pricingRecords'])
cost += ec2_pricing.get('totalCost',0)
log.debug("regiondbs:[{}]".format(regiondbs.keys()))
awsPriceListApiVersion = indexMetadata['Version']
extraargs = {'priceDimensions':pdim}
pricing_result = PricingResult(awsPriceListApiVersion, pdim.region, cost, pricing_records, **extraargs)
log.debug(json.dumps(vars(pricing_result),sort_keys=False,indent=4))
#proc = psutil.Process()
#log.debug("open_files: {}".format(proc.open_files()))
log.debug("Total time: [{}]".format(ts.finish('totalCalculation')))
return pricing_result.__dict__
================================================
FILE: awspricecalculator/kinesis/__init__.py
================================================
================================================
FILE: awspricecalculator/kinesis/pricing.py
================================================
import json
import logging
from ..common import consts, phelper
from ..common.models import PricingResult
import tinydb
log = logging.getLogger()
def calculate(pdim):
log.info("Calculating DynamoDB pricing with the following inputs: {}".format(str(pdim.__dict__)))
ts = phelper.Timestamp()
ts.start('totalCalculationKinesis')
dbs, indexMetadata = phelper.loadDBs(consts.SERVICE_KINESIS, phelper.get_partition_keys(pdim.region,consts.SCRIPT_TERM_TYPE_ON_DEMAND))
cost = 0
pricing_records = []
awsPriceListApiVersion = indexMetadata['Version']
priceQuery = tinydb.Query()
kinesisDb = dbs[phelper.create_file_key([consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_KINESIS_STREAMS])]
#Shard Hours
query = ((priceQuery['Group'] == 'Provisioned shard hour'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_KINESIS, kinesisDb, query, pdim.shardHours, pricing_records, cost)
#PUT Payload Units
query = ((priceQuery['Group'] == 'Payload Units'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_KINESIS, kinesisDb, query, pdim.putPayloadUnits, pricing_records, cost)
#Extended Retention Hours
query = ((priceQuery['Group'] == 'Addon shard hour'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_KINESIS, kinesisDb, query, pdim.extendedDataRetentionHours, pricing_records, cost)
#TODO: add Enhanced (shard-level) metrics
#Data Transfer - N/A
#Note there is no charge for data transfer in Kinesis as per https://aws.amazon.com/kinesis/streams/pricing/
extraargs = {'priceDimensions':pdim}
pricing_result = PricingResult(awsPriceListApiVersion, pdim.region, cost, pricing_records, **extraargs)
log.debug(json.dumps(vars(pricing_result),sort_keys=False,indent=4))
log.debug("Total time to compute: [{}]".format(ts.finish('totalCalculationKinesis')))
return pricing_result.__dict__
================================================
FILE: awspricecalculator/rds/__init__.py
================================================
================================================
FILE: awspricecalculator/rds/pricing.py
================================================
import os, sys
import json
import logging
from ..common import consts, phelper, utils
from ..common.models import PricingResult
import tinydb
log = logging.getLogger()
regiondbs = {}
indexMetadata = {}
def calculate(pdim):
ts = phelper.Timestamp()
ts.start('totalCalculation')
ts.start('tinyDbLoadOnDemand')
ts.start('tinyDbLoadReserved')
global regiondbs
global indexMetadata
log.info("Calculating RDS pricing with the following inputs: {}".format(str(pdim.__dict__)))
#Load On-Demand DBs
dbs, indexMetadata = phelper.loadDBs(consts.SERVICE_RDS, phelper.get_partition_keys(consts.SERVICE_RDS, pdim.region, consts.SCRIPT_TERM_TYPE_ON_DEMAND))
cost = 0
pricing_records = []
awsPriceListApiVersion = indexMetadata['Version']
priceQuery = tinydb.Query()
skuEngine = ''
skuEngineEdition = ''
skuLicenseModel = ''
if pdim.engine in consts.RDS_ENGINE_MAP:
skuEngine = consts.RDS_ENGINE_MAP[pdim.engine]['engine']
skuEngineEdition = consts.RDS_ENGINE_MAP[pdim.engine]['edition']
skuLicenseModel = consts.RDS_LICENSE_MODEL_MAP[pdim.licenseModel]
deploymentOptionCondition = pdim.deploymentOption
#'Multi-AZ (SQL Server Mirror)' is no longer available in pricing index
#if 'sqlserver' in pdim.engine and pdim.deploymentOption == consts.RDS_DEPLOYMENT_OPTION_MULTI_AZ:
# deploymentOptionCondition = consts.RDS_DEPLOYMENT_OPTION_MULTI_AZ_MIRROR
#DBs for Data Transfer
tmpDtDbKey = consts.SERVICE_DATA_TRANSFER+pdim.region+pdim.termType
dtdbs = regiondbs.get(tmpDtDbKey,{})
if not dtdbs:
dtdbs, dtIndexMetadata = phelper.loadDBs(consts.SERVICE_DATA_TRANSFER, phelper.get_partition_keys(consts.SERVICE_DATA_TRANSFER, pdim.region, consts.SCRIPT_TERM_TYPE_ON_DEMAND, **{}))
regiondbs[tmpDtDbKey]=dtdbs
#_/_/_/_/_/ ON-DEMAND PRICING _/_/_/_/_/
if pdim.termType == consts.SCRIPT_TERM_TYPE_ON_DEMAND:
#Load On-Demand DBs
dbs = regiondbs.get(consts.SERVICE_RDS+pdim.region+pdim.termType,{})
if not dbs:
dbs, indexMetadata = phelper.loadDBs(consts.SERVICE_RDS, phelper.get_partition_keys(consts.SERVICE_RDS, pdim.region, consts.SCRIPT_TERM_TYPE_ON_DEMAND))
regiondbs[consts.SERVICE_RDS+pdim.region+pdim.termType]=dbs
ts.finish('tinyDbLoadOnDemand')
log.debug("Time to load OnDemand DB files: [{}]".format(ts.elapsed('tinyDbLoadOnDemand')))
#DB Instance
if pdim.instanceHours:
instanceDb = dbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_DATABASE_INSTANCE))]
ts.start('tinyDbSearchComputeFile')
query = ((priceQuery['Product Family'] == consts.PRODUCT_FAMILY_DATABASE_INSTANCE) &
(priceQuery['Instance Type'] == pdim.dbInstanceClass) &
(priceQuery['Database Engine'] == skuEngine) &
(priceQuery['Database Edition'] == skuEngineEdition) &
(priceQuery['License Model'] == skuLicenseModel) &
(priceQuery['Deployment Option'] == deploymentOptionCondition)
)
log.debug("Time to search DB instance compute:[{}]".format(ts.finish('tinyDbSearchComputeFile')))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_RDS, instanceDb, query, pdim.instanceHours, pricing_records, cost)
#Data Transfer
#To internet
if pdim.dataTransferOutInternetGb:
dataTransferDb = dtdbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_DATA_TRANSFER))]
query = ((priceQuery['serviceCode'] == consts.SERVICE_CODE_AWS_DATA_TRANSFER) &
(priceQuery['To Location'] == 'External') &
(priceQuery['Transfer Type'] == 'AWS Outbound'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_DATA_TRANSFER, dataTransferDb, query, pdim.dataTransferOutInternetGb, pricing_records, cost)
#Inter-regional data transfer - to other AWS regions
if pdim.dataTransferOutInterRegionGb:
dataTransferDb = dtdbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_DATA_TRANSFER))]
query = ((priceQuery['serviceCode'] == consts.SERVICE_CODE_AWS_DATA_TRANSFER) &
(priceQuery['To Location'] == consts.REGION_MAP[pdim.toRegion]) &
(priceQuery['Transfer Type'] == 'InterRegion Outbound'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_DATA_TRANSFER, dataTransferDb, query, pdim.dataTransferOutInterRegionGb, pricing_records, cost)
#Storage (magnetic, SSD, PIOPS)
if pdim.storageGbMonth:
engineCondition = 'Any'
if skuEngine == consts.RDS_DB_ENGINE_SQL_SERVER: engineCondition = consts.RDS_DB_ENGINE_SQL_SERVER
storageDb = dbs[phelper.create_file_key([consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_DB_STORAGE])]
query = ((priceQuery['Volume Type'] == pdim.volumeType) &
(priceQuery['Database Engine'] == engineCondition) &
(priceQuery['Deployment Option'] == pdim.deploymentOption))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_RDS, storageDb, query, pdim.storageGbMonth, pricing_records, cost)
#Provisioned IOPS
if pdim.storageType == consts.SCRIPT_RDS_STORAGE_TYPE_IO1:
iopsDb = dbs[phelper.create_file_key([consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_DB_PIOPS])]
query = ((priceQuery['Deployment Option'] == pdim.deploymentOption))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_RDS, iopsDb, query, pdim.iops, pricing_records, cost)
#Consumed IOPS (I/O rate)
if pdim.ioRequests:
sysopsDb = dbs[phelper.create_file_key([consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_SYSTEM_OPERATION])]
dbEngineCondition = 'Any'
if pdim.engine in (consts.RDS_DB_ENGINE_POSTGRESQL, consts.RDS_DB_ENGINE_AURORA_MYSQL):
dbEngineCondition = pdim.engine
query = ((priceQuery['Group'] == 'Aurora I/O Operation')&
(priceQuery['Database Engine'] == dbEngineCondition)
)
pricing_records, cost = phelper.calculate_price(consts.SERVICE_RDS, sysopsDb, query, pdim.ioRequests, pricing_records, cost)
#Snapshot Storage
if pdim.backupStorageGbMonth:
snapshotDb = dbs[phelper.create_file_key([consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_SNAPSHOT])]
query = ((priceQuery['usageType'] == 'RDS:ChargedBackupUsage'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_RDS, snapshotDb, query, pdim.backupStorageGbMonth, pricing_records, cost)
#_/_/_/_/_/ RESERVED PRICING _/_/_/_/_/
if pdim.termType == consts.SCRIPT_TERM_TYPE_RESERVED:
#Load Reserved DBs
indexArgs = {'offeringClasses':consts.EC2_OFFERING_CLASS_MAP.values(),
'tenancies':[consts.EC2_TENANCY_SHARED], 'purchaseOptions':consts.EC2_PURCHASE_OPTION_MAP.values()}
dbs = regiondbs.get(consts.SERVICE_RDS+pdim.region+pdim.termType,{})
if not dbs:
dbs, indexMetadata = phelper.loadDBs(consts.SERVICE_RDS, phelper.get_partition_keys(consts.SERVICE_RDS, pdim.region, consts.SCRIPT_TERM_TYPE_RESERVED, **indexArgs))
regiondbs[consts.SERVICE_RDS+pdim.region+pdim.termType]=dbs
ts.finish('tinyDbLoadReserved')
log.debug("Time to load Reserved DB files: [{}]".format(ts.elapsed('tinyDbLoadReserved')))
log.debug("regiondbs keys:[{}]".format(regiondbs))
#DB Instance
#RDS only supports standard
instanceDb = dbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType],
consts.PRODUCT_FAMILY_DATABASE_INSTANCE, consts.EC2_OFFERING_CLASS_STANDARD,
consts.EC2_TENANCY_SHARED, consts.EC2_PURCHASE_OPTION_MAP[pdim.offeringType]))]
ts.start('tinyDbSearchComputeFileReserved')
query = ((priceQuery['Product Family'] == consts.PRODUCT_FAMILY_DATABASE_INSTANCE) &
(priceQuery['Instance Type'] == pdim.dbInstanceClass) &
(priceQuery['Database Engine'] == skuEngine) &
(priceQuery['Database Edition'] == skuEngineEdition) &
(priceQuery['License Model'] == skuLicenseModel) &
(priceQuery['Deployment Option'] == deploymentOptionCondition) &
(priceQuery['OfferingClass'] == consts.EC2_OFFERING_CLASS_MAP[pdim.offeringClass]) &
(priceQuery['PurchaseOption'] == consts.EC2_PURCHASE_OPTION_MAP[pdim.offeringType]) &
(priceQuery['LeaseContractLength'] == consts.EC2_RESERVED_YEAR_MAP["{}".format(pdim.years)])
)
hrsQuery = query & (priceQuery['Unit'] == 'Hrs' )
qtyQuery = query & (priceQuery['Unit'] == 'Quantity' )
#TODO: use RDS-specific constants, not EC2 constants
if pdim.offeringType in (consts.SCRIPT_EC2_PURCHASE_OPTION_ALL_UPFRONT, consts.SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT):
pricing_records, cost = phelper.calculate_price(consts.SERVICE_RDS, instanceDb, qtyQuery, pdim.instanceCount, pricing_records, cost)
if pdim.offeringType in (consts.SCRIPT_EC2_PURCHASE_OPTION_NO_UPFRONT, consts.SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT):
reservedInstanceHours = utils.calculate_instance_hours_year(pdim.instanceCount, pdim.years)
pricing_records, cost = phelper.calculate_price(consts.SERVICE_RDS, instanceDb, hrsQuery, reservedInstanceHours, pricing_records, cost)
log.debug("Time to search DB instance compute:[{}]".format(ts.finish('tinyDbSearchComputeFileReserved')))
log.debug("Total time to calculate price: [{}]".format(ts.finish('totalCalculation')))
extraargs = {'priceDimensions':pdim}
pricing_result = PricingResult(awsPriceListApiVersion, pdim.region, cost, pricing_records, **extraargs)
log.debug(json.dumps(vars(pricing_result),sort_keys=False,indent=4))
return pricing_result.__dict__
================================================
FILE: awspricecalculator/redshift/__init__.py
================================================
================================================
FILE: awspricecalculator/redshift/pricing.py
================================================
import json
import logging
from ..common import consts, phelper
from ..common.models import PricingResult
from ..common.models import Ec2PriceDimension
from ..ec2 import pricing as ec2pricing
import tinydb
log = logging.getLogger()
regiondbs = {}
indexMetadata = {}
def calculate(pdim):
log.info("Calculating Redshift pricing with the following inputs: {}".format(str(pdim.__dict__)))
ts = phelper.Timestamp()
ts.start('totalCalculation')
ts.start('tinyDbLoadOnDemand')
ts.start('tinyDbLoadReserved')
awsPriceListApiVersion = ''
cost = 0
pricing_records = []
priceQuery = tinydb.Query()
global regiondbs
global indexMetadata
#DBs for Data Transfer
tmpDtDbKey = consts.SERVICE_DATA_TRANSFER+pdim.region+pdim.termType
dtdbs = regiondbs.get(tmpDtDbKey,{})
if not dtdbs:
dtdbs, dtIndexMetadata = phelper.loadDBs(consts.SERVICE_DATA_TRANSFER, phelper.get_partition_keys(consts.SERVICE_DATA_TRANSFER, pdim.region, consts.SCRIPT_TERM_TYPE_ON_DEMAND, **{}))
regiondbs[tmpDtDbKey]=dtdbs
#_/_/_/_/_/ ON-DEMAND PRICING _/_/_/_/_/
#Load On-Demand Redshift DBs
if pdim.termType == consts.SCRIPT_TERM_TYPE_ON_DEMAND:
dbs = regiondbs.get(consts.SERVICE_REDSHIFT+pdim.region+consts.TERM_TYPE_ON_DEMAND,{})
if not dbs:
dbs, indexMetadata = phelper.loadDBs(consts.SERVICE_REDSHIFT, phelper.get_partition_keys(consts.SERVICE_REDSHIFT, pdim.region, consts.SCRIPT_TERM_TYPE_ON_DEMAND))
regiondbs[consts.SERVICE_REDSHIFT+pdim.region+pdim.termType]=dbs
ts.finish('tinyDbLoadOnDemand')
log.debug("Time to load OnDemand DB files: [{}]".format(ts.elapsed('tinyDbLoadOnDemand')))
#Redshift Compute Instance
if pdim.instanceHours:
computeDb = dbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[consts.SCRIPT_TERM_TYPE_ON_DEMAND], consts.PRODUCT_FAMILY_COMPUTE_INSTANCE))]
ts.start('tinyDbSearchComputeFile')
query = ((priceQuery['Instance Type'] == pdim.instanceType) )
pricing_records, cost = phelper.calculate_price(consts.SERVICE_REDSHIFT, computeDb, query, pdim.instanceHours, pricing_records, cost)
log.debug("Time to search compute:[{}]".format(ts.finish('tinyDbSearchComputeFile')))
#TODO: move Data Transfer to a common file (since now it's a separate index file)
"""
#Data Transfer
dataTransferDb = dtdbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType], consts.PRODUCT_FAMILY_DATA_TRANSFER))]
#Out to the Internet
if pdim.dataTransferOutInternetGb:
ts.start('searchDataTransfer')
query = ((priceQuery['To Location'] == 'External') & (priceQuery['Transfer Type'] == 'AWS Outbound'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_DATA_TRANSFER, dataTransferDb, query, pdim.dataTransferOutInternetGb, pricing_records, cost)
log.debug("Time to search AWS Data Transfer Out: [{}]".format(ts.finish('searchDataTransfer')))
#Intra-regional data transfer - in/out/between EC2 AZs or using EIPs or ELB
if pdim.dataTransferOutIntraRegionGb:
query = ((priceQuery['Transfer Type'] == 'IntraRegion'))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_DATA_TRANSFER, dataTransferDb, query, pdim.dataTransferOutIntraRegionGb, pricing_records, cost)
#Inter-regional data transfer - out to other AWS regions
if pdim.dataTransferOutInterRegionGb:
query = ((priceQuery['Transfer Type'] == 'InterRegion Outbound') & (priceQuery['To Location'] == consts.REGION_MAP[pdim.toRegion]))
pricing_records, cost = phelper.calculate_price(consts.SERVICE_DATA_TRANSFER, dataTransferDb, query, pdim.dataTransferOutInterRegionGb, pricing_records, cost)
"""
#_/_/_/_/_/ RESERVED PRICING _/_/_/_/_/
print("regiondbs[]".format(regiondbs))
#Load Reserved DBs
if pdim.termType == consts.SCRIPT_TERM_TYPE_RESERVED:
indexArgs = {'offeringClasses':consts.EC2_OFFERING_CLASS_MAP.values(),
'tenancies':[consts.EC2_TENANCY_SHARED], 'purchaseOptions':consts.EC2_PURCHASE_OPTION_MAP.values()}
dbs = regiondbs.get(consts.SERVICE_REDSHIFT+pdim.region+pdim.termType,{})
if not dbs:
dbs, indexMetadata = phelper.loadDBs(consts.SERVICE_REDSHIFT, phelper.get_partition_keys(consts.SERVICE_REDSHIFT, pdim.region, consts.SCRIPT_TERM_TYPE_RESERVED, **indexArgs))
regiondbs[consts.SERVICE_REDSHIFT+pdim.region+pdim.termType]=dbs
ts.finish('tinyDbLoadReserved')
log.debug("Time to load Reserved DB files: [{}]".format(ts.elapsed('tinyDbLoadReserved')))
log.debug("regiondbs keys:[{}]".format(regiondbs))
#Redshift only supports standard
print("dbs:[{}]".format(dbs))
computeDb = dbs[phelper.create_file_key((consts.REGION_MAP[pdim.region], consts.TERM_TYPE_MAP[pdim.termType],
consts.PRODUCT_FAMILY_COMPUTE_INSTANCE, consts.EC2_OFFERING_CLASS_STANDARD,
consts.EC2_TENANCY_SHARED, consts.EC2_PURCHASE_OPTION_MAP[pdim.offeringType]))]
ts.start('tinyDbSearchComputeFileReserved')
query = ((priceQuery['Instance Type'] == pdim.instanceType) &
(priceQuery['LeaseContractLength'] == consts.EC2_RESERVED_YEAR_MAP["{}".format(pdim.years)]))
hrsQuery = query & (priceQuery['Unit'] == 'Hrs' )
qtyQuery = query & (priceQuery['Unit'] == 'Quantity' )
if pdim.offeringType in (consts.SCRIPT_EC2_PURCHASE_OPTION_ALL_UPFRONT, consts.SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT):
pricing_records, cost = phelper.calculate_price(consts.SERVICE_REDSHIFT, computeDb, qtyQuery, pdim.instanceCount, pricing_records, cost)
if pdim.offeringType in (consts.SCRIPT_EC2_PURCHASE_OPTION_NO_UPFRONT, consts.SCRIPT_EC2_PURCHASE_OPTION_PARTIAL_UPFRONT):
reservedInstanceHours = pdim.instanceCount * consts.HOURS_IN_MONTH * 12 * pdim.years #TODO: move to common function
pricing_records, cost = phelper.calculate_price(consts.SERVICE_REDSHIFT, computeDb, hrsQuery, reservedInstanceHours, pricing_records, cost)
log.debug("Time to search:[{}]".format(ts.finish('tinyDbSearchComputeFileReserved')))
awsPriceListApiVersion = indexMetadata['Version']
extraargs = {'priceDimensions':pdim}
pricing_result = PricingResult(awsPriceListApiVersion, pdim.region, cost, pricing_records, **extraargs)
log.debug(json.dumps(vars(pricing_result),sort_keys=False,indent=4))
log.debug("Total time: [{}]".format(ts.finish('totalCalculation')))
return pricing_result.__dict__
================================================
FILE: cloudformation/function-plus-schedule.json
================================================
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "AWS CloudFormation Template for deploying a Lambda function that calculates EC2 pricing in near real-time",
"Parameters": {
"TagKey" : {
"Type": "String",
"Description" : "Tag key that will be used to find AWS resources. Mandatory",
"MinLength": "1",
"ConstraintDescription": "Tag key is mandatory."
},
"TagValue" : {
"Type": "String",
"MinLength": "1",
"Description" : "Tag value that will be used to find AWS resources. Mandatory",
"ConstraintDescription": "Tag value is mandatory."
}
},
"Resources": {
"LambdaRealtimeCalculatePricingRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"Service": ["lambda.amazonaws.com"]},
"Action": ["sts:AssumeRole"]
}]
},
"Path": "/",
"Policies": [{
"PolicyName": "root",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": ["logs:CreateLogGroup","logs:CreateLogStream","logs:PutLogEvents"],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": ["cloudwatch:*"],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": ["ec2:Describe*",
"elasticloadbalancing:Describe*",
"autoscaling:Describe*"],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": ["rds:Describe*",
"rds:List*"],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": ["dynamodb:Describe*",
"dynamodb:List*"],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": ["kinesis:Describe*",
"kinesis:List*"],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": ["lambda:GetFunctionConfiguration"],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": ["tag:getResources", "tag:getTagKeys", "tag:getTagValues"],
"Resource": "*"
}
]
}
}]
}
},
"LambdaRealtimeCalculatePricingFunction": {
"Type": "AWS::Lambda::Function",
"DependsOn" : ["LambdaRealtimeCalculatePricingRole"],
"Properties": {
"Handler": "functions/calculate-near-realtime.handler",
"Role": { "Fn::GetAtt" : ["LambdaRealtimeCalculatePricingRole", "Arn"] },
"Code": {
"S3Bucket": { "Fn::Join" : [ "", ["concurrencylabs-deployment-artifacts-public-", { "Ref" : "AWS::Region" }] ] },
"S3Key": "lambda-near-realtime-pricing/calculate-near-realtime-pricing-v3.10.zip"
},
"Runtime": "python3.6",
"Timeout": "300",
"MemorySize" : 1024
}
},
"ScheduledPricingCalculationRule": {
"Type": "AWS::Events::Rule",
"Properties": {
"Description": "Invoke Pricing Calculator Lambda function every 5 minutes",
"ScheduleExpression": "rate(5 minutes)",
"State": "ENABLED",
"Targets": [{
"Arn": { "Fn::GetAtt": ["LambdaRealtimeCalculatePricingFunction", "Arn"] },
"Id": "NearRealTimePriceCalculatorFunctionv1",
"Input":{"Fn::Join":["", ["{\"tag\":{\"key\":\"",{"Ref":"TagKey"},"\",\"value\":\"",{"Ref":"TagValue"},"\"}}"]]}
}]
}
},
"PermissionForEventsToInvokePricingCalculationLambda": {
"Type": "AWS::Lambda::Permission",
"Properties": {
"FunctionName": { "Ref": "LambdaRealtimeCalculatePricingFunction" },
"Action": "lambda:InvokeFunction",
"Principal": "events.amazonaws.com",
"SourceArn": { "Fn::GetAtt": ["ScheduledPricingCalculationRule", "Arn"] }
}
}
},
"Outputs": {
"Documentation": {
"Description": "For more details, see this blog post",
"Value": "https://www.concurrencylabs.com/blog/aws-pricing-lambda-realtime-calculation-function/"
},
"LambdaFunction": {
"Description": "Lambda function that calculates pricing in near real-time",
"Value": {
"Ref": "LambdaRealtimeCalculatePricingFunction"
}
},
"ScheduledEvent": {
"Description": "CloudWatch Events schedule that will trigger the Lambda function",
"Value": {
"Ref": "ScheduledPricingCalculationRule"
}
}
}
}
================================================
FILE: cloudformation/lambda-metric-filters.yml
================================================
AWSTemplateFormatVersion: '2010-09-09'
Description: AWS CloudFormation Template for creating CloudWatch Logs Metric Filters that keep track of memory utilization (and in the future, possibly other data that can be extracted from Lambda output in CW Logs)
Parameters:
LambdaFunctionName:
Default: ""
Description: Name of the Lambda function to monitor
Type: String
Resources:
LambdaMemoryUsed:
Properties:
FilterPattern: '[reportLabel=REPORT, requestIdLabel="RequestId:",..., maxMemoryUsedValue, maxMemoryUsedMbLabel]'
LogGroupName:
Fn::Join:
- ''
- - '/aws/lambda/'
- Ref: LambdaFunctionName
MetricTransformations:
- MetricName:
Fn::Join:
- ''
- - 'MemoryUsed-'
- Ref: LambdaFunctionName
MetricNamespace: ConcurrencyLabs/Lambda/
MetricValue: $maxMemoryUsedValue
Type: AWS::Logs::MetricFilter
LambdaMemorySize:
Properties:
FilterPattern: '[reportLabel=REPORT, requestIdLabel="RequestId:",..., memorySizeValue, memorySizeValueMbLabel, maxLabel, memoryLabel, usedLabel, maxMemoryUsedValue, maxMemoryUsedMbLabel]'
LogGroupName:
Fn::Join:
- ''
- - '/aws/lambda/'
- Ref: LambdaFunctionName
MetricTransformations:
- MetricName:
Fn::Join:
- ''
- - 'MemorySize-'
- Ref: LambdaFunctionName
MetricNamespace: ConcurrencyLabs/Lambda/
MetricValue: $memorySizeValue
Type: AWS::Logs::MetricFilter
================================================
FILE: functions/calculate-near-realtime.py
================================================
from __future__ import print_function
import datetime
import json
import logging,traceback
import math
import os
import sys
import boto3
from botocore.exceptions import ClientError
__location__ = os.path.dirname(os.path.realpath(__file__))
sys.path.append(os.path.join(__location__, "../"))
sys.path.append(os.path.join(__location__, "../vendored"))
import awspricecalculator.ec2.pricing as ec2pricing
import awspricecalculator.rds.pricing as rdspricing
import awspricecalculator.awslambda.pricing as lambdapricing
import awspricecalculator.dynamodb.pricing as ddbpricing
import awspricecalculator.kinesis.pricing as kinesispricing
import awspricecalculator.common.models as data
import awspricecalculator.common.consts as consts
from awspricecalculator.common.errors import NoDataFoundError
log = logging.getLogger()
#log.setLevel(logging.INFO)
ec2client = None
rdsclient = None
elbclient = None
lambdaclient = None
dddbclient = None
kinesisclient = None
cwclient = None
tagsclient = None
#_/_/_/_/_/_/ default_values - start _/_/_/_/_/_/
#Delay in minutes for metrics collection. We want to make sure that all metrics have arrived for the time period we are evaluating
#Note: Unless you have detailed metrics enabled in CloudWatch, make sure it is >= 10
METRIC_DELAY = 10
#Time window in minutes we will use for metric calculations
#Note: make sure this is at least 5, unless you have detailed metrics enabled in CloudWatch
METRIC_WINDOW = 5
FORECAST_PERIOD_MONTHLY = 'monthly'
FORECAST_PERIOD_HOURLY = 'hourly'
DEFAULT_FORECAST_PERIOD = FORECAST_PERIOD_MONTHLY
HOURS_DICT = {FORECAST_PERIOD_MONTHLY:720, FORECAST_PERIOD_HOURLY:1}
CW_NAMESPACE = 'ConcurrencyLabs/Pricing/NearRealTimeForecast'
CW_METRIC_NAME_ESTIMATEDCHARGES = 'EstimatedCharges'
CW_METRIC_DIMENSION_SERVICE_NAME = 'ServiceName'
CW_METRIC_DIMENSION_PERIOD = 'ForecastPeriod'
CW_METRIC_DIMENSION_CURRENCY = 'Currency'
CW_METRIC_DIMENSION_TAG = 'Tag'
CW_METRIC_DIMENSION_SERVICE_NAME_EC2 = 'ec2'
CW_METRIC_DIMENSION_SERVICE_NAME_RDS = 'rds'
CW_METRIC_DIMENSION_SERVICE_NAME_LAMBDA = 'lambda'
CW_METRIC_DIMENSION_SERVICE_NAME_DYNAMODB = 'dynamodb'
CW_METRIC_DIMENSION_SERVICE_NAME_KINESIS = 'kinesis'
CW_METRIC_DIMENSION_SERVICE_NAME_TOTAL = 'total'
CW_METRIC_DIMENSION_CURRENCY_USD = 'USD'
SERVICE_EC2 = 'ec2'
SERVICE_RDS = 'rds'
SERVICE_ELB = 'elasticloadbalancing'
SERVICE_LAMBDA = 'lambda'
SERVICE_DYNAMODB = 'dynamodb'
SERVICE_KINESIS = 'kinesis'
RESOURCE_LAMBDA_FUNCTION = 'function'
RESOURCE_ELB = 'loadbalancer'
RESOURCE_ALB = 'loadbalancer/app'
RESOURCE_NLB = 'loadbalancer/net'
RESOURCE_EC2_INSTANCE = 'instance'
RESOURCE_RDS_DB_INSTANCE = 'db'
RESOURCE_EBS_VOLUME = 'volume'
RESOURCE_EBS_SNAPSHOT = 'snapshot'
RESOURCE_DDB_TABLE = 'table'
RESOURCE_STREAM = 'stream'
#This map is used to specify which services and resource types will be searched using the tag service
SERVICE_RESOURCE_MAP = {SERVICE_EC2:[RESOURCE_EBS_VOLUME,RESOURCE_EBS_SNAPSHOT, RESOURCE_EC2_INSTANCE],
SERVICE_RDS:[RESOURCE_RDS_DB_INSTANCE],
SERVICE_LAMBDA:[RESOURCE_LAMBDA_FUNCTION],
SERVICE_ELB:[RESOURCE_ELB], #only provide RESOURCE_ELB, even for ALB and NLB
SERVICE_DYNAMODB:[RESOURCE_DDB_TABLE],
SERVICE_KINESIS:[RESOURCE_STREAM]
}
#_/_/_/_/_/_/ default values - end _/_/_/_/_/_/
def handler(event, context):
log.setLevel(consts.LOG_LEVEL)
log.info("Received event {}".format(json.dumps(event)))
try:
init_clients(context)
result = {}
pricing_records = []
ec2Cost = 0
rdsCost = 0
lambdaCost = 0
ddbCost = 0
kinesisCost = 0
totalCost = 0
#First, get the tags we'll be searching for, from the CloudWatch scheduled event
tagkey = ""
tagvalue = ""
if 'tag' in event:
tagkey = event['tag']['key']
tagvalue = event['tag']['value']
if tagkey == "" or tagvalue == "":
log.error("No tags specified, aborting function!")
return {}
log.info("Will search resources with the following tag:["+tagkey+"] - value["+tagvalue+"]")
resource_manager = ResourceManager(tagkey, tagvalue)
start, end = calculate_time_range()
elb_hours = 0
elb_data_processed_gb = 0
elb_instances = {}
alb_hours = 0
alb_lcus = 0
#Get tagged ELB(s) and their registered instances
#taggedelbs = find_elbs(tagkey, tagvalue)
taggedelbs = resource_manager.get_resource_ids(SERVICE_ELB, RESOURCE_ELB)
taggedalbs = resource_manager.get_resource_ids(SERVICE_ELB, RESOURCE_ALB)
taggednlbs = resource_manager.get_resource_ids(SERVICE_ELB, RESOURCE_NLB)
if taggedelbs:
log.info("Found tagged Classic ELBs:{}".format(taggedelbs))
elb_instances = get_elb_instances(taggedelbs)#TODO:add support to find registered instances for ALB and NLB
#Get all EC2 instances registered with each tagged ELB, so we can calculate ELB data processed
#Registered instances will be used for data processed calculation, and not for instance hours, unless they're tagged.
if taggedalbs:
log.info("Found tagged Application Load Balancers:{}".format(taggedalbs))
if taggednlbs:
log.info("Found tagged Network Load Balancers:{}".format(taggednlbs))
#TODO: once pricing for ALB and NLB is added to awspricecalculator, separate hours by ELB type
elb_hours += (len(taggedelbs)+len(taggednlbs))*HOURS_DICT[DEFAULT_FORECAST_PERIOD]
alb_hours += len(taggedalbs)*HOURS_DICT[DEFAULT_FORECAST_PERIOD]
if elb_instances:
try:
log.info("Found registered EC2 instances to tagged ELBs [{}]:{}".format(taggedelbs, elb_instances.keys()))
elb_data_processed_gb = calculate_elb_data_processed(start, end, elb_instances)*calculate_forecast_factor() / (10**9)
except Exception as failure:
log.error('Error calculating costs for tagged ELBs: %s', failure)
else:
log.info("Didn't find any EC2 instances registered to tagged ELBs [{}]".format(taggedelbs))
#else:
# log.info("No tagged ELBs found")
#Get tagged EC2 instances
ec2_instances = get_ec2_instances_by_tag(tagkey, tagvalue)
if ec2_instances:
log.info("Tagged EC2 instances:{}".format(ec2_instances.keys()))
else:
log.info("Didn't find any tagged, running EC2 instances")
#Calculate Classic ELB cost
if elb_hours:
elb_cost = ec2pricing.calculate(data.Ec2PriceDimension(region=region, elbHours=elb_hours,elbDataProcessedGb=elb_data_processed_gb))
if 'pricingRecords' in elb_cost:
pricing_records.extend(elb_cost['pricingRecords'])
ec2Cost = ec2Cost + elb_cost['totalCost']
#Calculate Application Load Balancer cost
if alb_hours:
alb_lcus = calculate_alb_lcus(start, end, taggedalbs)*calculate_forecast_factor()
alb_cost = ec2pricing.calculate(data.Ec2PriceDimension(region=region, albHours=alb_hours, albLcus=alb_lcus))
if 'pricingRecords' in alb_cost:
pricing_records.extend(alb_cost['pricingRecords'])
ec2Cost = ec2Cost + alb_cost['totalCost']
#Calculate EC2 compute time for ALL instance types found (subscribed to ELB or not) - group by instance types
all_instance_dict = {}
all_instance_dict.update(ec2_instances)
all_instance_types = get_instance_type_count(all_instance_dict)
log.info("All instance types:{}".format(all_instance_types))
#Calculate EC2 compute time cost
#TODO: add support for all available OS
for instance_type in all_instance_types:
try:
ec2_compute_cost = ec2pricing.calculate(data.Ec2PriceDimension(region=region, instanceType=instance_type, instanceHours=all_instance_types[instance_type]*HOURS_DICT[DEFAULT_FORECAST_PERIOD]))
if 'pricingRecords' in ec2_compute_cost: pricing_records.extend(ec2_compute_cost['pricingRecords'])
ec2Cost = ec2Cost + ec2_compute_cost['totalCost']
except Exception as failure:
log.error('Error processing %s: %s', instance_type, failure)
#Get provisioned storage by volume type, and provisioned IOPS (if applicable)
ebs_storage_dict, piops = get_storage_by_ebs_type(all_instance_dict)
#Calculate EBS storagecost
for k in ebs_storage_dict.keys():
if k == 'io1': pricing_piops = piops
else: pricing_piops = 0
try:
ebs_storage_cost = ec2pricing.calculate(data.Ec2PriceDimension(region=region, ebsVolumeType=k, ebsStorageGbMonth=ebs_storage_dict[k], pIops=pricing_piops))
if 'pricingRecords' in ebs_storage_cost: pricing_records.extend(ebs_storage_cost['pricingRecords'])
ec2Cost = ec2Cost + ebs_storage_cost['totalCost']
except Exception as failure:
log.error('Error processing ebs storage costs: %s', failure)
#Get tagged RDS DB instances
#db_instances = get_db_instances_by_tag(tagkey, tagvalue)
db_instances = get_db_instances_by_tag(resource_manager.get_resource_ids(SERVICE_RDS, RESOURCE_RDS_DB_INSTANCE))
if db_instances:
log.info("Found the following tagged DB instances:{}".format(db_instances.keys()))
else:
log.info("Didn't find any tagged RDS DB instances")
#Calculate RDS instance time for ALL instance types found - group by DB instance types
all_db_instance_dict = {}
all_db_instance_dict.update(db_instances)
all_db_instance_types = get_db_instance_type_count(all_db_instance_dict)
all_db_storage_types = get_db_storage_type_count(all_db_instance_dict)
#TODO: add support for read replicas
#Calculate RDS instance time cost
rds_instance_cost = {}
for db_instance_type in all_db_instance_types:
try:
dbInstanceClass = db_instance_type.split("|")[0]
engine = db_instance_type.split("|")[1]
licenseModel= db_instance_type.split("|")[2]
multiAz= bool(int(db_instance_type.split("|")[3]))
log.info("Calculating RDS DB Instance compute time")
rds_instance_cost = rdspricing.calculate(data.RdsPriceDimension(region=region, dbInstanceClass=dbInstanceClass, multiAz=multiAz,
engine=engine, licenseModel=licenseModel, instanceHours=all_db_instance_types[db_instance_type]*HOURS_DICT[DEFAULT_FORECAST_PERIOD]))
if 'pricingRecords' in rds_instance_cost: pricing_records.extend(rds_instance_cost['pricingRecords'])
rdsCost = rdsCost + rds_instance_cost['totalCost']
except Exception as failure:
log.error('Error processing RDS instance time costs: %s', failure)
#Calculate RDS storage cost
#TODO: add support for Aurora operations
rds_storage_cost = {}
for storage_key in all_db_storage_types.keys():
try:
storageType = storage_key.split("|")[0]
multiAz = bool(int(storage_key.split("|")[1]))
storageGbMonth = all_db_storage_types[storage_key]['AllocatedStorage']
iops = all_db_storage_types[storage_key]['Iops']
log.info("Calculating RDS DB Instance Storage")
rds_storage_cost = rdspricing.calculate(data.RdsPriceDimension(region=region, storageType=storageType,
multiAz=multiAz, storageGbMonth=storageGbMonth,
iops=iops))
if 'pricingRecords' in rds_storage_cost: pricing_records.extend(rds_storage_cost['pricingRecords'])
rdsCost = rdsCost + rds_storage_cost['totalCost']
except Exception as failure:
log.error('Error processing RDS storage costs: %s', failure)
#RDS Data Transfer - the Lambda function will assume all data transfer happens between RDS and EC2 instances
#Lambda functions
#TODO: add support for lambda function qualifiers
#TODO: calculate data ingested into CloudWatch Logs
lambdafunctions = resource_manager.get_resources(SERVICE_LAMBDA, RESOURCE_LAMBDA_FUNCTION)
for func in lambdafunctions:
executions = calculate_lambda_executions(start, end, func)
avgduration = calculate_lambda_duration(start, end, func)
funcname = ''
qualifier = ''
fullname = ''
funcname = func.id
fullname = funcname
#if 'qualifier' in func:
# qualifier = func['qualifier']
# fullname += ":"+qualifier
memory = get_lambda_memory(funcname,qualifier)
log.info("Executions for Lambda function [{}]: [{}] - Memory:[{}] - Avg Duration:[{}]".format(funcname,executions,memory, avgduration))
if executions and avgduration:
try:
#Note we're setting data transfer = 0, since we don't have a way to calculate it based on CW metrics alone
#TODO:call a single time and include a GB-s price dimension to the Lambda calculator
lambdapdim = data.LambdaPriceDimension(region=region, requestCount=executions*calculate_forecast_factor(),
avgDurationMs=avgduration, memoryMb=memory, dataTranferOutInternetGb=0,
dataTranferOutIntraRegionGb=0, dataTranferOutInterRegionGb=0, toRegion='')
lambda_func_cost = lambdapricing.calculate(lambdapdim)
if 'pricingRecords' in lambda_func_cost: pricing_records.extend(lambda_func_cost['pricingRecords'])
lambdaCost = lambdaCost + lambda_func_cost['totalCost']
except Exception as failure:
log.error('Error processing Lambda costs: %s', failure)
else:
log.info("Skipping pricing calculation for function [{}] - qualifier [{}] due to lack of executions in [{}-minute] time window".format(fullname, qualifier, METRIC_WINDOW))
#DynamoDB
totalRead = 0
totalWrite = 0
ddbtables = resource_manager.get_resources(SERVICE_DYNAMODB, RESOURCE_DDB_TABLE)
#Provisioned Capacity Units
for t in ddbtables:
read, write = get_ddb_capacity_units(t.id)
log.info("Dynamo DB Provisioned Capacity Units - Table:{} Read:{} Write:{}".format(t.id, read, write))
totalRead += read
totalWrite += write
#TODO: add support for storage
if totalRead and totalWrite:
ddbpdim = data.DynamoDBPriceDimension(region=region, readCapacityUnitHours=totalRead*HOURS_DICT[FORECAST_PERIOD_MONTHLY],
writeCapacityUnitHours=totalWrite*HOURS_DICT[FORECAST_PERIOD_MONTHLY])
ddbtable_cost = ddbpricing.calculate(ddbpdim)
if 'pricingRecords' in ddbtable_cost: pricing_records.extend(ddbtable_cost['pricingRecords'])
ddbCost = ddbCost + ddbtable_cost['totalCost']
#Kinesis Streams
streams = resource_manager.get_resources(SERVICE_KINESIS, RESOURCE_STREAM)
totalShards = 0
totalExtendedRetentionCount = 0
totalPutPayloadUnits = 0
for s in streams:
log.info("Stream:[{}]".format(s.id))
tmpShardCount, tmpExtendedRetentionCount = get_kinesis_stream_shards(s.id)
totalShards += tmpShardCount
totalExtendedRetentionCount += tmpExtendedRetentionCount
totalPutPayloadUnits += calculate_kinesis_put_payload_units(start, end, s.id)
if totalShards:
kinesispdim = data.KinesisPriceDimension(region=region,
shardHours=totalShards*HOURS_DICT[FORECAST_PERIOD_MONTHLY],
extendedDataRetentionHours=totalExtendedRetentionCount*HOURS_DICT[FORECAST_PERIOD_MONTHLY],
putPayloadUnits=totalPutPayloadUnits*calculate_forecast_factor())
stream_cost = kinesispricing.calculate(kinesispdim)
if 'pricingRecords' in stream_cost: pricing_records.extend(stream_cost['pricingRecords'])
kinesisCost = kinesisCost + stream_cost['totalCost']
#Do this after all calculations for all supported services have concluded
totalCost = ec2Cost + rdsCost + lambdaCost + ddbCost + kinesisCost
result['pricingRecords'] = pricing_records
result['totalCost'] = round(totalCost,2)
result['forecastPeriod']=DEFAULT_FORECAST_PERIOD
result['currency'] = CW_METRIC_DIMENSION_CURRENCY_USD
#Publish metrics to CloudWatch using the default namespace
if tagkey:
put_cw_metric_data(end, ec2Cost, CW_METRIC_DIMENSION_SERVICE_NAME_EC2, tagkey, tagvalue)
put_cw_metric_data(end, rdsCost, CW_METRIC_DIMENSION_SERVICE_NAME_RDS, tagkey, tagvalue)
put_cw_metric_data(end, lambdaCost, CW_METRIC_DIMENSION_SERVICE_NAME_LAMBDA, tagkey, tagvalue)
put_cw_metric_data(end, ddbCost, CW_METRIC_DIMENSION_SERVICE_NAME_DYNAMODB, tagkey, tagvalue)
put_cw_metric_data(end, kinesisCost, CW_METRIC_DIMENSION_SERVICE_NAME_KINESIS, tagkey, tagvalue)
put_cw_metric_data(end, totalCost, CW_METRIC_DIMENSION_SERVICE_NAME_TOTAL, tagkey, tagvalue)
log.info("Estimated monthly cost for resources tagged with key={},value={} : [{}]".format(tagkey, tagvalue, json.dumps(result,sort_keys=False,indent=4)))
except NoDataFoundError as ndf:
log.error ("NoDataFoundError [{}]".format(ndf))
except Exception as e:
traceback.print_exc()
log.error("Exception message:["+str(e)+"]")
return result
#TODO: calculate data transfer for instances that are not registered with the ELB
#TODO: Support different OS for EC2 instances (see how engine and license combinations are calculated for RDS)
#TODO: log the actual AWS resources that are found for the price calculation
#TODO: add support for detailed metrics fee
#TODO: add support for EBS optimized
#TODO: add support for EIP
#TODO: add support for EC2 operating systems other than Linux
#TODO: calculate monthly hours based on the current month, instead of assuming 720
#TODO: add support for different forecast periods (1 hour, 1 day, 1 month, etc.)
#TODO: add support for Spot and Reserved. Function only supports On-demand instances at the time
def put_cw_metric_data(timestamp, cost, service, tagkey, tagvalue):
response = cwclient.put_metric_data(
Namespace=CW_NAMESPACE,
MetricData=[
{
'MetricName': CW_METRIC_NAME_ESTIMATEDCHARGES,
'Dimensions': [{'Name': CW_METRIC_DIMENSION_SERVICE_NAME,'Value': service},
{'Name': CW_METRIC_DIMENSION_PERIOD,'Value': DEFAULT_FORECAST_PERIOD},
{'Name': CW_METRIC_DIMENSION_CURRENCY,'Value': CW_METRIC_DIMENSION_CURRENCY_USD},
{'Name': CW_METRIC_DIMENSION_TAG,'Value': tagkey+'='+tagvalue}
],
'Timestamp': timestamp,
'Value': cost,
'Unit': 'Count'
}
]
)
def get_elb_instances(elbnames):
result = {}
instance_ids = []
elbs = elbclient.describe_load_balancers(LoadBalancerNames=elbnames)
if 'LoadBalancerDescriptions' in elbs:
for e in elbs['LoadBalancerDescriptions']:
if 'Instances' in e:
instances = e['Instances']
for i in instances:
instance_ids.append(i['InstanceId'])
if instance_ids:
response = ec2client.describe_instances(InstanceIds=instance_ids)
if 'Reservations' in response:
for r in response['Reservations']:
if 'Instances' in r:
for i in r['Instances']:
result[i['InstanceId']]=i
return result
def get_ec2_instances_by_tag(tagkey, tagvalue):
result = {}
response = ec2client.describe_instances(Filters=[{'Name': 'tag:'+tagkey, 'Values':[tagvalue]},
{'Name': 'instance-state-name', 'Values': ['running',]}])
if 'Reservations' in response:
reservations = response['Reservations']
for r in reservations:
if 'Instances' in r:
for i in r['Instances']:
result[i['InstanceId']]=i
return result
def get_db_instances_by_tag(dbIds):
result = {}
#TODO: paginate
if dbIds:
response = rdsclient.describe_db_instances(Filters=[{'Name':'db-instance-id','Values':dbIds}])
if 'DBInstances' in response:
dbInstances = response['DBInstances']
for d in dbInstances:
result[d['DbiResourceId']]=d
return result
def get_non_elb_instances_by_tag(tagkey, tagvalue, elb_instances):
result = {}
response = ec2client.describe_instances(Filters=[{'Name': 'tag:'+tagkey, 'Values':[tagvalue]}])
if 'Reservations' in response:
reservations = response['Reservations']
for r in reservations:
if 'Instances' in r:
for i in r['Instances']:
if i['InstanceId'] not in elb_instances: result[i['InstanceId']]=i
return result
def get_instance_type_count(instance_dict):
result = {}
for key in instance_dict:
instance_type = instance_dict[key]['InstanceType']
if instance_type in result:
result[instance_type] = result[instance_type] + 1
else:
result[instance_type] = 1
return result
def get_db_instance_type_count(db_instance_dict):
result = {}
for key in db_instance_dict:
#key format: db-instance-class|engine|license-model|multi-az
multiAz = 0
if db_instance_dict[key]['MultiAZ']==True:multiAz=1
db_instance_key = db_instance_dict[key]['DBInstanceClass']+"|"+\
db_instance_dict[key]['Engine']+"|"+\
db_instance_dict[key]['LicenseModel']+"|"+\
str(multiAz)
if db_instance_key in result:
result[db_instance_key] = result[db_instance_key] + 1
else:
result[db_instance_key] = 1
return result
def get_db_storage_type_count(db_instance_dict):
result = {}
for key in db_instance_dict:
#key format: db-storage-type|allocated-storage|iops|multi-az
multiAz = 0
#print("db_instance_dict[{}]".format(db_instance_dict[key]))
if db_instance_dict[key]['MultiAZ']==True:multiAz=1
db_storage_key = db_instance_dict[key]['StorageType']+"|"+\
str(multiAz)
if 'Iops' not in db_instance_dict[key]: db_instance_dict[key]['Iops'] = 0
if db_storage_key in result:
result[db_storage_key]['Iops'] += db_instance_dict[key]['Iops']
result[db_storage_key]['AllocatedStorage'] += db_instance_dict[key]['AllocatedStorage']
else:
result[db_storage_key] = {}
result[db_storage_key]['Iops'] = db_instance_dict[key]['Iops']
result[db_storage_key]['AllocatedStorage'] = db_instance_dict[key]['AllocatedStorage']
#print ("Storage composite key:[{}]".format(db_storage_key))
return result
def get_storage_by_ebs_type(instance_dict):
result = {}
iops = 0
ebs_ids = []
for key in instance_dict:
block_mappings = instance_dict[key]['BlockDeviceMappings']
for bm in block_mappings:
if 'Ebs' in bm:
if 'VolumeId' in bm['Ebs']:
ebs_ids.append(bm['Ebs']['VolumeId'])
volume_details = {}
if ebs_ids: volume_details = ec2client.describe_volumes(VolumeIds=ebs_ids)#TODO:add support for pagination
if 'Volumes' in volume_details:
for v in volume_details['Volumes']:
volume_type = v['VolumeType']
if volume_type in result:
result[volume_type] = result[volume_type] + int(v['Size'])
else:
result[volume_type] = int(v['Size'])
if volume_type == 'io1': iops = iops + int(v['Iops'])
return result, iops
def get_total_snapshot_storage(tagkey, tagvalue):
result = 0
snapshots = ec2client.describe_snapshots(Filters=[{'Name': 'tag:'+tagkey,'Values': [tagvalue]}])
if 'Snapshots' in snapshots:
for s in snapshots['Snapshots']:
result = result + s['VolumeSize']
#log.info("total snapshot size:["+str(result)+"]")
return result
"""
For each EC2 instance registered to an ELB, get the following metrics: NetworkIn, NetworkOut.
Then add them up and use them to calculate the total data processed by the ELB
"""
def calculate_elb_data_processed(start, end, elb_instances):
result = 0
for instance_id in elb_instances.keys():
metricsNetworkIn = cwclient.get_metric_statistics(
Namespace='AWS/EC2',
MetricName='NetworkIn',
Dimensions=[{'Name': 'InstanceId','Value': instance_id}],
StartTime=start,
EndTime=end,
Period=60*METRIC_WINDOW,
Statistics = ['Sum']
)
metricsNetworkOut = cwclient.get_metric_statistics(
Namespace='AWS/EC2',
MetricName='NetworkOut',
Dimensions=[{'Name': 'InstanceId','Value': instance_id}],
StartTime=start,
EndTime=end,
Period=60*METRIC_WINDOW,
Statistics = ['Sum']
)
for datapoint in metricsNetworkIn['Datapoints']:
if 'Sum' in datapoint: result = result + datapoint['Sum']
for datapoint in metricsNetworkOut['Datapoints']:
if 'Sum' in datapoint: result = result + datapoint['Sum']
log.info ("Total Bytes processed by ELBs in time window of ["+str(METRIC_WINDOW)+"] minutes :["+str(result)+"]")
return result
"""
For each ALB, get the value for the ConsumedLCUs metric
"""
def calculate_alb_lcus(start, end, albs):
result = 0
for a in albs:
log.info("Getting ConsumedLCUs for ALB: [{}]".format(a))
metricsLcus = cwclient.get_metric_statistics(
Namespace='AWS/ApplicationELB',
MetricName='ConsumedLCUs',
Dimensions=[{'Name': 'LoadBalancer','Value': "app/{}".format(a)}],
StartTime=start,
EndTime=end,
Period=60*METRIC_WINDOW,
Statistics = ['Sum']
)
for datapoint in metricsLcus['Datapoints']:
result += datapoint.get('Sum',0)
log.info ("Total ConsumedLCUs consumed by ALBs in time window of ["+str(METRIC_WINDOW)+"] minutes :["+str(result)+"]")
return result
def calculate_lambda_executions(start, end, func):
result = 0
invocations = cwclient.get_metric_statistics(
Namespace='AWS/Lambda',
MetricName='Invocations',
#Dimensions=[{'Name': 'FunctionName','Value': func['name']}],
Dimensions=[{'Name': 'FunctionName','Value': func.id}],
StartTime=start,
EndTime=end,
Period=60*METRIC_WINDOW,
Statistics = ['Sum']
)
for datapoint in invocations['Datapoints']:
if 'Sum' in datapoint: result = result + datapoint['Sum']
log.debug("calculate_lambda_executions: [{}]".format(result))
return result
def calculate_lambda_duration(start, end, func):
result = 0
invocations = cwclient.get_metric_statistics(
Namespace='AWS/Lambda',
MetricName='Duration',
#Dimensions=[{'Name': 'FunctionName','Value': func['name']}],
Dimensions=[{'Name': 'FunctionName','Value': func.id}],
StartTime=start,
EndTime=end,
Period=60*METRIC_WINDOW,
Statistics = ['Average']
)
count = 0
total = 0
for datapoint in invocations['Datapoints']:
if 'Average' in datapoint:
count+=1
total+=datapoint['Average']
if count: result = total / count
log.debug("calculate_lambda_duration: [{}]".format(result))
return result
def get_lambda_memory(functionname, qualifier):
result = 0
args = {}
if qualifier: args = {'FunctionName':functionname,'Qualifier':qualifier}
else: args = {'FunctionName':functionname}
try:
response = lambdaclient.get_function_configuration(**args)
if 'MemorySize' in response:
result = response['MemorySize']
except ClientError as e:
log.error("{}".format(e))
return result
def get_ddb_capacity_units(tablename):
read = 0
write = 0
try:
r = ddbclient.describe_table(TableName=tablename)
if 'Table' in r:
read = r['Table']['ProvisionedThroughput']['ReadCapacityUnits']
write = r['Table']['ProvisionedThroughput']['WriteCapacityUnits']
return read, write
except Exception as e:
log.error("{}".format(e))
def get_kinesis_stream_shards(streamName):
shardCount = 0
extendedRetentionCount = 0
try:
#TODO: add support for streams with >100 shards
#TODO: add support for detailed metrics
response = kinesisclient.describe_stream(StreamName=streamName)
shardCount = len(response['StreamDescription']['Shards'])
if response['StreamDescription']['RetentionPeriodHours'] > 24:
extendedRetentionCount = shardCount
except Exception as e:
log.error("{}".format(e))
return shardCount, extendedRetentionCount
"""
This function calculates an approximation of PUT payload units, based on CloudWatch metrics.
Unfortunately, there is no direct CloudWatch metric that returns the PUT Payload Units for a stream.
https://aws.amazon.com/kinesis/streams/pricing/
PUT Payload Units are calculated in 25KB chunks and CloudWatch metrics return the number of records inserted
as well as the total bytes entering the stream. There is no accurate way to calculate the number of
25KB chunks going into the stream.
"""
def calculate_kinesis_put_payload_units(start, end, streamName):
totalRecords = 0
totalBytesAvg = 0
totalPutPayloadUnits = 0
chunkCount = 0 #Kinesis charges for PUT Payload Units in chunks of 25KB
try:
incomingRecords = cwclient.get_metric_statistics(
Namespace='AWS/Kinesis',
MetricName='IncomingRecords',
Dimensions=[{'Name': 'StreamName','Value': streamName}],
StartTime=start,
EndTime=end,
Period=60*METRIC_WINDOW,
Statistics = ['Sum']
)
for datapoint in incomingRecords['Datapoints']:
if 'Sum' in datapoint: totalRecords = totalRecords + datapoint['Sum']
incomingBytes = cwclient.get_metric_statistics(
Namespace='AWS/Kinesis',
MetricName='IncomingBytes',
Dimensions=[{'Name': 'StreamName','Value': streamName}],
StartTime=start,
EndTime=end,
Period=60*METRIC_WINDOW,
Statistics = ['Average']
)
for datapoint in incomingBytes['Datapoints']:
if 'Average' in datapoint:
chunkCount += int(math.ceil(datapoint['Average']/25000))
bytesDatapoints = len(incomingBytes['Datapoints'])
if not bytesDatapoints: bytesDatapoints = 1 #avoid zerodiv
totalPutPayloadUnits = totalRecords * chunkCount/bytesDatapoints
log.info("get_kinesis_stream_puts - incomingRecords:[{}] - chunkAvg:[{}] - totalPutPayloadUnits:[{}]".format(totalRecords, chunkCount/bytesDatapoints, totalPutPayloadUnits))
except Exception as e:
log.error("{}".format(e))
return totalPutPayloadUnits
def calculate_time_range():
start = datetime.datetime.utcnow() + datetime.timedelta(minutes=-METRIC_DELAY)
end = start + datetime.timedelta(minutes=METRIC_WINDOW)
log.info("start:["+str(start)+"] - end:["+str(end)+"]")
return start, end
def calculate_forecast_factor():
result = (60 / METRIC_WINDOW ) * HOURS_DICT[DEFAULT_FORECAST_PERIOD]
log.debug("Forecast factor:["+str(result)+"]")
return result
def get_ec2_instances(registered, all):
result = []
for a in all:
if a not in registered: result.append(a)
return result
def init_clients(context):
global ec2client
global rdsclient
global elbclient
global lambdaclient
global ddbclient
global kinesisclient
global cwclient
global tagsclient
global region
global awsaccount
arn = context.invoked_function_arn
region = arn.split(":")[3] #ARN format is arn:aws:lambda:us-east-1:xxx:xxxx
awsaccount = arn.split(":")[4]
ec2client = boto3.client('ec2',region)
rdsclient = boto3.client('rds',region)
elbclient = boto3.client('elb',region) #classic load balancers
elbclientv2 = boto3.client('elbv2',region) #application and network load balancers
lambdaclient = boto3.client('lambda',region)
ddbclient = boto3.client('dynamodb',region)
kinesisclient = boto3.client('kinesis',region)
cwclient = boto3.client('cloudwatch', region)
tagsclient = boto3.client('resourcegroupstaggingapi', region)
class ResourceManager():
def __init__(self, tagkey, tagvalue):
self.resources = []
self.init_resources(tagkey, tagvalue)
def init_resources(self, tagkey, tagvalue):
#TODO: Implement pagination
response = tagsclient.get_resources(
TagsPerPage = 500,
TagFilters=[{'Key': tagkey,'Values': [tagvalue]}],
ResourceTypeFilters=self.get_resource_type_filters(SERVICE_RESOURCE_MAP)
)
if 'ResourceTagMappingList' in response:
for r in response['ResourceTagMappingList']:
res = self.extract_resource(r['ResourceARN'])
if res:
self.resources.append(res)
#log.info("Tagged resource:{}".format(res.__dict__))
#Return a service:resource list in the format the ResourceGroupTagging API expects it
def get_resource_type_filters(self, service_resource_map):
result = []
for s in service_resource_map.keys():
for r in service_resource_map[s]: result.append("{}:{}".format(s,r))
return result
def extract_resource(self, arn):
service = arn.split(":")[2]
resourceId = ''
#See http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html for different patterns in ARNs
for service in (SERVICE_EC2, SERVICE_ELB, SERVICE_DYNAMODB, SERVICE_KINESIS):
for type in (RESOURCE_ALB, RESOURCE_NLB, RESOURCE_ELB):
if ':'+service+':' in arn and ':'+type+'/' in arn:
resourceId = arn.split(':'+type+'/')[1]
return self.Resource(service, type, resourceId, arn)
for type in (RESOURCE_EC2_INSTANCE,RESOURCE_EBS_VOLUME,RESOURCE_EBS_SNAPSHOT,RESOURCE_DDB_TABLE, RESOURCE_STREAM):
if ':'+service+':' in arn and ':'+type+'/' in arn:
resourceId = arn.split(':'+type+'/')[1]
return self.Resource(service, type, resourceId, arn)
for service in (SERVICE_RDS, SERVICE_LAMBDA):
for type in (RESOURCE_RDS_DB_INSTANCE, RESOURCE_LAMBDA_FUNCTION):
if ':'+service+':' in arn and ':'+type+':' in arn:
resourceId = arn.split(':'+type+':')[1]
return self.Resource(service, type, resourceId, arn)
return None
def get_resources(self, service, resourceType):
result = []
if self.resources:
for r in self.resources:
if r.service == service and r.type == resourceType:
result.append(r)
return result
def get_resource_ids(self, service, resourceType):
result = []
if self.resources:
for r in self.resources:
if r.service == service and r.type == resourceType:
result.append(r.id)
return result
class Resource():
def __init__(self, service, type, id, arn):
self.service = service
self.type = type
self.id = id
self.arn = arn
================================================
FILE: install.sh
================================================
#!/bin/sh
#Install application dependencies in /vendored folder
pip install -r requirements.txt -t vendored
#Install local dev environment & test dependencies in default site_packages path
pip install -r requirements-dev.txt
================================================
FILE: requirements-dev.txt
================================================
tinydb==3.4.1
numpy== 1.12.1
tabulate
boto3
python-lambda-local
================================================
FILE: requirements.txt
================================================
tinydb==3.4.1
numpy== 1.12.1
tabulate
================================================
FILE: scripts/README.md
================================================
## Concurrency Labs - aws-pricing-tools scripts
This folder contains Python scripts that are used for various purposes in the repository
All scripts need to be executed from the `/scripts` folder.
Make sure you have the following environment variables set:
```
export AWS_DEFAULT_PROFILE=
export AWS_DEFAULT_REGION=
```
### Get Latest Index
The code needs a local copy of the the AWS Price List API index file.
The GitHub repo doesn't come with the index file, therefore you have to
download it the first time you run a test and every time AWS publishes a new
Price List API index.
In order to download the latest index file, go to the "scripts" folder and run:
```
python get-latest-index.py --service=
```
The script takes a few seconds to execute since some index files are a little heavy (like the EC2 one).
I recommend executing with the option `--service=all` and subscribing to the AWS Price List API change notifications.
### Lambda Optimization Recommendations
This script does the following:
* It finds the function's execution records in CloudWatch Logs, for the
given time window in minutes (i.e. the past 10 minutes)
* Parses usage information and extracts memory used, execution time and memory allocated
* It uses the Price List Index to calculate pricing for the Lambda function,
for different scenarios and tells you potential savings.
```
python lambda-optimization.py --function= --minutes=
```
This function requires you to have the following IAM permissions:
* `lambda:getFunction`
* `logs:getLogEvents`
Make sure variables AWS_DEFAULT_PROFILE and AWS_DEFAULT_REGION are set.
================================================
FILE: scripts/emr-pricing.py
================================================
#!/usr/bin/python
import sys, os, getopt, json, logging
import argparse
import traceback
sys.path.insert(0, os.path.abspath('..'))
import awspricecalculator.emr.pricing as emrpricing
import awspricecalculator.common.consts as consts
import awspricecalculator.common.models as data
import awspricecalculator.common.utils as utils
from awspricecalculator.common.errors import ValidationError
from awspricecalculator.common.errors import NoDataFoundError
log = logging.getLogger()
logging.basicConfig()
log.setLevel(logging.DEBUG)
def main(argv):
parser = argparse.ArgumentParser()
parser.add_argument('--region', help='', required=False)
parser.add_argument('--regions', help='', required=False)
parser.add_argument('--sort-criteria', help='', required=False)
parser.add_argument('--instance-type', help='', required=False)
parser.add_argument('--instance-types', help='', required=False)
parser.add_argument('--instance-hours', help='', type=int, required=False)
parser.add_argument('--ebs-volume-type', help='', required=False)
parser.add_argument('--ebs-storage-gb-month', help='', required=False)
parser.add_argument('--piops', help='', type=int, required=False)
parser.add_argument('--data-transfer-out-internet-gb', help='', required=False)
parser.add_argument('--data-transfer-out-intraregion-gb', help='', required=False)
parser.add_argument('--data-transfer-out-interregion-gb', help='', required=False)
parser.add_argument('--to-region', help='', required=False)
parser.add_argument('--term-type', help='', required=False)
parser.add_argument('--offering-class', help='', required=False)
parser.add_argument('--offering-classes', help='', required=False)
parser.add_argument('--instance-count', help='', type=int, required=False)
parser.add_argument('--years', help='', required=False)
parser.add_argument('--offering-type', help='', required=False)
parser.add_argument('--offering-types', help='', required=False)
if len(sys.argv) == 1:
parser.print_help()
sys.exit(1)
args = parser.parse_args()
region = ''
regions = ''
instanceType = ''
instanceTypes = ''
instanceHours = 0
instanceCount = 0
sortCriteria = ''
ebsVolumeType = ''
ebsStorageGbMonth = 0
pIops = 0
dataTransferOutInternetGb = 0
dataTransferOutIntraRegionGb = 0
dataTransferOutInterRegionGb = 0
toRegion = ''
termType = consts.SCRIPT_TERM_TYPE_ON_DEMAND
offeringClass = ''
offeringClasses = consts.SUPPORTED_EMR_OFFERING_CLASSES #only used for Reserved comparisons (standard, convertible)
offeringType = ''
offeringTypes = consts.EC2_SUPPORTED_PURCHASE_OPTIONS #only used for Reserved comparisons (all-upfront, partial-upfront, no-upfront)
years = 1
if args.region: region = args.region
if args.regions: regions = args.regions
if args.sort_criteria: sortCriteria = args.sort_criteria
if args.instance_type: instanceType = args.instance_type
if args.instance_types: instanceTypes = args.instance_types
if args.instance_hours: instanceHours = int(args.instance_hours)
if args.ebs_volume_type: ebsVolumeType = args.ebs_volume_type
if args.ebs_storage_gb_month: ebsStorageGbMonth = int(args.ebs_storage_gb_month)
if args.piops: pIops = int(args.piops)
if args.data_transfer_out_internet_gb: dataTransferOutInternetGb = int(args.data_transfer_out_internet_gb)
if args.data_transfer_out_intraregion_gb: dataTransferOutIntraRegionGb = int(args.data_transfer_out_intraregion_gb)
if args.data_transfer_out_interregion_gb: dataTransferOutInterRegionGb = int(args.data_transfer_out_interregion_gb)
if args.to_region: toRegion = args.to_region
if args.term_type: termType = args.term_type
if args.offering_class: offeringClass = args.offering_class
if args.offering_classes: offeringClasses = args.offering_classes.split(',')
if args.instance_count: instanceCount = args.instance_count
if args.offering_type: offeringType = args.offering_type
if args.offering_types: offeringTypes = args.offering_types.split(',')
if args.years: years = str(args.years)
#TODO: Implement comparison between a subset of regions by entering an array of regions to compare
#TODO: Implement a sort by target region (for data transfer)
#TODO: For Reserved pricing, include a payment plan throughout the whole period, and a monthly average and savings
try:
#TODO: not working for EBS Snapshots!
kwargs = {'sortCriteria':sortCriteria, 'instanceType':instanceType, 'instanceTypes':instanceTypes,
'instanceHours':instanceHours, 'dataTransferOutInternetGb':dataTransferOutInternetGb,
'ebsVolumeType':ebsVolumeType, 'ebsStorageGbMonth':ebsStorageGbMonth, 'pIops':pIops,
'dataTransferOutIntraRegionGb':dataTransferOutIntraRegionGb, 'dataTransferOutInterRegionGb':dataTransferOutInterRegionGb,
'toRegion':toRegion, 'termType':termType, 'instanceCount': instanceCount, 'years': years, 'offeringType':offeringType,
'offeringClass':offeringClass
}
if region: kwargs['region'] = region
if sortCriteria:
if sortCriteria in (consts.SORT_CRITERIA_TERM_TYPE, consts.SORT_CRITERIA_TERM_TYPE_REGION):
if sortCriteria == consts.SORT_CRITERIA_TERM_TYPE_REGION:
#TODO: validate that region list is comma-separated
#TODO: move this list to utils.compare_term_types
if regions: kwargs['regions'] = regions.split(',')
else: kwargs['regions']=consts.SUPPORTED_REGIONS
kwargs['purchaseOptions'] = offeringTypes #purchase options are referred to as offering types in the EC2 API
kwargs['offeringClasses']=offeringClasses
validate (kwargs)
termPricingAnalysis = utils.compare_term_types(service=consts.SERVICE_EMR, **kwargs)
tabularData = termPricingAnalysis.pop('tabularData')
print ("EMR termpPricingAnalysis: [{}]".format(json.dumps(termPricingAnalysis,sort_keys=False, indent=4)))
print("csvData:\n{}\n".format(termPricingAnalysis['csvData']))
#print("tabularData:\n{}".format(tabularData).replace("reserved","rsv").replace("standard","std").
# replace("convertible","conv").replace("-upfront","").replace("partial","par").replace("demand","dmd"))
print("\n{}".format(tabularData))
else:
validate (kwargs)
pricecomparisons = utils.compare(service=consts.SERVICE_EMR,**kwargs)
print("Price comparisons:[{}]".format(json.dumps(pricecomparisons, indent=4)))
#tabularData = termPricingAnalysis.pop('tabularData')
#print("tabularData:\n{}".format(tabularData))
else:
validate (kwargs)
emr_pricing = emrpricing.calculate(data.EmrPriceDimension(**kwargs))
print(json.dumps(emr_pricing,sort_keys=False,indent=4))
except NoDataFoundError as ndf:
print ("NoDataFoundError args:[{}]".format(args))
except Exception as e:
traceback.print_exc()
print("Exception message:["+str(e)+"]")
"""
This function contains validations at the script level. No need to validate EC2 parameters, since
class Ec2PriceDimension already contains a validation function.
"""
def validate (args):
#TODO: add - if termType sort criteria is specified, don't include offeringClass (singular)
#TODO: add - if offeringTypes is included, have at least one valid offeringType (purchase option)
#TODO: move to models
validation_msg = ""
if args.get('sortCriteria','') == consts.SORT_CRITERIA_TERM_TYPE:
if args.get('instanceHours',False):
validation_msg = "instance-hours cannot be set when sort-criteria=term-type"
if args.get('offeringType',False):
validation_msg = "offering-type cannot be set when sort-criteria=term-type - try offering-types (plural) instead"
if not args.get('years',''):
validation_msg = "years cannot be empty"
if args.get('sortCriteria','') == consts.SORT_CRITERIA_TERM_TYPE_REGION:
if not args.get('offeringClasses',''):
validation_msg = "offering-classes cannot be empty"
if not args.get('purchaseOptions',''):
validation_msg = "offering-types cannot be empty"
if validation_msg:
print("Error: [{}]".format(validation_msg))
raise ValidationError(validation_msg)
return
if __name__ == "__main__":
main(sys.argv[1:])
================================================
FILE: scripts/get-latest-index.py
================================================
#!/usr/bin/python
import os, sys, getopt, json, csv, ssl
import urllib.request
sys.path.insert(0, os.path.abspath('..'))
from awspricecalculator.common import consts as consts
from awspricecalculator.common import phelper as phelper
if (not os.environ.get('PYTHONHTTPSVERIFY', '') and
getattr(ssl, '_create_unverified_context', None)):
ssl._create_default_https_context = ssl._create_unverified_context
__location__ = os.path.dirname(os.path.realpath(__file__))
dataindexpath = os.path.join(os.path.split(__location__)[0],"awspricecalculator", "data")
"""
This script gets the latest index files from the AWS Price List API.
"""
#TODO: add support for term-type = onDemand, Reserved or both
def main(argv):
SUPPORTED_SERVICES = (consts.SERVICE_S3, consts.SERVICE_EC2, consts.SERVICE_RDS, consts.SERVICE_LAMBDA,
consts.SERVICE_DYNAMODB, consts.SERVICE_KINESIS, consts.SERVICE_DATA_TRANSFER, consts.SERVICE_EMR,
consts.SERVICE_REDSHIFT, consts.SERVICE_ALL)
SUPPORTED_FORMATS = ('json','csv')
OFFER_INDEX_URL = 'https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/{serviceIndex}/current/index.'
service = ''
format = ''
region = ''
tenancy = ''
help_message = 'Script usage: \nget-latest-index.py --service= --format='
try:
opts, args = getopt.getopt(argv,"hr:s:f:t",["region=","service=","format=","tenancy="])
print ('opts: ' + str(opts))
except getopt.GetoptError:
print (help_message)
sys.exit(2)
for opt in opts:
if opt[0] == '-h':
print (help_message)
sys.exit()
if opt[0] in ("-s","--service"):
service = opt[1]
if opt[0] in ("-f","--format"):
format = opt[1]
if opt[0] in ("-r","--region"):
region = opt[1]
if opt[0] in ("-t","--tenancy"): #comma-separated tenancies (host, dedicated, shared)
tenancy= opt[1].split(',')
if not format: format = 'csv'
validation_ok = True
if service not in SUPPORTED_SERVICES:
validation_ok = False
if format not in SUPPORTED_FORMATS:
validation_ok = False
if not validation_ok:
print (help_message)
sys.exit(2)
services = []
if service == 'all': services = SUPPORTED_SERVICES
else: services = [service]
term = '' #all terms
extraArgs = {}
if tenancy: extraArgs['tenancies']=[tenancy]
else: extraArgs['tenancies']=consts.EC2_TENANCY_MAP.keys()
for s in services:
if s != 'all':
offerIndexUrl = OFFER_INDEX_URL.replace('{serviceIndex}',consts.SERVICE_INDEX_MAP[s]) + format
print ('Downloading offerIndexUrl:['+offerIndexUrl+']...')
servicedatapath = dataindexpath + "/" + s
print ("servicedatapath:[{}]".format(servicedatapath))
if not os.path.exists(servicedatapath): os.mkdir(servicedatapath)
filename = servicedatapath+"/index."+format
with open(filename, "wb") as f: f.write(urllib.request.urlopen(offerIndexUrl).read())
if format == 'csv':
remove_metadata(filename)
split_index(s, region, term, **extraArgs)
"""
The first rows in the PriceList index.csv are metadata.
This method removes the metadata from the index files and writes it in a separate .json file,
so the metadata can be accessed by other modules. For example, the PriceList Version is returned
in every price calculation.
"""
def remove_metadata(index_filename):
print ("Removing metadata from file [{}]".format(index_filename))
metadata_filename = index_filename.replace('.csv','_metadata.json')
metadata_dict = {}
with open(index_filename,"r") as rf:
lines = rf.readlines()
with open(index_filename,"w") as wf:
i = 0
for l in lines:
#The first 5 records in the CSV file are metadata
if i <= 4:
config_record = l.replace('","','"|"').strip("\n").split("|")
metadata_dict[config_record[0].strip('\"')] = config_record[1].strip('\"')
else:
wf.write(l)
i += 1
with open(metadata_filename,"w") as mf:
print ("Creating metadata file [{}]".format(metadata_filename))
metadata_json = json.dumps(metadata_dict,sort_keys=False,indent=4)
print ("metadata_json: [{}]".format(metadata_json))
mf.write(metadata_json)
"""
Some index files are too large. For example, the one for EC2 has more than 460K records.
In order to make price lookup more efficient, awspricecalculator splits the
index based on a combination of region, term type and product family. Each partition
has a key, which is used by tinydb to load smaller files as databases that can be
queried. This increases performance significantly.
"""
def split_index(service, region, term, **args):
#Split index format: region -> term type -> product family
indexDict = {}#contains the keys of the files that will be created
productFamilies = {}
usageGroupings=[]
partition_keys = phelper.get_partition_keys(service, region, term, **args)#All regions and all term types (On-Demand + Reserved)
#print("partition_keys:[{}]".format(partition_keys))
for pk in partition_keys:
indexDict[pk]=[]
fieldnames = []
#with open(get_index_file_name(service, 'index', 'csv'), 'rb') as csvfile:
with open(get_index_file_name(service, 'index', 'csv'), 'r') as csvfile:
pricelist = csv.DictReader(csvfile, delimiter=',', quotechar='"')
indexRegion = ''
x = 0
for row in pricelist:
indexKey = ''
if x==0: fieldnames=row.keys()
if row.get('Location Type','') == 'AWS Region':
indexRegion = row['Location']
if row.get('Product Family','')== consts.PRODUCT_FAMILY_DATA_TRANSFER:
indexRegion = row['From Location']
#Determine the index partition the current row belongs to and append it to the corresponding array
if row.get('TermType','') == consts.TERM_TYPE_RESERVED:
#TODO:move the creation of the index dimensions to a common function
if service == consts.SERVICE_EC2:
indexDimensions = (indexRegion,row['TermType'],row['Product Family'],row['OfferingClass'],row['Tenancy'], row['PurchaseOption'])
elif service in (consts.SERVICE_RDS, consts.SERVICE_REDSHIFT):#'Tenancy' is not part of the RDS/Redshift index, therefore default it to Shared
indexDimensions = (indexRegion,row['TermType'],row['Product Family'],row['OfferingClass'],row.get('Tenancy',consts.EC2_TENANCY_SHARED),row['PurchaseOption'])
else:
if service == consts.SERVICE_EC2:
indexDimensions = (indexRegion,row['TermType'],row['Product Family'],row['Tenancy'])
else:
indexDimensions = (indexRegion,row['TermType'],row['Product Family'])
#print ("TermType:[{}] - service:[{}] - indexDimensions:[{}]".format(row.get('TermType',''), service, indexDimensions))
indexKey = phelper.create_file_key(indexDimensions)
if indexKey in indexDict:
indexDict[indexKey].append(remove_fields(service, row))
#Get a list of distinct product families in the index file
productFamily = row['Product Family']
if productFamily not in productFamilies:
productFamilies[productFamily] = []
usageGroup = row.get('Group','')
if usageGroup not in productFamilies[productFamily]:
productFamilies[productFamily].append(usageGroup)
x += 1
if x % 1000 == 0: print("Processed row [{}]".format(x))
print ("productFamilies:{}".format(productFamilies))
i = 0
#Create csv files based on the partitions that were calculated when scanning the main index.csv file
for f in indexDict.keys():
if indexDict[f]:
i += 1
print ("Writing file for key: [{}]".format(f))
with open(get_index_file_name(service, f, 'csv'),'w') as csvfile:
writer = csv.DictWriter(csvfile, fieldnames=fieldnames, dialect='excel', quoting=csv.QUOTE_ALL)
writer.writeheader()
for r in indexDict[f]:
writer.writerow(r)
print ("Number of records in main index file: [{}]".format(x))
print ("Number of files written: [{}]".format(i))
return
def get_index_file_name(service, name, format):
result = '../awspricecalculator/data/'+service+'/'+name+'.'+format
return result
"""
This method removes unnecessary fields from each row in the index file. This is necessary since large index files
become a problem when they're too large and result in Lambda functions exceeding package size or in slower warm-up
times for Lambda.
"""
def remove_fields(service, row):
#don't exclude: 'Product Family', 'operation' (used by ELB)
EXCLUDE_FIELD_DICT = {
consts.SERVICE_EC2:['Location Type', 'Storage', 'Location', 'Memory', 'Physical Processor',
'Dedicated EBS Throughput', 'Processor Features', 'ECU', 'serviceName', 'Network Performance',
'Instance Family', 'Current Generation',
'serviceCode','TermType','Tenancy','OfferingClass','PurchaseOption' #these fields are implicit in the DB file name, therefore they're not necessary in the file itself
]
}
for f in EXCLUDE_FIELD_DICT.get(service, []):
row.pop(f,'')
return row
#TODO: remove consolidated index.csv file after it has been split into smaller files
if __name__ == "__main__":
main(sys.argv[1:])
================================================
FILE: scripts/lambda-optimization.py
================================================
#!/usr/bin/python
import sys, os, json, time
import argparse
import traceback
import boto3
from botocore.exceptions import ClientError
import math
sys.path.insert(0, os.path.abspath('..'))
sys.path.insert(0, os.path.abspath('../vendored'))
import numpy
import awspricecalculator.awslambda.pricing as lambdapricing
import awspricecalculator.common.models as data
import awspricecalculator.common.consts as consts
import awspricecalculator.common.errors as errors
logsclient = boto3.client('logs')
lambdaclient = boto3.client('lambda')
MONTHLY = "MONTHLY"
MS_MAP = {MONTHLY:(3.6E6)*720}
def main(argv):
region = os.environ['AWS_DEFAULT_REGION']
parser = argparse.ArgumentParser()
parser.add_argument('--function', help='', required=True)
parser.add_argument('--minutes', help='', required=True)
if len(sys.argv) == 1:
parser.print_help()
sys.exit(1)
args = parser.parse_args()
function = ''
minutes = 0 #in minutes
if args.function: function = args.function
if args.minutes: minutes = int(args.minutes)
try:
validate(function, minutes)
except errors.ValidationError as error:
print(error.message)
sys.exit(1)
mem_used_array = []
duration_array = []
prov_mem_size = 0
firstEventTs = 0
lastEventTs = 0
ts_format = "%Y-%m-%d %H:%M:%S UTC"
log_group_name = '/aws/lambda/'+function
try:
i = 0
windowStartTime = (int(time.time()) - minutes * 60) * 1000
firstEventTs = windowStartTime #temporary value, it will be updated once (if) we get results from the CW Logs get_log_events API
lastEventTs = int(time.time() * 1000) #this will also be updated once (if) we get results from the CW Logs get_log_events API
nextLogstreamToken = True
logstreamsargs = {'logGroupName':log_group_name, 'orderBy':'LastEventTime', 'descending':True}
while nextLogstreamToken:
logstreams = logsclient.describe_log_streams(**logstreamsargs)
"""
Read through CW Logs entries and extract information from them.
We're interested in entries that look like this:
REPORT RequestId: 7686bf2c-2f79-11e7-b693-97868a5db36b Duration: 5793.53 ms Billed Duration: 5800 ms Memory Size: 448 MB Max Memory Used: 24 MB
"""
if 'logStreams' in logstreams:
print("Number of logstreams found:[{}]".format(len(logstreams['logStreams'])))
nextLogstreamToken = logstreams.get('nextToken',False)
if nextLogstreamToken: logstreamsargs['nextToken']=nextLogstreamToken
else:logstreamsargs.pop('nextToken',False)
#Go through all logstreams in descending order
for ls in logstreams['logStreams']:
nextEventsForwardToken = True
logeventsargs = {'logGroupName':log_group_name, 'logStreamName':ls['logStreamName'],
'startFromHead':True, 'startTime':windowStartTime}
while nextEventsForwardToken:
logevents = logsclient.get_log_events(**logeventsargs)
if 'events' in logevents:
if len(logevents['events']):
print ("\nEvents for logGroup:[{}] - logstream:[{}] - nextForwardToken:[{}]".format(log_group_name, ls['logStreamName'],nextEventsForwardToken))
for e in logevents['events']:
#Extract lambda execution duration and memory utilization from "REPORT" log events
if 'REPORT RequestId:' in e['message']:
mem_used = e['message'].split('Max Memory Used: ')[1].split()[0]
mem_used_array.append(int(mem_used))
duration = e['message'].split('Billed Duration: ')[1].split()[0]
duration_array.append(int(duration))
if i == 0:
prov_mem_size = int(e['message'].split('Memory Size: ')[1].split()[0])
firstEventTs = e['timestamp']
lastEventTs = e['timestamp']
else:
if e['timestamp'] < firstEventTs: firstEventTs = e['timestamp']
if e['timestamp'] > lastEventTs: lastEventTs = e['timestamp']
print ("mem_used:[{}] - mem_size:[{}] - timestampMs:[{}] - timestamp:[{}]".format(mem_used,prov_mem_size, e['timestamp'], time.strftime(ts_format, time.gmtime(e['timestamp']/1000))))
print e['message']
i += 1
else: break
nextEventsForwardToken = logevents.get('nextForwardToken',False)
if nextEventsForwardToken: logeventsargs['nextToken']=nextEventsForwardToken
else: logeventsargs.pop('nextToken',False)
#Once we've iterated through all log streams and log events, calculate averages, cost and optimization scenarios
avg_used_mem = 0
avg_duration_ms = 0
p90_duration_ms = 0
p99_duration_ms = 0
p100_duration_ms = 0
if mem_used_array: avg_used_mem = math.ceil(numpy.average(mem_used_array))
if duration_array:
avg_duration_ms = round(math.ceil(numpy.average(duration_array)),0)
p90_duration_ms = round(math.ceil(numpy.percentile(duration_array, 90)),0)
p99_duration_ms = round(math.ceil(numpy.percentile(duration_array, 99)),0)
p100_duration_ms = round(math.ceil(numpy.percentile(duration_array, 100)),0)
base_usage = LambdaSampleUsage(region, i, avg_duration_ms, avg_used_mem, prov_mem_size, firstEventTs, lastEventTs, MONTHLY)
memoptims= []
durationoptims = []
current_cost = 0
for m in get_lower_possible_memory_ranges(avg_used_mem, prov_mem_size):
#TODO: add target memory % utilization (i.e. I want to use 60% of memory and see how much that'll save me)
memoptims.append(LambdaUtilScenario(base_usage, base_usage.avgDurationMs, m).__dict__)
for d in get_lower_possible_durations(avg_duration_ms, 100):
durationoptims.append(LambdaUtilScenario(base_usage, d, base_usage.memSizeMb).__dict__)
optim_info = {"sampleUsage":base_usage.__dict__,
"memoryOptimizationScenarios":memoptims,
"durationOptimizationScenarios":durationoptims
}
#print(json.dumps(optim_info,sort_keys=False,indent=4))
print ("avg_duration_ms:[{}] avg_used_mem:[{}] prov_mem_size:[{}] records:[{}]".format(avg_duration_ms, avg_used_mem,prov_mem_size,i))
print ("p90_duration_ms:[{}] p99_duration_ms:[{}] p100_duration_ms:[{}]".format(p90_duration_ms, p99_duration_ms, p100_duration_ms))
print ("------------------------------------------------------------------------------------")
print ("OPTIMIZATION SUMMARY\n")
print ("**Data sample used for calculation:**")
print ("CloudWatch Log Group: [{}]\n" \
"First Event time:[{}]\n" \
"Last Event time:[{}]\n" \
"Number of executions:[{}]\n" \
"Average executions per second:[{}]".\
format(log_group_name,
time.strftime(ts_format, time.gmtime(base_usage.startTs/1000)),
time.strftime(ts_format, time.gmtime(base_usage.endTs/1000)),
base_usage.requestCount, base_usage.avgTps))
print ("\n**Usage for Lambda function [{}] in the sample period is the following:**".format(function))
print ("Average duration per Lambda execution: {}ms\n" \
"Average consumed memory per execution: {}MB\n" \
"Configured memory in your Lambda function: {}MB\n" \
"Memory utilization (used/allocated): {}%\n" \
"Total projected cost: ${}USD - {}".\
format(base_usage.avgDurationMs, base_usage.avgMemUsedMb,base_usage.memSizeMb,
base_usage.memUsedPct,base_usage.projectedCost, base_usage.projectedPeriod))
if memoptims:
print ("\nThe following Lambda memory configurations could save you money (assuming constant execution time)")
labels = ['memSizeMb', 'memUsedPct', 'cost', 'timePeriod', 'savingsAmt']
print ("\n"+ResultsTable(memoptims,labels).dict2md())
if durationoptims:
print ("\n\nCan you make your function execute faster? The following Lambda execution durations will save you money (assuming memory allocation remains constant):")
labels = ['durationMs', 'cost', 'timePeriod', 'savingsAmt']
print ("\n"+ResultsTable(durationoptims,labels).dict2md())
print ("------------------------------------------------------------------------------------")
except Exception as e:
traceback.print_exc()
print("Exception message:["+str(e.message)+"]")
"""
Get the possible Lambda memory configurations values that would:
- Be lower than the current provisioned value (thus, cheaper)
- Are greater than the current average used memory (therefore won't result in memory errors)
"""
def get_lower_possible_memory_ranges(usedMem, provMem):
result = []
for m in consts.LAMBDA_MEM_SIZES:
if usedMem < float(m) and m < provMem:
result.append(m)
return result
"""
Get the possible Lambda execution values that would:
- Be lower than the current average duration (thus, cheaper)
- Are greater than a lower limit set by the user of the script (there's only so much one can do to make a function run faster)
"""
def get_lower_possible_durations(usedDurationMs, lowestDuration):
result = []
initBilledDurationMs = math.floor(usedDurationMs / 100) * 100
d = int(initBilledDurationMs)
while d >= lowestDuration:
result.append(d)
d -= 100
return result
#TODO:Move to a different file
#This class models the usage for a Lambda function within a time window defined by startTs and endTs
class LambdaSampleUsage():
def __init__(self, region, requestCount, avgDurationMs, avgMemUsedMb, memSizeMb, startTs, endTs, projectedPeriod):
self.region = region
self.requestCount = 0
if requestCount: self.requestCount = requestCount
self.avgDurationMs = 0
if avgDurationMs: self.avgDurationMs = int(avgDurationMs)
self.avgMemUsedMb = int(avgMemUsedMb)
self.memSizeMb = memSizeMb
self.memUsedPct = 0.00
if memSizeMb: self.memUsedPct = round(float(100 * avgMemUsedMb/memSizeMb),2)
self.startTs = startTs
self.endTs = endTs
self.elapsedMs = endTs - startTs
self.avgTps = 0
if self.elapsedMs:
self.avgTps = round((1000 * float(self.requestCount) / float(self.elapsedMs)),4)
self.projectedPeriod = projectedPeriod
self.projectedRequestCount = self.get_projected_request_count(requestCount)
args = {'region':region, 'requestCount':self.projectedRequestCount,
'avgDurationMs':math.ceil(avgDurationMs/100)*100, 'memoryMb':memSizeMb}
print ("args: {}".format(args))
self.projectedCost = lambdapricing.calculate(data.LambdaPriceDimension(**args))['totalCost']
def get_projected_request_count(self,requestCount):
result = 0
print ("elapsed_ms:[{}] - period: [{}]".format(self.elapsedMs, self.projectedPeriod))
if self.elapsedMs:
result = float(requestCount)*(MS_MAP[self.projectedPeriod]/self.elapsedMs)
return int(result)
"""
This class represents usage scenarios that will be modeled and displayed as possibilities, so the user can decide
if they're good options (or not) .
"""
class LambdaUtilScenario():
def __init__(self, base_usage, proposedDurationMs, proposedMemSizeMb):
self.memSizeMb = proposedMemSizeMb
self.memUsedPct = 0
self.memUsedPct = float(100 * base_usage.avgMemUsedMb/proposedMemSizeMb)
self.durationMs = proposedDurationMs
args = {'region':base_usage.region, 'requestCount':base_usage.projectedRequestCount,
'avgDurationMs':self.durationMs, 'memoryMb':proposedMemSizeMb}
self.cost= lambdapricing.calculate(data.LambdaPriceDimension(**args))['totalCost']
self.savingsAmt = round((base_usage.projectedCost - self.cost),2)
self.savingsPct = 0.00
if base_usage.projectedCost:
self.savingsPct = round((100 * self.savingsAmt / base_usage.projectedCost),2)
self.timePeriod = MONTHLY
def get_next_mem(self, memUsedMb):
result = 0
for m in consts.LAMBDA_MEM_SIZES:
if memUsedMb <= float(m):
result = m
break
return result
"""
This class takes an array of dictionary objects, so they can be converted to table format using Markdown syntax.
It also takes an optional array of labels, if you want to limit the output to a subset of keys in each dictionary.
The output is something like this:
key1| key2| key3
---| ---| ---
01| 02| 03
04| 05| 06
07| 08| 09
"""
class ResultsTable():
def __init__(self, records,labels):
self.records = records
self.labels = []
if labels:
self.labels = labels
#Converts an array of dictionaries to Markdown format.
def dict2md(self):
result = ""
keys = []
if self.labels:
keys = self.labels
else:
if self.records: keys = self.records[0].keys()
rc = 0 #rowcount
mdrow = "" #markdown headers row
self.records.insert(0,[])#insert dummy record at the beginning, since first record in loop is used to create row headers
for r in self.records:
#if rc==0: keys = r.keys()
cc = 0 #column count
for k in keys:
cc += 1
if rc==0:
result += k
mdrow += self.addpadding(k,'---')
else:
result += self.addpadding(k,r[k])
if cc == len(keys):
result += "\n"
mdrow += "\n"
else:
result += "|"
mdrow += "|"
if rc==0: result += mdrow
rc += 1
return result
"""
def dict2md(self):
result = ""
keys = []
if self.labels:
keys = self.labels
else:
if self.records: keys = self.records[0].keys()
rc = 0 #rowcount
mdrow = "" #markdown headers row
for r in self.records:
#if rc==0: keys = r.keys()
cc = 0 #column count
for k in keys:
cc += 1
if rc==0:
result += k
mdrow += self.addpadding(k,'---')
else:
result += self.addpadding(k,r[k])
if cc == len(keys):
result += "\n"
mdrow += "\n"
else:
result += "|"
mdrow += "|"
if rc==0: result += mdrow
rc += 1
return result
"""
def addpadding(self,label,value):
padding = ""
i = 0
while i < (len(label)-len(str(value))):
padding += " "
i += 1
return padding + str(value)
def validate(function, minutes):
validation_ok = True
validation_message = "\nValidationError:\n"
try:
lambdaclient.get_function(FunctionName=function)
except ClientError as e:
if e.response['Error']['Code'] == 'ResourceNotFoundException':
validation_message += "Function [{}] does not exist, please enter a valid Lambda function\n".format(function)
else:
validation_message += "Boto3 client error when calling lambda.get_function"
validation_ok = False
if minutes <1:
validation_message += "Minutes must be greater than 0\n"
validation_ok = False
if not validation_ok:
raise errors.ValidationError(validation_message)
return validation_ok
if __name__ == "__main__":
main(sys.argv[1:])
================================================
FILE: scripts/redshift-pricing.py
================================================
#!/usr/bin/python
import sys, os, getopt, json, logging
import argparse
import traceback
sys.path.insert(0, os.path.abspath('..'))
import awspricecalculator.redshift.pricing as redshiftpricing
import awspricecalculator.common.consts as consts
import awspricecalculator.common.models as data
import awspricecalculator.common.utils as utils
from awspricecalculator.common.errors import ValidationError
from awspricecalculator.common.errors import NoDataFoundError
log = logging.getLogger()
logging.basicConfig()
log.setLevel(logging.DEBUG)
def main(argv):
parser = argparse.ArgumentParser()
parser.add_argument('--region', help='', required=False)
parser.add_argument('--regions', help='', required=False)
parser.add_argument('--sort-criteria', help='', required=False)
parser.add_argument('--instance-type', help='', required=False)
parser.add_argument('--instance-types', help='', required=False)
parser.add_argument('--instance-hours', help='', type=int, required=False)
parser.add_argument('--ebs-volume-type', help='', required=False)
parser.add_argument('--ebs-storage-gb-month', help='', required=False)
parser.add_argument('--piops', help='', type=int, required=False)
parser.add_argument('--data-transfer-out-internet-gb', help='', required=False)
parser.add_argument('--data-transfer-out-intraregion-gb', help='', required=False)
parser.add_argument('--data-transfer-out-interregion-gb', help='', required=False)
parser.add_argument('--to-region', help='', required=False)
parser.add_argument('--term-type', help='', required=False)
parser.add_argument('--offering-class', help='', required=False)
parser.add_argument('--offering-classes', help='', required=False)
parser.add_argument('--instance-count', help='', type=int, required=False)
parser.add_argument('--years', help='', required=False)
parser.add_argument('--offering-type', help='', required=False)
parser.add_argument('--offering-types', help='', required=False)
if len(sys.argv) == 1:
parser.print_help()
sys.exit(1)
args = parser.parse_args()
region = ''
regions = ''
instanceType = ''
instanceTypes = ''
instanceHours = 0
instanceCount = 0
sortCriteria = ''
ebsVolumeType = ''
ebsStorageGbMonth = 0
pIops = 0
dataTransferOutInternetGb = 0
dataTransferOutIntraRegionGb = 0
dataTransferOutInterRegionGb = 0
toRegion = ''
termType = consts.SCRIPT_TERM_TYPE_ON_DEMAND
offeringClass = ''
offeringClasses = consts.SUPPORTED_REDSHIFT_OFFERING_CLASSES #only used for Reserved comparisons (standard, convertible)
offeringType = ''
offeringTypes = consts.EC2_SUPPORTED_PURCHASE_OPTIONS #only used for Reserved comparisons (all-upfront, partial-upfront, no-upfront)
years = 1
if args.region: region = args.region
if args.regions: regions = args.regions
if args.sort_criteria: sortCriteria = args.sort_criteria
if args.instance_type: instanceType = args.instance_type
if args.instance_types: instanceTypes = args.instance_types
if args.instance_hours: instanceHours = int(args.instance_hours)
if args.ebs_volume_type: ebsVolumeType = args.ebs_volume_type
if args.ebs_storage_gb_month: ebsStorageGbMonth = int(args.ebs_storage_gb_month)
if args.piops: pIops = int(args.piops)
if args.data_transfer_out_internet_gb: dataTransferOutInternetGb = int(args.data_transfer_out_internet_gb)
if args.data_transfer_out_intraregion_gb: dataTransferOutIntraRegionGb = int(args.data_transfer_out_intraregion_gb)
if args.data_transfer_out_interregion_gb: dataTransferOutInterRegionGb = int(args.data_transfer_out_interregion_gb)
if args.to_region: toRegion = args.to_region
if args.term_type: termType = args.term_type
if args.offering_class: offeringClass = args.offering_class
if args.offering_classes: offeringClasses = args.offering_classes.split(',')
if args.instance_count: instanceCount = args.instance_count
if args.offering_type: offeringType = args.offering_type
if args.offering_types: offeringTypes = args.offering_types.split(',')
if args.years: years = str(args.years)
#TODO: Implement comparison between a subset of regions by entering an array of regions to compare
#TODO: Implement a sort by target region (for data transfer)
#TODO: For Reserved pricing, include a payment plan throughout the whole period, and a monthly average and savings
#TODO: remove EBS for Redshift
try:
kwargs = {'sortCriteria':sortCriteria, 'instanceType':instanceType, 'instanceTypes':instanceTypes,
'instanceHours':instanceHours, 'dataTransferOutInternetGb':dataTransferOutInternetGb, 'pIops':pIops,
'dataTransferOutIntraRegionGb':dataTransferOutIntraRegionGb, 'dataTransferOutInterRegionGb':dataTransferOutInterRegionGb,
'toRegion':toRegion, 'termType':termType, 'instanceCount': instanceCount, 'years': years, 'offeringType':offeringType,
'offeringClass':offeringClass
}
if region: kwargs['region'] = region
if sortCriteria:
if sortCriteria in (consts.SORT_CRITERIA_TERM_TYPE, consts.SORT_CRITERIA_TERM_TYPE_REGION):
if sortCriteria == consts.SORT_CRITERIA_TERM_TYPE_REGION:
#TODO: validate that region list is comma-separated
#TODO: move this list to utils.compare_term_types
if regions: kwargs['regions'] = regions.split(',')
else: kwargs['regions']=consts.SUPPORTED_REGIONS
kwargs['purchaseOptions'] = offeringTypes #purchase options are referred to as offering types in the EC2 API
kwargs['offeringClasses']=offeringClasses
validate (kwargs)
termPricingAnalysis = utils.compare_term_types(service=consts.SERVICE_REDSHIFT, **kwargs)
tabularData = termPricingAnalysis.pop('tabularData')
print ("Redshift termpPricingAnalysis: [{}]".format(json.dumps(termPricingAnalysis,sort_keys=False, indent=4)))
print("csvData:\n{}\n".format(termPricingAnalysis['csvData']))
print("tabularData:\n{}".format(tabularData))
else:
validate (kwargs)
pricecomparisons = utils.compare(service=consts.SERVICE_REDSHIFT,**kwargs)
print("Price comparisons:[{}]".format(json.dumps(pricecomparisons, indent=4)))
else:
validate (kwargs)
redshift_pricing = redshiftpricing.calculate(data.RedshiftPriceDimension(**kwargs))
print(json.dumps(redshift_pricing,sort_keys=False,indent=4))
except NoDataFoundError as ndf:
print ("NoDataFoundError args:[{}]".format(args))
except Exception as e:
traceback.print_exc()
print("Exception message:["+str(e)+"]")
"""
This function contains validations at the script level. No need to validate Redshift parameters, since
class RedshiftPriceDimension already contains a validation function.
"""
def validate (args):
#TODO: add - if termType sort criteria is specified, don't include offeringClass (singular)
#TODO: add - if offeringTypes is included, have at least one valid offeringType (purchase option)
#TODO: move to models or a common place that can be used by both CLI and API
validation_msg = ""
if args.get('sortCriteria','') == consts.SORT_CRITERIA_TERM_TYPE:
if args.get('instanceHours',False):
validation_msg = "instance-hours cannot be set when sort-criteria=term-type"
if args.get('offeringType',False):
validation_msg = "offering-type cannot be set when sort-criteria=term-type - try offering-types (plural) instead"
if not args.get('years',''):
validation_msg = "years cannot be empty"
if args.get('sortCriteria','') == consts.SORT_CRITERIA_TERM_TYPE_REGION:
if not args.get('offeringClasses',''):
validation_msg = "offering-classes cannot be empty"
if not args.get('purchaseOptions',''):
validation_msg = "offering-types cannot be empty"
if validation_msg:
print("Error: [{}]".format(validation_msg))
raise ValidationError(validation_msg)
return
if __name__ == "__main__":
main(sys.argv[1:])
================================================
FILE: serverless.env.yml
================================================
# This is the Serverless Environment File
#
# It contains listing of your stages, and their regions
# It also manages serverless variables at 3 levels:
# - common variables: variables that apply to all stages/regions
# - stage variables: variables that apply to a specific stage
# - region variables: variables that apply to a specific region
vars:
stages:
dev:
vars:
regions:
us-east-1:
vars:
================================================
FILE: serverless.yml
================================================
# Welcome to Serverless!
#
# This file is the main config file for your service.
# It's very minimal at this point and uses default values.
# You can always add more config options for more control.
# We've included some commented out config examples here.
# Just uncomment any of them to get that config option.
#
# For full config options, check the docs:
# v1.docs.serverless.com
#
# Happy Coding!
service: aws-pricing # NOTE: update this with your service name
provider:
name: aws
runtime: python3.6
# you can add statements to the Lambda function's IAM Role here
iamRoleStatements:
- Effect: "Allow"
Action:
- cloudwatch:*
- ec2:Describe*
- elasticloadbalancing:Describe*
- autoscaling:Describe*
- rds:Describe*
- rds:List*
- dynamodb:Describe*
- dynamodb:List*
- kinesis:Describe*
- kinesis:List*
- lambda:GetFunctionConfiguration
- tag:getResources
- tag:getTagKeys
- tag:getTagValues
Resource: "*"
# you can overwrite defaults here
#defaults:
# stage: dev
# region: us-east-1
# you can add packaging information here
package:
exclude:
- bin/*
- lib/**
- .git/**
- .idea/**
- include/**
- pip-selfcheck.json
- awspricecalculator/s3
- awspricecalculator/data/ec2/index.*
- awspricecalculator/data/ec2/*Dedicated*
- awspricecalculator/data/ec2/*Host*
- awspricecalculator/data/ec2/*Reserved*
- awspricecalculator/data/ec2/*Savings*
- awspricecalculator/data/rds/index.*
- awspricecalculator/data/rds/*Reserved*
- awspricecalculator/data/s3/index.*
- awspricecalculator/data/lambda/index.*
- awspricecalculator/data/dynamodb/index.*
- awspricecalculator/data/kinesis/index.*
- scripts/**
- test/**
- cloudformation/**
- readme.txt
include:
- vendored
functions:
nearrealtimepricing:
handler: functions/calculate-near-realtime.handler
name: calculate-near-realtime-pricing
timeout: 300
memory: 1024
events:
- schedule:
rate: rate(5 minutes)
enabled: true
input:
tag:
key: ${env:PRICING_TAG_KEY}
value: ${env:PRICING_TAG_VALUE}
# you can add CloudFormation resource templates here
#resources:
# Resources:
# NewResource:
# Type: AWS::S3::Bucket
# Properties:
# BucketName: my-new-bucket
# Outputs:
# NewOutput:
# Description: "Description for the output"
# Value: "Some output value"
================================================
FILE: setup.py
================================================
from setuptools import setup
setup(name='awspricecalculator',
version='0.1',
description='AWS Price List calculations',
url='https://github.com/ConcurrenyLabs/aws-pricing-tools/tree/master/awspricecalculator',
author='Concurrency Labs',
author_email='github@concurrencylabs.com',
license='GNU',
packages=['awspricecalculator','awspricecalculator.common',
'awspricecalculator.awslambda','awspricecalculator.ec2','awspricecalculator.rds', 'awspricecalculator.emr',
'awspricecalculator.redshift', 'awspricecalculator.s3','awspricecalculator.dynamodb',
'awspricecalculator.kinesis', 'awspricecalculator.datatransfer'],
include_package_data=True,
zip_safe=False)
================================================
FILE: test/events/constant-tag.json
================================================
{
"tag": {
"key": "",
"value": ""
}
}