Repository: adiralashiva8/robotframework-metrics
Branch: master
Commit: 593a0d6ab9cd
Files: 16
Total size: 57.3 KB

Directory structure:
gitextract_e0lyx8s4/
├── .gitignore
├── LICENSE
├── MANIFEST.in
├── README.md
├── robotframework_metrics/
│   ├── __init__.py
│   ├── dashboard_stats.py
│   ├── details.py
│   ├── keyword_results.py
│   ├── keyword_times.py
│   ├── robotmetrics.py
│   ├── runner.py
│   ├── suite_results.py
│   ├── templates/
│   │   └── index.html
│   ├── test_results.py
│   └── version.py
└── setup.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# Created by https://www.gitignore.io/api/python
# Edit at https://www.gitignore.io/?templates=python

### Python ###
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

### Python Patch ###
.venv/

# End of https://www.gitignore.io/api/python

*.xml
metrics.html

================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2019 Shiva Prasad Adirala

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
================================================
FILE: MANIFEST.in
================================================
recursive-include robotframework_metrics/templates *
include MANIFEST.in

================================================
FILE: README.md
================================================

# Robot Framework Metrics

Custom HTML report (dashboard view) generated by parsing the Robot Framework `output.xml` file.



# 📔 Table of Contents

- [About the Project](#-about-the-project)
  * [Screenshots](#-screenshots)
  * [Tech Stack](#-tech-stack)
  * [Features](#-features)
- [Getting Started](#-getting-started)
  * [Installation](#-installation)
- [Usage](#usage)
  * [Continuous Integration (CI) Setup](#-cisetup)
- [Contact](#-contact)
- [Acknowledgements](#-acknowledgements)

## 🌟 About the Project

`Robot Framework Metrics` is a tool that generates a comprehensive `HTML report` from Robot Framework's `output.xml` file. The report provides a __dashboard view__ with detailed insights into your test executions, including __suite__ statistics, __test case__ results, and __keyword__ performance.

### 📷 Screenshots

![Metrics Report](https://github.com/adiralashiva8/robotframework-metrics/blob/master/metrics.png)

### 🛠️ Tech Stack

Python · pandas · Jinja2 · Robot Framework
### 🎯 Features

- *Custom HTML Report:* Create a visually appealing and informative dashboard.
- *Detailed Metrics:* Access suite, test case, and keyword statistics, status, and elapsed time.
- *Support for RF7:* Fully compatible with Robot Framework 7 (from v3.5.0 onwards).
- *Command-Line Interface:* Easy-to-use CLI for report generation.

## 🧰 Getting Started

### ⚙️ Installation

You can install `robotframework-metrics` using one of the following methods:

__Method 1__: Latest Development Version (**Recommended**, for the latest features and RF7 support)
```
pip install git+https://github.com/adiralashiva8/robotframework-metrics
```

__Method 2__: Using pip
```
pip install robotframework-metrics==3.7.0
```

__Method 3__: From Source (clone the repository and install using setup.py)
```
git clone https://github.com/adiralashiva8/robotframework-metrics.git
cd robotframework-metrics
python setup.py install
```

## 👀 Usage

After executing your Robot Framework tests, generate a metrics report by running:

__Default Configuration__: If `output.xml` is in the current directory
```
robotmetrics
```

__Custom Path__: If `output.xml` is located in a different directory
```
robotmetrics --inputpath ./Result/ --output output1.xml
```

For more options:
```
robotmetrics --help
```

### 🧪 Continuous Integration (CI) Setup

To automate report generation in CI/CD pipelines, add the following steps to your pipeline configuration:

1. Run tests with Robot Framework
2. Generate the metrics report

```
robot test.robot & robotmetrics [:options]
```
> `&` is used to execute multiple commands in a .bat file

## 🤝 Contact

For any questions, suggestions, or feedback, please contact:

- Email: `adiralashiva8@gmail.com`

## 💎 Acknowledgements

Special thanks to the following individuals for their guidance, contributions, and feedback:

*Idea, Guidance and Support:*

- Steve Fisher
- Goutham Duduka

*Contributors:*

1. [Pekka Klarck](https://www.linkedin.com/in/pekkaklarck/) [Author of robotframework]
2. [Ruud Prijs](https://www.linkedin.com/in/ruudprijs/)
3. [Jesse Zacharias](https://www.linkedin.com/in/jesse-zacharias-7926ba50/)
4. [Bassam Khouri](https://www.linkedin.com/in/bassamkhouri/)
5. [Francesco Spegni](https://www.linkedin.com/in/francesco-spegni-34b39b61/)
6. [Sreelesh Kunnath](https://www.linkedin.com/in/kunnathsree/)

*Feedback:*

1. [Mantri Sri](https://www.linkedin.com/in/mantri-sri-4a0196133/)
2. [Prasad Ozarkar](https://www.linkedin.com/in/prasad-ozarkar-b4a61017/)
3. [Suresh Parimi](https://www.linkedin.com/in/sparimi/)
4. [Amit Lohar](https://github.com/amitlohar)
5. [Robotframework community users](https://groups.google.com/forum/#!forum/robotframework-users)

---

⭐ Star this repository if you find it useful!
(it motivates)

================================================
FILE: robotframework_metrics/__init__.py
================================================

================================================
FILE: robotframework_metrics/dashboard_stats.py
================================================
import pandas as pd
import numpy as np


class Dashboard:

    @classmethod
    def get_suite_statistics(cls, suite_list):
        suite_data_frame = pd.DataFrame.from_records(suite_list)
        suite_stats = {
            "Total": (suite_data_frame.Name).count(),
            "Pass": (suite_data_frame.Status == 'PASS').sum(),
            "Fail": (suite_data_frame.Status == 'FAIL').sum(),
            "Skip": (suite_data_frame.Status == 'SKIP').sum(),
            "Time": (suite_data_frame.Time).sum(),
            "Min": (suite_data_frame.Time).min(),
            "Max": (suite_data_frame.Time).max(),
            "Avg": (suite_data_frame.Time).mean(),
        }
        return suite_stats

    @classmethod
    def get_test_statistics(cls, test_list):
        test_data_frame = pd.DataFrame.from_records(test_list)
        test_stats = {
            "Total": (test_data_frame.Status).count(),
            "Pass": (test_data_frame.Status == 'PASS').sum(),
            "Fail": (test_data_frame.Status == 'FAIL').sum(),
            "Skip": (test_data_frame.Status == 'SKIP').sum(),
            "Time": (test_data_frame.Time).sum(),
            "Min": (test_data_frame.Time).min(),
            "Max": (test_data_frame.Time).max(),
            "Avg": (test_data_frame.Time).mean(),
        }
        return test_stats

    @classmethod
    def get_keyword_statistics(cls, kw_list):
        kw_data_frame = pd.DataFrame.from_records(kw_list)
        if not kw_data_frame.empty:
            kw_stats = {
                "Total": (kw_data_frame.Status).count(),
                "Pass": (kw_data_frame.Status == 'PASS').sum(),
                "Fail": (kw_data_frame.Status == 'FAIL').sum(),
                "Skip": (kw_data_frame.Status == 'SKIP').sum(),
            }
        else:
            kw_stats = {"Total": 0, "Pass": 0, "Fail": 0, "Skip": 0}
        return kw_stats

    def suite_error_statistics(self, suite_list):
        suite_data_frame = pd.DataFrame.from_records(suite_list)
        required_data_frame = pd.DataFrame(suite_data_frame, columns=['Name', 'Total', 'Fail'])
        required_data_frame['percent'] = (required_data_frame['Fail'] / required_data_frame['Total']) * 100
        filtered_data_frame = required_data_frame[required_data_frame['Fail'] > 0]
        return (filtered_data_frame
                .sort_values(by=['Fail'], ascending=[False], ignore_index=True)
                .head(10)
                .reset_index(drop=True))

    def get_execution_info(self, test_list):
        data_frame = pd.DataFrame.from_records(test_list)
        data_frame['start_time'] = pd.to_datetime(data_frame['start_time'])
        data_frame['end_time'] = pd.to_datetime(data_frame['end_time'])
        initial_start_time = data_frame['start_time'].min()
        final_end_time = data_frame['end_time'].max()
        overall_execution_time = final_end_time - initial_start_time
        return [initial_start_time, final_end_time, overall_execution_time]

    def get_test_execution_trends(self, test_list):
        data_frame = pd.DataFrame.from_records(test_list)
        num_bins = 10
        min_time = round(data_frame['Time'].min() / 60000, 2)
        max_time = round(data_frame['Time'].max() / 60000, 2)
        if max_time == min_time:
            max_time += 0.1
        bins = np.linspace(min_time, max_time, num_bins + 1)
        labels = [f'{round(bins[i], 0)} - {round(bins[i+1], 0)} min' for i in range(len(bins) - 1)]
        data_frame['time_group'] = pd.cut(round(data_frame['Time'] / 60000, 2), bins=bins,
                                          labels=labels, include_lowest=True, ordered=False)
        result = data_frame.groupby('time_group').size().reset_index(name='test_case_count')
        return result
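For orientation, here is a minimal usage sketch (not part of the repository): the records below are hypothetical, but they use the same keys (with `Time` in milliseconds) that `suite_results.py` collects for each suite.

```
from robotframework_metrics.dashboard_stats import Dashboard

# Hypothetical suite records shaped like the ones SuiteResults produces.
suite_list = [
    {"Name": "Shop.Login",  "Status": "PASS", "Total": 3, "Pass": 3, "Fail": 0, "Skip": 0, "Time": 12000},
    {"Name": "Shop.Search", "Status": "FAIL", "Total": 2, "Pass": 1, "Fail": 1, "Skip": 0, "Time": 8000},
]

dashboard = Dashboard()
suite_stats = dashboard.get_suite_statistics(suite_list)
print(suite_stats["Total"], suite_stats["Pass"], suite_stats["Fail"])  # 2 1 1

# Top-10 suites ranked by failure count, with a computed failure percentage.
failed = dashboard.suite_error_statistics(suite_list)
print(failed[["Name", "Fail", "percent"]])
```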
================================================
FILE: robotframework_metrics/details.py
================================================
from datetime import timedelta

from robot.api import ResultVisitor
from robot.result.model import Keyword


class SuiteReportVisitor(ResultVisitor):

    def __init__(self, details_list):
        self.test_report = details_list

    def visit_suite(self, suite):
        self.tests = []
        # Traverse each test in the suite
        for test in suite.tests:
            # Reset per test so each test reports only its own keywords
            self.keywords = []
            # Traverse each keyword in the test
            for keyword in test.body:
                if isinstance(keyword, Keyword):
                    _current_keyword = {
                        'keyword_name': keyword.name,
                        'keyword_status': keyword.status,
                        'keyword_start_time': keyword.starttime,
                        'keyword_end_time': keyword.endtime,
                        'keyword_elapsed_time': str(timedelta(milliseconds=keyword.elapsedtime)),
                        'keyword_documentation': keyword.doc,
                        'keyword_message': keyword.message if keyword.message else "",
                    }
                    self.keywords.append(_current_keyword)
            _current_test = {
                'test_name': test.name,
                'test_id': test.id,
                'start_time': test.starttime,
                'end_time': test.endtime,
                'elapsed_time': str(timedelta(milliseconds=test.elapsedtime)),
                'status': test.status,
                'tags': ", ".join(test.tags),
                'documentation': test.doc,
                'message': test.message if test.message else "",
                'keywords': self.keywords,
            }
            self.tests.append(_current_test)
        tests_info = {
            'suite_name': suite.longname,
            'suite_id': suite.id,
            'start_time': suite.starttime,
            'end_time': suite.endtime,
            'elapsed_time': str(timedelta(milliseconds=suite.elapsedtime)),
            'status': suite.status,
            'pass_count': suite.statistics.passed,
            'fail_count': suite.statistics.failed,
            'skip_count': suite.statistics.skipped,
            'total': suite.statistics.total,
            'message': suite.message if suite.message else "",
            'tests': self.tests,
        }
        self.test_report.append(tests_info)
        # Recursively visit nested suites
        for child_suite in suite.suites:
            child_suite.visit(self)

================================================
FILE: robotframework_metrics/keyword_results.py
================================================
from robot.api import ResultVisitor


class KeywordResults(ResultVisitor):

    def __init__(self, kw_list, ignore_library, ignore_type):
        self.kw_list = kw_list
        self.ignore_library = ignore_library
        self.ignore_type = ignore_type

    def start_keyword(self, kw):
        if (kw.libname not in self.ignore_library) and (kw.type not in self.ignore_type):
            kw_json = {
                "Name": kw.name,
                "Status": kw.status,
                "Time": kw.elapsedtime,
            }
            self.kw_list.append(kw_json)

================================================
FILE: robotframework_metrics/keyword_times.py
================================================
import pandas as pd


class KeywordTimes:

    def get_keyword_times(self, kw_list):
        keywords_data_frame = pd.DataFrame.from_records(kw_list)
        if not keywords_data_frame.empty:
            kw_times = (keywords_data_frame
                        .groupby("Name")
                        .agg(times=("Time", "count"),
                             time_min=("Time", "min"),
                             time_max=("Time", "max"),
                             time_mean=("Time", "mean"),
                             fail_count=("Status", lambda x: (x == "FAIL").sum()))
                        .reset_index())
        else:
            kw_times = keywords_data_frame
        return kw_times
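A hypothetical sanity check of the aggregation above (not part of the repository); the input records mirror the shape that `KeywordResults` collects:

```
from robotframework_metrics.keyword_times import KeywordTimes

# Hypothetical keyword records, Time in milliseconds.
kw_list = [
    {"Name": "Open Page", "Status": "PASS", "Time": 1200},
    {"Name": "Open Page", "Status": "FAIL", "Time": 3400},
    {"Name": "Click",     "Status": "PASS", "Time": 150},
]

kw_times = KeywordTimes().get_keyword_times(kw_list)
print(kw_times.to_string(index=False))
# "Open Page" row: times=2, time_min=1200, time_max=3400, time_mean=2300.0, fail_count=1
```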
================================================
FILE: robotframework_metrics/robotmetrics.py
================================================
import os
import logging
import codecs
from datetime import datetime
from robot.api import ExecutionResult
from jinja2 import Environment, FileSystemLoader, Template

from .suite_results import SuiteResults
from .test_results import TestResults
from .keyword_results import KeywordResults
from .keyword_times import KeywordTimes
from .dashboard_stats import Dashboard
from .details import SuiteReportVisitor

templates_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'templates')
file_loader = FileSystemLoader(templates_dir)
env = Environment(loader=file_loader)
template = env.get_template('index.html')

IGNORE_LIBRARIES = ["SeleniumLibrary", "BuiltIn", "Collections", "DateTime",
                    "Dialogs", "OperatingSystem", "Process", "Screenshot",
                    "String", "Telnet", "XML"]
IGNORE_TYPES = ['FOR ITERATION', 'FOR', 'for', 'foritem']

suite_list, test_list, kw_list, kw_times, details_list = [], [], [], [], []


def generate_report(opts):
    logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.INFO)

    # Ignore keywords from the following libraries in the metrics report
    ignore_library = IGNORE_LIBRARIES
    if opts.ignore:
        ignore_library.extend(opts.ignore)

    # Ignore keywords of the following types in the metrics report
    ignore_type = IGNORE_TYPES
    if opts.ignoretype:
        ignore_type.extend(opts.ignoretype)

    # Support a result-file location passed as an argument
    path = os.path.abspath(os.path.expanduser(opts.path))

    # output.xml files
    output_names = []
    # support "*.xml" of output files
    if opts.output == "*.xml":
        for item in os.listdir(path):
            item = os.path.join(path, item)
            if os.path.isfile(item) and item.endswith('.xml'):
                output_names.append(item)
    else:
        for curr_name in opts.output.split(","):
            curr_path = os.path.join(path, curr_name)
            output_names.append(curr_path)

    log_name = opts.log_name

    # copy the list of output_names onto the one of required_files; the latter may (in the future)
    # contain files that should not be processed as output_names
    required_files = list(output_names)
    missing_files = [filename for filename in required_files if not os.path.exists(filename)]
    if missing_files:
        # We have files missing.
        exit("output.xml file is missing: {}".format(", ".join(missing_files)))

    mt_time = datetime.now().strftime('%Y%m%d-%H%M%S')
    # Output result file location
    if opts.metrics_report_name:
        result_file_name = opts.metrics_report_name
    else:
        result_file_name = 'metrics-' + mt_time + '.html'
    result_file = os.path.join(path, result_file_name)

    logging.info(" Converting .xml to .html file. This may take a few minutes...")
    # Read output.xml file
    result = ExecutionResult(*output_names)

    logging.info(" 1 of 4: Capturing suite metrics")
    result.visit(SuiteResults(suite_list))

    logging.info(" 2 of 4: Capturing test metrics")
    result.visit(TestResults(test_list))

    # if opts.showkeyword == "True":
    #     logging.info(" 3 of 4: Capturing keyword metrics")
    #     result.visit(KeywordResults(kw_list, IGNORE_LIBRARIES))
    #     hide_keyword_menu = ""
    # else:
    #     logging.info(" 3 of 4: Ignoring keyword metrics")
    #     result.visit(KeywordResults([], IGNORE_LIBRARIES))
    #     hide_keyword_menu = "hide"

    if opts.showkwtimes == "True":
        logging.info(" 3 of 4: Capturing keyword times metrics")
        result.visit(KeywordResults(kw_list, ignore_library, ignore_type))
        kw_times = KeywordTimes().get_keyword_times(kw_list)
        hide_kw_times_menu = ""
    else:
        kw_times = KeywordTimes().get_keyword_times([])
        hide_kw_times_menu = "hide"

    if opts.showtags == "True":
        hide_tags = ""
    else:
        hide_tags = "hide"

    if opts.showdocs == "True":
        hide_docs = ""
    else:
        hide_docs = "hide"

    logging.info(" 4 of 4: Capturing details")
    result.visit(SuiteReportVisitor(details_list))

    logging.info(" Preparing data for dashboard")
    dashboard_obj = Dashboard()
    suite_stats = dashboard_obj.get_suite_statistics(suite_list)
    test_stats = dashboard_obj.get_test_statistics(test_list)
    kw_stats = dashboard_obj.get_keyword_statistics(kw_list)
    suite_error_stats = dashboard_obj.suite_error_statistics(suite_list)
    execution_stats = dashboard_obj.get_execution_info(test_list)
    test_time_group = dashboard_obj.get_test_execution_trends(test_list)

    logging.info(" Writing results to html file")
    with codecs.open(result_file, 'w', 'utf-8') as fh:
        fh.write(template.render(
            hide_tags=hide_tags,
            hide_docs=hide_docs,
            # hide_keyword_menu=hide_keyword_menu,
            hide_kw_times_menu=hide_kw_times_menu,
            suite_stats=suite_stats,
            log_name=log_name,
            test_stats=test_stats,
            kw_stats=kw_stats,
            suites=suite_list,
            tests=test_list,
            # keywords=kw_list,
            keyword_times=kw_times,
            # error_stats=error_stats,
            suite_error_stats=suite_error_stats,
            suites_list=details_list,
            execution_stats=execution_stats,
            test_time_group=test_time_group,
        ))

    logging.info(" Results file created successfully and can be found at {}".format(result_file))
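`generate_report` only reads attributes off the `opts` object, so it can also be driven without the CLI. A hedged sketch, under the assumption that the attribute names match the argparse `dest` values defined in runner.py (shown next):

```
from types import SimpleNamespace

from robotframework_metrics.robotmetrics import generate_report

# Attribute names mirror the 'dest' values of runner.py's argparse options.
opts = SimpleNamespace(
    ignore=None,               # extra libraries to ignore (None = built-in defaults only)
    ignoretype=None,           # extra keyword types to ignore
    path=".",                  # directory that contains the result files
    output="output.xml",       # comma-separated names, or "*.xml" for every .xml file
    metrics_report_name=None,  # None -> metrics-<timestamp>.html
    showkwtimes="True",
    showtags="False",
    showdocs="False",
    log_name="log.html",
)
generate_report(opts)
```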
================================================
FILE: robotframework_metrics/runner.py
================================================
import os
import argparse

from .robotmetrics import generate_report
from .robotmetrics import IGNORE_LIBRARIES
from .robotmetrics import IGNORE_TYPES
from .version import __version__


def parse_options():
    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    general = parser.add_argument_group("General")
    parser.add_argument(
        '-v', '--version',
        action='store_true',
        dest='version',
        help='Display application version information'
    )
    general.add_argument(
        '--ignorelib',
        dest='ignore',
        default=IGNORE_LIBRARIES,
        nargs="+",
        help="Ignore keywords of specified library in report"
    )
    general.add_argument(
        '--ignoretype',
        dest='ignoretype',
        default=IGNORE_TYPES,
        nargs="+",
        help="Ignore keywords of specified type in report"
    )
    general.add_argument(
        '-I', '--inputpath',
        dest='path',
        default=os.path.curdir,
        help="Path of result files"
    )
    general.add_argument(
        '-M', '--metrics-report-name',
        dest='metrics_report_name',
        help="Output name of the generated metrics report"
    )
    general.add_argument(
        '-O', '--output',
        dest='output',
        default="output.xml",
        help="Name of output.xml"
    )
    # general.add_argument(
    #     '-sk', '--showkeyword',
    #     dest='showkeyword',
    #     default="True",
    #     help="Display keywords in metrics report"
    # )
    general.add_argument(
        '-skt', '--showkwtimes',
        dest='showkwtimes',
        default="True",
        help="Display keyword times in metrics report"
    )
    general.add_argument(
        '-t', '--showtags',
        dest='showtags',
        default="False",
        help="Display test case tags in test metrics"
    )
    general.add_argument(
        '-d', '--showdocs',
        dest='showdocs',
        default="False",
        help="Display test case documentation in test metrics"
    )
    general.add_argument(
        '-L', '--log',
        dest='log_name',
        default='log.html',
        help="Name of log.html"
    )
    args = parser.parse_args()
    return args


def main():
    args = parse_options()
    if args.version:
        print(__version__)
        exit(0)
    generate_report(args)

================================================
FILE: robotframework_metrics/suite_results.py
================================================
from robot.api import ResultVisitor
from robot.utils.markuputils import html_format


class SuiteResults(ResultVisitor):

    def __init__(self, suite_list):
        self.suite_list = suite_list

    def start_suite(self, suite):
        if suite.tests:
            try:
                stats = suite.statistics.all  # older Robot Framework keeps stats under .all
            except AttributeError:
                stats = suite.statistics
            try:
                skipped = stats.skipped
            except AttributeError:
                skipped = 0  # SKIP status does not exist in older Robot Framework versions
            suite_json = {
                "Name": suite.longname,
                "Id": suite.id,
                "Status": suite.status,
                "Documentation": html_format(suite.doc),
                "Total": stats.total,
                "Pass": stats.passed,
                "Fail": stats.failed,
                "Skip": skipped,
                "Time": suite.elapsedtime,
            }
            self.suite_list.append(suite_json)

================================================
FILE: robotframework_metrics/templates/index.html
================================================
[The HTML markup of this template did not survive extraction; only the visible text and the Jinja2 expressions remain. The recoverable structure is outlined below.]

Robot Metrics

Dashboard

  Test Status:
    | Total | Pass | Fail | Skip |
    | {{test_stats['Total']}} | {{test_stats['Pass']}} | {{test_stats['Fail']}} | {{test_stats['Skip']}} |

  Suite Status:
  Keyword Status:

  Execution Duration (m):
    | Type  | Min | Max | Avg |
    | Suite | {{(suite_stats['Min']/60000)|round(2)}} | {{(suite_stats['Max']/60000)|round(2)}} | {{(suite_stats['Avg']/60000)|round(2)}} |
    | Test  | {{(test_stats['Min']/60000)|round(2)}} | {{(test_stats['Max']/60000)|round(2)}} | {{(test_stats['Avg']/60000)|round(2)}} |

  Execution Info:
    | Action     | Time                     |
    | Start Time | {{ execution_stats[0] }} |
    | End Time   | {{ execution_stats[1] }} |
    | Duration   | {{ execution_stats[2] }} |

  Top 10 Failed Suites:
  Test Count By Elapsed Time:

Suite Metrics

  Columns: Name | Status | Total | Pass | Fail | Skip | Time (s)
  {% for suite in suites %} (one row per suite; the Status cell is rendered by an
  {% if %}/{% elif %}/{% else %} branch on suite['Status'] for "PASS" / "FAIL" / other):
    {{ suite['Name'] }} | {{ suite['Status'] }} | {{ suite['Total'] }} | {{ suite['Pass'] }} | {{ suite['Fail'] }} | {{ suite['Skip'] }} | {{ (suite['Time']/1000)|round(2) }}
  {% endfor %}

Test Metrics

  Columns: Suite Name | Test Name | Status | Time (s) | Message | Tags
  {% for test in tests %} (one row per test, with the same status-dependent rendering):
    {{ test['Suite Name'] }} | {{ test['Test Name'] }} | {{ test['Status'] }} | {{ (test['Time']/1000)|round(2) }} | {{ test['Message'] }} | {{ test['Tags'] }}
  {% endfor %}

KW Times Metrics

  Columns: Keyword Name | Times | Fail Count | Min Duration(s) | Max Duration(s) | Average Duration(s)
  {% if not keyword_times.empty %}{% for key, value in keyword_times.iterrows() %}
    {{ value['Name'] }} | {{ value['times'] }} | {{ value['fail_count'] }} | {{ (value['time_min']/1000)|round(2) }} | {{ (value['time_max']/1000)|round(2) }} | {{ (value['time_mean']/1000)|round(2) }}
  {% endfor %}{% endif %}

Details

  {% for suite in suites_list %}{% if suite['tests'] %}
    {{ suite["suite_name"] }} (a navigation entry plus a detail panel for every suite that has tests)
  {% endif %}{% endfor %}
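Since the markup itself is lost, the sketch below is only a hypothetical smoke-render of this template: it loads index.html the same way robotmetrics.py does and passes placeholder values for the context keys that robotmetrics.py supplies to template.render.

```
import pandas as pd
from jinja2 import Environment, FileSystemLoader

# Load the template with the same loader pattern robotmetrics.py uses.
env = Environment(loader=FileSystemLoader('robotframework_metrics/templates'))
template = env.get_template('index.html')

# Placeholder dashboard stats (times in milliseconds, as the template divides by 60000).
stats = {'Total': 1, 'Pass': 1, 'Fail': 0, 'Skip': 0,
         'Time': 60000, 'Min': 60000, 'Max': 60000, 'Avg': 60000.0}
html = template.render(
    suite_stats=stats,
    test_stats=stats,
    kw_stats={'Total': 0, 'Pass': 0, 'Fail': 0, 'Skip': 0},
    suites=[], tests=[], suites_list=[],
    keyword_times=pd.DataFrame(),  # template guards on keyword_times.empty
    suite_error_stats=pd.DataFrame(columns=['Name', 'Total', 'Fail', 'percent']),
    test_time_group=pd.DataFrame(columns=['time_group', 'test_case_count']),
    execution_stats=['2019-01-01 10:00:00', '2019-01-01 10:01:00', '0:01:00'],
    hide_tags='hide', hide_docs='hide', hide_kw_times_menu='', log_name='log.html',
)
with open('metrics.html', 'w', encoding='utf-8') as fh:
    fh.write(html)
```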
================================================
FILE: robotframework_metrics/test_results.py
================================================
from robot.api import ResultVisitor
from robot.utils.markuputils import html_format


class TestResults(ResultVisitor):

    def __init__(self, test_list):
        self.test_list = test_list

    def visit_test(self, test):
        # test.parent is the owning suite; guard against a missing parent
        suite_name = test.parent.name if test.parent else ""
        test_json = {
            "Suite Name": suite_name,
            "Test Name": test.name,
            "Test Id": test.id,
            "Status": test.status,
            "Documentation": html_format(test.doc),
            "Time": test.elapsedtime,
            # "Message": html_format(test.message),
            "Message": str(test.message).replace("*HTML*", ""),
            "Tags": test.tags,
            'start_time': test.starttime,
            'end_time': test.endtime,
        }
        self.test_list.append(test_json)

================================================
FILE: robotframework_metrics/version.py
================================================
__version__ = "3.6.0"

================================================
FILE: setup.py
================================================
from setuptools import setup, find_packages

setup(
    name='robotframework-metrics',
    version="3.6.0",
    description='Custom report for robot framework',
    long_description='Custom html report generator using robot.result api',
    classifiers=[
        'Framework :: Robot Framework',
        'Programming Language :: Python',
        'Topic :: Software Development :: Testing',
    ],
    keywords='robotframework report',
    author='Shiva Prasad Adirala',
    author_email='adiralashiva8@gmail.com',
    url='https://github.com/adiralashiva8/robotframework-metrics',
    license='MIT',
    packages=find_packages(),
    include_package_data=True,
    zip_safe=False,
    install_requires=[
        'robotframework',
        'jinja2',
        'pandas',
    ],
    entry_points={
        'console_scripts': [
            'robotmetrics=robotframework_metrics.runner:main',
        ]
    },
)