Repository: mozilla/jupyter-spark
Branch: master
Commit: f7814f617204
Files: 20
Total size: 47.6 KB
Directory structure:
gitextract_apq7qxov/
├── .coveragerc
├── .eslintrc.json
├── .gitignore
├── .travis.yml
├── CODE_OF_CONDUCT.md
├── LICENSE
├── MANIFEST.in
├── Makefile
├── README.md
├── examples/
│ └── Jupyter Spark example.ipynb
├── package.json
├── pytest.ini
├── setup.cfg
├── setup.py
├── src/
│ └── jupyter_spark/
│ ├── __init__.py
│ ├── handlers.py
│ ├── spark.py
│ └── static/
│ └── extension.js
├── tests/
│ └── test_spark.py
└── tox.ini
================================================
FILE CONTENTS
================================================
================================================
FILE: .coveragerc
================================================
[run]
branch = True
source = jupyter_spark
[paths]
source =
.tox/*/lib/python*/site-packages/jupyter_spark
.tox/pypy*/site-packages/jupyter_spark
[report]
show_missing = True
================================================
FILE: .eslintrc.json
================================================
{
"env": {
"browser": true,
"jquery": true,
"amd": true
},
"plugins": [
"amd-imports"
],
"extends": "eslint:recommended",
"rules": {
"indent": [
"error",
4
],
"linebreak-style": [
"error",
"unix"
]
}
}
================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
.hypothesis/
# Translations
*.mo
*.pot
# Django stuff:
*.log
# Sphinx documentation
docs/_build/
# PyBuilder
target/
#Ipython Notebook
.ipynb_checkpoints
node_modules/
================================================
FILE: .travis.yml
================================================
language: python
sudo: false
cache: pip
python:
- '2.7'
- '3.4'
- '3.5'
- '3.6'
- pypy
install:
- pip install tox-travis
script:
- tox -v
after_success:
- bash <(curl -s https://codecov.io/bash)
jobs:
include:
- stage: deploy
python: '3.5'
script: skip
deploy:
provider: pypi
user: jezdez
distributions: sdist bdist_wheel
password:
secure: 0WAa8nmsv3VE/sKgd6AYLXJFujUe0YSo11shN2G7EarVlB/d0TLdU/ZyY5C5ClKutMTgnjlcpaCzgImMh86tA0W0lb3n2o5M7ssyIbOqKSIUPZBOFmqFJ2F9Wy01Woj0V+gQx6gS5U7lntTjNFbYnycH801FXrhub0ZMLynDO5y5fy2p91Qu9qMOs+gahSyb0KrkzQCXXLlaqwhmKAwfYj0JjrEuKTe/ltRlz/Ffw01cpJJoU2j+SophMCcd4taM4ESW0eAuXjDf3XmxZY2oSYTEDIhJATmgi3Y+I7TxI5N9/PJSyeuVnxP2uL+rs1iaZWWBD2SJPO4jvlliUwbapOqZG7MSP15OaGeKST6vSdxLp17zFXVhXSXOhUJTFQ1uPtmddt9hTNTnQbXcZRLz1oqomSt39+f/MH7RhAEJiqL+7qJPtp3AtFDX0AW/LBvgNO4i0tc3RxCIF00urMTHgk8HHKUa7wGEJ6R5qmNClihu2KkVNqRIGAATe7VDwqr990IADGl1cGL5yy+HTj2GtCdLPaxJuz9/Pv3YNUFusWbL4YXQQqxPRS3dSoUZ0zYEW8KGJsAMCpWJYyvUkzLnub3GZTm79Pty6PiNuuO3kmu/wSjztYgc0cfPygJdIy9Xw6j9qM4JT6oBVyA1vOFXIJ4IeFcq/wvQANXsBClpzqQ=
on:
tags: true
repo: mozilla/jupyter-spark
matrix:
include:
- env: eslint
sudo: false
cache:
directories:
- node_modules # NPM packages
install:
- nvm install 8
- npm install
script:
- node ./node_modules/eslint/bin/eslint.js --ext .js src/jupyter_spark/static
allow_failures:
- env: eslint
================================================
FILE: CODE_OF_CONDUCT.md
================================================
# Community Participation Guidelines
This repository is governed by Mozilla's code of conduct and etiquette guidelines.
For more details, please read the
[Mozilla Community Participation Guidelines](https://www.mozilla.org/about/governance/policies/participation/).
## How to Report
For more information on how to report violations of the Community Participation Guidelines, please read our '[How to Report](https://www.mozilla.org/about/governance/policies/participation/reporting/)' page.
================================================
FILE: LICENSE
================================================
Mozilla Public License Version 2.0
==================================
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. "Incompatible With Secondary Licenses"
means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
2. License Grants and Conditions
--------------------------------
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
3. Responsibilities
-------------------
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.
================================================
FILE: MANIFEST.in
================================================
include README.md
recursive-include src *.js
recursive-include tests *.py
recursive-include notebooks *.ipynb
recursive-include screenshots *.png
================================================
FILE: Makefile
================================================
.PHONY: install build uninstall clean dev notebook
install:
jupyter serverextension enable --py jupyter_spark
jupyter nbextension install --py jupyter_spark
jupyter nbextension enable --py jupyter_spark
uninstall:
jupyter serverextension disable --py jupyter_spark
jupyter nbextension disable --py jupyter_spark
jupyter nbextension uninstall --py jupyter_spark
pip uninstall -y jupyter-spark
clean: uninstall
rm -rf dist/
build: clean
python setup.py sdist
pip install dist/*.tar.gz
dev: build install
notebook:
jupyter notebook
================================================
FILE: README.md
================================================
# jupyter-spark
[](http://unmaintained.tech/)
[](https://travis-ci.org/mozilla/jupyter-spark)
[](https://codecov.io/gh/mozilla/jupyter-spark)
**NOTE: This project is currently unmaintained, if anyone would like to take over maintenance please [let us know](https://github.com/mozilla/jupyter-spark/issues/55).**
Jupyter Notebook extension for Apache Spark integration.
Includes a progress indicator for the current Notebook cell if it invokes a
Spark job. Queries the Spark UI service on the backend to get the required
Spark job information.

To view all currently running jobs, click the "show running Spark jobs"
button, or press ```Alt+S```.


A proxied version of the Spark UI can be accessed at
http://localhost:8888/spark.
## Installation
To install, simply run:
```
pip install jupyter-spark
jupyter serverextension enable --py jupyter_spark
jupyter nbextension install --py jupyter_spark
jupyter nbextension enable --py jupyter_spark
jupyter nbextension enable --py widgetsnbextension
```
The last step is needed to enable the `widgetsnbextension` extension that
Jupyter-Spark depends on. It may have been enabled before by a different
extension.
You may want to append ``--user`` to the commands above if you're getting
configuration errors upon invoking them.
To double-check if the extension was correctly installed run:
```
jupyter nbextension list
jupyter serverextension list
```
Please also consider installing [lxml](http://lxml.de/) to improve the
performance of the server-side communication to Spark, using your favorite
package manager, e.g.:
```
pip install lxml
```
For development and testing, clone the project and run from a shell in the
project's root directory:
```
pip install -e .
jupyter serverextension enable --py jupyter_spark
jupyter nbextension install --py jupyter_spark
jupyter nbextension enable --py jupyter_spark
```
To uninstall the extension run:
```
jupyter serverextension disable --py jupyter_spark
jupyter nbextension disable --py jupyter_spark
jupyter nbextension uninstall --py jupyter_spark
pip uninstall jupyter-spark
```
## Configuration
To change the URL of the Spark API that the job metadata is fetched from,
override the `Spark.url` config value, e.g. on the command line:
```
jupyter notebook --Spark.url="http://localhost:4040"
```
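Since `Spark` is a standard traitlets configurable, the same value can be set persistently in your Jupyter notebook config file instead (a sketch, assuming the usual `~/.jupyter/jupyter_notebook_config.py` location):
```
# ~/.jupyter/jupyter_notebook_config.py
c.Spark.url = 'http://localhost:4040'
```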
## Example
There is a simple `pyspark` example included in `examples` to confirm that your
installation is working.
## Changelog
### 0.3.0 (2016-07-04)
- Rewrote proxy to use an async Tornado handler and HTTP client to fetch
responses from Spark.
- Simplified proxy processing to take Amazon EMR proxying into account.
- Extended test suite to cover proxy handler, too.
- Removed requests as a dependency.
### 0.2.0 (2016-06-30)
- Refactored to fix a bunch of Python packaging and code quality issues
- Added test suite for Python code
- Set up continuous integration: https://travis-ci.org/mozilla/jupyter-spark
- Set up code coverage reports: https://codecov.io/gh/mozilla/jupyter-spark
- Added ability to override Spark API URL via command line option
- **IMPORTANT** Requires manual step to enable after running pip install
(see installation docs)!
To update:
1. Run `pip uninstall jupyter-spark`
2. Delete `spark.js` from your `nbextensions` folder.
3. Delete any references to `jupyter_spark.spark` in
`jupyter_notebook_config.json` (in your .jupyter directory)
4. Delete any references to `spark` in `notebook.json`
(in .jupyter/nbconfig)
5. Follow installation instructions to reinstall
### 0.1.1 (2016-05-03)
- Initial release with a working prototype
================================================
FILE: examples/Jupyter Spark example.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example jupyter_spark notebook\n",
"\n",
"This is an example notebook to demonstrate the `jupyter_spark` notebook plugin.\n",
"\n",
"It is based on the [approximating pi](https://github.com/apache/spark/blob/master/examples/src/main/python/pi.py) example in the pyspark documentation. This works by sampling random numbers in a square and counting the number that fall inside the unit circle."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"from random import random\n",
"from operator import add\n",
"\n",
"from pyspark.sql import SparkSession"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a `SparkSession` and give it a name.\n",
"\n",
"Note: This will start the spark client console -- there is no need to run `spark-shell` directly."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"spark = SparkSession \\\n",
" .builder \\\n",
" .appName(\"PythonPi\") \\\n",
" .getOrCreate()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`partitions` is the number of spark workers to partition the work into."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"partitions = 2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`n` is the number of random samples to calculate"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"n = 100000000"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is the sampling function. It generates numbers in the square from (-1, -1) to (1, 1), and returns 1 if it falls inside the unit circle, and 0 otherwise."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"def f(_):\n",
" x = random() * 2 - 1\n",
" y = random() * 2 - 1\n",
" return 1 if x ** 2 + y ** 2 <= 1 else 0"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here's where we farm the work out to Spark."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"count = spark.sparkContext \\\n",
" .parallelize(range(1, n + 1), partitions) \\\n",
" .map(f) \\\n",
" .reduce(add)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Pi is roughly 3.141880\n"
]
}
],
"source": [
"print(\"Pi is roughly %f\" % (4.0 * count / n))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Shut down the spark server."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"spark.stop()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
================================================
FILE: package.json
================================================
{
"devDependencies": {
"eslint": "^4.17.0",
"eslint-plugin-amd-imports": "^4.0.0"
}
}
================================================
FILE: pytest.ini
================================================
[pytest]
addopts = -v --flake8 --isort --cov-report xml --cov-report=term-missing --cov=jupyter_spark --cov-config .coveragerc
norecursedirs = *.egg .eggs dist build docs .tox
flake8-ignore = E501
================================================
FILE: setup.cfg
================================================
[wheel]
universal = 1
================================================
FILE: setup.py
================================================
from setuptools import find_packages, setup
setup(
name='jupyter-spark',
use_scm_version={
'version_scheme': 'post-release',
'local_scheme': 'dirty-tag'
},
setup_requires=['setuptools_scm'],
description='Jupyter Notebook extension for Apache Spark integration',
author='Mozilla Firefox Data Platform',
author_email='fx-data-platform@mozilla.com',
packages=find_packages(where='src'),
package_dir={'': 'src'},
include_package_data=True,
license='MPL2',
install_requires=[
'ipython >= 4',
'jupyter',
'notebook >= 4.2',
'beautifulsoup4',
'widgetsnbextension',
],
url='https://github.com/mozilla/jupyter-spark',
zip_safe=False,
)
================================================
FILE: src/jupyter_spark/__init__.py
================================================
from pkg_resources import get_distribution, DistributionNotFound
try:
__version__ = get_distribution(__name__).version
except DistributionNotFound:
# package is not installed
pass
def _jupyter_nbextension_paths(): # pragma: no cover
return [{
'section': 'notebook',
# the path is relative to the `jupyter_spark` directory
'src': 'static',
# directory in the `nbextension/` namespace
'dest': 'jupyter-spark',
# _also_ in the `nbextension/` namespace
'require': 'jupyter-spark/extension',
}]
def _jupyter_server_extension_paths(): # pragma: no cover
return [{
'module': 'jupyter_spark',
}]
def load_jupyter_server_extension(nbapp): # pragma: no cover
from .spark import Spark
from .handlers import SparkHandler
spark = Spark(
# add access to NotebookApp config, too
parent=nbapp,
base_url=nbapp.web_app.settings['base_url'],
)
nbapp.web_app.add_handlers(
r'.*', # match any host
[(spark.proxy_url + '.*', SparkHandler, {'spark': spark})]
)
nbapp.log.info("Jupyter-Spark enabled!")
================================================
FILE: src/jupyter_spark/handlers.py
================================================
import json
import tornado.web
from notebook.base.handlers import IPythonHandler
from tornado import httpclient
class SparkHandler(IPythonHandler):
def initialize(self, spark):
self.spark = spark
@tornado.web.asynchronous
def get(self):
"""
Fetch the requested URI from the Spark API, replace the
URLs in the response content for HTML responses or return
the verbatim response.
"""
http = httpclient.AsyncHTTPClient()
url = self.spark.backend_url(self.request)
self.spark.log.debug('Fetching from Spark %s', url)
http.fetch(url, self.handle_response)
def handle_response(self, response):
if response.error:
content_type = 'application/json'
content = json.dumps({'error': 'SPARK_NOT_RUNNING'})
else:
content_type = response.headers['Content-Type']
if 'text/html' in content_type:
content = self.spark.replace(response.body)
else:
# Probably binary response, send it directly.
content = response.body
self.set_header('Content-Type', content_type)
self.write(content)
self.finish()
================================================
FILE: src/jupyter_spark/spark.py
================================================
import re

from bs4 import BeautifulSoup
from notebook.utils import url_path_join
from traitlets.config import LoggingConfigurable
from traitlets.traitlets import Unicode

# try importing lxml and use it as the BeautifulSoup builder if available
try:
    import lxml  # noqa
except ImportError:
    BEAUTIFULSOUP_BUILDER = 'html.parser'
else:
    BEAUTIFULSOUP_BUILDER = 'lxml'  # pragma: no cover

# a regular expression to match paths against the Spark on EMR proxy paths
PROXY_PATH_RE = re.compile(r'\/proxy\/application_\d+_\d+\/(.*)')

# a tuple of tuples with tag names and their attributes to automatically fix
PROXY_ATTRIBUTES = (
    (('a', 'link'), 'href'),
    (('img', 'script'), 'src'),
)


class Spark(LoggingConfigurable):
    """
    A config object that is able to replace URLs of the Spark frontend
    on the fly.
    """
    url = Unicode(
        'http://localhost:4040',
        help='The URL of the Spark API',
    ).tag(config=True)

    proxy_root = Unicode(
        '/spark',
        help='The URL path under which the Spark API will be proxied',
    )

    def __init__(self, *args, **kwargs):
        self.base_url = kwargs.pop('base_url')
        super(Spark, self).__init__(*args, **kwargs)
        self.proxy_url = url_path_join(self.base_url, self.proxy_root)

    def backend_url(self, request):
        request_path = request.uri[len(self.proxy_url):]
        return url_path_join(self.url, request_path)

    def replace(self, content):
        """
        Replace all the links with our prefixed handler links, e.g.:
        /proxy/application_1467283586194_0015/static/styles.css or
        /static/styles.css
        with
        /spark/static/styles.css
        """
        soup = BeautifulSoup(content, BEAUTIFULSOUP_BUILDER)
        for tags, attribute in PROXY_ATTRIBUTES:
            for tag in soup.find_all(tags, **{attribute: True}):
                value = tag[attribute]
                match = PROXY_PATH_RE.match(value)
                if match is not None:
                    value = match.groups()[0]
                tag[attribute] = url_path_join(self.proxy_root, value)
        return str(soup)
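
The path rewriting that `Spark.replace` applies to each tag attribute can be shown in isolation. The sketch below is not part of the repository: it reuses the same `PROXY_PATH_RE` pattern, but `rewrite` is a hypothetical helper and the path join is a simplified stand-in for `notebook.utils.url_path_join`.

```python
# Standalone sketch (NOT part of the repository) of the per-attribute
# URL rewriting in Spark.replace, with a simplified path join.
import re

PROXY_PATH_RE = re.compile(r'\/proxy\/application_\d+_\d+\/(.*)')


def rewrite(value, proxy_root='/spark'):
    # Strip an EMR-style "/proxy/application_<id>/" prefix if present...
    match = PROXY_PATH_RE.match(value)
    if match is not None:
        value = match.groups()[0]
    # ...then re-prefix with the notebook-side proxy root.
    return proxy_root.rstrip('/') + '/' + value.lstrip('/')


print(rewrite('/proxy/application_1467283586194_0015/static/styles.css'))
# /spark/static/styles.css
print(rewrite('/static/styles.css'))
# /spark/static/styles.css
```

Both EMR-proxied and plain absolute paths end up under the same `/spark` prefix, which is what lets the `SparkHandler` route serve every follow-up request.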
================================================
FILE: src/jupyter_spark/static/extension.js
================================================
var UPDATE_FREQUENCY = 10000; // ms
var UPDATE_FREQUENCY_ACTIVE = 500;
var PROGRESS_COUNT_TEXT = "Running Spark job ";

/*
cache is an array of application objects with an added property jobs.
application.jobs is the result of the /applications/applicationId/jobs
API request.
*/
var cache = [];
var current_update_frequency;
var spark_is_running = false;
var cell_queue = [];
var current_cell;
var cell_jobs_counter = 0;
var jobs_in_cache = 0;

var update = function(api_url) {
    update_cache(api_url, update_dialog_contents);
};

// `callbacks` follows the jQuery callback style: either a single function
// or an array of functions; callbacks will be passed the cache as a parameter
var update_cache = function(api_url, callbacks) {
    var cbs;
    if (callbacks) {
        cbs = $.Callbacks();
        cbs.add(callbacks);
    }
    $.getJSON(api_url + '/applications').done(function(applications) {
        var num_applications = cache.length;
        var num_completed = 0;
        // Check if Spark is running before processing applications
        if (!applications.hasOwnProperty('error')) {
            spark_is_running = true;
            applications.forEach(function(application, i) {
                $.getJSON(api_url + '/applications/' + application.id + '/jobs').done(function(jobs) {
                    cache[i] = application;
                    cache[i].jobs = jobs;
                    num_completed++;
                    if (num_completed === num_applications && cbs) {
                        cbs.fire(cache);
                    }
                    // Update progress bars if jobs have been run and there are cells to be updated
                    if (jobs.length > jobs_in_cache && cell_queue.length > 0) {
                        $(document).trigger('update.progress.bar');
                    }
                });
            });
        } else {
            spark_is_running = false;
        }
    });
};

var update_dialog_contents = function() {
    if ($('#dialog_contents').length) {
        var element = $('<div/>').attr('id', 'dialog_contents');
        cache.forEach(function(application) {
            element.append(create_application_table(application));
        });
        $('#dialog_contents').replaceWith(element);
    }
};

var create_application_table = function(e) {
    var application_div = $('<div/>');
    application_div.append($('<h5/>').text(e.name + ': ' + e.id));
    var application_table = $('<table/>').addClass('table table-hover');
    var header_row = $('<tr/>');
    header_row.append($('<th/>').text('Job ID'));
    header_row.append($('<th/>').text('Job Name'));
    header_row.append($('<th/>').text('Progress'));
    application_table.append(header_row);
    e.jobs.forEach(function(job) {
        application_table.append(create_table_row(job));
    });
    application_div.append(application_table);
    return application_div;
};

var create_table_row = function(e) {
    var row = $('<tr/>');
    row.append($('<td/>').text(e.jobId));
    row.append($('<td/>').text(e.name));
    var status_class = get_status_class(e.status);
    var progress_bar_div = create_progress_bar(status_class, e.numCompletedTasks, e.numTasks);
    row.append($('<td/>').append(progress_bar_div));
    return row;
};

var get_status_class = function(status) {
    var status_class;
    switch (status) {
    case 'SUCCEEDED':
        status_class = 'progress-bar-success';
        break;
    case 'RUNNING':
        status_class = 'progress-bar-info';
        break;
    case 'FAILED':
        status_class = 'progress-bar-danger';
        break;
    case 'UNKNOWN':
        status_class = 'progress-bar-warning';
        break;
    }
    return status_class;
};

var create_progress_bar = function(status_class, completed, total) {
    // progress defined in percent
    var progress = completed / total * 100;
    var progress_bar_div = $('<div/>')
        .addClass('progress')
        .css({'min-width': '100px', 'margin-bottom': 0});
    var progress_bar = $('<div/>')
        .addClass('progress-bar ' + status_class)
        .attr('role', 'progressbar')
        .attr('aria-valuenow', progress)
        .attr('aria-valuemin', 0)
        .attr('aria-valuemax', 100)
        .css({'width': progress + '%',
              'white-space': 'nowrap',
              'overflow': 'visible'});
    if (status_class == 'progress-bar-warning') {
        progress_bar.text('Loading Spark...');
    } else {
        progress_bar.text(completed + ' out of ' + total + ' tasks');
    }
    progress_bar_div.append(progress_bar);
    return progress_bar_div;
};

define([
    'jquery',
    'base/js/namespace',
    'base/js/dialog',
    'base/js/events',
    'base/js/utils',
    'notebook/js/codecell'
], function($, Jupyter, dialog, events, utils, codecell) {
    var CodeCell = codecell.CodeCell;
    var base_url = utils.get_body_data('baseUrl') || '/';
    var api_url = base_url + 'spark/api/v1';

    var show_running_jobs = function() {
        var element = $('<div/>').attr('id', 'dialog_contents');
        dialog.modal({
            title: "Running Spark Jobs",
            body: element,
            buttons: {
                "Close": {}
            },
            open: update_dialog_contents
        });
    };

    var spark_progress_bar = function(event, data) {
        var cell = data.cell;
        if (is_spark_cell(cell)) {
            window.clearInterval(current_update_frequency);
            current_update_frequency = window.setInterval(update, UPDATE_FREQUENCY_ACTIVE, api_url);
            cell_queue.push(cell);
            current_cell = cell_queue[0];
            add_progress_bar(current_cell);
        }
    };

    var add_progress_bar = function(cell) {
        var progress_bar_div = cell.element.find('.progress-container');
        if (progress_bar_div.length < 1) {
            var input_area = cell.element.find('.input_area');
            cell_jobs_counter = 0;
            if (spark_is_running) {
                jobs_in_cache = cache[0].jobs.length;
            }
            var panel = $('<div/>')
                .addClass('panel')
                .addClass('panel-default')
                .addClass('progress-panel')
                .css({'margin-bottom': '0'})
                .hide();
            var jobs_completed_container = $('<div/>')
                .addClass('progress_counter')
                .addClass('panel-heading')
                .text(PROGRESS_COUNT_TEXT + cell_jobs_counter);
            var progress_bar_container = $('<div/>')
                .addClass('progress-container');
            var progress_bar = create_progress_bar('progress-bar-warning', 1, 5);
            progress_bar.appendTo(progress_bar_container);
            jobs_completed_container.appendTo(panel);
            progress_bar_container.appendTo(panel);
            panel.appendTo(input_area);
        }
    };

    var update_progress_bar = function() {
        var job = cache[0].jobs[0];
        var completed = job.numCompletedTasks;
        var total = job.numTasks;
        var progress_bar = current_cell.element.find('.progress');
        update_progress_count(current_cell, job.jobId);
        var progress = completed / total * 100;
        progress_bar.show();
        progress_bar.find('.progress-bar')
            .attr('class', 'progress-bar ' + get_status_class(job.status))
            .attr('aria-valuenow', progress)
            .css('width', progress + '%')
            .text(completed + ' out of ' + total + ' tasks');
    };

    var update_progress_count = function(cell, jobId) {
        var progress_count = cell.element.find('.progress_counter');
        var job_name = "";
        var canceller = null;
        if (spark_is_running) {
            cell_jobs_counter = cache[0].jobs.length - jobs_in_cache;
            job_name = ": " + cache[0].jobs[0].name;
            canceller = $('<a href="#" class="btn btn-default btn-xs pull-right">Cancel</a>').on(
                'click',
                function() { $.get(base_url + "spark/jobs/job/kill?id=" + jobId); });
        }
        progress_count.text(PROGRESS_COUNT_TEXT + cell_jobs_counter + job_name);
        progress_count.append(canceller);
        cell.element.find('.progress-panel').show();
    };

    var remove_progress_bar = function() {
        if (current_cell != null) {
            var progress_panel = current_cell.element.find('.progress-panel');
            progress_panel.remove();
            start_next_progress_bar();
        }
    };

    var start_next_progress_bar = function() {
        cell_queue.shift();
        current_cell = cell_queue[0];
        if (current_cell != null) {
            add_progress_bar(current_cell);
        } else {
            window.clearInterval(current_update_frequency);
            current_update_frequency = window.setInterval(update, UPDATE_FREQUENCY, api_url);
        }
    };

    var is_spark_cell = function(cell) {
        // TODO: Find a way to detect if cell is actually running Spark
        return (cell instanceof CodeCell);
    };

    var load_ipython_extension = function() {
        events.on('execute.CodeCell', spark_progress_bar);
        $(document).on('update.progress.bar', update_progress_bar);
        // Kernel becomes idle after a cell finishes executing
        events.on('kernel_idle.Kernel', remove_progress_bar);
        Jupyter.keyboard_manager.command_shortcuts.add_shortcut('Alt-S', show_running_jobs);
        Jupyter.toolbar.add_buttons_group([{
            'label': 'Show running Spark jobs',
            'icon': 'fa-tasks',
            'callback': show_running_jobs,
            'id': 'show_running_jobs'
        }]);
        update(api_url);
        current_update_frequency = window.setInterval(update, UPDATE_FREQUENCY, api_url);
    };

    return {
        load_ipython_extension: load_ipython_extension
    };
});
================================================
FILE: tests/test_spark.py
================================================
# -*- coding: utf-8 -*-
import pytest
import six
import tornado
import tornado.httpclient
import tornado.testing
import tornado.web
from bs4 import BeautifulSoup

from jupyter_spark.handlers import SparkHandler
from jupyter_spark.spark import BEAUTIFULSOUP_BUILDER, Spark

PROXY_PREFIX = "/proxy/application_1234556789012_3456"

spark = Spark(base_url='http://localhost:8888')


class FakeHandler(tornado.web.RequestHandler):

    def get(self):
        self.set_header('Content-Type', self.CONTENT_TYPE)
        self.write(self.RESPONSE)


class FakeReplaceHandler(FakeHandler):
    handler_root = '/backend/replace'
    RESPONSE = six.b('<img src="/image.png" />')
    REPLACED = six.b('<img src="/spark/image.png"/>')
    CONTENT_TYPE = 'text/html'


class FakeVerbatimHandler(FakeHandler):
    handler_root = '/backend/verbatim'
    RESPONSE = six.b('<a href="/">Hello, world!</a>')
    CONTENT_TYPE = 'plain/text'


class SparkHandlerTests(tornado.testing.AsyncHTTPTestCase):

    def get_app(self):
        port = self.get_http_port()
        base_url = 'http://localhost:%s' % port
        self.spark = Spark(base_url=base_url)
        return tornado.web.Application([
            (spark.proxy_root + '.*', SparkHandler, {'spark': self.spark}),
            (FakeReplaceHandler.handler_root, FakeReplaceHandler),
            (FakeVerbatimHandler.handler_root, FakeVerbatimHandler),
        ])

    def test_http_fetch_error(self):
        response = self.fetch(self.spark.proxy_root)
        self.assertEqual(response.code, 200)
        self.assertIn(six.b('SPARK_NOT_RUNNING'), response.body)

    def test_http_fetch_replace_success(self):
        self.spark.url = self.spark.base_url + FakeReplaceHandler.handler_root
        response = self.fetch(self.spark.proxy_root)
        self.assertEqual(response.code, 200)
        self.assertNotEqual(response.body, FakeReplaceHandler.RESPONSE)
        self.assertEqual(response.body, FakeReplaceHandler.REPLACED)
        self.assertEqual(response.headers['Content-Type'],
                         FakeReplaceHandler.CONTENT_TYPE)

    def test_http_fetch_verbatim_success(self):
        self.spark.url = self.spark.base_url + FakeVerbatimHandler.handler_root
        response = self.fetch(self.spark.proxy_root)
        self.assertEqual(response.code, 200)
        self.assertEqual(response.body, FakeVerbatimHandler.RESPONSE)
        self.assertEqual(response.headers['Content-Type'],
                         FakeVerbatimHandler.CONTENT_TYPE)

    def test_spark_backend_url(self):
        class FakeRequest(object):
            # http://localhost:8888/spark/api
            uri = self.spark.base_url + self.spark.proxy_root + '/api'
        fake_request = FakeRequest()
        self.assertEqual(self.spark.backend_url(fake_request),
                         self.spark.url + '/api')


@pytest.mark.parametrize('content', [
    '<a href="{prefix}/page/">page</a>',
    '<link rel="stylesheet" href="{prefix}/styles.css" />',
    six.u('<a href="{prefix}/über-uns/">Über uns</a>'),
    # missing href attribute so expected to fail:
    pytest.mark.xfail('<a data-href="{prefix}/page/">page</a>'),
    pytest.mark.xfail('<link rel="stylesheet" data-href="{prefix}/styles.css" />'),
    # fails because the URL path doesn't start with the prefix
    pytest.mark.xfail('<a href="/something/completely/">different</a>'),
])
def test_replace_href_tags(content):
    content = content.format(prefix=PROXY_PREFIX)
    replaced = spark.replace(content)
    assert replaced != content
    soup = BeautifulSoup(replaced, BEAUTIFULSOUP_BUILDER)
    for tag in soup.find_all(['a', 'link']):
        assert tag.attrs['href'].startswith(spark.proxy_root)


@pytest.mark.parametrize('content', [
    '<img src="{prefix}/img.png" />',
    '<script src="{prefix}/script.js" />',
    '<img src="/logo.png" />',
    six.u('<script src="{prefix}/scrüpt.js" />'),
    # missing src attribute so expected to fail:
    pytest.mark.xfail('<img data-src="{prefix}/img.png" />'),
    pytest.mark.xfail('<script data-src="{prefix}/script.js" />'),
])
def test_replace_src_tags(content):
    content = content.format(prefix=PROXY_PREFIX)
    replaced = spark.replace(content)
    assert replaced != content
    soup = BeautifulSoup(replaced, BEAUTIFULSOUP_BUILDER)
    for tag in soup.find_all(['img', 'script']):
        assert tag.attrs['src'].startswith(spark.proxy_root)
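
One portability caveat, not part of the repository: passing `pytest.mark.xfail(...)` objects directly inside a `parametrize` list, as the tests above do, was deprecated in pytest 3.2 and removed in pytest 4.0. On modern pytest the same cases are spelled with `pytest.param`; the sketch below shows the equivalent form with a hypothetical `test_example`.

```python
# Sketch (NOT part of the repository) of the pytest.param spelling that
# replaces the deprecated pytest.mark.xfail(value) parametrize entries.
import pytest


@pytest.mark.parametrize('content', [
    '<a href="{prefix}/page/">page</a>',
    # the xfail mark now attaches to the parameter via pytest.param:
    pytest.param('<a data-href="{prefix}/page/">page</a>',
                 marks=pytest.mark.xfail),
])
def test_example(content):
    assert 'page' in content
```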
================================================
FILE: tox.ini
================================================
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
[tox]
recreate = true
[testenv]
commands =
    py.test {posargs}
deps =
    tornado
    coverage>=4.0
    pytest-isort
    pytest-cache>=1.0
    pytest-cov
    flake8
    pytest-flake8>=0.5
    pytest>=2.8.0
    six
[flake8]
ignore = E501
SYMBOL INDEX (23 symbols across 4 files)
FILE: src/jupyter_spark/__init__.py
function _jupyter_nbextension_paths (line 10) | def _jupyter_nbextension_paths(): # pragma: no cover
function _jupyter_server_extension_paths (line 22) | def _jupyter_server_extension_paths(): # pragma: no cover
function load_jupyter_server_extension (line 28) | def load_jupyter_server_extension(nbapp): # pragma: no cover
FILE: src/jupyter_spark/handlers.py
class SparkHandler (line 8) | class SparkHandler(IPythonHandler):
method initialize (line 10) | def initialize(self, spark):
method get (line 14) | def get(self):
method handle_response (line 25) | def handle_response(self, response):
FILE: src/jupyter_spark/spark.py
class Spark (line 26) | class Spark(LoggingConfigurable):
method __init__ (line 41) | def __init__(self, *args, **kwargs):
method backend_url (line 46) | def backend_url(self, request):
method replace (line 50) | def replace(self, content):
FILE: tests/test_spark.py
class FakeHandler (line 16) | class FakeHandler(tornado.web.RequestHandler):
method get (line 18) | def get(self):
class FakeReplaceHandler (line 23) | class FakeReplaceHandler(FakeHandler):
class FakeVerbatimHandler (line 30) | class FakeVerbatimHandler(FakeHandler):
class SparkHandlerTests (line 36) | class SparkHandlerTests(tornado.testing.AsyncHTTPTestCase):
method get_app (line 38) | def get_app(self):
method test_http_fetch_error (line 48) | def test_http_fetch_error(self):
method test_http_fetch_replace_success (line 53) | def test_http_fetch_replace_success(self):
method test_http_fetch_verbatim_success (line 62) | def test_http_fetch_verbatim_success(self):
method test_spark_backend_url (line 70) | def test_spark_backend_url(self):
function test_replace_href_tags (line 89) | def test_replace_href_tags(content):
function test_replace_src_tags (line 107) | def test_replace_src_tags(content):