Repository: unias/docklet
Branch: master
Commit: 70c089a6a5bb
Files: 185
Total size: 1.3 MB
Directory structure:
gitextract_vecf3u3x/
├── .gitignore
├── CHANGES
├── LICENSE
├── README.md
├── VERSION
├── bin/
│ ├── docklet-master
│ ├── docklet-supermaster
│ └── docklet-worker
├── cloudsdk-installer.sh
├── conf/
│ ├── container/
│ │ ├── lxc2.container.batch.conf
│ │ ├── lxc2.container.conf
│ │ ├── lxc3.container.batch.conf
│ │ └── lxc3.container.conf
│ ├── docklet.conf.template
│ ├── lxc-script/
│ │ ├── lxc-ifdown
│ │ ├── lxc-ifup
│ │ ├── lxc-mount
│ │ └── lxc-prestart
│ └── nginx_docklet.conf
├── doc/
│ ├── devdoc/
│ │ ├── coding.md
│ │ ├── config_info.md
│ │ ├── network-arch.md
│ │ ├── networkmgr.md
│ │ ├── openvswitch-vlan.md
│ │ ├── proxy-control.md
│ │ └── startup.md
│ ├── devguide/
│ │ └── devguide.md
│ └── example/
│ └── example-LogisticRegression.py
├── meter/
│ ├── connector/
│ │ ├── master.py
│ │ └── minion.py
│ ├── daemon/
│ │ ├── http.py
│ │ ├── master_v1.py
│ │ └── minion_v1.py
│ ├── intra/
│ │ ├── billing.py
│ │ ├── cgroup.py
│ │ ├── smart.py
│ │ └── system.py
│ ├── main.py
│ └── policy/
│ ├── allocate.py
│ └── quota.py
├── prepare.sh
├── src/
│ ├── master/
│ │ ├── beansapplicationmgr.py
│ │ ├── bugreporter.py
│ │ ├── cloudmgr.py
│ │ ├── deploy.py
│ │ ├── httprest.py
│ │ ├── jobmgr.py
│ │ ├── lockmgr.py
│ │ ├── monitor.py
│ │ ├── network.py
│ │ ├── nodemgr.py
│ │ ├── notificationmgr.py
│ │ ├── parser.py
│ │ ├── releasemgr.py
│ │ ├── settings.py
│ │ ├── sysmgr.py
│ │ ├── taskmgr.py
│ │ ├── testTaskCtrler.py
│ │ ├── testTaskMgr.py
│ │ ├── testTaskWorker.py
│ │ ├── userManager.py
│ │ ├── userinit.sh
│ │ └── vclustermgr.py
│ ├── protos/
│ │ ├── rpc.proto
│ │ ├── rpc_pb2.py
│ │ └── rpc_pb2_grpc.py
│ ├── utils/
│ │ ├── env.py
│ │ ├── etcdlib.py
│ │ ├── gputools.py
│ │ ├── imagemgr.py
│ │ ├── log.py
│ │ ├── logs.py
│ │ ├── lvmtool.py
│ │ ├── manage.py
│ │ ├── model.py
│ │ ├── nettools.py
│ │ ├── proxytool.py
│ │ ├── tools.py
│ │ └── updatebase.py
│ └── worker/
│ ├── container.py
│ ├── monitor.py
│ ├── ossmounter.py
│ ├── taskcontroller.py
│ ├── taskworker.py
│ └── worker.py
├── tools/
│ ├── DOCKLET_NOTES.txt
│ ├── R_demo.ipynb
│ ├── alterUserTable.py
│ ├── clean-usage.py
│ ├── cloudsetting.aliyun.template.json
│ ├── dl_start_spark.sh
│ ├── dl_stop_spark.sh
│ ├── docklet-deploy.sh
│ ├── etcd-multi-nodes.sh
│ ├── etcd-one-node.sh
│ ├── nginx_config.sh
│ ├── npmrc
│ ├── pip.conf
│ ├── python_demo.ipynb
│ ├── resolv.conf
│ ├── sources.list
│ ├── start_jupyter.sh
│ ├── update-UserTable.sh
│ ├── update-basefs.sh
│ ├── update_baseurl.sh
│ ├── update_con_network.py
│ ├── update_v0.3.2.py
│ ├── upgrade.py
│ ├── upgrade_file2db.py
│ └── vimrc.local
├── user/
│ ├── stopreqmgr.py
│ └── user.py
└── web/
├── static/
│ ├── css/
│ │ └── docklet.css
│ ├── dist/
│ │ ├── css/
│ │ │ ├── AdminLTE.css
│ │ │ ├── filebox.css
│ │ │ ├── flotconfig.css
│ │ │ ├── modalconfig.css
│ │ │ └── skins/
│ │ │ ├── _all-skins.css
│ │ │ └── skin-blue.css
│ │ └── js/
│ │ └── app.js
│ └── js/
│ ├── plot_monitor.js
│ └── plot_monitorReal.js
├── templates/
│ ├── addCluster.html
│ ├── base_AdminLTE.html
│ ├── batch/
│ │ ├── batch_admin_list.html
│ │ ├── batch_create.html
│ │ ├── batch_info.html
│ │ ├── batch_list.html
│ │ └── batch_output.html
│ ├── beansapplication.html
│ ├── cloud.html
│ ├── config.html
│ ├── create_notification.html
│ ├── dashboard.html
│ ├── description.html
│ ├── error/
│ │ ├── 401.html
│ │ └── 500.html
│ ├── error.html
│ ├── home.template
│ ├── listcontainer.html
│ ├── login.html
│ ├── logs.html
│ ├── monitor/
│ │ ├── history.html
│ │ ├── historyVNode.html
│ │ ├── hosts.html
│ │ ├── hostsConAll.html
│ │ ├── hostsRealtime.html
│ │ ├── monitorUserAll.html
│ │ ├── monitorUserCluster.html
│ │ ├── status.html
│ │ └── statusRealtime.html
│ ├── notification.html
│ ├── notification_info.html
│ ├── opfailed.html
│ ├── opsuccess.html
│ ├── register.html
│ ├── saveconfirm.html
│ ├── settings.html
│ ├── user/
│ │ ├── activate.html
│ │ ├── info.html
│ │ └── mailservererror.html
│ └── user_list.html
├── web.py
└── webViews/
├── admin.py
├── authenticate/
│ ├── auth.py
│ ├── login.py
│ └── register.py
├── batch.py
├── beansapplication.py
├── checkname.py
├── cloud.py
├── cluster.py
├── cookie_tool.py
├── dashboard.py
├── dockletrequest.py
├── log.py
├── monitor.py
├── notification/
│ └── notification.py
├── reportbug.py
├── syslogs.py
├── user/
│ ├── grouplist.py
│ ├── userActivate.py
│ ├── userinfo.py
│ └── userlist.py
└── view.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
__pycache__
*.pyc
*.swp
__temp
*~
.DS_Store
docklet.conf
home.html
src/utils/migrations/
container.conf
container.batch.conf
================================================
FILE: CHANGES
================================================
v0.4.0, May 26, 2019
--------------------
**Bug Fix**
* Fix a bug of update base image.
* Fix a bug of port control & a bug of update_v0.3.2.py.
* Add locks to solve synchronization problems.
* Fix a type error in web/web.py.
* Fix a bug that net stats can't be shown.
**Improvement**
* [#298 #299 #300 ] Support batch computing.
* Add information of login to user log and database.
* Prevent users that are not activated from applying for beans.
* Aggregate the monitor API at the backend and aggregate HTTP requests on realtime status pages for monitoring information.
* Support user to report a bug in dashboard.
* Display image size when creating vcluster.
* Security enhancement: forbid double-slash URLs, add a header in nginx to defend against clickjacking, add CsrfProtect, forbid methods other than GET and POST in nginx, and support https...
* Add LoginFailMsg into model & ban a user who enters the wrong password too many times.
* Add UDP4 mapping for iptables.
* Support migrating containers.
* Support automatically releasing a vcluster when it has been stopped for too long.
v0.3.2, Dec 11, 2017
--------------------
**Bug Fix**
* Fix the problem that some monitoring data are used before initializing.
* Add some error messages when starting a service fails.
* Add npm registry.
**Improvement**
* [#277] Support egress and ingress qos rate limiting.
* [#277] Support network and ports mappings billings.
* Support network monitoring.
* Limit the number of users' vnodes by ip addresses.
* Add billing detail and billing history detail
* Replace lxc-info with lxc.Container.get_cgroup_item()
v0.3.0, Sep 29, 2017
--------------------
**Bug Fix**
* [#180] generated_password file no exist after master init
* Release ip when create container failed.
**Improvement**
* [#16] display file size, modification time in jupyter notebook
* [#86] time display in UserList
* [#87] add a new panel to approve or decline user activation requests
* [#121] Autofilling may lead to a bug that makes local users unable to log in
* [#178] record and display history of all containers
* [#210] rename Dashboard to Workspace
* [#212] add docklet hyperlink in web portal
* Separate user module from master.
* Support multiple masters running at the same time; users can choose which one to use in the web page.
* Support distributed gateway; if enabled, each worker will set up its own gateway.
* Support user gateway.
v0.2.8, Jul 28, 2016
--------------------
**Bug Fix**
* [#119] version display error
**Improvement**
* [#52] give user a total quota, let themselves decide how to use quota
* [#72] recording the user's historical resource usage
* [#85] Making workers' state consistent with master
* [#88] setting config file in admin panel
* [#96] Web notifications
* [#113] Recovery : after poweroff, just recover container, not recover service
v0.2.7, May 17, 2016
--------------------
**Bug Fix**
* [#9] updating user profile takes effect immediately
* [#12] logging user's activity
* [#14] Can't stop vcluster by dashboard page
* [#18] subprocess call should check return status
* [#19] lxc config string in config file is limited to 16 bytes
* [#25] bug of external login
* [#30] support lxc.custom.conf in appending
* [#35] nfs mountpoint bug in imagemgr.py
* [#49] Fail to create container
* [#57] status page of normal user failed
* [#68] Not Found error when just click "Sign in" Button
* [#76] unable to show and edit user table in smartphone
**Improvement**
* [#7] enhance quota management
* [#8] independent starting of master and workers
* [#20] check typing and input on web pages and web server
* [#23] add LXCFS for container
* [#41] move system data to global/sys
* [#42] check IP and network pool when releasing IPs
* [#48] token expires after some time
* [#54] display container owner
* [#61] rewrite httprest.py using flask routing
**Notes**
* If you upgrade from a former version, please run tools/upgrade.py first.
v0.2.6, Mar 31, 2016
--------------------
An initial release on github.com
* Using the open source AdminLTE theme
================================================
FILE: LICENSE
================================================
Copyright (c) 2016, Peking University (PKU).
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of the PKU nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.
================================================
FILE: README.md
================================================
# Docklet
https://unias.github.io/docklet
## Intro
Docklet is an operating system for virtual private clouds. Its goal is
to help a user group effectively share cluster resources in a physical
datacenter or in the cloud. In Docklet, the shared resources are organized
and managed as a virtual private cloud among the user group. Every user
has their own private **virtual cluster (vcluster)**, which consists of
a number of virtual Linux container nodes distributed over the physical
cluster. Each vcluster is isolated from the others and can be operated like
a real physical cluster. Therefore, most applications, especially those
requiring a cluster of resources, can run in a vcluster seamlessly.
Users manage and use their vcluster resources entirely through the web. The supported
resources include CPUs, GPUs, shared storage, etc. The only client
tool needed is a modern web browser supporting HTML5, such as Safari,
Firefox, or Chrome. The integrated *jupyter notebook* provides a web
**Workspace**. Users can code, debug, test, and run their programs,
and even visualize the outputs online. Serverless computing and batch
processing are supported.
Docklet creates virtual nodes from a base image. Admins can
pre-install development tools and frameworks according to users'
needs. Users are also free to install their specific software
in their vclusters.
Docklet needs only **one** public IP address. The vclusters are
configured to use private IP address ranges, e.g., 172.16.0.0/16,
192.168.0.0/16, 10.0.0.0/8. A proxy is set up to help
users visit their vclusters behind the firewall/gateway.
## Architecture
The Docklet system runtime consists of four main components:
- distributed file system server
- etcd server
- docklet supermaster, master
- docklet worker

For detailed information about configurations, please see [Config](#config).
## Install
Currently the Docklet system is recommended to run on Ubuntu 15.10+.
Ensure that python3.5 is the default python3 version.
Clone Docklet from github
```
git clone https://github.com/unias/docklet.git
```
Run **prepare.sh** from the console to install the required packages and
generate the necessary configuration.
A *root* user will be created for managing the Docklet system. The
password is recorded in `FS_PREFIX/local/generated_password.txt`.
## Config ##
The main configuration file of docklet is conf/docklet.conf. Most
default settings work for a single-host environment.
First, copy docklet.conf.template to docklet.conf.
Pay attention to the following settings:
- NETWORK_DEVICE : the network interface to use.
- ETCD : the etcd server address. For distributed multi hosts
environment, it should be one of the ETCD public server address.
For single host environment, the default value should be OK.
- STORAGE : whether to use a disk partition or a file to store persistent
data; for a single host, file is convenient.
- FS_PREFIX: the working dir of docklet runtime. default is
/opt/docklet.
- CLUSTER_NET: the vcluster network ip address range, default is
172.16.0.1/16. This network range should all be allocated to and
managed by docklet.
- PROXY_PORT : the listening port of configurable-http-proxy. It proxies
connections from the external public network to the internal private
container networks.
- PORTAL_URL : the portal of the system. Users access the system
by visiting this address. If the system is behind a firewall, then
a reverse proxy should be setup. Default is MASTER_IP:NGINX_PORT.
- NGINX_PORT : the access port of the public portal. Users use this
port to visit the docklet system.
- DISTRIBUTED_GATEWAY : whether the users' gateways are distributed
or not. Master and workers must be set to the same value.
- PUBLIC_IP : public ip of this machine. If DISTRIBUTED_GATEWAY is True,
users' gateways can be setup on this machine. Users can visit this
machine by the public ip. default: IP of NETWORK_DEVICE.
- USER_IP : the IP of the user server. default : localhost
- MASTER_IPS : tell the web server the IPs of all the cluster masters.
- AUTH_KEY: the key to request users server from master, or to request
master from users server. Please set the same value on each machine.
Please don't use the default value.
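To illustrate the settings above, a minimal docklet.conf for a single-host setup might look like the following. All values are examples only; adjust them to your environment, and in particular replace the placeholder AUTH_KEY with your own secret:

```
# docklet.conf -- example values for a single-host deployment
NETWORK_DEVICE=eth0
ETCD=localhost:2379
STORAGE=file
FS_PREFIX=/opt/docklet
CLUSTER_NET=172.16.0.1/16
PROXY_PORT=8000
NGINX_PORT=80
DISTRIBUTED_GATEWAY=False
USER_IP=localhost
# do NOT keep the default; use your own random secret
AUTH_KEY=replace-with-a-long-random-string
```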
## Start ##
### distributed file system ###
For multi hosts distributed environment, a distributed file system is
needed to store global data. Currently, glusterfs has been tested.
Let's assume the file server exports the filesystem via NFS as
**fileserver:/pub** :
On each physical host that runs docklet, mount **fileserver:/pub** to
**FS_PREFIX/global** .
For a single-host environment, there is nothing to do.
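As a concrete example, assuming the NFS export above and the default FS_PREFIX of /opt/docklet, the mount could be made persistent on each docklet host with an /etc/fstab entry like:

```
# /etc/fstab entry (example paths; _netdev waits for the network at boot)
fileserver:/pub  /opt/docklet/global  nfs  defaults,_netdev  0  0
```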
### etcd ###
For a single-host environment, start **tools/etcd-one-node.sh** . Some recent
Ubuntu releases include **etcd** in the repository; just `apt-get
install etcd`, and there is no need to start etcd manually. For others, you
should install etcd manually.
For a multi-host distributed environment, you **must** start
**tools/etcd-multi-nodes.sh** on each etcd server host. This script
requires the etcd server addresses as parameters.
### supermaster ###
A supermaster is a server consisting of a web server, a user server and a master server instance.
If it is the first time you start docklet, run `bin/docklet-supermaster init`
to init and start a docklet master, web server and user server. Otherwise, run `bin/docklet-supermaster start`.
When you start a supermaster, you don't need to start an extra master in the same cluster.
### master ###
A master manages all the workers in one data center. Docklet can manage
several data centers, and each data center has one master server. But
a docklet system will have only one supermaster.
First, select a server with two network interface cards, one having a
public IP address/url, e.g., docklet.info; the other having a private IP
address, e.g., 172.16.0.1. This server will be the master.
If it is the first time you start docklet, run `bin/docklet-master init`
to init and start docklet master. Otherwise, run `bin/docklet-master start`,
which will start master in recovery mode in background using
conf/docklet.conf. (Note: if docklet will run in the distributed gateway mode
and recovery mode, please start the workers first.)
Please fill in USER_IP and USER_PORT in conf/docklet.conf; they are the IP and port of the user server.
By default, they are `localhost` and `9100`.
You can check the daemon status by running `bin/docklet-master status`
The master logs are in **FS_PREFIX/local/log/docklet-master.log** and
**docklet-web.log**.
### worker ###
Worker needs a basefs image to create containers.
You can create such an image with `lxc-create -n test -t download`,
then copy the rootfs to **FS_PREFIX/local**, and rename `rootfs`
to `basefs`.
Note that the `jupyterhub` package must be installed in this image, and the
start script `tools/start_jupyter.sh` should be placed at
`basefs/home/jupyter`.
You can check and run `tools/update-basefs.sh` to update basefs.
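Putting the steps above together, basefs preparation might be sketched as follows. This assumes the default FS_PREFIX of /opt/docklet; the distribution, release and architecture passed to the download template are examples, and jupyterhub must additionally be installed inside the image:

```
# create a temporary container just to obtain a rootfs (example image)
lxc-create -n test -t download -- -d ubuntu -r xenial -a amd64
# copy the rootfs into docklet's working directory, renamed to basefs
cp -a /var/lib/lxc/test/rootfs /opt/docklet/local/basefs
# place the jupyter start script (jupyterhub must be installed in the image)
cp tools/start_jupyter.sh /opt/docklet/local/basefs/home/jupyter/
```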
Run `bin/docklet-worker start` to start the worker in the background.
You can check the daemon status by running `bin/docklet-worker status`.
The log is in **FS_PREFIX/local/log/docklet-worker.log**.
Currently, the worker must be run after the master has been started.
## Usage ##
Open a browser and visit the address specified by PORTAL_URL,
e.g., `http://docklet.info/`.
That's it.
# Contribute #
Contributions are welcome. Please check [devguide](doc/devguide/devguide.md)
================================================
FILE: VERSION
================================================
0.4.0
================================================
FILE: bin/docklet-master
================================================
#!/bin/sh
[ $(id -u) != '0' ] && echo "root is needed" && exit 1
# get some path of docklet
bindir=${0%/*}
# $bindir maybe like /opt/docklet/src/../sbin
# use command below to make $bindir in normal absolute path
DOCKLET_BIN=$(cd $bindir; pwd)
DOCKLET_HOME=${DOCKLET_BIN%/*}
DOCKLET_CONF=$DOCKLET_HOME/conf
LXC_SCRIPT=$DOCKLET_CONF/lxc-script
DOCKLET_SRC=$DOCKLET_HOME/src
DOCKLET_LIB=$DOCKLET_SRC
DOCKLET_WEB=$DOCKLET_HOME/web
DOCKLET_USER=$DOCKLET_HOME/user
# default working directory, default to /opt/docklet
FS_PREFIX=/opt/docklet
#network interface , default is eth0
NETWORK_DEVICE=eth0
#etcd server address, default is localhost:2379
ETCD=localhost:2379
#unique cluster_name, default is docklet-vc
CLUSTER_NAME=docklet-vc
#web port, default is 8888
WEB_PORT=8888
USER_PORT=9100
#cluster net, default is 172.16.0.1/16
CLUSTER_NET="172.16.0.1/16"
# ip addresses range of containers for batch job, default is 10.16.0.0/16
BATCH_NET="10.16.0.0/16"
#configurable-http-proxy public port, default is 8000
PROXY_PORT=8000
#configurable-http-proxy api port, default is 8001
PROXY_API_PORT=8001
DISTRIBUTED_GATEWAY=False
. $DOCKLET_CONF/docklet.conf
export FS_PREFIX
RUN_DIR=$FS_PREFIX/local/run
LOG_DIR=$FS_PREFIX/local/log
# This next line determines what user the script runs as.
DAEMON_USER=root
# settings for docklet master
DAEMON_MASTER=$DOCKLET_LIB/master/httprest.py
DAEMON_NAME_MASTER=docklet-master
DAEMON_OPTS_MASTER=
# The process ID of the script when it runs is stored here:
PIDFILE_MASTER=$RUN_DIR/$DAEMON_NAME_MASTER.pid
# settings for docklet web
DAEMON_WEB=$DOCKLET_WEB/web.py
DAEMON_NAME_WEB=docklet-web
PIDFILE_WEB=$RUN_DIR/docklet-web.pid
DAEMON_OPTS_WEB=
# settings for docklet proxy, which is required for web access
DAEMON_PROXY=`which configurable-http-proxy`
DAEMON_NAME_PROXY=docklet-proxy
PIDFILE_PROXY=$RUN_DIR/proxy.pid
DAEMON_OPTS_PROXY=
# settings for docklet user
DAEMON_USER_MODULE=$DOCKLET_USER/user.py
DAEMON_NAME_USER=docklet-user
PIDFILE_USER=$RUN_DIR/docklet-user.pid
DAEMON_OPTS_USER=
RUNNING_CONFIG=$FS_PREFIX/local/docklet-running.conf
export CONFIG=$RUNNING_CONFIG
. /lib/lsb/init-functions
###########
pre_start_master () {
log_daemon_msg "Starting $DAEMON_NAME_MASTER in $FS_PREFIX"
[ ! -d $FS_PREFIX/global ] && mkdir -p $FS_PREFIX/global
[ ! -d $FS_PREFIX/local ] && mkdir -p $FS_PREFIX/local
[ ! -d $FS_PREFIX/global/users ] && mkdir -p $FS_PREFIX/global/users
[ ! -d $FS_PREFIX/global/sys ] && mkdir -p $FS_PREFIX/global/sys
[ ! -d $FS_PREFIX/global/images/private ] && mkdir -p $FS_PREFIX/global/images/private
[ ! -d $FS_PREFIX/global/images/public ] && mkdir -p $FS_PREFIX/global/images/public
[ ! -d $FS_PREFIX/local/volume ] && mkdir -p $FS_PREFIX/local/volume
[ ! -d $FS_PREFIX/local/temp ] && mkdir -p $FS_PREFIX/local/temp
[ ! -d $FS_PREFIX/local/run ] && mkdir -p $FS_PREFIX/local/run
[ ! -d $FS_PREFIX/local/log ] && mkdir -p $FS_PREFIX/local/log
grep -P "^[\s]*[a-zA-Z]" $DOCKLET_CONF/docklet.conf > $RUNNING_CONFIG
echo "DOCKLET_HOME=$DOCKLET_HOME" >> $RUNNING_CONFIG
echo "DOCKLET_BIN=$DOCKLET_BIN" >> $RUNNING_CONFIG
echo "DOCKLET_CONF=$DOCKLET_CONF" >> $RUNNING_CONFIG
echo "LXC_SCRIPT=$LXC_SCRIPT" >> $RUNNING_CONFIG
echo "DOCKLET_SRC=$DOCKLET_SRC" >> $RUNNING_CONFIG
echo "DOCKLET_LIB=$DOCKLET_LIB" >> $RUNNING_CONFIG
# iptables for NAT network for containers to access web
iptables -t nat -F
iptables -t nat -A POSTROUTING -s $CLUSTER_NET -j MASQUERADE
iptables -t nat -A POSTROUTING -s $BATCH_NET -j MASQUERADE
}
do_start_master () {
DAEMON_OPTS_MASTER=$1
# MODE : start mode
# new : clean old data in etcd, global directory and start a new cluster
# recovery : start cluster and recover status from etcd and global directory
# Default is "recovery"
start-stop-daemon --start --oknodo --background --pidfile $PIDFILE_MASTER --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON_MASTER -- $DAEMON_OPTS_MASTER
log_end_msg $?
}
pre_start_web () {
log_daemon_msg "Starting $DAEMON_NAME_WEB in $FS_PREFIX"
webip=$(ip addr show $NETWORK_DEVICE | grep -oE "[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/[0-9]+")
[ $? != "0" ] && echo "wrong NETWORK_DEVICE $NETWORK_DEVICE" && exit 1
webip=${webip%/*}
AUTH_COOKIE_URL=http://$webip:$WEB_PORT/jupyter
#echo "set AUTH_COOKIE_URL:$AUTH_COOKIE_URL in etcd with key:$CLUSTER_NAME/web/authurl"
curl -XPUT http://$ETCD/v2/keys/$CLUSTER_NAME/web/authurl -d value="$AUTH_COOKIE_URL" > /dev/null 2>&1
[ $? != 0 ] && echo "set AUTH_COOKIE_URL failed in etcd" && exit 1
}
do_start_web () {
pre_start_web
DAEMON_OPTS_WEB="-p $WEB_PORT"
start-stop-daemon --start --background --pidfile $PIDFILE_WEB --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON_WEB -- $DAEMON_OPTS_WEB
log_end_msg $?
}
do_start_proxy () {
if [ "$DISTRIBUTED_GATEWAY" = "True" ]
then
return 1
fi
log_daemon_msg "Starting $DAEMON_NAME_PROXY daemon in $FS_PREFIX"
DAEMON_OPTS_PROXY="--port $PROXY_PORT --api-port $PROXY_API_PORT --default-target=http://localhost:8888"
start-stop-daemon --start --background --pidfile $PIDFILE_PROXY --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON_PROXY -- $DAEMON_OPTS_PROXY
log_end_msg $?
}
do_start_user () {
log_daemon_msg "Starting $DAEMON_NAME_USER in $FS_PREFIX"
DAEMON_OPTS_USER="-p $USER_PORT"
start-stop-daemon --start --background --pidfile $PIDFILE_USER --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON_USER_MODULE -- $DAEMON_OPTS_USER
log_end_msg $?
}
do_stop_master () {
log_daemon_msg "Stopping $DAEMON_NAME_MASTER daemon"
start-stop-daemon --stop --quiet --oknodo --remove-pidfile --pidfile $PIDFILE_MASTER --retry 10
log_end_msg $?
}
do_stop_web () {
log_daemon_msg "Stopping $DAEMON_NAME_WEB daemon"
start-stop-daemon --stop --quiet --oknodo --remove-pidfile --pidfile $PIDFILE_WEB --retry 10
log_end_msg $?
}
do_stop_proxy () {
if [ "$DISTRIBUTED_GATEWAY" = "True" ]
then
return 1
fi
log_daemon_msg "Stopping $DAEMON_NAME_PROXY daemon"
start-stop-daemon --stop --quiet --oknodo --remove-pidfile --pidfile $PIDFILE_PROXY --retry 10
log_end_msg $?
}
do_stop_user () {
log_daemon_msg "Stopping $DAEMON_NAME_USER daemon"
start-stop-daemon --stop --quiet --oknodo --remove-pidfile --pidfile $PIDFILE_USER --retry 10
log_end_msg $?
}
case "$1" in
init)
pre_start_master
do_start_master "new"
do_start_proxy
do_start_web
;;
start)
pre_start_master
do_start_master "recovery"
do_start_proxy
do_start_web
;;
stop)
do_stop_master
do_stop_proxy
do_stop_web
;;
restart)
do_stop_master
do_stop_proxy
do_stop_web
pre_start_master
do_start_master "recovery"
do_start_proxy
do_start_web
;;
start_proxy)
do_start_proxy
;;
stop_proxy)
do_stop_proxy
;;
start_web)
do_start_web
;;
stop_web)
do_stop_web
;;
reinit)
do_stop_master
do_stop_proxy
pre_start_master
do_start_master "new"
do_start_proxy
;;
status)
status=0
status_of_proc -p $PIDFILE_MASTER "$DAEMON_MASTER" "$DAEMON_NAME_MASTER" || status=$?
status_of_proc -p $PIDFILE_PROXY "$DAEMON_PROXY" "$DAEMON_NAME_PROXY" || status=$?
exit $status
;;
*)
echo "Usage: $DAEMON_NAME_MASTER {init|start|stop|restart|reinit|status|start_proxy|stop_proxy|start_web|stop_web}"
exit 1
;;
esac
exit 0
================================================
FILE: bin/docklet-supermaster
================================================
#!/bin/sh
[ $(id -u) != '0' ] && echo "root is needed" && exit 1
# get some path of docklet
bindir=${0%/*}
# $bindir maybe like /opt/docklet/src/../sbin
# use command below to make $bindir in normal absolute path
DOCKLET_BIN=$(cd $bindir; pwd)
DOCKLET_HOME=${DOCKLET_BIN%/*}
DOCKLET_CONF=$DOCKLET_HOME/conf
LXC_SCRIPT=$DOCKLET_CONF/lxc-script
DOCKLET_SRC=$DOCKLET_HOME/src
DOCKLET_LIB=$DOCKLET_SRC
DOCKLET_WEB=$DOCKLET_HOME/web
DOCKLET_USER=$DOCKLET_HOME/user
# default working directory, default to /opt/docklet
FS_PREFIX=/opt/docklet
#configurable-http-proxy public port, default is 8000
PROXY_PORT=8000
#configurable-http-proxy api port, default is 8001
PROXY_API_PORT=8001
#network interface , default is eth0
NETWORK_DEVICE=eth0
#etcd server address, default is localhost:2379
ETCD=localhost:2379
#unique cluster_name, default is docklet-vc
CLUSTER_NAME=docklet-vc
#web port, default is 8888
WEB_PORT=8888
USER_PORT=9100
#cluster net, default is 172.16.0.1/16
CLUSTER_NET="172.16.0.1/16"
# ip addresses range of containers for batch job, default is 10.16.0.0/16
BATCH_NET="10.16.0.0/16"
. $DOCKLET_CONF/docklet.conf
export FS_PREFIX
RUN_DIR=$FS_PREFIX/local/run
LOG_DIR=$FS_PREFIX/local/log
# This next line determines what user the script runs as.
DAEMON_USER=root
# settings for docklet master
DAEMON_MASTER=$DOCKLET_LIB/master/httprest.py
DAEMON_NAME_MASTER=docklet-master
DAEMON_OPTS_MASTER=
# The process ID of the script when it runs is stored here:
PIDFILE_MASTER=$RUN_DIR/$DAEMON_NAME_MASTER.pid
# settings for docklet proxy, which is required for web access
DAEMON_PROXY=`which configurable-http-proxy`
DAEMON_NAME_PROXY=docklet-proxy
PIDFILE_PROXY=$RUN_DIR/proxy.pid
DAEMON_OPTS_PROXY=
# settings for docklet web
DAEMON_WEB=$DOCKLET_WEB/web.py
DAEMON_NAME_WEB=docklet-web
PIDFILE_WEB=$RUN_DIR/docklet-web.pid
DAEMON_OPTS_WEB=
# settings for docklet user
DAEMON_USER_MODULE=$DOCKLET_USER/user.py
DAEMON_NAME_USER=docklet-user
PIDFILE_USER=$RUN_DIR/docklet-user.pid
DAEMON_OPTS_USER=
RUNNING_CONFIG=$FS_PREFIX/local/docklet-running.conf
export CONFIG=$RUNNING_CONFIG
. /lib/lsb/init-functions
###########
pre_start_master () {
log_daemon_msg "Starting $DAEMON_NAME_MASTER in $FS_PREFIX"
[ ! -d $FS_PREFIX/global ] && mkdir -p $FS_PREFIX/global
[ ! -d $FS_PREFIX/local ] && mkdir -p $FS_PREFIX/local
[ ! -d $FS_PREFIX/global/users ] && mkdir -p $FS_PREFIX/global/users
[ ! -d $FS_PREFIX/global/sys ] && mkdir -p $FS_PREFIX/global/sys
[ ! -d $FS_PREFIX/global/images/private ] && mkdir -p $FS_PREFIX/global/images/private
[ ! -d $FS_PREFIX/global/images/public ] && mkdir -p $FS_PREFIX/global/images/public
[ ! -d $FS_PREFIX/local/volume ] && mkdir -p $FS_PREFIX/local/volume
[ ! -d $FS_PREFIX/local/temp ] && mkdir -p $FS_PREFIX/local/temp
[ ! -d $FS_PREFIX/local/run ] && mkdir -p $FS_PREFIX/local/run
[ ! -d $FS_PREFIX/local/log ] && mkdir -p $FS_PREFIX/local/log
grep -P "^[\s]*[a-zA-Z]" $DOCKLET_CONF/docklet.conf > $RUNNING_CONFIG
echo "DOCKLET_HOME=$DOCKLET_HOME" >> $RUNNING_CONFIG
echo "DOCKLET_BIN=$DOCKLET_BIN" >> $RUNNING_CONFIG
echo "DOCKLET_CONF=$DOCKLET_CONF" >> $RUNNING_CONFIG
echo "LXC_SCRIPT=$LXC_SCRIPT" >> $RUNNING_CONFIG
echo "DOCKLET_SRC=$DOCKLET_SRC" >> $RUNNING_CONFIG
echo "DOCKLET_LIB=$DOCKLET_LIB" >> $RUNNING_CONFIG
# iptables for NAT network for containers to access web
iptables -t nat -F
iptables -t nat -A POSTROUTING -s $CLUSTER_NET -j MASQUERADE
iptables -t nat -A POSTROUTING -s $BATCH_NET -j MASQUERADE
}
do_start_master () {
DAEMON_OPTS_MASTER=$1
# MODE : start mode
# new : clean old data in etcd, global directory and start a new cluster
# recovery : start cluster and recover status from etcd and global directory
# Default is "recovery"
$DOCKLET_HOME/tools/nginx_config.sh
start-stop-daemon --start --oknodo --background --pidfile $PIDFILE_MASTER --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON_MASTER -- $DAEMON_OPTS_MASTER
log_end_msg $?
}
do_start_proxy () {
if [ "$DISTRIBUTED_GATEWAY" = "True" ]
then
return 1
fi
log_daemon_msg "Starting $DAEMON_NAME_PROXY daemon in $FS_PREFIX"
DAEMON_OPTS_PROXY="--port $PROXY_PORT --api-port $PROXY_API_PORT --default-target=http://localhost:8888"
start-stop-daemon --start --background --pidfile $PIDFILE_PROXY --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON_PROXY -- $DAEMON_OPTS_PROXY
log_end_msg $?
}
pre_start_web () {
log_daemon_msg "Starting $DAEMON_NAME_WEB in $FS_PREFIX"
webip=$(ip addr show $NETWORK_DEVICE | grep -oE "[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/[0-9]+")
[ $? != "0" ] && echo "wrong NETWORK_DEVICE $NETWORK_DEVICE" && exit 1
webip=${webip%/*}
AUTH_COOKIE_URL=http://$webip:$WEB_PORT/jupyter
#echo "set AUTH_COOKIE_URL:$AUTH_COOKIE_URL in etcd with key:$CLUSTER_NAME/web/authurl"
curl -XPUT http://$ETCD/v2/keys/$CLUSTER_NAME/web/authurl -d value="$AUTH_COOKIE_URL" > /dev/null 2>&1
[ $? != 0 ] && echo "set AUTH_COOKIE_URL failed in etcd" && exit 1
}
do_start_web () {
pre_start_web
DAEMON_OPTS_WEB="-p $WEB_PORT"
start-stop-daemon --start --background --pidfile $PIDFILE_WEB --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON_WEB -- $DAEMON_OPTS_WEB
log_end_msg $?
}
do_start_user () {
log_daemon_msg "Starting $DAEMON_NAME_USER in $FS_PREFIX"
DAEMON_OPTS_USER="-p $USER_PORT"
start-stop-daemon --start --background --pidfile $PIDFILE_USER --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON_USER_MODULE -- $DAEMON_OPTS_USER
log_end_msg $?
}
do_stop_master () {
log_daemon_msg "Stopping $DAEMON_NAME_MASTER daemon"
start-stop-daemon --stop --quiet --oknodo --remove-pidfile --pidfile $PIDFILE_MASTER --retry 10
log_end_msg $?
}
do_stop_proxy () {
if [ "$DISTRIBUTED_GATEWAY" = "True" ]
then
return 1
fi
log_daemon_msg "Stopping $DAEMON_NAME_PROXY daemon"
start-stop-daemon --stop --quiet --oknodo --remove-pidfile --pidfile $PIDFILE_PROXY --retry 10
log_end_msg $?
}
do_stop_web () {
log_daemon_msg "Stopping $DAEMON_NAME_WEB daemon"
start-stop-daemon --stop --quiet --oknodo --remove-pidfile --pidfile $PIDFILE_WEB --retry 10
log_end_msg $?
}
do_stop_user () {
log_daemon_msg "Stopping $DAEMON_NAME_USER daemon"
start-stop-daemon --stop --quiet --oknodo --remove-pidfile --pidfile $PIDFILE_USER --retry 10
log_end_msg $?
}
case "$1" in
init)
pre_start_master
do_start_user
do_start_proxy
do_start_web
do_start_master "new"
;;
start)
pre_start_master
do_start_user
do_start_proxy
do_start_web
do_start_master "recovery"
;;
stop)
do_stop_web
do_stop_proxy
do_stop_master
do_stop_user
;;
restart)
do_stop_user
do_stop_web
do_stop_proxy
do_stop_master
pre_start_master
do_start_user
do_start_proxy
do_start_web
do_start_master "recovery"
;;
start_proxy)
do_start_proxy
;;
stop_proxy)
do_stop_proxy
;;
start_web)
do_start_web
;;
stop_web)
do_stop_web
;;
start_user)
do_start_user
;;
stop_user)
do_stop_user
;;
reinit)
do_stop_web
do_stop_proxy
do_stop_master
do_stop_user
pre_start_master
do_start_user
do_start_proxy
do_start_web
do_start_master "new"
;;
status)
status=0
status_of_proc -p $PIDFILE_MASTER "$DAEMON_MASTER" "$DAEMON_NAME_MASTER" || status=$?
status_of_proc -p $PIDFILE_PROXY "$DAEMON_PROXY" "$DAEMON_NAME_PROXY" || status=$?
status_of_proc -p $PIDFILE_WEB "$DAEMON_WEB" "$DAEMON_NAME_WEB" || status=$?
status_of_proc -p $PIDFILE_USER "$DAEMON_USER" "$DAEMON_NAME_USER" || status=$?
exit $status
;;
*)
echo "Usage: $DAEMON_NAME_MASTER {init|start|stop|restart|reinit|status|start_proxy|stop_proxy|start_web|stop_web|start_user|stop_user}"
exit 1
;;
esac
exit 0
================================================
FILE: bin/docklet-worker
================================================
#!/bin/sh
[ $(id -u) != '0' ] && echo "root is needed" && exit 1
# get some path of docklet
bindir=${0%/*}
# $bindir may be like /opt/docklet/src/../bin
# use the command below to normalize $bindir to an absolute path
DOCKLET_BIN=$(cd $bindir; pwd)
DOCKLET_HOME=${DOCKLET_BIN%/*}
DOCKLET_CONF=$DOCKLET_HOME/conf
LXC_SCRIPT=$DOCKLET_CONF/lxc-script
DOCKLET_SRC=$DOCKLET_HOME/src
DOCKLET_LIB=$DOCKLET_SRC
DOCKLET_WEB=$DOCKLET_HOME/web
# working directory, default to /opt/docklet
FS_PREFIX=/opt/docklet
# cluster net ip range, default is 172.16.0.1/16
CLUSTER_NET="172.16.0.1/16"
# ip addresses range of containers for batch job, default is 10.16.0.0/16
BATCH_NET="10.16.0.0/16"
#configurable-http-proxy public port, default is 8000
PROXY_PORT=8000
#configurable-http-proxy api port, default is 8001
PROXY_API_PORT=8001
DISTRIBUTED_GATEWAY=False
. $DOCKLET_CONF/docklet.conf
export FS_PREFIX
RUN_DIR=$FS_PREFIX/local/run
LOG_DIR=$FS_PREFIX/local/log
# This next line determines what user the script runs as.
DAEMON_USER=root
# settings for docklet worker
DAEMON=$DOCKLET_LIB/worker/worker.py
DAEMON_NAME=docklet-worker
DAEMON_OPTS=
# The process ID of the script when it runs is stored here:
PIDFILE=$RUN_DIR/$DAEMON_NAME.pid
# settings for docklet batch worker, which is required for batch job processing system
BATCH_ON=True
DAEMON_BATCH=$DOCKLET_LIB/worker/taskworker.py
DAEMON_NAME_BATCH=docklet-taskworker
PIDFILE_BATCH=$RUN_DIR/batch.pid
DAEMON_OPTS_BATCH=
# settings for docklet proxy, which is required for web access
DAEMON_PROXY=`which configurable-http-proxy`
DAEMON_NAME_PROXY=docklet-proxy
PIDFILE_PROXY=$RUN_DIR/proxy.pid
DAEMON_OPTS_PROXY=
DOCKMETER_NAME=$DAEMON_NAME-metering
DOCKMETER_PIDFILE=$RUN_DIR/$DOCKMETER_NAME.pid
. /lib/lsb/init-functions
###########
update_container_conf () {
LXC_VERSION=$(lxc-start --version | awk -F "." '{print $1}')
#echo $LXC_VERSION
if [ "$LXC_VERSION"x != "2"x ] && [ "$LXC_VERSION"x != "3"x ]; then
LXC_VERSION=2
fi
#echo $LXC_VERSION
cp $DOCKLET_CONF/container/lxc$LXC_VERSION.container.conf $DOCKLET_CONF/container.conf
cp $DOCKLET_CONF/container/lxc$LXC_VERSION.container.batch.conf $DOCKLET_CONF/container.batch.conf
#echo "cp $DOCKLET_CONF/container/lxc$LXC_VERSION.container.conf $DOCKLET_CONF/container.conf"
}
pre_start () {
[ ! -d $FS_PREFIX/global ] && mkdir -p $FS_PREFIX/global
[ ! -d $FS_PREFIX/local ] && mkdir -p $FS_PREFIX/local
[ ! -d $FS_PREFIX/global/users ] && mkdir -p $FS_PREFIX/global/users
[ ! -d $FS_PREFIX/local/volume ] && mkdir -p $FS_PREFIX/local/volume
[ ! -d $FS_PREFIX/local/temp ] && mkdir -p $FS_PREFIX/local/temp
[ ! -d $FS_PREFIX/local/run ] && mkdir -p $FS_PREFIX/local/run
[ ! -d $FS_PREFIX/local/log ] && mkdir -p $FS_PREFIX/local/log
tempdir=/opt/docklet/local/temp
RUNNING_CONFIG=$FS_PREFIX/local/docklet-running.conf
grep -P "^[\s]*[a-zA-Z]" $DOCKLET_CONF/docklet.conf > $RUNNING_CONFIG
echo "DOCKLET_HOME=$DOCKLET_HOME" >> $RUNNING_CONFIG
echo "DOCKLET_BIN=$DOCKLET_BIN" >> $RUNNING_CONFIG
echo "DOCKLET_CONF=$DOCKLET_CONF" >> $RUNNING_CONFIG
echo "LXC_SCRIPT=$LXC_SCRIPT" >> $RUNNING_CONFIG
echo "DOCKLET_SRC=$DOCKLET_SRC" >> $RUNNING_CONFIG
echo "DOCKLET_LIB=$DOCKLET_LIB" >> $RUNNING_CONFIG
export CONFIG=$RUNNING_CONFIG
# iptables for NAT network for containers to access web
iptables -t nat -F
iptables -t nat -A POSTROUTING -s $CLUSTER_NET -j MASQUERADE
iptables -t nat -A POSTROUTING -s $BATCH_NET -j MASQUERADE
if [ ! -d $FS_PREFIX/local/basefs ]; then
log_daemon_msg "basefs does not exist, run prepare.sh first" && exit 1
fi
if [ ! -d $FS_PREFIX/local/packagefs ]; then
mkdir -p $FS_PREFIX/local/packagefs
fi
update_container_conf
}
do_start() {
pre_start
DAEMON_OPTS=$1
log_daemon_msg "Starting $DAEMON_NAME in $FS_PREFIX"
#python3 $DAEMON
start-stop-daemon --start --oknodo --background --pidfile $PIDFILE --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON -- $DAEMON_OPTS
log_end_msg $?
}
do_start_batch () {
if [ "$BATCH_ON" = "False" ]
then
return 1
fi
log_daemon_msg "Starting $DAEMON_NAME_BATCH in $FS_PREFIX"
DAEMON_OPTS_BATCH=""
start-stop-daemon --start --background --pidfile $PIDFILE_BATCH --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON_BATCH -- $DAEMON_OPTS_BATCH
log_end_msg $?
}
do_start_proxy () {
if [ "$DISTRIBUTED_GATEWAY" = "False" ]
then
return 1
fi
log_daemon_msg "Starting $DAEMON_NAME_PROXY daemon in $FS_PREFIX"
DAEMON_OPTS_PROXY="--port $PROXY_PORT --api-port $PROXY_API_PORT --default-target=http://localhost:8888"
start-stop-daemon --start --background --pidfile $PIDFILE_PROXY --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON_PROXY -- $DAEMON_OPTS_PROXY
log_end_msg $?
}
do_stop () {
log_daemon_msg "Stopping $DAEMON_NAME daemon"
start-stop-daemon --stop --quiet --oknodo --remove-pidfile --pidfile $PIDFILE --retry 10
log_end_msg $?
}
do_stop_batch () {
if [ "$BATCH_ON" = "False" ]
then
return 1
fi
log_daemon_msg "Stopping $DAEMON_NAME_BATCH daemon"
start-stop-daemon --stop --quiet --oknodo --remove-pidfile --pidfile $PIDFILE_BATCH --retry 10
log_end_msg $?
}
do_stop_proxy () {
if [ "$DISTRIBUTED_GATEWAY" = "False" ]
then
return 1
fi
log_daemon_msg "Stopping $DAEMON_NAME_PROXY daemon"
start-stop-daemon --stop --quiet --oknodo --remove-pidfile --pidfile $PIDFILE_PROXY --retry 10
log_end_msg $?
}
do_start_meter() {
log_daemon_msg "Starting $DOCKMETER_NAME in $FS_PREFIX"
start-stop-daemon --start --background --pidfile $DOCKMETER_PIDFILE --make-pidfile --exec $DOCKLET_HOME/meter/main.py
log_end_msg $?
}
do_stop_meter() {
log_daemon_msg "Stopping $DOCKMETER_NAME daemon"
start-stop-daemon --stop --pidfile $DOCKMETER_PIDFILE --remove-pidfile
log_end_msg $?
}
case "$1" in
start)
do_start "normal-worker"
do_start_batch
do_start_proxy
;;
stop)
do_stop
do_stop_batch
do_stop_proxy
;;
start-meter)
do_start_meter
;;
stop-meter)
do_stop_meter
;;
start_batch)
do_start "batch-worker"
do_start_batch
;;
stop_batch)
do_stop
do_stop_batch
;;
start_proxy)
do_start_proxy
;;
stop_proxy)
do_stop_proxy
;;
console)
pre_start
cprofilev $DAEMON $DAEMON_OPTS
;;
restart)
do_stop
do_stop_batch
do_stop_proxy
do_start "normal-worker"
do_start_batch
do_start_proxy
;;
status)
status=0
status_of_proc -p $PIDFILE "$DAEMON" "$DAEMON_NAME" || status=$?
status_of_proc -p $PIDFILE_BATCH "$DAEMON_BATCH" "$DAEMON_NAME_BATCH" || status=$?
status_of_proc -p $PIDFILE_PROXY "$DAEMON_PROXY" "$DAEMON_NAME_PROXY" || status=$?
exit $status
;;
*)
echo "Usage: $DAEMON_NAME {start|stop|restart|status|start_batch|stop_batch|start_proxy|stop_proxy|start-meter|stop-meter|console}"
exit 1
;;
esac
exit 0
================================================
FILE: cloudsdk-installer.sh
================================================
#!/bin/bash
if [[ "`whoami`" != "root" ]]; then
echo "FAILED: root privilege required !" > /dev/stderr
exit 1
fi
pip3 install aliyun-python-sdk-core-v3
pip3 install aliyun-python-sdk-ecs
exit 0
================================================
FILE: conf/container/lxc2.container.batch.conf
================================================
# This is the common container.conf for all containers.
# If you want to set custom settings, you have two choices:
# 1. Directly modify this file, which is not recommended, because the
# settings will be overridden when a new version of container.conf is released.
# 2. Use a custom config file in this conf directory: lxc.custom.conf,
# it uses the same grammar as container.conf, and will be merged
# with the default container.conf by docklet at runtime.
#
# The following is an example mounting user html directory
# lxc.mount.entry = /public/home/%USERNAME%/public_html %ROOTFS%/root/public_html none bind,rw,create=dir 0 0
#
#### include /usr/share/lxc/config/ubuntu.common.conf
lxc.include = /usr/share/lxc/config/ubuntu.common.conf
############## DOCKLET CONFIG ##############
# Setup 0 tty devices
lxc.tty = 0
lxc.rootfs = %ROOTFS%
lxc.utsname = %HOSTNAME%
lxc.network.type = veth
lxc.network.name = eth0
# veth.pair is limited to 16 bytes
lxc.network.veth.pair = %VETHPAIR%
lxc.network.script.up = %LXCSCRIPT%/lxc-ifup
lxc.network.script.down = %LXCSCRIPT%/lxc-ifdown
lxc.network.ipv4 = %IP%
lxc.network.ipv4.gateway = %GATEWAY%
lxc.network.flags = up
lxc.network.mtu = 1420
lxc.cgroup.pids.max = 2048
lxc.cgroup.memory.limit_in_bytes = %CONTAINER_MEMORY%M
#lxc.cgroup.memory.kmem.limit_in_bytes = 512M
#lxc.cgroup.memory.soft_limit_in_bytes = 4294967296
#lxc.cgroup.memory.memsw.limit_in_bytes = 8589934592
# lxc.cgroup.cpu.cfs_period_us : period time of cpu, default 100000, means 100ms
# lxc.cgroup.cpu.cfs_quota_us : quota time of this process
lxc.cgroup.cpu.cfs_quota_us = %CONTAINER_CPU%
lxc.cap.drop = sys_admin net_admin mac_admin mac_override sys_time sys_module
lxc.mount.entry = %FS_PREFIX%/global/users/%USERNAME%/data %ROOTFS%/root/nfs none bind,rw,create=dir 0 0
lxc.mount.entry = %FS_PREFIX%/global/users/%USERNAME%/hosts/batch-%TASKID%.hosts %ROOTFS%/etc/hosts none bind,ro,create=file 0 0
lxc.mount.entry = %FS_PREFIX%/global/users/%USERNAME%/ssh %ROOTFS%/root/.ssh none bind,ro,create=dir 0 0
lxc.mount.entry = %FS_PREFIX%/local/temp/%LXCNAME%/ %ROOTFS%/tmp none bind,rw,create=dir 0 0
# setting hostname
lxc.hook.pre-start = %LXCSCRIPT%/lxc-prestart
# setting nfs softlink
#lxc.hook.mount = %LXCSCRIPT%/lxc-mount
================================================
FILE: conf/container/lxc2.container.conf
================================================
# This is the common container.conf for all containers.
# If you want to set custom settings, you have two choices:
# 1. Directly modify this file, which is not recommended, because the
# settings will be overridden when a new version of container.conf is released.
# 2. Use a custom config file in this conf directory: lxc.custom.conf,
# it uses the same grammar as container.conf, and will be merged
# with the default container.conf by docklet at runtime.
#
# The following is an example mounting user html directory
# lxc.mount.entry = /public/home/%USERNAME%/public_html %ROOTFS%/root/public_html none bind,rw,create=dir 0 0
#
#### include /usr/share/lxc/config/ubuntu.common.conf
lxc.include = /usr/share/lxc/config/ubuntu.common.conf
############## DOCKLET CONFIG ##############
# Setup 0 tty devices
lxc.tty = 0
lxc.rootfs = %ROOTFS%
lxc.utsname = %HOSTNAME%
lxc.network.type = veth
lxc.network.name = eth0
# veth.pair is limited to 16 bytes
lxc.network.veth.pair = %VETHPAIR%
lxc.network.script.up = %LXCSCRIPT%/lxc-ifup
lxc.network.script.down = %LXCSCRIPT%/lxc-ifdown
lxc.network.ipv4 = %IP%
lxc.network.ipv4.gateway = %GATEWAY%
lxc.network.flags = up
lxc.network.mtu = 1420
lxc.cgroup.pids.max = 2048
lxc.cgroup.memory.limit_in_bytes = %CONTAINER_MEMORY%M
#lxc.cgroup.memory.kmem.limit_in_bytes = 512M
#lxc.cgroup.memory.soft_limit_in_bytes = 4294967296
#lxc.cgroup.memory.memsw.limit_in_bytes = 8589934592
# lxc.cgroup.cpu.cfs_period_us : period time of cpu, default 100000, means 100ms
# lxc.cgroup.cpu.cfs_quota_us : quota time of this process
lxc.cgroup.cpu.cfs_quota_us = %CONTAINER_CPU%
lxc.cap.drop = sys_admin net_admin mac_admin mac_override sys_time sys_module
lxc.mount.entry = %FS_PREFIX%/global/users/%USERNAME%/data %ROOTFS%/root/nfs none bind,rw,create=dir 0 0
lxc.mount.entry = %FS_PREFIX%/global/users/%USERNAME%/hosts/%CLUSTERID%.hosts %ROOTFS%/etc/hosts none bind,ro,create=file 0 0
lxc.mount.entry = %FS_PREFIX%/global/users/%USERNAME%/ssh %ROOTFS%/root/.ssh none bind,ro,create=dir 0 0
lxc.mount.entry = %FS_PREFIX%/local/temp/%LXCNAME%/ %ROOTFS%/tmp none bind,rw,create=dir 0 0
# setting hostname
lxc.hook.pre-start = %LXCSCRIPT%/lxc-prestart
# setting nfs softlink
#lxc.hook.mount = %LXCSCRIPT%/lxc-mount
================================================
FILE: conf/container/lxc3.container.batch.conf
================================================
# This is the common container.conf for all containers.
# If you want to set custom settings, you have two choices:
# 1. Directly modify this file, which is not recommended, because the
# settings will be overridden when a new version of container.conf is released.
# 2. Use a custom config file in this conf directory: lxc.custom.conf,
# it uses the same grammar as container.conf, and will be merged
# with the default container.conf by docklet at runtime.
#
# The following is an example mounting user html directory
# lxc.mount.entry = /public/home/%USERNAME%/public_html %ROOTFS%/root/public_html none bind,rw,create=dir 0 0
#
#### include /usr/share/lxc/config/ubuntu.common.conf
lxc.include = /usr/share/lxc/config/ubuntu.common.conf
############## DOCKLET CONFIG ##############
# Setup 0 tty devices
lxc.tty.max = 0
lxc.rootfs.path = %ROOTFS%
lxc.uts.name = %HOSTNAME%
lxc.net.0.type = veth
lxc.net.0.name = eth0
# veth.pair is limited to 16 bytes
lxc.net.0.veth.pair = %VETHPAIR%
lxc.net.0.script.up = %LXCSCRIPT%/lxc-ifup
lxc.net.0.script.down = %LXCSCRIPT%/lxc-ifdown
lxc.net.0.ipv4.address = %IP%
lxc.net.0.ipv4.gateway = %GATEWAY%
lxc.net.0.flags = up
lxc.net.0.mtu = 1420
lxc.cgroup.pids.max = 2048
lxc.cgroup.memory.limit_in_bytes = %CONTAINER_MEMORY%M
#lxc.cgroup.memory.kmem.limit_in_bytes = 512M
#lxc.cgroup.memory.soft_limit_in_bytes = 4294967296
#lxc.cgroup.memory.memsw.limit_in_bytes = 8589934592
# lxc.cgroup.cpu.cfs_period_us : period time of cpu, default 100000, means 100ms
# lxc.cgroup.cpu.cfs_quota_us : quota time of this process
lxc.cgroup.cpu.cfs_quota_us = %CONTAINER_CPU%
lxc.cap.drop = sys_admin net_admin mac_admin mac_override sys_time sys_module
lxc.mount.entry = %FS_PREFIX%/global/users/%USERNAME%/data %ROOTFS%/root/nfs none bind,rw,create=dir 0 0
lxc.mount.entry = %FS_PREFIX%/global/users/%USERNAME%/hosts/batch-%TASKID%.hosts %ROOTFS%/etc/hosts none bind,ro,create=file 0 0
lxc.mount.entry = %FS_PREFIX%/global/users/%USERNAME%/ssh %ROOTFS%/root/.ssh none bind,ro,create=dir 0 0
lxc.mount.entry = %FS_PREFIX%/local/temp/%LXCNAME%/ %ROOTFS%/tmp none bind,rw,create=dir 0 0
# setting hostname
lxc.hook.pre-start = %LXCSCRIPT%/lxc-prestart
# setting nfs softlink
#lxc.hook.mount = %LXCSCRIPT%/lxc-mount
================================================
FILE: conf/container/lxc3.container.conf
================================================
# This is the common container.conf for all containers.
# If you want to set custom settings, you have two choices:
# 1. Directly modify this file, which is not recommended, because the
# settings will be overridden when a new version of container.conf is released.
# 2. Use a custom config file in this conf directory: lxc.custom.conf,
# it uses the same grammar as container.conf, and will be merged
# with the default container.conf by docklet at runtime.
#
# The following is an example mounting user html directory
# lxc.mount.entry = /public/home/%USERNAME%/public_html %ROOTFS%/root/public_html none bind,rw,create=dir 0 0
#
#### include /usr/share/lxc/config/ubuntu.common.conf
lxc.include = /usr/share/lxc/config/ubuntu.common.conf
############## DOCKLET CONFIG ##############
# Setup 0 tty devices
lxc.tty.max = 0
lxc.rootfs.path = %ROOTFS%
lxc.uts.name = %HOSTNAME%
lxc.net.0.type = veth
lxc.net.0.name = eth0
# veth.pair is limited to 16 bytes
lxc.net.0.veth.pair = %VETHPAIR%
lxc.net.0.script.up = %LXCSCRIPT%/lxc-ifup
lxc.net.0.script.down = %LXCSCRIPT%/lxc-ifdown
lxc.net.0.ipv4.address = %IP%
lxc.net.0.ipv4.gateway = %GATEWAY%
lxc.net.0.flags = up
lxc.net.0.mtu = 1420
lxc.cgroup.pids.max = 2048
lxc.cgroup.memory.limit_in_bytes = %CONTAINER_MEMORY%M
#lxc.cgroup.memory.kmem.limit_in_bytes = 512M
#lxc.cgroup.memory.soft_limit_in_bytes = 4294967296
#lxc.cgroup.memory.memsw.limit_in_bytes = 8589934592
# lxc.cgroup.cpu.cfs_period_us : period time of cpu, default 100000, means 100ms
# lxc.cgroup.cpu.cfs_quota_us : quota time of this process
lxc.cgroup.cpu.cfs_quota_us = %CONTAINER_CPU%
lxc.cap.drop = sys_admin net_admin mac_admin mac_override sys_time sys_module
lxc.mount.entry = %FS_PREFIX%/global/users/%USERNAME%/data %ROOTFS%/root/nfs none bind,rw,create=dir 0 0
lxc.mount.entry = %FS_PREFIX%/global/users/%USERNAME%/hosts/%CLUSTERID%.hosts %ROOTFS%/etc/hosts none bind,ro,create=file 0 0
lxc.mount.entry = %FS_PREFIX%/global/users/%USERNAME%/ssh %ROOTFS%/root/.ssh none bind,ro,create=dir 0 0
lxc.mount.entry = %FS_PREFIX%/local/temp/%LXCNAME%/ %ROOTFS%/tmp none bind,rw,create=dir 0 0
# setting hostname
lxc.hook.pre-start = %LXCSCRIPT%/lxc-prestart
# setting nfs softlink
#lxc.hook.mount = %LXCSCRIPT%/lxc-mount
================================================
FILE: conf/docklet.conf.template
================================================
# ==================================================
#
# [Local config example]
#
# ==================================================
# CLUSTER_NAME: name of host cluster, every host cluster should have
# a unique name, default is docklet-vc
# CLUSTER_NAME=docklet-vc
# FS_PREFIX: path to store global and local data for docklet
# default is /opt/docklet.
#
# Note: $FS_PREFIX/global is for storing persistent data, e.g.,
# custom container images, user data, etc. For a multi-host
# environment, it is the mountpoint of the distributed filesystem
# that all physical hosts (master and slave) share.
# E.g., for a system with three hosts: computing hosts A and B,
# storage host C. Host C exports its storage filesystem through nfs
# as C:/data, then hosts A and B should mount C:/data to $FS_PREFIX/global.
# Please make sure that the mount is OK before launching docklet.
#
# FS_PREFIX=/opt/docklet
# STORAGE: local storage type, file or disk, default is file
# note lvm is required for either case
#
# file : a large file simulating raw disk storing container runtime
# data, located in FS_PREFIX/local, for single machine testing purpose.
#
# disk : raw disk for storing container files, for production purpose.
# If using disk, a partition must be allocated to docklet
# - a disk device name must be specified by DISK , e.g, /dev/sdc9
# - this device must be formatted as Linux-LVM, and initialized
# as a physical volume (pvcreate /dev/sdc9) in advance.
# TAKE CARE to ensure the disk is OK before launching docklet.
#
# STORAGE=file
#
# DISK: disk device name if STORAGE is disk
# DISK=/dev/sdc9
# CLUSTER_SIZE: virtual cluster size, default is 1
# CLUSTER_SIZE=1
# CLUSTER_NET: cluster network ip address range, default is 172.16.0.1/16
# CLUSTER_NET=172.16.0.1/16
# Deprecated since v0.2.7. read from quota group set in web admin page
# CONTAINER_CPU: CPU quota of container, default is 100000
# A single CPU core has total=100000 (100ms), so the default 100000
# means a single container can occupy a whole core.
# For a CPU with two cores, this can be set to 200000
# CONTAINER_CPU=100000
# Deprecated since v0.2.7. read from quota group set in web admin page
# CONTAINER_DISK: disk quota of container image upper layer, count in MB,
# default is 1000
# CONTAINER_DISK=1000
# Deprecated since v0.2.7. read from quota group set in web admin page
# CONTAINER_MEMORY: memory quota of container, count in MB, default is 1000
# CONTAINER_MEMORY=1000
# DISKPOOL_SIZE: lvm group size, count in MB, default is 10000
# Only valid with STORAGE=file
# DISKPOOL_SIZE=10000
# ETCD: etcd address, default is localhost:2379
# For a multi-host environment, the administrator should configure how
# the etcd cluster works together
# ETCD=localhost:2379
# NETWORK_DEVICE: specify the network interface docklet uses,
# Default is eth0
# NETWORK_DEVICE=eth0
# PORTAL_URL: the public docklet portal url. for a production system,
# it should be a valid URL, like http://docklet.info
# default is MASTER_IP:NGINX_PORT
# PORTAL_URL=http://localhost:8080
# MASTER_IP: master listen ip, default listens on all interfaces
# MASTER_IP=0.0.0.0
# MASTER_PORT: master listen port, default is 9000
# MASTER_PORT=9000
# WORKER_PORT: worker listen port, default is 9001
# WORKER_PORT=9001
# NGINX_PORT: the access port of the public portal, default is 8080
# This is the listening port of nginx server. The nginx server forwards
# requests according to the requests' urls. If the urls are to workspaces,
# it will forward requests to the configurable-http-proxy, otherwise,
# to the docklet web. Usually 80 is recommended for a production environment
# NGINX_PORT=8080
# PROXY_PORT: the listening port of configurable-http-proxy, default is 8000
# it proxies connections from the external public network to internal private
# container networks.
# PROXY_PORT=8000
# PROXY_API_PORT: configurable-http-proxy api port, default is 8001
# Admins can query the proxy table by calling:
# curl http://localhost:8001/api/routes
# PROXY_API_PORT=8001
# WEB_PORT: docklet web listening port, default is 8888
# Note: docklet web server is located behind the docklet proxy.
# Users access docklet first through proxy, then docklet web server.
# Therefore, it is not for user direct access. In most cases,
# admins need not to change the default value.
# WEB_PORT=8888
# LOG_LEVEL: logging level, of DEBUG, INFO, WARNING, ERROR, CRITICAL
# default is DEBUG
# LOG_LEVEL=DEBUG
# LOG_LIFE: how many days the logs will be kept, default is 10
# LOG_LIFE=10
# WEB_LOG_LEVEL: logging level, of DEBUG, INFO, WARNING, ERROR, CRITICAL
# default is DEBUG
# WEB_LOG_LEVEL=DEBUG
# EXTERNAL_LOGIN: whether docklet will use external account to log in
# True or False, default is False
# default: authenticate local and PAM users
# EXTERNAL_LOGIN=False
# DATA_QUOTA : whether enable the quota of data volume or not
# True or False, default: False
# DATA_QUOTA=False
# DATA_QUOTA_CMD : the cmd to set the quota of a given directory. It accepts two arguments:
# arg1: the directory name, relative path from the data volume root, e.g, "/users/bob/data"
# arg2: the quota value in GB of string, e.g., "100"
# default: "gluster volume quota docklet-volume limit-usage %s %s"
# DATA_QUOTA_CMD="gluster volume quota docklet-volume limit-usage %s %s"
# DISTRIBUTED_GATEWAY : whether the users' gateways are distributed or not
# Must be set by same value on master and workers.
# True or False, default: False
# DISTRIBUTED_GATEWAY=False
# PUBLIC_IP : public ip of this machine. If DISTRIBUTED_GATEWAY is True,
# users' gateways can be setup on this machine. Users can visit this machine
# by the public ip. default: IP of NETWORK_DEVICE.
# PUBLIC_IP=0.0.0.0
# NGINX_CONF: the config path of nginx, default: /etc/nginx
# NGINX_CONF=/etc/nginx
# MASTER_IPS: all master ips in a center, separated by ','.
# e.g:192.168.192.191@master1,192.168.192.192@master2
# you can also add description to each master.
# e.g:master1_desc="this is master1"
# default:0.0.0.0@docklet
# MASTER_IPS=0.0.0.0@docklet
# USER_IP: user listen ip
# default:0.0.0.0
# USER_IP=0.0.0.0
# USER_PORT: user listen port
# default:9100
# USER_PORT=9100
# AUTH_KEY: the key to request users server from master,
# or to request master from users server. Please set the
# same value on each machine. Please don't use the default value.
# AUTH_KEY=docklet
# ALLOCATED_PORTS: the ports on this host that will be allocated to users.
# The allocated ports are for ports mapping. Default: 10000-65535
# The two ports around '-' are included. If there are several ranges,
# please separate them by ',' , for example: 10000-20000,30000-40000
# ALLOCATED_PORTS=10000-65535
# ALLOW_SCALE_OUT: allow docklet to rent server on the cloud to scale out
# Only when you deploy docklet on the cloud can you set it to True
# ALLOW_SCALE_OUT=False
# WARNING_DAYS: a user will receive a warning email about releasing
# when his/her vcluster has been stopped for more than this many days.
# Default: 7
# WARNING_DAYS=7
# RELEASE_DAYS: the vcluster will be released when it has been
# stopped for more than this many days. Needs to be larger than WARNING_DAYS.
# Default: 14
# RELEASE_DAYS=14
# ==================================================
#
# Batch Config
#
# ==================================================
# BATCH_ON: whether to start the batch job processing system when starting
# docklet. Default: True
# BATCH_ON=True
# BATCH_MASTER_PORT: the rpc server port on master.
# default: 50050
# BATCH_MASTER_PORT=50050
# BATCH_WORKER_PORT: the rpc server port on worker.
# default: 50051
# BATCH_WORKER_PORT=50051
# BATCH_NET: ip addresses range of containers for batch job, default is 10.16.0.0/16
# BATCH_NET=10.16.0.0/16
# BATCH_TASK_CIDR: 2^(BATCH_TASK_CIDR)-2 is the number of ip addresses for a task, default is 4
# BATCH_TASK_CIDR=4
# BATCH_MAX_THREAD_WORKER: the maximum number of threads of the rpc server on
# the batch job worker. default: 5
# BATCH_MAX_THREAD_WORKER=5
# BATCH_GPU_BILLING: beans cost per hour by different GPUs
# The GPU's name can be found by 'nvidia-smi -L' and all spaces must be replaced by '-'
# default: 100
# BATCH_GPU_BILLING=default:100,GeForce-GTX-1080-Ti:100,GeForce-GTX-2080-Ti:150,Tesla-V100-PCIE-16GB:200
================================================
FILE: conf/lxc-script/lxc-ifdown
================================================
#!/bin/sh
# $1 : name of container ( name in lxc-start with -n)
# $2 : net
# $3 : network flags, up or down
# $4 : network type, for example, veth
# $5 : value of lxc.network.veth.pair
. $LXC_ROOTFS_PATH/../env.conf
ovs-vsctl --if-exists del-port $Bridge $5
cnt=$(ovs-vsctl list-ports ${Bridge} | wc -l)
if [ "$cnt" = "1" ]; then
greport=$(ovs-vsctl list-ports ${Bridge} | grep "gre" | wc -l)
if [ "$greport" = "1" ]; then
ovs-vsctl del-br $Bridge
fi
fi
================================================
FILE: conf/lxc-script/lxc-ifup
================================================
#!/bin/sh
# $1 : name of container ( name in lxc-start with -n)
# $2 : net
# $3 : network flags, up or down
# $4 : network type, for example, veth
# $5 : value of lxc.network.veth.pair
. $LXC_ROOTFS_PATH/../env.conf
ovs-vsctl --may-exist add-br $Bridge
ovs-vsctl --may-exist add-port $Bridge $5
================================================
FILE: conf/lxc-script/lxc-mount
================================================
#!/bin/sh
# $1 Container name.
# $2 Section (always 'lxc').
# $3 The hook type (i.e. 'clone' or 'pre-mount').
#cd $LXC_ROOTFS_PATH/root ; rm -rf nfs && ln -s ../nfs nfs
================================================
FILE: conf/lxc-script/lxc-prestart
================================================
#!/bin/sh
# $1 Container id
# $2 Container name.
# $3 Section (always 'lxc').
# $4 The hook type (i.e. 'clone' or 'pre-mount').
# following environment variables are set by lxc :
# $LXC_NAME: is the container's name.
# $LXC_ROOTFS_MOUNT: the path to the mounted root filesystem.
# $LXC_CONFIG_FILE: the path to the container configuration file.
# $LXC_SRC_NAME: in the case of the clone hook, this is the original container's name.
# $LXC_ROOTFS_PATH: this is the lxc.rootfs entry for the container.
# Note this is likely not where the mounted rootfs is to be found, use LXC_ROOTFS_MOUNT for that.
. $LXC_ROOTFS_PATH/../env.conf
echo $HNAME > $LXC_ROOTFS_PATH/etc/hostname
================================================
FILE: conf/nginx_docklet.conf
================================================
server
{
listen %NGINX_PORT;
#ssl on;
#ssl_certificate /etc/nginx/ssl/server.crt;
#ssl_certificate_key /etc/nginx/ssl/server.key;
#ssl_protocols TLSv1.2 TLSv1.3;
#ssl_prefer_server_ciphers on;
#ssl_ciphers TLS13-AES-128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
server_name nginx_docklet.conf;
charset UTF-8;
add_header X-Frame-Options SAMEORIGIN;
merge_slashes off;
rewrite (.*)//+(.*) $1/$2 permanent;
index index.html index.htm;
client_max_body_size 20m;
if ($request_method ~* OPTIONS){
return 403;
}
location ~ ^/NginxStatus/ {
stub_status on;
access_log off;
}
location ~ ^/(\d+\.\d+\.\d+\.\d+)/ {
proxy_pass http://$1:%PROXY_PORT;
proxy_set_header Host $host;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
location / {
client_max_body_size 20m;
client_body_buffer_size 256k;
proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
proxy_buffer_size 256k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
proxy_temp_file_write_size 256k;
proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
proxy_max_temp_file_size 128m;
proxy_ignore_client_abort on;
proxy_pass http://%MASTER_IP:%WEB_PORT;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
}
================================================
FILE: doc/devdoc/coding.md
================================================
# NOTE
## Some thoughts and notes from coding
* path : a script needs to know its own path to call/import other scripts -- use environment variables
* FS_PREFIX : docklet filesystem path to put data
* overlay : " modprobe overlay " to add overlay module
* after reboot :
* bridges lost -- it's ok, recreate it
* loop device lost -- losetup /dev/loop0 BLOCK_FILE again, and lvm will get group and volume back automatically
* lvm can do snapshots, so image management could use them -- No! an lvm snapshot consumes capacity from the LVM group.
* cgroup memory control may not work. run the command below:
echo 'GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"' >> /etc/default/grub && update-grub && reboot
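A minimal sketch of checking this at startup (the helper name and parsing are ours, based on the documented /proc/cgroups columns):

```python
# Check whether the memory cgroup controller is enabled before relying
# on lxc.cgroup.memory.* limits. /proc/cgroups columns are:
# subsys_name  hierarchy  num_cgroups  enabled
def memory_cgroup_enabled(cgroups_text):
    for line in cgroups_text.splitlines():
        if line.startswith("#"):
            continue  # skip the header line
        fields = line.split()
        if len(fields) == 4 and fields[0] == "memory":
            return fields[3] == "1"
    return False  # controller not compiled in at all

# usage: memory_cgroup_enabled(open("/proc/cgroups").read())
```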
* debian doesn't support the cpu.cfs_quota_us cgroup option by default. the linux kernel needs to be recompiled with the CONFIG_CFS_BANDWIDTH option
* ip can add bridges/links/GRE; maybe we should test whether ip can replace ovs-vsctl and brctl. ( see "man ip-link" )
* lxc.mount.entry :
* do not use relative paths. use absolute paths, like :
lxc.mount.entry = /root/from-dir /root/rootfs/to-dir none bind 0 0 # lxc.rootfs = /root/rootfs
if a relative path is used, the container path will be mounted under /usr/lib/x86_64..../ , a nonexistent path
* the path should exist on both the host and the container. if it does not exist in the container, it will be mounted on /usr/lib/x86_64....
* if the path does not exist in the container, you can use the option create=dir/file, like :
lxc.mount.entry = /root/from-dir /root/rootfs/to-dir none bind,create=dir 0 0 # lxc.rootfs = /root/rootfs
* lxc.mount.entry : bind and rbind ( see "man mount" )
* bind means mounting a part of the filesystem somewhere else in the same filesystem
* but bind only attaches a single filesystem. that means submounts of the mount's source directory may disappear in the target directory.
* if you want submounts to work, use the rbind option.
rbind mounts the entire file hierarchy, including submounts, in another place.
* NOW, we use bind in container.sh. maybe it needs rbind if FS_PREFIX/global/users/$USERNAME/nfs is under a glusterfs mountpoint
* the rpc server may not be secure. anyone who knows the ip address can call rpc methods.
* maybe we can use the "transport" option of xmlrpc.client.ServerProxy(uri, transport=...) with a uri like "http://user:pass@host:port/path", and the requestHandler option of xmlrpc.server.SimpleXMLRPCServer(addr, requestHandler=..) to parse and authenticate rpc requests
xmlrpc.client.ServerProxy also supports https requests, which is another way to secure it
* If we use rpc with authentication, maybe we can use an http server and http requests to replace rpc
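A rough sketch of the requestHandler idea above (the credentials and the ping method are made up for illustration; this is not docklet's actual rpc code):

```python
import base64
from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler

# "docklet"/"secret" are made-up credentials for this sketch
VALID = "Basic " + base64.b64encode(b"docklet:secret").decode()

class AuthRequestHandler(SimpleXMLRPCRequestHandler):
    """Reject any request that lacks the expected basic-auth header."""
    def parse_request(self):
        if not super().parse_request():
            return False
        if self.headers.get("Authorization") == VALID:
            return True
        self.send_error(401, "Authentication required")
        return False

# port 0 lets the OS pick a free port; a real daemon would use a fixed one
server = SimpleXMLRPCServer(("127.0.0.1", 0),
                            requestHandler=AuthRequestHandler,
                            logRequests=False)
server.register_function(lambda: "pong", "ping")
```

A client then just embeds the credentials in the uri, e.g. `xmlrpc.client.ServerProxy("http://docklet:secret@127.0.0.1:9000/")`, and they are sent as a basic-auth header automatically.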
* frontend and backend
arch:
                  +------------------+
Web -- Flask ---- | HttpRest    Core |
                  +------------------+
Now, HttpRest and Core work as the backend,
Web and Flask work as the frontend.
All modules are in the backend;
Flask just dispatches urls and renders web pages.
(Maybe Flask could be merged into Core and work as the http server)
(Then Flask would need to render pages, parse urls, respond to requests, ...)
(That may not be a good idea)
* httprest.py :
the httphandler needs to call vclustermgr/nodemgr/... to handle requests,
so we need access to these classes inside httphandler
Way-1: init/new these classes in the httphandler init function (httphandler must init its parent class) -- wrong : the httpserver creates a new httphandler instance for every http request ( see /usr/lib/python3.4/socketserver.py )
Way-2: use global variables -- the way we use now
* in a shell, running a python script or another non-built-in command starts a new process in a new process group ( see the csapp shell lab )
so environment variables set in the shell cannot be seen in python/...
but a command like the one below works :
A=ab B=ba ./python.py
* maybe we need to parse argv in python
some modules to parse argv : sys.argv, optparse, getopt, argparse
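As a quick sketch, argparse (the usual modern choice) handles flags, types, and defaults; the option names below are made up for illustration, not docklet's real options:

```python
import argparse

# Hypothetical flags for a worker-style daemon; names are illustrative only.
parser = argparse.ArgumentParser(description='docklet-style daemon options')
parser.add_argument('--mode', choices=['new', 'recovery'], default='new',
                    help='cluster start mode')
parser.add_argument('--port', type=int, default=9000, help='listen port')
parser.add_argument('--verbose', action='store_true', help='verbose logging')

# parse an explicit argv list here; in a real daemon, parse_args() reads sys.argv
args = parser.parse_args(['--mode', 'recovery', '--port', '1728'])
print(args.mode, args.port, args.verbose)  # recovery 1728 False
```

argparse also generates --help output and rejects invalid choices automatically, which sys.argv/getopt do not.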
* in a shell, { command; } runs the command in the current shell; the ";" is necessary
( command; ) runs the command in a subshell
* a function registered in the rpc server must have a return value.
without a return, the rpc client will raise an exception
* ** NEEDS TO BE FIXED **
we add a prefix in etcdlib,
so when we getkey, the returned key may be an absolute path from the base url.
when we setkey with a key obtained that way, etcdlib appends the absolute path to the prefix again, which is wrong
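One possible fix, sketched minimally: strip the prefix from a key returned by getkey before passing it back to setkey. The helper name and prefix value are assumptions, not etcdlib's real API:

```python
def normalize_key(prefix, key):
    """Turn an absolute etcd key back into a key relative to the prefix,
    so that prepending the prefix again does not duplicate it."""
    prefix = '/' + prefix.strip('/')
    if key.startswith(prefix + '/'):
        key = key[len(prefix):]
    return key.lstrip('/')

print(normalize_key('/cluster-a', '/cluster-a/machines/runnodes'))  # machines/runnodes
print(normalize_key('/cluster-a', 'machines/runnodes'))             # machines/runnodes
```

Applying this inside setkey would make it idempotent with respect to keys that getkey returned.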
* overlay : upperdir and workdir must be in the same mounted filesystem.
that means we should mount the LV first, then mkdir upperdir and workdir under the LV mountpoint
* when using 'worker.py > log' to redirect the output of a python script, the log may end up empty,
because the python interpreter buffers its output.
we can use the ways below to fix this problem:
stdbuf -o 0 worker.py > log # but this failed in my try; don't know why
python3 -u worker.py > log # recommended, the -u option of python3
print('output', flush=True) # the flush option of print
sys.stdout.flush() # flush by hand
* the CPU QUOTA should not be too small; if it is, containers will run very slowly
================================================
FILE: doc/devdoc/config_info.md
================================================
# Info of docklet
## container info
container name : username-clusterid-nodeid
hostname : host-nodeid
lxc config : /var/lib/lxc/username-clusterid-nodeid/config
lxc rootfs : /var/lib/lxc/username-clusterid-nodeid/rootfs
lxc rootfs
|__ / : aufs : basefs + volume/username-clusterid-nodeid
|__ /nfs : global/users/username/data
|__ /etc/hosts : global/users/username/clusters/clusterid/hosts
|__ /root/.ssh : global/users/username/ssh
## ETCD Table
we use etcd for some configuration information of our clusters; here are the details.
every cluster has a CLUSTER_NAME, and all data of that cluster is put in a directory named CLUSTER_NAME in etcd, just like a table.
so different clusters should have different CLUSTER_NAMEs.
below is the content of the cluster info in the CLUSTER_NAME 'table' in etcd:
<type> <name> <content> <description>
key token random code token for checking whether master and workers have the same global filesystem
dir machines ... info of physical clusters
dir machines/allnodes ip:ok record all nodes, for recovery and checks
dir machines/runnodes ip:? record the running nodes for this startup.
when startup:
    master                                 ETCD                worker
                                     |  IP:waiting   |   1. worker writes worker-ip:waiting
2. master updates IP:init-mode       |  IP:init-mode |   3. worker inits itself by init-mode
                                     |  IP:work      |   4. worker finishes init and updates IP:work
5. master adds worker ip,            |  IP:ok        |
   updates IP:ok
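The handshake above can be written down as a small state table; the role/state pairs follow steps 1-5 of the diagram (this is a model of the protocol, not docklet's actual code):

```python
# Each transition: (who acts, current etcd value) -> next etcd value
# for the machines/runnodes/<ip> key.
TRANSITIONS = {
    ('worker', None):        'waiting',    # 1. worker registers itself
    ('master', 'waiting'):   'init-mode',  # 2. master tells the worker how to init
    ('worker', 'init-mode'): 'work',       # 3-4. worker inits, reports ready
    ('master', 'work'):      'ok',         # 5. master records the worker
}

def step(role, state):
    return TRANSITIONS.get((role, state), state)

state = None
for role in ('worker', 'master', 'worker', 'master'):
    state = step(role, state)
print(state)  # ok
```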
key service/master master-ip
key service/mode new,recovery start mode of cluster
key vcluster/nextid ID next available ID
## filesystem
here are the paths and content descriptions of the docklet filesystem
FS_PREFIX
|__ global/users/{username}
| |__ clusters/clustername : clusterid, cluster size, status, containers, ... in json format
| |__ hosts/id.hosts : ip host-nodeid host-nodeid.clustername
| |__ data : directory in the distributed filesystem where the user puts his data
| |__ ssh : ssh keys
|
|__ local
|__ docklet-storage : loop file for lvm
|__ basefs : base image
|__ volume / { username-clusterid-nodeid } : upper layer of container
## vcluster files
### hosts file:(raw)
IP-0 host-0 host-0.clustername
IP-1 host-1 host-1.clustername
...
### info file:(json)
{
clusterid: ID ,
status: stopped/running ,
size: size ,
containers: [
{ containername: lxc_name, hostname: hostname, ip: lxc_ip, host: host_ip },
{ containername: lxc_name, hostname: hostname, ip: lxc_ip, host: host_ip },
...
]
}
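For illustration, the info file can be read with the standard json module; the sample data below is made up to match the layout shown above:

```python
import json

# Example content in the shape of the info file described above.
info_json = '''{
  "clusterid": 3,
  "status": "running",
  "size": 2,
  "containers": [
    {"containername": "alice-3-0", "hostname": "host-0", "ip": "172.16.0.2/24", "host": "10.0.0.5"},
    {"containername": "alice-3-1", "hostname": "host-1", "ip": "172.16.0.3/24", "host": "10.0.0.6"}
  ]
}'''

info = json.loads(info_json)
running = info['status'] == 'running'
names = [c['containername'] for c in info['containers']]
print(running, names)  # True ['alice-3-0', 'alice-3-1']
```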
================================================
FILE: doc/devdoc/network-arch.md
================================================
# Architecture of Network
## Architecture of containers networks
In the current version, to avoid running out of VLAN IDs, docklet employs a new architecture for container networks. Under the new architecture, each user's network is exclusive, whereas before the network was shared by all users. The new architecture also gets rid of VLANs entirely, which solves the problem of VLAN IDs running out. The architecture is shown as follows:

Some points about the architecture:
1.Each user has a unique and exclusive virtual network. Containers inside the network communicate with the outside via a gateway.
2.If a user has a container on a host, that host has an OVS bridge for the user. Each of the user's containers connects to the user's OVS bridge by a veth pair. A user's OVS bridge is named "docklet-br-<userid>".
3.Each user's network is a star topology: every host that does not hold the gateway connects by a GRE tunnel to the host where the user's gateway is. Thus there may be many GRE tunnels between two hosts (each GRE tunnel belongs to a different user); Docklet uses the user's id as the GRE key to distinguish them from each other.
4.OVS bridges and GRE tunnels are created and destroyed dynamically: the network, including the bridge and GRE tunnels, is created only when the user starts a container, and is destroyed (by calling the '/conf/lxc-script/lxc-ifdown' script) only when the user stops the container.
5.There are two modes for setting up gateways: distributed or centralized. Centralized is the default mode and sets up all gateways on the Master host, while distributed mode sets up gateways on different workers, as the picture above shows. NAT/iptables in the Linux kernel is needed when a container communicates with the outside network via the gateway.
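As a hedged sketch of points 2 and 3, the helper below only builds the ovs-vsctl command strings for a user's bridge and GRE tunnel and does not run them; the GRE port naming is an assumption, while the bridge name pattern and the user-id GRE key come from the text above:

```python
def user_net_commands(userid, remote_ip):
    """Build the ovs-vsctl commands for one user's bridge + GRE tunnel."""
    bridge = 'docklet-br-%d' % userid            # naming from point 2
    gre = 'gre-%d-%s' % (userid, remote_ip.replace('.', '_'))  # assumed port name
    return [
        'ovs-vsctl --may-exist add-br %s' % bridge,
        # options:key=<userid> keeps each user's GRE traffic separate (point 3)
        'ovs-vsctl --may-exist add-port %s %s -- set interface %s '
        'type=gre options:remote_ip=%s options:key=%d'
        % (bridge, gre, gre, remote_ip, userid),
    ]

for cmd in user_net_commands(42, '10.0.0.5'):
    print(cmd)
```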
## Processing users' requests (Workspace requests)
The pictures of processing users' requests show the whole architecture of Docklet. The process is illustrated below; first, the requests to the Workspace:

## Processing users' requests (Other requests)
Other requests.

================================================
FILE: doc/devdoc/networkmgr.md
================================================
# Network Manager
## About
The network manager is the module that provides network management for docklet.
There are two main requirements:
* a central management pool, which allocates network pools to users by network segment (IP/CIDR)
* many per-user network pools, which allocate one or several network addresses to a user's cluster
## Data Structure
To meet these two requirements, two data structures are designed to manage network addresses.
* interval pool : allocates and reclaims network segments
The elements of the interval pool are intervals; the pool consists of many intervals.
A naive interval pool looks like this : interval pool : [A1,A2],[B1,B2],[C1,C2],...[X1,X2]
Each time a segment of addresses is requested, one interval is chosen and allocated, and the remaining part of that interval is put back into the pool.
Considering that a network segment (IP/CIDR) has a power-of-two structure, the interval pool can be further organized as follows:
interval pool:
... ...
cidr=16 : [A1,A2], [A3,A4], ...
cidr=17 : [B1,B2], [B3,B4], ...
cidr=18 : [C1,C2], [C3,C4], ...
... ...
This structure can be optimized further: since the end address of each interval can be computed from the start address and the CIDR, each interval only needs to record its start address.
So:
interval pool:
... ...
cidr=16 : A1, A3, ...
cidr=17 : B1, B3, ...
cidr=18 : C1, C3, ...
... ...
Here each element, say A1, actually represents the interval [A1, A1+2^16-1].
The benefit of this power-of-two interval design is that intervals can be split and merged conveniently, making allocation and reclamation more efficient.
* enumeration pool : allocates and reclaims one or several network addresses
The elements of the enum pool are single network addresses, for example:
enum pool : A, B, C, D, ... X
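A minimal Python sketch of the power-of-two interval pool described above (essentially a buddy allocator over integer addresses; the class and method names are illustrative, not docklet's actual API):

```python
class IntervalPool:
    def __init__(self, base, cidr):
        # pool[c] holds start addresses of free blocks covering 2^(32-c) addresses
        self.pool = {c: [] for c in range(cidr, 33)}
        self.pool[cidr].append(base)

    def allocate(self, cidr):
        """Take a block of size 2^(32-cidr), splitting larger blocks as needed."""
        c = cidr
        while c in self.pool and not self.pool[c]:
            c -= 1                       # look for a bigger free block to split
        if c not in self.pool:
            raise Exception('no free segment for /%d' % cidr)
        start = self.pool[c].pop(0)
        while c < cidr:                  # split down to the requested size
            c += 1
            self.pool[c].append(start + 2 ** (32 - c))  # put the upper half back
        return start

    def free(self, start, cidr):
        """Return a block, merging it with its free buddy while possible."""
        while cidr > min(self.pool):
            buddy = start ^ 2 ** (32 - cidr)
            if buddy not in self.pool[cidr]:
                break
            self.pool[cidr].remove(buddy)
            start = min(start, buddy)
            cidr -= 1
        self.pool[cidr].append(start)

pool = IntervalPool(0, 16)   # one free /16 starting at address 0
a = pool.allocate(18)        # carve a /18 out of the /16
b = pool.allocate(18)
pool.free(a, 18)
pool.free(b, 18)             # the two /18s merge back into the original /16
print(a, b, pool.pool[16])   # 0 16384 [0]
```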
## API
The APIs for operating these two data structures are omitted here.
## Network Manager Storage Design
* center : the central pool, which allocates and reclaims users' network segments
info : IP/CIDR
intervalpool :
cidr16 : ...
cidr17 : ...
... ...
* system : system-reserved addresses, which allocates and reclaims network addresses for internal use
info : IP/CIDR
enumpool : ...
* vlan/<username> : provides address allocation and reclamation for a given user
info : IP/CIDR
enumpool : ...
vlanid : id
================================================
FILE: doc/devdoc/openvswitch-vlan.md
================================================
# Test of VLAN on openvswitch
## Note 1
Basic operations: create a bridge, configure its address, bring it up
ovs-vsctl add-br br0
ip address add 172.0.0.1/8 dev br0
ip link set br0 up
## Note 2
Specify the name of the veth pair in the LXC conf, which makes it easy to control the network link.
So the conf file needs to be modified to achieve this:
lxc.network.type = veth
lxc.network.name = eth0
lxc.network.script.up = Bridge=br0 /home/leebaok/Container/lxc-ifup
lxc.network.script.down = Bridge=br0 /home/leebaok/Container/lxc-ifdown
lxc.network.veth.pair = base
lxc.network.ipv4 = 172.0.0.10/8
lxc.network.ipv4.gateway = 172.0.0.1
lxc.network.flags = up
lxc.network.mtu = 1420
Some explanation of the configuration above:
* lxc.network.link is no longer needed
* lxc.network.script.up/down specify the network setup before the container starts and the teardown after it stops; the script paths are host paths, because the scripts are executed by the host. "Bridge=br0" passes a parameter to the script that follows.
* lxc.network.veth.pair is the name of the network link, i.e. the host-side port the container connects through
Having configured the script paths in the network settings, we still need to implement the two scripts:
* /home/leebaok/Container/lxc-ifup
#!/bin/bash
# $1 : name of container ( name in lxc-start with -n )
# $2 : net
# $3 : network flags, up or down
# $4 : network type, for example, veth
# $5 : value of lxc.network.veth.pair
ovs-vsctl --may-exist add-port $Bridge $5
# ovs-vsctl set port $5 tag=$Tag
* /home/leebaok/Container/lxc-ifdown
#!/bin/bash
# $1 : name of container ( name in lxc-start with -n )
# $2 : net
# $3 : network flags, up or down
# $4 : network type, for example, veth
# $5 : value of lxc.network.veth.pair
ovs-vsctl --if-exists del-port $Bridge $5
## Note 3
VLAN tag operations:
ovs-vsctl set port <port-name> tag=<tag-id>
ovs-vsctl clear port <port-name> tag
A patch port connects two bridges; the operations are as follows:
ovs-vsctl add-br br0
ovs-vsctl add-br br1
ovs-vsctl add-port br0 patch0 -- set interface patch0 type=patch options:peer=patch1
ovs-vsctl add-port br1 patch1 -- set interface patch1 type=patch options:peer=patch0
# NOW : two bridges are connected by patch
## Note 4
On one machine, only one bridge can own a given network domain. For example, create two bridges on host-0:
ovs-vsctl add-br br0
ip address add 172.0.0.1/8 dev br0
ip link set br0 up
ovs-vsctl add-br br1
ip address add 172.0.0.2/8 dev br1
ip link set br1 up
Then the bridge configured later will not work,
because the system assumes that all machines within 172.0.0.1/8 are reachable through br0.
The following configuration, however, is correct:
ovs-vsctl add-br br0
ip address add 172.0.0.1/24 dev br0
ip link set br0 up
ovs-vsctl add-br br1
ip address add 172.0.1.1/24 dev br1
ip link set br1 up
## Note 5
About gateways: a bridge/switch is a layer-2 device, while a gateway is a layer-3 component. We can connect bridges together so that several bridges share one gateway:
ovs-vsctl add-br br0
ip link set br0 up
ovs-vsctl add-br br1
ip address add 172.0.0.1/24 dev br1
ip link set br1 up
ovs-vsctl add-port br0 patch0 -- set interface patch0 type=patch options:peer=patch1
ovs-vsctl add-port br1 patch1 -- set interface patch1 type=patch options:peer=patch0
# lxc config :
# ip -- 172.0.0.11/24
# gateway -- 172.0.0.1
# lxc.network.veth.pair -- base , base is connected on br0
lxc-start -f container.conf -n base -F -- /bin/bash
# NOW : lxc network is running ok
## Note 6
Implement VLANs with multiple bridges
### Scheme 1
ovs-vsctl add-br br0
ip link set br0 up
ovs-vsctl add-br br1
ip address add 172.0.0.1/24 dev br1
ip link set br1 up
ovs-vsctl add-port br0 patch0 -- set interface patch0 type=patch options:peer=patch1
ovs-vsctl add-port br1 patch1 -- set interface patch1 type=patch options:peer=patch0
# lxc config :
# ip -- 172.0.0.11/24
# gateway -- 172.0.0.1
# lxc.network.veth.pair -- base , base is connected on br0
lxc-start -f container.conf -n base -F -- /bin/bash
# NOW : lxc network is running ok
## above is the same as before
ovs-vsctl set port base tag=5
ovs-vsctl set port patch0 tag=5
# NOW : lxc network is running ok
# ARCH
+-----------------------+ +----------------------+
| br0 | | br1 : 172.0.0.1/24 |
+--+-----tag=5---tag=5--+ +---+-------+----------+
| | | patch | |
| | +-------------------+ |
| | |
internal base:172.0.0.11/24 internal
(gateway:172.0.0.1)
# flow : base --> patch --> br1/internal
* this scheme works
* but each VLAN needs its own gateway
### Scheme 2 (not feasible)
# ARCH
+-------------------------------------------------------------+
| br0 |
+--+-----tag=5---tag=5---------+-----tag=6---tag=6---------+--+
| | | +-----+ | | | +-----+ |
| | +--| br1 |--+ | +--| br2 |--+
| | +-----+ | +-----+
internal base1:172.0.0.11/24 base2:172.0.0.12/24
# flow 1 : base1 --> br1 --> internal
# flow 2 : base1 --> br1 --> br2 --> base2
* this scheme is not feasible, because flow 2 above lets base1 and base2 communicate at layer 2, so they cannot be isolated
## Note 7
Simplified versions of the feasible scheme above
### Simplified version 1
ovs-vsctl add-br br0
ip link set br0 up
# add a fake bridge connected to br0 with vlan tag=5
ovs-vsctl add-br fakebr br0 5
ip address add 172.0.0.1/24 dev fakebr
ip link set fakebr up
# lxc config:
# ip : 172.0.0.11/24
# gateway : 172.0.0.1/24
# lxc.network.veth.pair -- base , base is connected on br0
lxc-start -f container.conf -n base -F -- /bin/bash
ovs-vsctl set port base tag=5
# ARCH
+-----------------------+
| br0 |
+--+-----tag=5---tag=5--+
| | |
| | fakebr:172.0.0.1/24
| |
internal base:172.0.0.11/24
(gateway:172.0.0.1)
# flow : base --> fakebr
### Simplified version 2
ovs-vsctl add-br br0
ip link set br0 up
# add an internal interface for vlan
ovs-vsctl add-port br0 vlanif tag=5 -- set interface vlanif type=internal
ip address add 172.0.0.1/24 dev vlanif
ip link set vlanif up
# lxc config:
# ip : 172.0.0.11/24
# gateway : 172.0.0.1/24
# lxc.network.veth.pair -- base , base is connected on br0
lxc-start -f container.conf -n base -F -- /bin/bash
ovs-vsctl set port base tag=5
# ARCH
+-----------------------+
| br0 |
+--+-----tag=5---tag=5--+
| | |
| | vlanif:172.0.0.1/24
| |
internal base:172.0.0.11/24
(gateway:172.0.0.1)
# flow : base --> vlanif
### Simplified version 1 & simplified version 2
When inspected with ovs-vsctl show, the two versions display the same information, which suggests that a fake bridge is essentially just an internal interface.
In fact, in Scheme 1, configuring br1's IP (172.0.0.1/24) is really configuring br1's internal interface; the extra bridge is not necessary, and the interface is what is actually needed.
An internal interface is like a virtual NIC attached to the local Linux host, whose other end connects to the OVS virtual bridge.
The Linux network stack, in turn, manages the physical NICs and virtual NICs, and forwards and routes the packets on these NICs.
It is as if the Linux network stack becomes one big switch/bridge, with the internal interfaces and the physical NICs attached to it.
## Note 8
Based on the practice and exploration above, **we need to give each VLAN a gateway/interface that can reach the outside.**
So a simple and feasible scheme can look like this:
+------------------------------------------------------------------------------+
| bridge |
| <------- VLAN ID=5 ---------> <---- VLAN ID=6 ------> |
+--+-----tag=5---tag=5------------tag=5-------------tag=6-------------tag=6----+
| | | | | |
| | lxc-2:172.0.0.12/24 | | |
internal | (gateway:172.0.0.1) | | |
| | | |
lxc-1:172.0.0.11/24 gw5:172.0.0.1/24 lxc-3:172.0.1.11/24 gw6:172.0.1.1/24
(gateway:172.0.0.1) internal (gateway:172.0.1.1) internal
| |
| |
+----------- NAT / iptables --------+
||||
||||
\\\///
\\//
\/
# end
================================================
FILE: doc/devdoc/proxy-control.md
================================================
# Some Note for configurable-http-proxy usage
## install
sudo apt-get install nodejs nodejs-legacy npm
sudo npm install -g configurable-http-proxy
## start
configurable-http-proxy -h : for help
configurable-http-proxy --ip IP \
--port PORT \
--api-ip IP \
--api-port PORT \
--default-target http://IP:PORT \
--log-level debug/info/warn/error
default ip:port is 0.0.0.0:8000,
default api-ip:api-port is localhost:8001
## control route table
### get route table
* without token:
curl http://localhost:8001/api/routes
* with token:
curl -H "Authorization: token TOKEN" http://localhost:8001/api/routes
### add/set route table
* without token:
curl -XPOST --data '{"target":"http://TARGET-IP:TARGET-PORT"}' http://localhost:8001/api/routes/PROXY-URL
* with token:
curl -H "Authorization: token TOKEN" -XPOST --data '{"target":"http://TARGET-IP:TARGET-PORT"}' http://localhost:8001/api/routes/PROXY-URL
### delete route table line
* without token:
curl -XDELETE http://localhost:8001/api/routes/PROXY-URL
* with token:
curl -H "Authorization: token TOKEN" -XDELETE http://localhost:8001/api/routes/PROXY-URL
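The curl commands above can also be driven from Python. This sketch only builds the urllib Request objects (method, URL, headers, body); actually sending them requires a running configurable-http-proxy API server:

```python
import json
import urllib.request

API = 'http://localhost:8001/api/routes'

def route_request(proxy_url, target=None, token=None, delete=False):
    """Build a Request mirroring the add/set (POST) or delete (DELETE) calls."""
    url = '%s/%s' % (API, proxy_url.strip('/'))
    data = None if delete else json.dumps({'target': target}).encode()
    req = urllib.request.Request(url, data=data,
                                 method='DELETE' if delete else 'POST')
    if token:
        # same header as the curl examples above
        req.add_header('Authorization', 'token %s' % token)
    return req

req = route_request('user1', target='http://10.0.0.5:8888')
print(req.get_method(), req.full_url)  # POST http://localhost:8001/api/routes/user1
# urllib.request.urlopen(req)  # would actually apply the route
```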
================================================
FILE: doc/devdoc/startup.md
================================================
# startup mode
## new mode
#### step 1 : data
<Master>
clean etcd table
write token
init etcd table
clean global directory of user clusters
#### step 2 : nodemgr
<Master> <Slave>
init network
wait for all nodes to start
|_____ listen node joins IP:waiting <--- worker starts
update etcd ----> IP:init-mode ---> worker init
|____ stop all containers
|____ umount mountpoint, delete lxc files, delete LV
|____ delete VG, umount loop dev, delete loop file
|____ init loop file, loop dev, create VG
add node to list <--- IP:work <---- init done, begin work
check all nodes begin work
#### step 3 : vclustermgr
Nothing to do
## recovery mode
#### step 1 : data
<Master>
write token
init some of etcd table
#### step 2 : nodemgr
<Master> <Slave>
init network
wait for all nodes to start
|_____ listen node joins IP:waiting <--- worker starts
update etcd ----> IP:init-mode ---> worker init
|____ check loop file, loop dev, VG
|____ check all containers and mountpoint
add node to list <--- IP:work <---- init done, begin work
check all nodes begin work
#### step 3 : vclustermgr
<Master> <Slave>
recover vclusters:some need start ---------------> recover containers: some need start
================================================
FILE: doc/devguide/devguide.md
================================================
# Docklet Development Guide on GitHub
This document is intended for GitHub users who want to contribute to the Docklet system.
## Introduction of Docklet Development Workflow
We use the fork and pull request workflow to move the Docklet project forward.

## Step by Step
### Prepare
Before starting work, we need to prepare our working repository. These actions only need to be executed once.
##### Step 1 : fork
Open https://github.com/unias/docklet in your browser and click **Fork** button on the top-right corner.
##### Step 2 : clone & config
* clone docklet from your github repository
```
git clone https://github.com/YourName/docklet.git
```
* config your local repository
```
# add unias/docklet as your upstream
git remote add upstream https://github.com/unias/docklet.git
# disable pushing to upstream
git remote set-url --push upstream no_push
```
### Work
This part is about the steps of making contributions to Docklet by pull request.
#### Work : Begin
##### Step 3 : fetch
Fetch the latest code from **upstream(unias/docklet)**
```
git fetch upstream
```
##### Step 4 : branch
Create a new branch for your work
```
git checkout -b BranchName upstream/master
```
This step is not mandatory; you can work on the local master branch. But we recommend following these steps: using a branch to develop new features fits git well.
#### Work : Work
Now you can focus on your work by **commit** and **push**.
##### Step 5 : commit & commit
Commit as usual. Nothing special to say.
##### Step 6 : push & push
Push your work to **your own Github repository** by **BranchName**
```
git push origin BranchName
```
#### Work : End
After you complete the work on this feature, you may want to create a pull request to unias/docklet. Please follow the steps below.
##### Step 7 : fetch
Fetch the latest code from **unias/docklet**
```
git fetch upstream
```
##### Step 8 : merge
Merge upstream's latest code to your working branch
```
git merge upstream/master
```
Please ensure that you are on your working branch.
If conflict happens, resolve it and commit.
##### Step 9 : push
Push to your github repository by BranchName.
```
git push origin BranchName
```
##### Step 10 : pull request
Open https://github.com/YourName/docklet, click **New pull request** and select your working **BranchName** to create the pull request.
## Tips
##### local master
After you fetch the upstream code, you can fast-forward your local master branch to upstream/master, then push it to update the master branch of your github repository.
```
git fetch upstream
git checkout master
git merge upstream/master
git push origin master
```
##### pretty git log or git log with GUI
You can configure a git log alias with a pretty format.
```
git config --global alias.lg "log --graph --color --pretty=format:' %Cred%h %Creset/ %<(10,trunc)%Cblue%an%Creset | %<(60,trunc)%s | %cr %Cred%d' --remotes --branches"
```
Now type **git lg** to see what happens.
Of course, you can also use a GUI with git. **gitg** is a good choice; it shows the git log in a very friendly way.
##### understand git log
git log carries a lot of information. Understanding it helps you know how to move your work forward, especially the branch references : upstream/master, HEAD, master, origin/master, and other branches.
##### graphs/network of github
The Graphs/Network page of GitHub is very useful. With it, you can tell whether you can create a pull request without conflicts. Open https://github.com/unias/docklet/network in your browser to see the network graph of docklet.
================================================
FILE: doc/example/example-LogisticRegression.py
================================================
# import package
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model, datasets
# %matplotlib inline  # IPython magic; uncomment when running in a Jupyter notebook
# load data : we only use target==0 and target==1 (two-class classification) and features 0 and 2
iris = datasets.load_iris()
X = iris.data[iris.target!=2][:, [0,2]]
Y = iris.target[iris.target!=2]
h = .02 # step size in the mesh
logreg = linear_model.LogisticRegression(C=1e5)
logreg.fit(X, Y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = logreg.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
#plt.figure(1, figsize=(4, 3))
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, edgecolors='k', cmap=plt.cm.Paired)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
================================================
FILE: meter/connector/master.py
================================================
#!/usr/bin/python3
import socket, select, errno, threading, os
class master_connector:
tcp_port = 1727
max_minions = 256
conn = {}
epoll_fd = select.epoll()
def establish_vswitch(ovsname):
os.system('ovs-vsctl del-br ovs-%s >/dev/null 2>&1' % ovsname)
os.system('ovs-vsctl add-br ovs-%s' % ovsname)
os.system('brctl addif ovs-bridge ovs-%s >/dev/null 2>&1' % ovsname)
os.system('ip link set ovs-system up')
os.system('ip link set ovs-%s up' % ovsname)
def build_gre_conn(ovsname, ipaddr):
name = ipaddr.replace('.','_')
os.system('ovs-vsctl add-port ovs-%s gre-%s -- set interface gre-%s type=gre options:remote_ip=%s 2>/dev/null' % (ovsname, name, name, ipaddr))
def break_gre_conn(ovsname, ipaddr):
name = ipaddr.replace('.','_')
os.system('ovs-vsctl del-port ovs-%s gre-%s 2>/dev/null' % (ovsname, name))
def close_connection(fd):
master_connector.epoll_fd.unregister(fd)
master_connector.conn[fd][0].close()
addr = master_connector.conn[fd][1]
master_connector.conn.pop(fd)
master_connector.break_gre_conn('master', addr)
def do_message_response(input_buffer):
assert(input_buffer == b'ack')
return b'ack'
def start():
thread = threading.Thread(target = master_connector.run_forever, args = [])
thread.setDaemon(True)
thread.start()
return thread
def run_forever():
listen_fd = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
listen_fd.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listen_fd.bind(('', master_connector.tcp_port))
listen_fd.listen(master_connector.max_minions)
master_connector.epoll_fd.register(listen_fd.fileno(), select.EPOLLIN)
datalist = {}
master_connector.establish_vswitch('master')
try:
while True:
epoll_list = master_connector.epoll_fd.poll()
for fd, events in epoll_list:
if fd == listen_fd.fileno():
fileno, addr = listen_fd.accept()
fileno.setblocking(0)
master_connector.epoll_fd.register(fileno.fileno(), select.EPOLLIN | select.EPOLLET)
master_connector.conn[fileno.fileno()] = (fileno, addr[0])
master_connector.build_gre_conn('master', addr[0])
elif select.EPOLLIN & events:
datas = b''
while True:
try:
data = master_connector.conn[fd][0].recv(10)
if not data and not datas:
master_connector.close_connection(fd)
break
else:
datas += data
except socket.error as msg:
if msg.errno == errno.EAGAIN:
try:
datalist[fd] = master_connector.do_message_response(datas)
master_connector.epoll_fd.modify(fd, select.EPOLLET | select.EPOLLOUT)
except:
master_connector.close_connection(fd)
else:
master_connector.close_connection(fd)
break
elif select.EPOLLOUT & events:
sendLen = 0
while True:
sendLen += master_connector.conn[fd][0].send(datalist[fd][sendLen:])
if sendLen == len(datalist[fd]):
break
master_connector.epoll_fd.modify(fd, select.EPOLLIN | select.EPOLLET)
elif select.EPOLLHUP & events:
master_connector.close_connection(fd)
else:
continue
finally:
os.system('ovs-vsctl del-br ovs-master >/dev/null 2>&1')
================================================
FILE: meter/connector/minion.py
================================================
#!/usr/bin/python3
import socket, time, threading, os
class minion_connector:
def connect(server_ip):
from connector.master import master_connector
connected = True
while True:
try:
fd = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
fd.connect((server_ip, master_connector.tcp_port))
connected = True
print("[info]", "connected to master.")
master_connector.establish_vswitch('minion')
master_connector.build_gre_conn('minion', server_ip)
while True:
data = b'ack'
if fd.send(data) != len(data):
break
readData = fd.recv(1024)
time.sleep(0.5)
fd.close()
except socket.error as e:
master_connector.break_gre_conn('minion', server_ip)
if connected:
print("[info]", "disconnected from master.")
except Exception as e:
pass
finally:
if connected:
os.system('ovs-vsctl del-br ovs-minion >/dev/null 2>&1')
connected = False
time.sleep(1)
def start(server_ip):
thread = threading.Thread(target = minion_connector.connect, args = [server_ip])
thread.setDaemon(True)
thread.start()
return thread
================================================
FILE: meter/daemon/http.py
================================================
import json, cgi, threading
from http.server import BaseHTTPRequestHandler, HTTPServer
class base_http_handler(BaseHTTPRequestHandler):
def load_module(self):
return None
def do_POST(self):
try:
default_exception = 'unsupported request.'
success = True
data = None
length = self.headers['content-length']
if length == None:
length = self.headers['content-length'] = 0
if int(length) > (1<<12):
raise Exception("data too large")
http_form = cgi.FieldStorage(fp=self.rfile, headers=self.headers,environ={'REQUEST_METHOD':'POST','CONTENT_TYPE': "text/html"})
form = {}
for item in http_form:
try:
value = http_form[item].file.read().strip()
except:
value = http_form[item].value
try:
value = value.decode()
except:
pass
form[item] = value
parts = self.path.split('/', 2)
if len(parts) != 3:
raise Exception(default_exception)
_, version, path = parts
pymodule = self.load_module() + '_' + version
module = __import__('daemon.' + pymodule)
handler = module.__dict__[pymodule].__dict__['case_handler']
method = path.replace('/', '_')
if not hasattr(handler, method):
raise Exception(default_exception)
data = handler.__dict__[method](form, self.handler_class.args)
except Exception as e:
success = False
data = {"reason": str(e)}
finally:
self.send_response(200)
self.send_header("Content-type", "application/json")
self.end_headers()
self.wfile.write(json.dumps({"success": success, "data": data}).encode())
self.wfile.write("\n".encode())
return
class master_http_handler(base_http_handler):
http_port = 1728
def load_module(self):
self.handler_class = master_http_handler
return 'master'
class minion_http_handler(base_http_handler):
http_port = 1729
def load_module(self):
self.handler_class = minion_http_handler
return 'minion'
class http_daemon_listener:
def __init__(self, handler_class, args = None):
handler_class.args = args
self.handler_class = handler_class
def listen(self):
server = HTTPServer(('', self.handler_class.http_port), self.handler_class)
server.serve_forever()
================================================
FILE: meter/daemon/master_v1.py
================================================
import subprocess, os
def http_client_post(ip, port, url, entries = {}):
import urllib.request, urllib.parse, json
url = url if not url.startswith('/') else url[1:]
response = urllib.request.urlopen('http://%s:%d/%s' % (ip, port, url), urllib.parse.urlencode(entries).encode())
obj = json.loads(response.read().decode().strip())
response.close()
return obj
class case_handler:
# [Order-by] lexicographic order
# curl -L -X POST http://0.0.0.0:1728/v1/minions/list
def minions_list(form, args):
minions = []
for item in args.conn:
minions.append(args.conn[item][1])
return {"minions": minions}
# curl -L -X POST -F mem=4096 -F cpu=2 http://0.0.0.0:1728/v1/resource/allocation
def resource_allocation(form, args):
mem = int(form['mem'])
cpu = int(form['cpu'])
candidates = {}
from daemon.http import minion_http_handler
for item in args.conn:
addr = args.conn[item][1]
obj = http_client_post(addr, minion_http_handler.http_port, '/v1/system/memsw/available')
if obj['success'] and obj['data']['Mbytes'] >= mem:
candidates[addr] = obj['data']
if len(candidates) <= 0:
raise Exception("no minions")
else:
from policy.allocate import candidates_selector
one = candidates_selector.select(candidates)
return {"recommend": one}
# curl -L -X POST -F user=docklet http://0.0.0.0:1728/v1/user/live/add
def user_live_add(form, args):
if not os.path.exists('/var/lib/docklet/global/users/%s' % form['user']):
return False
subprocess.getoutput('echo live > /var/lib/docklet/global/users/%s/status' % form['user'])
return True
# curl -L -X POST -F user=docklet http://0.0.0.0:1728/v1/user/live/remove
def user_live_remove(form, args):
subprocess.getoutput('rm -f /var/lib/docklet/global/users/%s/status' % form['user'])
return True
# curl -L -X POST http://0.0.0.0:1728/v1/user/live/list
def user_live_list(form, args):
return subprocess.getoutput('ls -1 /var/lib/docklet/global/users/*/status 2>/dev/null | awk -F\/ \'{print $(NF-1)\'}').split()
================================================
FILE: meter/daemon/minion_v1.py
================================================
from intra.system import system_manager
from intra.billing import billing_manager
from intra.cgroup import cgroup_manager
from policy.quota import *
from intra.smart import smart_controller
class case_handler:
# [Order-by] lexicographic order
# curl -L -X POST -F uuid=docklet-1-0 http://0.0.0.0:1729/v1/billing/increment
def billing_increment(form, args):
return billing_manager.fetch_increment_and_clean(form['uuid'])
# curl -L -X POST http://0.0.0.0:1729/v1/cgroup/container/list
def cgroup_container_list(form, args):
return cgroup_manager.get_cgroup_containers()
# curl -L -X POST -F policy=etime_rev_policy http://0.0.0.0:1729/v1/smart/quota/policy
def smart_quota_policy(form, args):
msg = 'success'
try:
smart_controller.set_policy(eval(form['policy']))
except Exception as e:
msg = str(e)  # Exception objects are not JSON serializable
return {'message': msg}
# curl -L -X POST -F uuid=n1 http://0.0.0.0:1729/v1/cgroup/container/limit
def cgroup_container_limit(form, args):
return cgroup_manager.get_container_limit(form['uuid'])
# curl -L -X POST -F uuid=n1 http://0.0.0.0:1729/v1/cgroup/container/sample
def cgroup_container_sample(form, args):
return cgroup_manager.get_container_sample(form['uuid'])
# curl -L -X POST http://0.0.0.0:1729/v1/system/loads
def system_loads(form, args):
return system_manager.get_system_loads()
# curl -L -X POST http://0.0.0.0:1729/v1/system/memsw/available
def system_memsw_available(form, args):
return system_manager.get_available_memsw()
# curl -L -X POST -F size=16 http://0.0.0.0:1729/v1/system/swap/extend
def system_swap_extend(form, args):
return system_manager.extend_swap(int(form['size']))
# curl -L -X POST http://0.0.0.0:1729/v1/system/swap/clear
def system_swap_clear(form, args):
return system_manager.clear_all_swaps()
# curl -L -X POST http://0.0.0.0:1729/v1/system/total/physical/memory
def system_total_physical_memory(form, args):
return system_manager.get_total_physical_memory_for_containers()
'''
# curl -X POST -F uuid=n1 http://0.0.0.0:1729/v1/blacklist/add
def blacklist_add(form):
exists = form['uuid'] in smart_controller.blacklist
if not exists:
smart_controller.blacklist.add(form['uuid'])
return {"changed": not exists}
# curl -X POST -F uuid=n1 http://0.0.0.0:1729/v1/blacklist/remove
def blacklist_remove(form):
exists = form['uuid'] in smart_controller.blacklist
if exists:
smart_controller.blacklist.remove(form['uuid'])
return {"changed": exists}
# curl -X POST http://0.0.0.0:1729/v1/blacklist/show
def blacklist_show(form):
blacklist = []
for item in smart_controller.blacklist:
blacklist.append(item)
return blacklist
'''
================================================
FILE: meter/intra/billing.py
================================================
import subprocess, time, os
from intra.system import system_manager
class billing_manager:
history_book = {}
def on_lxc_acct_usage(uuid, prev, curr, interval):
        cpu_gen = max(0, curr['cpu_sample'] - prev['cpu_sample']) >> 20 # ns >> 20, approximately milliseconds
        mem_gen = ((curr['mem_phys_sample'] + prev['mem_phys_sample']) * interval) >> 11 # trapezoidal average over the interval, in KB*seconds
try:
os.makedirs('%s/%s' % (system_manager.db_prefix, uuid))
except:
pass
with open('%s/%s/usage' % (system_manager.db_prefix, uuid), 'a') as fp:
fp.write('%d %d\n' % (cpu_gen, mem_gen))
def add_usage_sample(uuid, sample, interval):
if uuid in billing_manager.history_book:
billing_manager.on_lxc_acct_usage(uuid, billing_manager.history_book[uuid], sample, interval)
billing_manager.history_book[uuid] = sample
def clean_dead_node(uuid):
if uuid in billing_manager.history_book:
billing_manager.history_book.pop(uuid)
def fetch_increment_and_clean(uuid):
cpu_acct = 0.0
mem_acct = 0.0
cnt_acct = 0
try:
fetch_path = '%s/%s/%f' % (system_manager.db_prefix, uuid, time.time())
os.rename('%s/%s/usage' % (system_manager.db_prefix, uuid), fetch_path)
with open(fetch_path, 'r') as fp:
line = fp.readline()
while line != '':
[cpu, mem] = line.split()
line = fp.readline()
cnt_acct += 1
cpu_acct += float(cpu)
mem_acct += float(mem)
os.remove(fetch_path)
except:
pass
return {"cpu_acct": cpu_acct, "mem_acct": mem_acct, "cnt_acct": cnt_acct}
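The increment arithmetic in on_lxc_acct_usage above can be isolated as a pure function. A sketch for reference (field names follow the sample dicts used in this file; the unit comments assume cpuacct.usage reports nanoseconds and the memory samples report bytes):

```python
def usage_increment(prev, curr, interval):
    # cpuacct.usage is in nanoseconds; >> 20 divides by ~1.05e6,
    # approximating a nanoseconds -> milliseconds conversion
    cpu_ms = max(0, curr['cpu_sample'] - prev['cpu_sample']) >> 20
    # (prev + curr) * interval / 2 is a trapezoidal integral in byte*seconds;
    # combined with the /1024 byte -> KB conversion this is >> 11 overall
    mem_kb_s = ((curr['mem_phys_sample'] + prev['mem_phys_sample']) * interval) >> 11
    return cpu_ms, mem_kb_s
```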
================================================
FILE: meter/intra/cgroup.py
================================================
import subprocess, os
class cgroup_controller:
def read_value(group, uuid, item):
path = cgroup_manager.__default_prefix__ % (group, uuid, item)
if not os.path.exists(path):
raise Exception('read: container "%s" not found!' % uuid)
with open(path, 'r') as file:
value = file.read()
return value.strip()
def write_value(group, uuid, item, value):
path = cgroup_manager.__default_prefix__ % (group, uuid, item)
if not os.path.exists(path):
raise Exception('write: container "%s" not found!' % uuid)
try:
with open(path, 'w') as file:
file.write(str(value))
except:
pass
class cgroup_manager:
__prefix_docker__ = '/sys/fs/cgroup/%s/system.slice/docker-%s.scope/%s'
__prefix_lxc__ = '/sys/fs/cgroup/%s/lxc/%s/%s'
__prefix_lxcinit__ = '/sys/fs/cgroup/%s/init.scope/lxc/%s/%s'
def set_default_memory_limit(limit):
cgroup_manager.__default_memory_limit__ = limit
def set_cgroup_prefix(prefix = __prefix_lxc__):
cgroup_manager.__default_prefix__ = prefix
def auto_detect_prefix():
cgroup_manager.__default_prefix__ = cgroup_manager.__prefix_docker__
if len(cgroup_manager.get_cgroup_containers()) > 0:
return
cgroup_manager.__default_prefix__ = cgroup_manager.__prefix_lxcinit__
if len(cgroup_manager.get_cgroup_containers()) > 0:
return
cgroup_manager.__default_prefix__ = cgroup_manager.__prefix_lxc__
if len(cgroup_manager.get_cgroup_containers()) > 0:
return
# print("[info]", "set cgroup prefix to %s" % cgroup_manager.__default_prefix__)
def get_cgroup_containers():
containers = subprocess.getoutput("find %s -type d 2>/dev/null | awk -F\/ '{print $(NF-1)}'" % (cgroup_manager.__default_prefix__ % ('cpu', '*', '.'))).split()
uuids = []
for item in containers:
if item.startswith('docker-') and item.endswith('.scope') and len(item) > 64:
uuids.append(item[7:-6])
else:
uuids.append(item)
return uuids
def get_container_pid(uuid):
return int(cgroup_controller.read_value('cpu', uuid, 'tasks').split()[0])
def get_container_sample(uuid):
mem_page_sample = int(cgroup_controller.read_value('memory', uuid, 'memory.memsw.usage_in_bytes'))
mem_phys_sample = int(cgroup_controller.read_value('memory', uuid, 'memory.usage_in_bytes'))
cpu_sample = int(cgroup_controller.read_value('cpu', uuid, 'cpuacct.usage'))
pids_sample = int(cgroup_controller.read_value('pids', uuid, 'pids.current'))
container_pid = cgroup_manager.get_container_pid(uuid)
from intra.system import system_manager
real_time = system_manager.get_proc_etime(container_pid)
return {"cpu_sample": cpu_sample, "pids_sample": pids_sample, "mem_page_sample": mem_page_sample, "mem_phys_sample": mem_phys_sample, "pid": container_pid, "real_time": real_time}
def get_container_limit(uuid):
mem_phys_quota = int(cgroup_controller.read_value('memory', uuid, 'memory.limit_in_bytes'))
mem_page_quota = int(cgroup_controller.read_value('memory', uuid, 'memory.memsw.limit_in_bytes'))
cpu_shares = int(cgroup_controller.read_value('cpu', uuid, 'cpu.shares'))
cpu_quota = int(cgroup_controller.read_value('cpu', uuid, 'cpu.cfs_quota_us'))
cpu_quota = cpu_quota if cpu_quota >= 0 else -1
pids_quota = cgroup_controller.read_value('pids', uuid, 'pids.max')
pids_quota = int(pids_quota) if pids_quota != 'max' else -1
return {"cpu_quota": cpu_quota, "cpu_shares": cpu_shares, "mem_phy_quota": mem_phys_quota, "mem_page_quota": mem_page_quota, "pids_quota": pids_quota}
def get_container_oom_status(uuid):
[_x, idle, _y, oom] = cgroup_controller.read_value('memory', uuid, 'memory.oom_control').split()
return (idle == '1', oom == '1')
def set_container_oom_idle(uuid, idle):
cgroup_controller.write_value('memory', uuid, 'memory.oom_control', 1 if idle else 0)
def protect_container_oom(uuid):
cgroup_controller.write_value('memory', uuid, 'memory.oom_control', 1)
data = cgroup_manager.get_container_limit(uuid)
if data["mem_page_quota"] >= 9223372036854771712:
memory_limit_in_bytes = cgroup_manager.__default_memory_limit__ << 30
mem_phy_quota = min(data["mem_phy_quota"], memory_limit_in_bytes)
mem_page_quota = memory_limit_in_bytes
cgroup_controller.write_value('freezer', uuid, 'freezer.state', 'FROZEN')
cgroup_controller.write_value('memory', uuid, 'memory.limit_in_bytes', mem_phy_quota)
cgroup_controller.write_value('memory', uuid, 'memory.limit_in_bytes', mem_phy_quota)
cgroup_controller.write_value('memory', uuid, 'memory.memsw.limit_in_bytes', mem_page_quota)
cgroup_controller.write_value('freezer', uuid, 'freezer.state', 'THAWED')
def set_container_physical_memory_limit(uuid, Mbytes, freeze = False):
if freeze:
cgroup_controller.write_value('freezer', uuid, 'freezer.state', 'FROZEN')
memory_limit = int(max(0, Mbytes)) << 20
cgroup_controller.write_value('memory', uuid, 'memory.limit_in_bytes', memory_limit)
if freeze:
cgroup_controller.write_value('freezer', uuid, 'freezer.state', 'THAWED')
    def set_container_cpu_priority_limit(uuid, coef):
        cpu_scaling = min(1024, 10 + int(1024 * coef))
        cgroup_controller.write_value('cpu', uuid, 'cpu.shares', cpu_scaling)
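The cpu.shares scaling used by set_container_cpu_priority_limit maps a fractional score in [0, 1] onto the shares range [10, 1024]; as a standalone sketch of that mapping:

```python
def cpu_shares(coef):
    # a fractional score of 0 still gets a floor of 10 shares, so a
    # container is never starved entirely; a score of 1 gets the full 1024
    return min(1024, 10 + int(1024 * coef))
```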
================================================
FILE: meter/intra/smart.py
================================================
import subprocess, time, os, threading, math
from intra.system import system_manager
from intra.cgroup import cgroup_manager
from intra.billing import billing_manager
class smart_controller:
def set_policy(policy):
smart_controller.policy = policy
def start(interval = 4):
thread = threading.Thread(target = smart_controller.smart_control_forever, args = [interval])
thread.setDaemon(True)
thread.start()
return thread
def smart_control_forever(interval):
last_live = []
while True:
time.sleep(interval)
try:
mem_usage_mapping = {}
live = cgroup_manager.get_cgroup_containers()
for item in live:
try:
last_live.remove(item)
except:
pass
try:
cgroup_manager.protect_container_oom(item)
sample = cgroup_manager.get_container_sample(item)
mem_usage_mapping[item] = math.ceil(sample['mem_page_sample'] * 1e-6)
billing_manager.add_usage_sample(item, sample, interval)
except:
pass
for item in last_live:
billing_manager.clean_dead_node(item)
last_live = live
is_ready = True
memory_available = system_manager.get_available_memsw()
if memory_available['Mbytes'] <= 0:
size_in_gb = int(math.ceil(-memory_available['Mbytes'] / 1024 / 16) * 16)
print("[warning]", 'overloaded containers, auto-extending %d G memsw.' % size_in_gb)
system_manager.extend_swap(size_in_gb)
total_score = 0.0
score_mapping = {}
for item in live:
score = max(1e-8, smart_controller.policy.get_score_by_uuid(item))
score_mapping[item] = score
print(item, "(score/cpu)", score)
total_score += score
# CPU Scoring
for item in live:
                    coef = score_mapping[item] / total_score
                    cgroup_manager.set_container_cpu_priority_limit(item, coef)
# Iterative Memory Scoring
free_mem = system_manager.get_total_physical_memory_for_containers()['Mbytes']
local_nodes = live
mem_alloc = {}
for item in live:
mem_alloc[item] = 0
while free_mem > 0 and len(local_nodes) > 0:
excess_mem = 0
next_local_nodes = []
for item in local_nodes:
mem_alloc[item] += int(math.floor(free_mem * score_mapping[item] / total_score))
if mem_alloc[item] >= mem_usage_mapping[item]:
excess_mem += mem_alloc[item] - mem_usage_mapping[item]
mem_alloc[item] = mem_usage_mapping[item]
else:
next_local_nodes.append(item)
free_mem = excess_mem
local_nodes = next_local_nodes
for item in live:
mem_alloc[item] += int(math.floor(free_mem * score_mapping[item] / total_score))
cgroup_manager.set_container_physical_memory_limit(item, mem_alloc[item])
print(item, "(malloc:usage)", mem_alloc[item], mem_usage_mapping[item])
if len(live) > 0:
print("-------------------------------")
except:
pass
# echo "8:0 1000" > /sys/fs/cgroup/blkio/lxc/docklet-1-0/blkio.throttle.write_bps_device
# https://www.kernel.org/doc/Documentation/devices.txt
# while true; do clear; cat /sys/fs/cgroup/blkio/lxc/docklet-1-0/blkio.throttle.io_service_bytes; sleep 0.5; done
# hugetlb, net_cls, net_prio, /sbin/tc
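The iterative memory scoring loop in smart_control_forever is a water-filling allocation: each round hands out the remaining free memory proportionally to score, caps each container at its observed usage, and redistributes the surplus to the still-unsatisfied containers. A standalone sketch (it mirrors the loop above, including its use of the full total_score in every round):

```python
import math

def allocate_memory(free_mem, scores, usage):
    total_score = sum(scores.values())
    alloc = {uuid: 0 for uuid in scores}
    pending = list(scores)
    while free_mem > 0 and pending:
        excess, still_pending = 0, []
        for uuid in pending:
            alloc[uuid] += int(math.floor(free_mem * scores[uuid] / total_score))
            if alloc[uuid] >= usage[uuid]:
                # cap at observed usage; the surplus is redistributed next round
                excess += alloc[uuid] - usage[uuid]
                alloc[uuid] = usage[uuid]
            else:
                still_pending.append(uuid)
        free_mem, pending = excess, still_pending
    return alloc
```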
================================================
FILE: meter/intra/system.py
================================================
import subprocess, time, os
from intra.cgroup import cgroup_manager
class system_manager:
db_prefix = '.'
def set_db_prefix(prefix):
system_manager.db_prefix = prefix
try:
os.makedirs(prefix)
except:
pass
def clear_all_swaps():
subprocess.getoutput('swapoff -a')
subprocess.getoutput('losetup -D')
def extend_swap(size):
if size < 0:
(mem_free, mem_total) = system_manager.get_memory_sample()
size = (mem_total + mem_total // 8) // 1024
nid = 128
while subprocess.getoutput("cat /proc/swaps | grep cg-loop | awk '{print $1}' | awk -F\- '{print $NF}' | grep %d$" % nid) != "":
nid = nid + 1
start_time = time.time()
# setup
os.system('dd if=/dev/zero of=/tmp/cg-swap-%d bs=1G count=0 seek=%d >/dev/null 2>&1' % (nid, size))
os.system('mknod -m 0660 /dev/cg-loop-%d b 7 %d >/dev/null 2>&1' % (nid, nid))
os.system('losetup /dev/cg-loop-%d /tmp/cg-swap-%d >/dev/null 2>&1' % (nid, nid))
os.system('mkswap /dev/cg-loop-%d >/dev/null 2>&1' % nid)
success = os.system('swapon /dev/cg-loop-%d >/dev/null 2>&1' % nid) == 0
# detach
# os.system('swapoff /dev/cg-loop-%d >/dev/null 2>&1' % nid)
# os.system('losetup -d /dev/cg-loop-%d >/dev/null 2>&1' % nid)
# os.system('rm -f /dev/cg-loop-%d /tmp/cg-swap-%d >/dev/null 2>&1' % (nid, nid))
end_time = time.time()
return {"setup": success, "time": end_time - start_time }
def get_cpu_sample():
[a, b, c, d] = subprocess.getoutput("cat /proc/stat | grep ^cpu\ | awk '{print $2, $3, $4, $6}'").split()
cpu_time = int(a) + int(b) + int(c) + int(d)
return (cpu_time, time.time())
def get_memory_sample():
mem_free = int(subprocess.getoutput("awk '{if ($1==\"MemAvailable:\") print $2}' /proc/meminfo 2>/dev/null")) // 1024
mem_total = int(subprocess.getoutput("awk '{if ($1==\"MemTotal:\") print $2}' /proc/meminfo 2>/dev/null")) // 1024
return (mem_free, mem_total)
def get_swap_sample():
swap_free = int(subprocess.getoutput("awk '{if ($1==\"SwapFree:\") print $2}' /proc/meminfo 2>/dev/null")) // 1024
swap_total = int(subprocess.getoutput("awk '{if ($1==\"SwapTotal:\") print $2}' /proc/meminfo 2>/dev/null")) // 1024
return (swap_free, swap_total)
def get_system_loads():
if 'last_cpu_sample' not in system_manager.__dict__:
system_manager.last_cpu_sample = system_manager.get_cpu_sample()
time.sleep(1)
cpu_sample = system_manager.get_cpu_sample()
(mem_free, mem_total) = system_manager.get_memory_sample()
(swap_free, swap_total) = system_manager.get_swap_sample()
ncpus = int(subprocess.getoutput("grep processor /proc/cpuinfo | wc -l"))
cpu_free = ncpus - (cpu_sample[0] - system_manager.last_cpu_sample[0]) * 0.01 / (cpu_sample[1] - system_manager.last_cpu_sample[1])
cpu_free = 0.0 if cpu_free <= 0.0 else cpu_free
system_manager.last_cpu_sample = cpu_sample
return {"mem_free": mem_free, "mem_total": mem_total, "swap_free": swap_free, "swap_total": swap_total, "cpu_free": cpu_free, "cpu_total": ncpus }
def get_proc_etime(pid):
fmt = subprocess.getoutput("ps -A -opid,etime | grep '^ *%d' | awk '{print $NF}'" % pid).strip()
if fmt == '':
return -1
parts = fmt.split('-')
days = int(parts[0]) if len(parts) == 2 else 0
fmt = parts[-1]
parts = fmt.split(':')
hours = int(parts[0]) if len(parts) == 3 else 0
parts = parts[len(parts)-2:]
minutes = int(parts[0])
seconds = int(parts[1])
return ((days * 24 + hours) * 60 + minutes) * 60 + seconds
def get_available_memsw():
total_mem_limit = 0
total_mem_used = 0
sysloads = system_manager.get_system_loads()
live = cgroup_manager.get_cgroup_containers()
for item in live:
try:
sample = cgroup_manager.get_container_sample(item)
limit = cgroup_manager.get_container_limit(item)
total_mem_limit += limit["mem_page_quota"]
total_mem_used += sample["mem_page_sample"]
except:
pass
total_mem_limit >>= 20
total_mem_used = (total_mem_used + (1<<20) - 1) >> 20
available_mem_resource = sysloads['mem_free'] + \
sysloads['swap_free'] - total_mem_limit + total_mem_used
return {"Mbytes": available_mem_resource, "physical": sysloads['mem_free'], "cpu_free": sysloads['cpu_free']}
def get_total_physical_memory_for_containers():
total_mem_used = 0
sysloads = system_manager.get_system_loads()
live = cgroup_manager.get_cgroup_containers()
for item in live:
try:
sample = cgroup_manager.get_container_sample(item)
total_mem_used += sample["mem_page_sample"]
except:
pass
total_mem_used = (total_mem_used + (1<<20) - 1) >> 20
total_physical_memory_for_containers = sysloads['mem_free'] + total_mem_used
return {"Mbytes": total_physical_memory_for_containers}
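get_proc_etime above decodes the ps(1) ELAPSED column, whose format is [[dd-]hh:]mm:ss; the parsing step can be sketched on its own:

```python
def parse_etime(fmt):
    # split off the optional day count first ("1-02:03:04" -> days=1)
    parts = fmt.split('-')
    days = int(parts[0]) if len(parts) == 2 else 0
    # then the optional hour field ("02:03:04" vs "03:04")
    parts = parts[-1].split(':')
    hours = int(parts[0]) if len(parts) == 3 else 0
    minutes, seconds = int(parts[-2]), int(parts[-1])
    return ((days * 24 + hours) * 60 + minutes) * 60 + seconds
```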
================================================
FILE: meter/main.py
================================================
#!/usr/bin/python3
########################################
# Boot for Local:
# sudo ./main (or: sudo ./main [master-ipaddr])
#
########################################
# Usage for Local:
# curl -F uuid="lxc-name1" http://0.0.0.0:1729/v1/cgroup/container/sample
#
import time, sys, signal, json, subprocess, os
if __name__ == '__main__':
if not subprocess.getoutput('lsb_release -r -s 2>/dev/null').startswith('16.04'):
raise Exception('Ubuntu 16.04 LTS is required.')
if not os.path.exists('/sys/fs/cgroup/memory/memory.memsw.usage_in_bytes'):
        raise Exception('Please append "swapaccount=1" to the kernel boot parameters.')
if subprocess.getoutput('whoami') != 'root':
raise Exception('Root privilege is required.')
from daemon.http import *
if len(sys.argv) == 1:
sys.argv.append('disable-network')
def signal_handler(signal, frame):
if sys.argv[1] == 'master':
subprocess.getoutput('ovs-vsctl del-br ovs-master >/dev/null 2>&1')
else:
subprocess.getoutput('ovs-vsctl del-br ovs-minion >/dev/null 2>&1')
sys.exit(0)
signal.signal(signal.SIGINT, signal_handler)
if sys.argv[1] != 'master': # for minions
from intra.cgroup import cgroup_manager
cgroup_manager.auto_detect_prefix()
cgroup_manager.set_default_memory_limit(4)
from intra.system import system_manager
system_manager.set_db_prefix('/var/lib/docklet/meter')
# system_manager.extend_swap(32)
if sys.argv[1] != 'disable-network':
from connector.minion import minion_connector
minion_connector.start(sys.argv[1])
else:
print("(No network mode)")
from policy.quota import identify_policy
from intra.smart import smart_controller
smart_controller.set_policy(identify_policy)
smart_controller.start()
print("Minion REST Daemon Starts Listening ..")
http = http_daemon_listener(minion_http_handler)
http.listen()
else: # for master: sudo ./main master
from connector.master import master_connector
master_connector.start()
print("Master REST Daemon Starts Listening ..")
http = http_daemon_listener(master_http_handler, master_connector)
http.listen()
================================================
FILE: meter/policy/allocate.py
================================================
class candidates_selector:
def select(candidates):
return max(candidates, key=lambda addr: candidates[addr]['cpu_free'])
================================================
FILE: meter/policy/quota.py
================================================
from intra.system import system_manager
from intra.cgroup import cgroup_manager
import subprocess
class identify_policy:
def get_score_by_uuid(uuid):
return 1.0
class etime_rev_policy(identify_policy):
def get_score_by_uuid(uuid):
pid = cgroup_manager.get_container_pid(uuid)
etime = system_manager.get_proc_etime(pid)
return 1.0 / (1.0 + etime)
class mem_usage_policy(identify_policy):
def get_score_by_uuid(uuid):
sample = cgroup_manager.get_container_sample(uuid)
return sample["mem_page_sample"]
class mem_quota_policy(identify_policy):
def get_score_by_uuid(uuid):
sample = cgroup_manager.get_container_limit(uuid)
return sample["mem_page_quota"]
class cpu_usage_policy(identify_policy):
def get_score_by_uuid(uuid):
sample = cgroup_manager.get_container_sample(uuid)
return sample["cpu_sample"]
class cpu_usage_rev_policy(identify_policy):
def get_score_by_uuid(uuid):
sample = cgroup_manager.get_container_sample(uuid)
return 1024 * 1024 / (1.0 + sample["cpu_sample"])
class cpu_speed_policy(identify_policy):
def get_score_by_uuid(uuid):
sample = cgroup_manager.get_container_sample(uuid)
pid = cgroup_manager.get_container_pid(uuid)
etime = system_manager.get_proc_etime(pid)
return sample["cpu_sample"] / etime
class user_state_policy(identify_policy):
def get_score_by_uuid(uuid):
user = uuid.split('-')[0]
online = subprocess.getoutput('cat /var/lib/docklet/global/users/%s/status 2>/dev/null' % user) == 'live'
return 10.0 if online else 1.0
================================================
FILE: prepare.sh
================================================
#!/bin/bash
##################################################
# prepare.sh
# when you first use docklet, you should run this script to
# check and prepare the environment
# *important*: you may need to run this script repeatedly until it succeeds
##################################################
if [[ "`whoami`" != "root" ]]; then
    echo "FAILED: root privilege is required!" > /dev/stderr
exit 1
fi
# install packages that docklet needs (in ubuntu)
# some package names may differ in debian
apt-get install -y lxc lxcfs lxc-templates lvm2 bridge-utils curl exim4 openssh-server openvswitch-switch
apt-get install -y python3 python3-netifaces python3-flask python3-flask-sqlalchemy python3-pampy python3-httplib2 python3-pip
apt-get install -y python3-psutil python3-flask-migrate python3-paramiko
apt-get install -y python3-lxc
apt-get install -y python3-requests python3-suds
apt-get install -y nodejs npm
apt-get install -y etcd
apt-get install -y glusterfs-client attr
apt-get install -y nginx
pip3 install Flask-WTF
apt-get install -y gdebi-core
pip3 install grpcio grpcio-tools googleapis-common-protos
#add ip forward
echo "net.ipv4.ip_forward=1" >>/etc/sysctl.conf
sysctl -p
# check cgroup control
#which cgm &> /dev/null || { echo "FAILED : cgmanager is required, please install cgmanager" && exit 1; }
#cpucontrol=$(cgm listkeys cpu)
#[[ -z $(echo $cpucontrol | grep cfs_quota_us) ]] && echo "FAILED : cpu.cfs_quota_us of cgroup is not supported, you may need to recompile kernel" && exit 1
#memcontrol=$(cgm listkeys memory)
#if [[ -z $(echo $memcontrol | grep limit_in_bytes) ]]; then
# echo "FAILED : memory.limit_in_bytes of cgroup is not supported"
# echo "Try : "
# echo -e " echo 'GRUB_CMDLINE_LINUX=\"cgroup_enable=memory swapaccount=1\"' >> /etc/default/grub; update-grub; reboot" > /dev/stderr
# echo "Info : if not success, you may need to recompile kernel"
# exit 1
#fi
# check and install configurable-http-proxy
which configurable-http-proxy &>/dev/null || { npm config set registry https://registry.npm.taobao.org && npm install -g configurable-http-proxy; }
which configurable-http-proxy &>/dev/null || { echo "Error: install configurable-http-proxy failed, you should try again" && exit 1; }
echo ""
[[ -f conf/docklet.conf ]] || { echo "Generating docklet.conf from template" && cp conf/docklet.conf.template conf/docklet.conf; }
[[ -f web/templates/home.html ]] || { echo "Generating HomePage from home.template" && cp web/templates/home.template web/templates/home.html; }
FS_PREFIX=/opt/docklet
. conf/docklet.conf
export FS_PREFIX
mkdir -p $FS_PREFIX/global
mkdir -p $FS_PREFIX/local/
echo "directory FS_PREFIX (${FS_PREFIX}) has been created"
if [[ ! -d $FS_PREFIX/local/basefs && ! $1 = "withoutfs" ]]; then
mkdir -p $FS_PREFIX/local/basefs
echo "Generating basefs"
# wget -P $FS_PREFIX/local http://iwork.pku.edu.cn:1616/basefs-0.11.tar.bz2 && tar xvf $FS_PREFIX/local/basefs-0.11.tar.bz2 -C $FS_PREFIX/local/ > /dev/null
    [ $? != "0" ] && echo "Generate basefs failed, please download it from http://unias.github.io/docklet/download to FS_PREFIX/local and then extract it using root. (default FS_PREFIX is /opt/docklet)"
fi
echo "Some packagefs can be downloaded from http://unias.github.io/docklet/download"
echo "you can download the packagefs and extract it to FS_PREFIX/local using root. (default FS_PREFIX is /opt/docklet)"
echo ""
echo "All preparation installations are done."
echo "****************************************"
echo "* Please Read Lines Below Before Start *"
echo "****************************************"
echo ""
echo "you may want to customize the home page of docklet. Please modify web/templates/home.html"
echo "Next, make sure exim4 can deliver mail out. To enable, run:"
echo "dpkg-reconfigure exim4-config"
echo "select internet site"
echo ""
echo "Then start docklet as described in README.md"
================================================
FILE: src/master/beansapplicationmgr.py
================================================
#!/usr/bin/python3
'''
This module consists of three parts:
1. send_beans_email: a function that sends emails reminding users of their beans.
2. ApplicationMgr: a class that handles users' requests for more beans.
3. ApprovalRobot: an automatic robot that examines and approves users' applications.
'''
import threading,datetime,random,time
from utils.model import db,User,ApplyMsg
from master.userManager import administration_required
from utils import env
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.header import Header
from master.settings import settings
# send email to remind users of their beans
def send_beans_email(to_address, username, beans):
email_from_address = settings.get('EMAIL_FROM_ADDRESS')
if (email_from_address in ['\'\'', '\"\"', '']):
return
#text = 'Dear '+ username + ':\n' + ' Your beans in docklet are less than' + beans + '.'
text = '<html><h4>Dear '+ username + ':</h4>'
text += '''<p> Your beans in <a href='%s'>docklet</a> are %d now. </p>
    <p> If your beans are less than or equal to 0, all your workspaces will be stopped.</p>
<p> Please apply for more beans to keep your workspaces running by following link:</p>
    <p> <a href='%s/beans/application/'>%s/beans/application/</a></p>
<br>
<p> Note: DO NOT reply to this email!</p>
<br><br>
<p> <a href='http://docklet.unias.org'>Docklet Team</a>, SEI, PKU</p>
''' % (env.getenv("PORTAL_URL"), beans, env.getenv("PORTAL_URL"), env.getenv("PORTAL_URL"))
text += '<p>'+ str(datetime.datetime.now()) + '</p>'
text += '</html>'
subject = 'Docklet beans alert'
msg = MIMEMultipart()
textmsg = MIMEText(text,'html','utf-8')
msg['Subject'] = Header(subject, 'utf-8')
msg['From'] = email_from_address
msg['To'] = to_address
msg.attach(textmsg)
s = smtplib.SMTP()
s.connect()
s.sendmail(email_from_address, to_address, msg.as_string())
s.close()
# a class that handles users' applications for more beans.
class ApplicationMgr:
def __init__(self):
# create database
try:
ApplyMsg.query.all()
except:
db.create_all()
# user apply for beans
def apply(self,username,number,reason):
user = User.query.filter_by(username=username).first()
if user is not None and user.beans >= 1000:
return [False, "Your beans must be less than 1000."]
if int(number) < 100 or int(number) > 5000:
return [False, "Number field must be between 100 and 5000!"]
applymsgs = ApplyMsg.query.filter_by(username=username).all()
lasti = len(applymsgs) - 1 # the last index, the last application is also the latest application.
if lasti >= 0 and applymsgs[lasti].status == "Processing":
return [False, "You already have a processing application, please be patient."]
# store the application into the database
applymsg = ApplyMsg(username,number,reason)
db.session.add(applymsg)
db.session.commit()
return [True,""]
# get all applications of a user
def query(self,username):
applymsgs = ApplyMsg.query.filter_by(username=username).all()
ans = []
for msg in applymsgs:
ans.append(msg.ch2dict())
return ans
# get all unread applications
@administration_required
def queryUnRead(self,*,cur_user):
applymsgs = ApplyMsg.query.filter_by(status="Processing").all()
ans = []
for msg in applymsgs:
ans.append(msg.ch2dict())
return {"success":"true","applymsgs":ans}
# agree an application
@administration_required
def agree(self,msgid,*,cur_user):
applymsg = ApplyMsg.query.get(msgid)
if applymsg is None:
return {"success":"false","message":"Application doesn\'t exist."}
applymsg.status = "Agreed"
user = User.query.filter_by(username=applymsg.username).first()
if user is not None:
# update users' beans
user.beans += applymsg.number
db.session.commit()
return {"success":"true"}
# reject an application
@administration_required
def reject(self,msgid,*,cur_user):
applymsg = ApplyMsg.query.get(msgid)
if applymsg is None:
return {"success":"false","message":"Application doesn\'t exist."}
applymsg.status = "Rejected"
db.session.commit()
return {"success":"true"}
# an automatic robot that examines and approves users' applications.
class ApprovalRobot(threading.Thread):
    def __init__(self,maxtime=3600):
        threading.Thread.__init__(self)
        self._stop = False # named _stop so it does not shadow the stop() method below
        self.interval = 20
        self.maxtime = maxtime # the max time users may wait from 'Processing' to 'Agreed'
    def stop(self):
        self._stop = True
    def run(self):
        while not self._stop:
# query all processing applications
applymsgs = ApplyMsg.query.filter_by(status="Processing").all()
for msg in applymsgs:
                secs = (datetime.datetime.now() - msg.time).total_seconds() # total_seconds(), not .seconds, which wraps after a day
#ranint = random.randint(self.interval,self.maxtime)
if secs >= self.maxtime:
msg.status = "Agreed"
user = User.query.filter_by(username=msg.username).first()
if user is not None:
# update users'beans
user.beans += msg.number
db.session.commit()
time.sleep(self.interval)
================================================
FILE: src/master/bugreporter.py
================================================
from master.settings import settings
import smtplib
from utils.log import logger
from utils import env
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.header import Header
from datetime import datetime
import json
def send_bug_mail(username, bugmessage):
#admin_email_address = env.getenv('ADMIN_EMAIL_ADDRESS')
nulladdr = ['\'\'', '\"\"', '']
email_from_address = settings.get('EMAIL_FROM_ADDRESS')
admin_email_address = settings.get('ADMIN_EMAIL_ADDRESS')
logger.info("receive bug from %s: %s" % (username, bugmessage))
if (email_from_address in nulladdr or admin_email_address in nulladdr):
return {'success': 'false'}
#text = 'Dear '+ username + ':\n' + ' Your account in docklet has been activated'
text = '<html><h4>Dear '+ 'admin' + ':</h4>'
    text += '''<p> A bug has been reported by %s.</p>
<br/>
<strong> %s </strong>
<br/>
<p> Please check it !</p>
<br/><br/>
<p> Docklet Team, SEI, PKU</p>
''' % (username, bugmessage)
text += '<p>'+ str(datetime.utcnow()) + '</p>'
text += '</html>'
subject = 'A bug of Docklet has been reported'
if admin_email_address[0] == '"':
admins_addr = admin_email_address[1:-1].split(" ")
else:
admins_addr = admin_email_address.split(" ")
alladdr=""
for addr in admins_addr:
alladdr = alladdr+addr+", "
alladdr=alladdr[:-2]
msg = MIMEMultipart()
textmsg = MIMEText(text,'html','utf-8')
msg['Subject'] = Header(subject, 'utf-8')
msg['From'] = email_from_address
msg['To'] = alladdr
msg.attach(textmsg)
s = smtplib.SMTP()
s.connect()
try:
s.sendmail(email_from_address, admins_addr, msg.as_string())
except Exception as e:
logger.error(e)
s.close()
return {'success':'true'}
================================================
FILE: src/master/cloudmgr.py
================================================
#!/usr/bin/python3
from io import StringIO
import os,sys,subprocess,time,re,datetime,threading,random,shutil
from utils.model import db, Image
from master.deploy import *
import json
from utils.log import logger
from utils import env
import requests
fspath = env.getenv('FS_PREFIX')
class AliyunMgr():
def __init__(self):
self.AcsClient = __import__('aliyunsdkcore.client', fromlist=["AcsClient"])
        self.Request = __import__('aliyunsdkecs.request.v20140526', fromlist=[
            "CreateInstanceRequest",
            "StopInstanceRequest",
            "DescribeInstancesRequest",
            "DeleteInstanceRequest",
            "StartInstanceRequest",
            "AllocateEipAddressRequest",
            "AssociateEipAddressRequest"])
def loadClient(self):
if not os.path.exists(fspath+"/global/sys/cloudsetting.json"):
currentfilepath = os.path.dirname(os.path.abspath(__file__))
templatefilepath = currentfilepath + "/../tools/cloudsetting.aliyun.template.json"
shutil.copyfile(templatefilepath,fspath+"/global/sys/cloudsetting.json")
logger.error("please modify the setting file first")
return False
try:
settingfile = open(fspath+"/global/sys/cloudsetting.json", 'r')
self.setting = json.loads(settingfile.read())
settingfile.close()
self.clt = self.AcsClient.AcsClient(self.setting['AccessKeyId'],self.setting['AccessKeySecret'], self.setting['RegionId'])
logger.info("load CLT of Aliyun success")
return True
except Exception as e:
logger.error(e)
return False
def createInstance(self):
request = self.Request.CreateInstanceRequest.CreateInstanceRequest()
request.set_accept_format('json')
request.add_query_param('RegionId', self.setting['RegionId'])
if 'ZoneId' in self.setting and not self.setting['ZoneId'] == "":
request.add_query_param('ZoneId', self.setting['ZoneId'])
if 'VSwitchId' in self.setting and not self.setting['VSwitchId'] == "":
request.add_query_param('VSwitchId', self.setting['VSwitchId'])
request.add_query_param('ImageId', 'ubuntu_16_0402_64_20G_alibase_20170818.vhd')
request.add_query_param('InternetMaxBandwidthOut', 1)
request.add_query_param('InstanceName', 'docklet_tmp_worker')
request.add_query_param('HostName', 'worker-tmp')
request.add_query_param('SystemDisk.Size', int(self.setting['SystemDisk.Size']))
request.add_query_param('InstanceType', self.setting['InstanceType'])
request.add_query_param('Password', self.setting['Password'])
response = self.clt.do_action_with_exception(request)
logger.info(response)
instanceid=json.loads(bytes.decode(response))['InstanceId']
return instanceid
def startInstance(self, instanceid):
request = self.Request.StartInstanceRequest.StartInstanceRequest()
request.set_accept_format('json')
request.add_query_param('InstanceId', instanceid)
response = self.clt.do_action_with_exception(request)
logger.info(response)
def createEIP(self):
request = self.Request.AllocateEipAddressRequest.AllocateEipAddressRequest()
request.set_accept_format('json')
request.add_query_param('RegionId', self.setting['RegionId'])
response = self.clt.do_action_with_exception(request)
logger.info(response)
response=json.loads(bytes.decode(response))
eipid=response['AllocationId']
eipaddr=response['EipAddress']
return [eipid, eipaddr]
def associateEIP(self, instanceid, eipid):
request = self.Request.AssociateEipAddressRequest.AssociateEipAddressRequest()
request.set_accept_format('json')
request.add_query_param('AllocationId', eipid)
request.add_query_param('InstanceId', instanceid)
response = self.clt.do_action_with_exception(request)
logger.info(response)
    def getInnerIP(self, instanceid):
        request = self.Request.DescribeInstancesRequest.DescribeInstancesRequest()
        request.set_accept_format('json')
        response = self.clt.do_action_with_exception(request)
        instances = json.loads(bytes.decode(response))['Instances']['Instance']
        for instance in instances:
            if instance['InstanceId'] == instanceid:
                return instance['NetworkInterfaces']['NetworkInterface'][0]['PrimaryIpAddress']
        # fallback: the instance was not in the listing; return the VPC private
        # address of the first described instance
        return instances[0]['VpcAttributes']['PrivateIpAddress']['IpAddress'][0]
def isStarted(self, instanceids):
request = self.Request.DescribeInstancesRequest.DescribeInstancesRequest()
request.set_accept_format('json')
response = self.clt.do_action_with_exception(request)
instances = json.loads(bytes.decode(response))['Instances']['Instance']
for instance in instances:
if instance['InstanceId'] in instanceids:
if not instance['Status'] == "Running":
return False
return True
    def rentServers(self, number):
        # create instances, allocate and bind EIPs, then start them; the
        # fixed sleeps give the Aliyun API time to settle between steps
instanceids=[]
eipids=[]
eipaddrs=[]
for i in range(int(number)):
instanceids.append(self.createInstance())
time.sleep(2)
time.sleep(10)
for i in range(int(number)):
[eipid,eipaddr]=self.createEIP()
eipids.append(eipid)
eipaddrs.append(eipaddr)
time.sleep(2)
masterip=env.getenv('ETCD').split(':')[0]
for i in range(int(number)):
self.associateEIP(instanceids[i],eipids[i])
time.sleep(2)
time.sleep(5)
for instanceid in instanceids:
self.startInstance(instanceid)
time.sleep(2)
time.sleep(10)
while not self.isStarted(instanceids):
time.sleep(10)
time.sleep(5)
return [masterip, eipaddrs]
def addNode(self):
if not self.loadClient():
return {'success':'false'}
[masterip, eipaddrs] = self.rentServers(1)
threads = []
for eip in eipaddrs:
            thread = threading.Thread(target=deploy, args=(eip, masterip, 'root', self.setting['Password'], self.setting['VolumeName']))
            thread.daemon = True
            thread.start()
threads.append(thread)
for thread in threads:
thread.join()
return {'success':'true'}
def addNodeAsync(self):
        thread = threading.Thread(target=self.addNode)
        thread.daemon = True
        thread.start()
class EmptyMgr():
def addNodeAsync(self):
logger.error("current cluster does not support scale out")
return False
class CloudMgr():
    def getSettingFile(self):
        if not os.path.exists(fspath+"/global/sys/cloudsetting.json"):
            currentfilepath = os.path.dirname(os.path.abspath(__file__))
            templatefilepath = currentfilepath + "/../tools/cloudsetting.aliyun.template.json"
            shutil.copyfile(templatefilepath, fspath+"/global/sys/cloudsetting.json")
        with open(fspath+"/global/sys/cloudsetting.json", 'r') as settingfile:
            setting = settingfile.read()
        return {'success':'true', 'result':setting}
    def modifySettingFile(self, setting):
        if setting is None:
            logger.error("setting is None")
            return {'success':'false'}
        with open(fspath+"/global/sys/cloudsetting.json", 'w') as settingfile:
            settingfile.write(setting)
        return {'success':'true'}
def __init__(self):
if env.getenv("ALLOW_SCALE_OUT") == "True":
self.engine = AliyunMgr()
else:
self.engine = EmptyMgr()
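rentServers above busy-waits on isStarted with fixed sleeps. The same polling pattern can be factored into a reusable helper; the following is only a sketch, and `wait_until` is a hypothetical name, not part of docklet:

```python
import time

def wait_until(predicate, interval=10, timeout=600):
    """Poll predicate() every `interval` seconds until it returns True,
    giving up after `timeout` seconds. Returns True on success, False on
    timeout -- the caller decides how to handle a timed-out rental."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```

With such a helper, rentServers' `while not self.isStarted(instanceids): time.sleep(10)` loop could become `wait_until(lambda: self.isStarted(instanceids))`, so a stuck instance can only block addNode for a bounded time.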
================================================
FILE: src/master/deploy.py
================================================
#!/usr/bin/python3
import paramiko, time, os
from utils.log import logger
from utils import env
def myexec(ssh,command):
stdin,stdout,stderr = ssh.exec_command(command)
endtime = time.time() + 3600
while not stdout.channel.eof_received:
time.sleep(2)
if time.time() > endtime:
stdout.channel.close()
logger.error(command + ": fail")
return
def deploy(ipaddr,masterip,account,password,volumename):
    # retry until the freshly created instance accepts SSH connections
    while True:
        try:
            transport = paramiko.Transport((ipaddr, 22))
            transport.connect(username=account, password=password)
            break
        except Exception:
            time.sleep(2)
sftp = paramiko.SFTPClient.from_transport(transport)
currentfilepath = os.path.dirname(os.path.abspath(__file__))
deployscriptpath = currentfilepath + "/../tools/docklet-deploy.sh"
sftp.put(deployscriptpath,'/root/docklet-deploy.sh')
sftp.put('/etc/hosts', '/etc/hosts')
transport.close()
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    while True:
        try:
            ssh.connect(ipaddr, username=account, password=password, timeout=300)
            break
        except Exception:
            time.sleep(2)
myexec(ssh,"sed -i 's/%MASTERIP%/" + masterip + "/g' /root/docklet-deploy.sh")
myexec(ssh,"sed -i 's/%VOLUMENAME%/" + volumename + "/g' /root/docklet-deploy.sh")
myexec(ssh,'chmod +x /root/docklet-deploy.sh')
myexec(ssh,'/root/docklet-deploy.sh')
ssh.close()
return
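deploy() retries its SSH connection in an unbounded loop. A bounded variant of the same retry-with-delay pattern can be sketched as follows; `retry_call` is a hypothetical helper, not part of docklet:

```python
import time

def retry_call(fn, attempts=5, delay=2):
    """Call fn(), retrying up to `attempts` times with `delay` seconds
    between tries; re-raises the last exception if every attempt fails."""
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as e:
            last_exc = e
            time.sleep(delay)
    raise last_exc
```

Wrapping the connect step as `retry_call(lambda: transport.connect(username=account, password=password))` would surface a persistently unreachable worker as an exception instead of hanging the deploy thread forever.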
================================================
FILE: src/master/httprest.py
================================================
#!/usr/bin/python3
# load environment variables in the beginning
# because some modules need variables when import
# for example, userManager/model.py
import sys
if sys.path[0].endswith("master"):
sys.path[0] = sys.path[0][:-6]
from flask import Flask, request
# must first init loadenv
from utils import tools, env
# default CONFIG=/opt/docklet/local/docklet-running.conf
config = env.getenv("CONFIG")
tools.loadenv(config)
# second init logging
# must import logger after initlogging, ugly
from utils.log import initlogging
initlogging("docklet-master")
from utils.log import logger
import os
import http.server, cgi, json, shutil, traceback
import xmlrpc.client
from socketserver import ThreadingMixIn
from utils import etcdlib, imagemgr
from master import nodemgr, vclustermgr, notificationmgr, lockmgr, cloudmgr, jobmgr, taskmgr
from utils.logs import logs
from master import userManager, beansapplicationmgr, monitor, sysmgr, network, releasemgr
from worker.monitor import History_Manager
import threading
import requests
from utils.nettools import portcontrol
#default EXTERNAL_LOGIN=False
external_login = env.getenv('EXTERNAL_LOGIN')
if (external_login == 'TRUE'):
from userDependence import external_auth
userpoint = "http://" + env.getenv('USER_IP') + ":" + str(env.getenv('USER_PORT'))
G_userip = env.getenv("USER_IP")
def post_to_user(url = '/', data={}):
return requests.post(userpoint+url,data=data).json()
app = Flask(__name__)
from functools import wraps
def login_required(func):
@wraps(func)
def wrapper(*args, **kwargs):
logger.info ("get request, path: %s" % request.path)
token = request.form.get("token", None)
if (token == None):
logger.info ("get request without token, path: %s" % request.path)
return json.dumps({'success':'false', 'message':'user or key is null'})
result = post_to_user("/authtoken/", {'token':token})
        if result.get('success') == 'true':
            username = result.get('username')
            beans = result.get('beans')
        else:
            return json.dumps(result)
return func(username, beans, request.form, *args, **kwargs)
return wrapper
def auth_key_required(func):
@wraps(func)
def wrapper(*args,**kwargs):
key_1 = env.getenv('AUTH_KEY')
key_2 = request.form.get("auth_key",None)
#logger.info(str(ip) + " " + str(G_userip))
if key_2 is not None and key_1 == key_2:
return func(*args, **kwargs)
else:
return json.dumps({'success':'false','message': 'auth_key is required!'})
return wrapper
def beans_check(func):
@wraps(func)
def wrapper(*args, **kwargs):
beans = args[1]
if beans <= 0:
return json.dumps({'success':'false','message':'user\'s beans are less than or equal to zero!'})
else:
return func(*args, **kwargs)
return wrapper
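The decorator chain above resolves in order: @login_required turns a token into (username, beans, form), and @beans_check then rejects users whose balance is not positive. A self-contained sketch of that composition, where `fake_auth` is a hypothetical stand-in for the `post_to_user("/authtoken/", ...)` call:

```python
import json
from functools import wraps

def fake_auth(token):
    # stand-in for the user-service token lookup
    users = {'tok-alice': ('alice', 5), 'tok-broke': ('bob', 0)}
    if token in users:
        name, beans = users[token]
        return {'success': 'true', 'username': name, 'beans': beans}
    return {'success': 'false'}

def login_required(func):
    @wraps(func)
    def wrapper(form):
        result = fake_auth(form.get('token'))
        if result.get('success') != 'true':
            return json.dumps({'success': 'false', 'message': 'auth failed'})
        # pass resolved identity down to the wrapped handler
        return func(result['username'], result['beans'], form)
    return wrapper

def beans_check(func):
    @wraps(func)
    def wrapper(username, beans, form):
        if beans <= 0:
            return json.dumps({'success': 'false', 'message': 'no beans'})
        return func(username, beans, form)
    return wrapper

@login_required
@beans_check
def create_cluster(user, beans, form):
    return json.dumps({'success': 'true', 'user': user})
```

Because @login_required is outermost, authentication always runs first; @beans_check only ever sees requests whose token already resolved.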
@app.route("/isalive/", methods = ['POST'])
@login_required
def isalive(user, beans, form):
return json.dumps({'success':'true'})
@app.route("/logs/list/", methods=['POST'])
@login_required
def logs_list(user, beans, form):
    user_group = (post_to_user('/user/selfQuery/', {'token': request.form.get("token", None)}).get('data') or {}).get('group', None)
return json.dumps(logs.list(user_group = user_group))
@app.route("/logs/get/", methods=['POST'])
@login_required
def logs_get(user, beans, form):
    user_group = (post_to_user('/user/selfQuery/', {'token': request.form.get("token", None)}).get('data') or {}).get('group', None)
return json.dumps(logs.get(user_group = user_group, filename = form.get('filename', '')))
@app.route("/cluster/create/", methods=['POST'])
@login_required
@beans_check
def create_cluster(user, beans, form):
global G_vclustermgr
global G_ulockmgr
clustername = form.get('clustername', None)
if (clustername == None):
return json.dumps({'success':'false', 'message':'clustername is null'})
G_ulockmgr.acquire(user)
try:
image = {}
image['name'] = form.get("imagename", None)
image['type'] = form.get("imagetype", None)
image['owner'] = form.get("imageowner", None)
user_info = post_to_user("/user/selfQuery/", {'token':form.get("token")})
user_info = json.dumps(user_info)
logger.info ("handle request : create cluster %s with image %s " % (clustername, image['name']))
setting = {
'cpu': form.get('cpuSetting'),
'memory': form.get('memorySetting'),
'disk': form.get('diskSetting')
}
res = post_to_user("/user/usageInc/", {'token':form.get('token'), 'setting':json.dumps(setting)})
        status = res.get('success')
        result = res.get('result')
        # success is the string 'true'/'false', so test the value explicitly:
        # a bare `if not status` would never fire on 'false'
        if status != 'true':
            return json.dumps({'success':'false', 'action':'create cluster', 'message':result})
[status, result] = G_vclustermgr.create_cluster(clustername, user, image, user_info, setting)
if status:
return json.dumps({'success':'true', 'action':'create cluster', 'message':result})
else:
post_to_user("/user/usageRecover/", {'token':form.get('token'), 'setting':json.dumps(setting)})
return json.dumps({'success':'false', 'action':'create cluster', 'message':result})
except Exception as ex:
logger.error(str(ex))
return json.dumps({'success':'false', 'message': str(ex)})
finally:
G_ulockmgr.release(user)
@app.route("/cluster/scaleout/", methods=['POST'])
@login_required
@beans_check
def scaleout_cluster(user, beans, form):
global G_vclustermgr
global G_ulockmgr
clustername = form.get('clustername', None)
logger.info ("scaleout: %s" % form)
if (clustername == None):
return json.dumps({'success':'false', 'message':'clustername is null'})
G_ulockmgr.acquire(user)
try:
logger.info("handle request : scale out %s" % clustername)
image = {}
image['name'] = form.get("imagename", None)
image['type'] = form.get("imagetype", None)
image['owner'] = form.get("imageowner", None)
user_info = post_to_user("/user/selfQuery/", {'token':form.get("token")})
user_info = json.dumps(user_info)
setting = {
'cpu': form.get('cpuSetting'),
'memory': form.get('memorySetting'),
'disk': form.get('diskSetting')
}
res = post_to_user("/user/usageInc/", {'token':form.get('token'), 'setting':json.dumps(setting)})
        status = res.get('success')
        result = res.get('result')
        # success is the string 'true'/'false'; compare explicitly
        if status != 'true':
            return json.dumps({'success':'false', 'action':'scale out', 'message': result})
[status, result] = G_vclustermgr.scale_out_cluster(clustername, user, image, user_info, setting)
if status:
return json.dumps({'success':'true', 'action':'scale out', 'message':result})
else:
post_to_user("/user/usageRecover/", {'token':form.get('token'), 'setting':json.dumps(setting)})
return json.dumps({'success':'false', 'action':'scale out', 'message':result})
except Exception as ex:
logger.error(str(ex))
return json.dumps({'success':'false', 'message': str(ex)})
finally:
G_ulockmgr.release(user)
@app.route("/cluster/scalein/", methods=['POST'])
@login_required
def scalein_cluster(user, beans, form):
global G_vclustermgr
global G_ulockmgr
clustername = form.get('clustername', None)
if (clustername == None):
return json.dumps({'success':'false', 'message':'clustername is null'})
G_ulockmgr.acquire(user)
try:
logger.info("handle request : scale in %s" % clustername)
containername = form.get("containername", None)
[status, usage_info] = G_vclustermgr.get_clustersetting(clustername, user, containername, False)
if status:
post_to_user("/user/usageRelease/", {'token':form.get('token'), 'cpu':usage_info['cpu'], 'memory':usage_info['memory'],'disk':usage_info['disk']})
[status, result] = G_vclustermgr.scale_in_cluster(clustername, user, containername)
if status:
return json.dumps({'success':'true', 'action':'scale in', 'message':result})
else:
return json.dumps({'success':'false', 'action':'scale in', 'message':result})
except Exception as ex:
logger.error(str(ex))
return json.dumps({'success':'false', 'message': str(ex)})
finally:
G_ulockmgr.release(user)
@app.route("/cluster/start/", methods=['POST'])
@login_required
@beans_check
def start_cluster(user, beans, form):
global G_vclustermgr
global G_ulockmgr
clustername = form.get('clustername', None)
if (clustername == None):
return json.dumps({'success':'false', 'message':'clustername is null'})
G_ulockmgr.acquire(user)
try:
user_info = post_to_user("/user/selfQuery/", {'token':form.get("token")})
logger.info ("handle request : start cluster %s" % clustername)
[status, result] = G_vclustermgr.start_cluster(clustername, user, user_info)
if status:
return json.dumps({'success':'true', 'action':'start cluster', 'message':result})
else:
return json.dumps({'success':'false', 'action':'start cluster', 'message':result})
except Exception as ex:
logger.error(str(ex))
return json.dumps({'success':'false', 'message': str(ex)})
finally:
G_ulockmgr.release(user)
@app.route("/cluster/stop/", methods=['POST'])
@login_required
def stop_cluster(user, beans, form):
global G_vclustermgr
global G_ulockmgr
clustername = form.get('clustername', None)
if (clustername == None):
return json.dumps({'success':'false', 'message':'clustername is null'})
G_ulockmgr.acquire(user)
try:
logger.info ("handle request : stop cluster %s" % clustername)
[status, result] = G_vclustermgr.stop_cluster(clustername, user)
if status:
return json.dumps({'success':'true', 'action':'stop cluster', 'message':result})
else:
return json.dumps({'success':'false', 'action':'stop cluster', 'message':result})
except Exception as ex:
logger.error(str(ex))
return json.dumps({'success':'false', 'message': str(ex)})
finally:
G_ulockmgr.release(user)
@app.route("/cluster/delete/", methods=['POST'])
@login_required
def delete_cluster(user, beans, form):
global G_vclustermgr
global G_ulockmgr
clustername = form.get('clustername', None)
if (clustername == None):
return json.dumps({'success':'false', 'message':'clustername is null'})
G_ulockmgr.acquire(user)
try:
logger.info ("handle request : delete cluster %s" % clustername)
user_info = post_to_user("/user/selfQuery/" , {'token':form.get("token")})
user_info = json.dumps(user_info)
[status, usage_info] = G_vclustermgr.get_clustersetting(clustername, user, "all", True)
if status:
post_to_user("/user/usageRelease/", {'token':form.get('token'), 'cpu':usage_info['cpu'], 'memory':usage_info['memory'],'disk':usage_info['disk']})
[status, result] = G_vclustermgr.delete_cluster(clustername, user, user_info)
if status:
return json.dumps({'success':'true', 'action':'delete cluster', 'message':result})
else:
return json.dumps({'success':'false', 'action':'delete cluster', 'message':result})
except Exception as ex:
logger.error(str(ex))
return json.dumps({'success':'false', 'message': str(ex)})
finally:
G_ulockmgr.release(user)
@app.route("/cluster/info/", methods=['POST'])
@login_required
def info_cluster(user, beans, form):
global G_vclustermgr
clustername = form.get('clustername', None)
if (clustername == None):
return json.dumps({'success':'false', 'message':'clustername is null'})
logger.info ("handle request : info cluster %s" % clustername)
[status, result] = G_vclustermgr.get_clusterinfo(clustername, user)
if status:
return json.dumps({'success':'true', 'action':'info cluster', 'message':result})
else:
return json.dumps({'success':'false', 'action':'info cluster', 'message':result})
@app.route("/cluster/list/", methods=['POST'])
@login_required
def list_cluster(user, beans, form):
global G_vclustermgr
logger.info ("handle request : list clusters for %s" % user)
[status, clusterlist] = G_vclustermgr.list_clusters(user)
if status:
return json.dumps({'success':'true', 'action':'list cluster', 'clusters':clusterlist})
else:
return json.dumps({'success':'false', 'action':'list cluster', 'message':clusterlist})
@app.route("/cluster/stopall/",methods=['POST'])
@auth_key_required
def stopall_cluster():
global G_vclustermgr
global G_ulockmgr
user = request.form.get('username',None)
if user is None:
return json.dumps({'success':'false', 'message':'User is required!'})
G_ulockmgr.acquire(user)
try:
logger.info ("handle request : stop all clusters for %s" % user)
[status, clusterlist] = G_vclustermgr.list_clusters(user)
if status:
for cluster in clusterlist:
G_vclustermgr.stop_cluster(cluster,user)
return json.dumps({'success':'true', 'action':'stop all cluster'})
else:
return json.dumps({'success':'false', 'action':'stop all cluster', 'message':clusterlist})
except Exception as ex:
logger.error(str(ex))
return json.dumps({'success':'false', 'message': str(ex)})
finally:
G_ulockmgr.release(user)
@app.route("/cluster/flush/", methods=['POST'])
@login_required
def flush_cluster(user, beans, form):
global G_vclustermgr
clustername = form.get('clustername', None)
if (clustername == None):
return json.dumps({'success':'false', 'message':'clustername is null'})
from_lxc = form.get('from_lxc', None)
G_vclustermgr.flush_cluster(user,clustername,from_lxc)
return json.dumps({'success':'true', 'action':'flush'})
@app.route("/cluster/save/", methods=['POST'])
@login_required
def save_cluster(user, beans, form):
global G_vclustermgr
clustername = form.get('clustername', None)
if (clustername == None):
return json.dumps({'success':'false', 'message':'clustername is null'})
imagename = form.get("image", None)
description = form.get("description", None)
containername = form.get("containername", None)
isforce = form.get("isforce", None)
G_ulockmgr.acquire(user)
try:
        if isforce != "true":
[status,message] = G_vclustermgr.image_check(user,imagename)
if not status:
return json.dumps({'success':'false','reason':'exists', 'message':message})
user_info = post_to_user("/user/selfQuery/", {'token':form.get("token")})
[status,message] = G_vclustermgr.create_image(user,clustername,containername,imagename,description,user_info["data"]["groupinfo"]["image"])
if status:
logger.info("image has been saved")
return json.dumps({'success':'true', 'action':'save'})
else:
logger.debug(message)
return json.dumps({'success':'false', 'reason':'exceed', 'message':message})
except Exception as ex:
logger.error(str(ex))
return json.dumps({'success':'false', 'message': str(ex)})
finally:
G_ulockmgr.release(user)
@app.route("/admin/ulock/release/", methods=['POST'])
@login_required
def release_ulock(user, beans, form):
global G_ulockmgr
if user != 'root':
return json.dumps({'success':'false', 'message':'root is required.'})
release_user = form.get("ulockname",None)
if release_user is None:
return json.dumps({'success':'false', 'message':'ulockname is required.'})
try:
G_ulockmgr.release(release_user)
except Exception as e:
logger.error(traceback.format_exc())
return json.dumps({'success':'false', 'message':'fail to release lock %s' % release_user})
return json.dumps({'success':'true', 'message':'lock %s release successfully' % release_user})
@app.route("/admin/migrate_cluster/", methods=['POST'])
@auth_key_required
def migrate_cluster():
global G_vclustermgr
global G_ulockmgr
user = request.form.get('username',None)
if user is None:
return json.dumps({'success':'false', 'message':'User is required!'})
clustername = request.form.get('clustername', None)
if (clustername == None):
return json.dumps({'success':'false', 'message':'clustername is null'})
new_hosts = request.form.get('new_hosts', None)
if (new_hosts == None):
return json.dumps({'success':'false', 'message':'new_hosts is null'})
new_host_list = new_hosts.split(',')
G_ulockmgr.acquire(user)
auth_key = env.getenv('AUTH_KEY')
try:
logger.info ("handle request : migrate cluster to %s. user:%s clustername:%s" % (str(new_hosts), user, clustername))
res = post_to_user("/master/user/groupinfo/", {'auth_key':auth_key})
groups = json.loads(res['groups'])
quotas = {}
for group in groups:
#logger.info(group)
quotas[group['name']] = group['quotas']
rc_info = post_to_user("/master/user/recoverinfo/", {'username':user,'auth_key':auth_key})
groupname = rc_info['groupname']
user_info = {"data":{"id":rc_info['uid'],"groupinfo":quotas[groupname]}}
logger.info("Migrate cluster for user(%s) cluster(%s) to new_hosts(%s). user_info(%s)"
%(clustername, user, str(new_host_list), user_info))
[status,msg] = G_vclustermgr.migrate_cluster(clustername, user, new_host_list, user_info)
if not status:
logger.error(msg)
return json.dumps({'success':'false', 'message': msg})
return json.dumps({'success':'true', 'action':'migrate_container'})
except Exception as ex:
logger.error(traceback.format_exc())
return json.dumps({'success':'false', 'message': str(ex)})
finally:
G_ulockmgr.release(user)
@app.route("/host/migrate/", methods=['POST'])
@login_required
def migrate_host(user, beans, form):
global G_vclustermgr
global G_ulockmgr
    src_host = request.form.get('src_host', None)
    # getlist returns an empty list (never None) when the key is absent
    dst_host_list = request.form.getlist('dst_host_list')
    if src_host is None or not dst_host_list:
        return json.dumps({'success':'false', 'message': 'src host or dst host list is null'})
[status, msg] = G_vclustermgr.migrate_host(src_host, dst_host_list, G_ulockmgr)
if status:
return json.dumps({'success': 'true', 'action': 'migrate_host'})
else:
return json.dumps({'success': 'false', 'message': msg})
@app.route("/image/list/", methods=['POST'])
@login_required
def list_image(user, beans, form):
global G_imagemgr
images = G_imagemgr.list_images(user)
return json.dumps({'success':'true', 'images': images})
@app.route("/image/updatebase/", methods=['POST'])
@login_required
def update_base(user, beans, form):
global G_imagemgr
global G_vclustermgr
[success, status] = G_imagemgr.update_base_image(user, G_vclustermgr, form.get('image'))
return json.dumps({'success':'true', 'message':status})
@app.route("/image/description/", methods=['POST'])
@login_required
def description_image(user, beans, form):
global G_imagemgr
image = {}
image['name'] = form.get("imagename", None)
image['type'] = form.get("imagetype", None)
image['owner'] = form.get("imageowner", None)
description = G_imagemgr.get_image_description(user,image)
return json.dumps({'success':'true', 'message':description})
@app.route("/image/share/", methods=['POST'])
@login_required
def share_image(user, beans, form):
global G_imagemgr
image = form.get('image')
G_ulockmgr.acquire(user)
try:
G_imagemgr.shareImage(user,image)
return json.dumps({'success':'true', 'action':'share'})
except Exception as ex:
logger.error(str(ex))
return json.dumps({'success':'false', 'message': str(ex)})
finally:
G_ulockmgr.release(user)
@app.route("/image/unshare/", methods=['POST'])
@login_required
def unshare_image(user, beans, form):
global G_imagemgr
image = form.get('image', None)
G_ulockmgr.acquire(user)
try:
G_imagemgr.unshareImage(user,image)
return json.dumps({'success':'true', 'action':'unshare'})
except Exception as ex:
logger.error(str(ex))
return json.dumps({'success':'false', 'message': str(ex)})
finally:
G_ulockmgr.release(user)
@app.route("/image/delete/", methods=['POST'])
@login_required
def delete_image(user, beans, form):
global G_imagemgr
image = form.get('image', None)
G_ulockmgr.acquire(user)
try:
G_imagemgr.removeImage(user,image)
return json.dumps({'success':'true', 'action':'delete'})
except Exception as ex:
logger.error(str(ex))
return json.dumps({'success':'false', 'message': str(ex)})
finally:
G_ulockmgr.release(user)
@app.route("/image/copy/", methods=['POST'])
@login_required
def copy_image(user, beans, form):
global G_imagemgr
global G_ulockmgr
image = form.get('image', None)
target = form.get('target',None)
token = form.get('token',None)
G_ulockmgr.acquire(user)
try:
res = G_imagemgr.copyImage(user,image,token,target)
return json.dumps(res)
except Exception as ex:
logger.error(str(ex))
return json.dumps({'success':'false', 'message': str(ex)})
finally:
G_ulockmgr.release(user)
@app.route("/image/copytarget/", methods=['POST'])
@login_required
@auth_key_required
def copytarget_image(user, beans, form):
global G_imagemgr
global G_ulockmgr
imagename = form.get('imagename',None)
description = form.get('description',None)
try:
G_ulockmgr.acquire(user)
res = G_imagemgr.updateinfo(user,imagename,description)
return json.dumps({'success':'true', 'action':'copy image to target.'})
except Exception as ex:
logger.error(str(ex))
return json.dumps({'success':'false', 'message':str(ex)})
finally:
G_ulockmgr.release(user)
@app.route("/cloud/setting/get/", methods=['POST'])
@login_required
def query_account_cloud(user, beans, form):
global G_cloudmgr
logger.info("handle request: cloud/setting/get/")
result = G_cloudmgr.getSettingFile()
return json.dumps(result)
@app.route("/cloud/setting/modify/", methods=['POST'])
@login_required
def modify_account_cloud(user, beans, form):
global G_cloudmgr
logger.info("handle request: cloud/setting/modify/")
result = G_cloudmgr.modifySettingFile(form.get('setting',None))
return json.dumps(result)
@app.route("/cloud/node/add/", methods=['POST'])
@login_required
def add_node_cloud(user, beans, form):
global G_cloudmgr
logger.info("handle request: cloud/node/add/")
G_cloudmgr.engine.addNodeAsync()
result = {'success':'true'}
return json.dumps(result)
@app.route("/addproxy/", methods=['POST'])
@login_required
def addproxy(user, beans, form):
global G_vclustermgr
logger.info ("handle request : add proxy")
proxy_ip = form.get("ip", None)
proxy_port = form.get("port", None)
clustername = form.get("clustername", None)
[status, message] = G_vclustermgr.addproxy(user,clustername,proxy_ip,proxy_port)
if status is True:
return json.dumps({'success':'true', 'action':'addproxy'})
else:
return json.dumps({'success':'false', 'message': message})
@app.route("/deleteproxy/", methods=['POST'])
@login_required
def deleteproxy(user, beans, form):
global G_vclustermgr
logger.info ("handle request : delete proxy")
clustername = form.get("clustername", None)
G_vclustermgr.deleteproxy(user,clustername)
return json.dumps({'success':'true', 'action':'deleteproxy'})
@app.route("/port_mapping/add/", methods=['POST'])
@login_required
def add_port_mapping(user, beans, form):
global G_vclustermgr
global G_ulockmgr
logger.info ("handle request : add port mapping")
node_name = form.get("node_name",None)
node_ip = form.get("node_ip", None)
node_port = form.get("node_port", None)
clustername = form.get("clustername", None)
if node_name is None or node_ip is None or node_port is None or clustername is None:
return json.dumps({'success':'false', 'message': 'Illegal form.'})
user_info = post_to_user("/user/selfQuery/", data = {"token": form.get("token")})
G_ulockmgr.acquire(user)
try:
[status, message] = G_vclustermgr.add_port_mapping(user,clustername,node_name,node_ip,node_port,user_info['data']['groupinfo'])
if status is True:
            return json.dumps({'success':'true', 'action':'add_port_mapping'})
else:
return json.dumps({'success':'false', 'message': message})
except Exception as ex:
logger.error(str(ex))
return json.dumps({'success':'false', 'message':str(ex)})
finally:
G_ulockmgr.release(user)
@app.route("/port_mapping/delete/", methods=['POST'])
@login_required
def delete_port_mapping(user, beans, form):
global G_vclustermgr
global G_ulockmgr
logger.info ("handle request : delete port mapping")
node_name = form.get("node_name",None)
clustername = form.get("clustername", None)
node_port = form.get("node_port", None)
if node_name is None or clustername is None:
return json.dumps({'success':'false', 'message': 'Illegal form.'})
G_ulockmgr.acquire(user)
try:
[status, message] = G_vclustermgr.delete_port_mapping(user,clustername,node_name,node_port)
if status is True:
            return json.dumps({'success':'true', 'action':'delete_port_mapping'})
else:
return json.dumps({'success':'false', 'message': message})
except Exception as ex:
logger.error(str(ex))
return json.dumps({'success':'false', 'message':str(ex)})
finally:
G_ulockmgr.release(user)
@app.route("/monitor/hosts/<com_id>/<issue>/", methods=['POST'])
@login_required
def hosts_monitor(user, beans, form, com_id, issue):
global G_clustername
logger.info("handle request: monitor/hosts")
res = {}
fetcher = monitor.Fetcher(com_id)
if issue == 'meminfo':
res['meminfo'] = fetcher.get_meminfo()
elif issue == 'gpuinfo':
res['gpuinfo'] = fetcher.get_gpuinfo()
elif issue == 'cpuinfo':
res['cpuinfo'] = fetcher.get_cpuinfo()
elif issue == 'cpuconfig':
res['cpuconfig'] = fetcher.get_cpuconfig()
elif issue == 'diskinfo':
res['diskinfo'] = fetcher.get_diskinfo()
elif issue == 'osinfo':
res['osinfo'] = fetcher.get_osinfo()
#elif issue == 'concpuinfo':
# res['concpuinfo'] = fetcher.get_concpuinfo()
elif issue == 'containers':
res['containers'] = fetcher.get_containers()
elif issue == 'status':
res['status'] = fetcher.get_status()
elif issue == 'containerslist':
res['containerslist'] = fetcher.get_containerslist()
elif issue == 'containersinfo':
res = []
conlist = fetcher.get_containerslist()
        for container in conlist:
            confetcher = monitor.Container_Fetcher(etcdaddr, G_clustername)
            ans = confetcher.get_basic_info(container)
ans['cpu_use'] = confetcher.get_cpu_use(container)
ans['mem_use'] = confetcher.get_mem_use(container)
res.append(ans)
else:
return json.dumps({'success':'false', 'message':'not supported request'})
return json.dumps({'success':'true', 'monitor':res})
@app.route("/monitor/vnodes/<con_id>/<issue>/", methods=['POST'])
@login_required
def vnodes_monitor(user, beans, form, con_id, issue):
global G_clustername
global G_historymgr
logger.info("handle request: monitor/vnodes")
res = {}
fetcher = monitor.Container_Fetcher(con_id)
if issue == 'info':
res = fetcher.get_info()
elif issue == 'cpu_use':
res['cpu_use'] = fetcher.get_cpu_use()
elif issue == 'mem_use':
res['mem_use'] = fetcher.get_mem_use()
elif issue == 'disk_use':
res['disk_use'] = fetcher.get_disk_use()
elif issue == 'basic_info':
res['basic_info'] = fetcher.get_basic_info()
elif issue == 'net_stats':
res['net_stats'] = fetcher.get_net_stats()
elif issue == 'history':
res['history'] = G_historymgr.getHistory(con_id)
elif issue == 'owner':
names = con_id.split('-')
        result = post_to_user("/user/query/", data = {"token": form.get("token")})
if result['success'] == 'false':
res['username'] = ""
res['truename'] = ""
else:
res['username'] = result['data']['username']
res['truename'] = result['data']['truename']
else:
        res = "Unsupported Method!"
return json.dumps({'success':'true', 'monitor':res})
@app.route("/monitor/user/<issue>/", methods=['POST'])
@login_required
def user_quotainfo_monitor(user, beans, form, issue):
global G_historymgr
if issue == 'quotainfo':
logger.info("handle request: monitor/user/quotainfo/")
user_info = post_to_user("/user/selfQuery/", {'token':form.get("token")})
quotainfo = user_info['data']['groupinfo']
return json.dumps({'success':'true', 'quotainfo':quotainfo})
elif issue == 'createdvnodes':
logger.info("handle request: monitor/user/createdvnodes/")
res = G_historymgr.getCreatedVNodes(user)
return json.dumps({'success':'true', 'createdvnodes':res})
elif issue == 'net_stats':
logger.info("handle request: monitor/user/net_stats/")
res = monitor.Container_Fetcher.get_user_net_stats(user)
return json.dumps({'success':'true', 'net_stats':res})
else:
return json.dumps({'success':'false', 'message':"Unsupported Method!"})
@app.route("/monitor/listphynodes/", methods=['POST'])
@login_required
def listphynodes_monitor(user, beans, form):
global G_nodemgr
logger.info("handle request: monitor/listphynodes/")
res = {}
res['allnodes'] = G_nodemgr.get_nodeips()
return json.dumps({'success':'true', 'monitor':res})
@app.route("/monitor/pending_gpu_tasks/", methods=['POST'])
@login_required
def pending_gpu_tasks_monitor(user, beans, form):
global G_taskmgr
logger.info("handle request: monitor/pending_gpu_tasks/")
res = {}
res['pending_tasks'] = G_taskmgr.get_pending_gpu_tasks_info()
return json.dumps({'success':'true', 'monitor':res})
@app.route("/billing/beans/", methods=['POST'])
@auth_key_required
def billing_beans():
form = request.form
res = post_to_user("/billing/beans/",data=form)
logger.info(res)
return json.dumps(res)
@app.route("/system/parmList/", methods=['POST'])
@login_required
def parmList_system(user, beans, form):
global G_sysmgr
logger.info("handle request: system/parmList/")
result = G_sysmgr.getParmList()
return json.dumps(result)
@app.route("/system/modify/", methods=['POST'])
@login_required
def modify_system(user, beans, form):
global G_sysmgr
logger.info("handle request: system/modify/")
field = form.get("field", None)
parm = form.get("parm", None)
val = form.get("val", None)
[status, message] = G_sysmgr.modify(field,parm,val)
if status is True:
return json.dumps({'success':'true', 'action':'modify_system'})
else:
return json.dumps({'success':'false', 'message': message})
@app.route("/system/clear_history/", methods=['POST'])
@login_required
def clear_system(user, beans, form):
global G_sysmgr
logger.info("handle request: system/clear_history/")
field = form.get("field", None)
parm = form.get("parm", None)
[status, message] = G_sysmgr.clear(field,parm)
if status is True:
return json.dumps({'success':'true', 'action':'clear_history'})
else:
return json.dumps({'success':'false', 'message': message})
@app.route("/system/add/", methods=['POST'])
@login_required
def add_system(user, beans, form):
global G_sysmgr
logger.info("handle request: system/add/")
field = form.get("field", None)
parm = form.get("parm", None)
val = form.get("val", None)
[status, message] = G_sysmgr.add(field, parm, val)
if status is True:
return json.dumps({'success':'true', 'action':'add_parameter'})
else:
return json.dumps({'success':'false', 'message': message})
@app.route("/system/delete/", methods=['POST'])
@login_required
def delete_system(user, beans, form):
global G_sysmgr
logger.info("handle request: system/delete/")
field = form.get("field", None)
parm = form.get("parm", None)
[status, message] = G_sysmgr.delete(field,parm)
if status is True:
return json.dumps({'success':'true', 'action':'delete_parameter'})
else:
return json.dumps({'success':'false', 'message': message})
@app.route("/system/reset_all/", methods=['POST'])
@login_required
def resetall_system(user, beans, form):
global G_sysmgr
logger.info("handle request: system/reset_all/")
field = form.get("field", None)
[status, message] = G_sysmgr.reset_all(field)
if status is True:
return json.dumps({'success':'true', 'action':'reset_all'})
else:
return json.dumps({'success':'false', 'message': message})
@app.route("/batch/job/add/", methods=['POST'])
@login_required
@beans_check
def add_job(user,beans,form):
global G_jobmgr
job_data = form.to_dict()
job_info = {
'tasks': {}
}
message = {
'success': 'true',
'message': 'add batch job success'
}
for key in job_data:
if key == 'csrf_token':
continue
key_arr = key.split('_')
value = job_data[key]
if key_arr[0] == 'srcAddr' and value == '':
task_idx = key_arr[1]
if task_idx in job_info['tasks']:
job_info['tasks'][task_idx]['srcAddr'] = '/root'
else:
job_info['tasks'][task_idx] = {
'srcAddr': '/root'
}
elif key_arr[0] != 'dependency' and value == '':
message['success'] = 'false'
message['message'] = 'value of %s is null' % key
elif len(key_arr) == 1:
job_info[key_arr[0]] = value
elif len(key_arr) == 2:
key_prefix, task_idx = key_arr[0], key_arr[1]
#task_idx = 'task_' + task_idx
if task_idx in job_info["tasks"]:
job_info["tasks"][task_idx][key_prefix] = value
else:
tmp_dict = {
key_prefix: value
}
job_info["tasks"][task_idx] = tmp_dict
elif len(key_arr) == 3:
key_prefix, task_idx, mapping_idx = key_arr[0], key_arr[1], key_arr[2]
#task_idx = 'task_' + task_idx
mapping_idx = 'mapping_' + mapping_idx
if task_idx in job_info["tasks"]:
if "mapping" in job_info["tasks"][task_idx]:
if mapping_idx in job_info["tasks"][task_idx]["mapping"]:
job_info["tasks"][task_idx]["mapping"][mapping_idx][key_prefix] = value
else:
tmp_dict = {
key_prefix: value
}
job_info["tasks"][task_idx]["mapping"][mapping_idx] = tmp_dict
else:
job_info["tasks"][task_idx]["mapping"] = {
mapping_idx: {
key_prefix: value
}
}
else:
tmp_dict = {
"mapping":{
mapping_idx: {
key_prefix: value
}
}
}
job_info["tasks"][task_idx] = tmp_dict
logger.debug('batch job adding info %s' % json.dumps(job_info, indent=4))
[status, msg] = G_jobmgr.add_job(user, job_info)
if status:
return json.dumps(message)
else:
logger.debug('fail to add batch job: %s' % msg)
message["success"] = "false"
message["message"] = msg
return json.dumps(message)
return json.dumps(message)
@app.route("/batch/job/list/", methods=['POST'])
@login_required
def list_job(user,beans,form):
global G_jobmgr
result = {
'success': 'true',
'data': G_jobmgr.list_jobs(user)
}
return json.dumps(result)
@app.route("/batch/job/listall/", methods=['POST'])
@login_required
def list_all_job(user,beans,form):
global G_jobmgr
result = {
'success': 'true',
'data': G_jobmgr.list_all_jobs()
}
return json.dumps(result)
@app.route("/batch/job/info/", methods=['POST'])
@login_required
def info_job(user,beans,form):
global G_jobmgr
jobid = form.get("jobid","")
[success, data] = G_jobmgr.get_job(user, jobid)
if success:
return json.dumps({'success':'true', 'data':data})
else:
return json.dumps({'success':'false', 'message': data})
@app.route("/batch/job/stop/", methods=['POST'])
@login_required
def stop_job(user,beans,form):
global G_jobmgr
jobid = form.get("jobid","")
[success,msg] = G_jobmgr.stop_job(user,jobid)
if success:
return json.dumps({'success':'true', 'action':'stop job'})
else:
return json.dumps({'success':'false', 'message': msg})
@app.route("/batch/job/output/", methods=['POST'])
@login_required
def get_output(user,beans,form):
global G_jobmgr
jobid = form.get("jobid","")
taskid = form.get("taskid","")
vnodeid = form.get("vnodeid","")
issue = form.get("issue","")
result = {
'success': 'true',
'data': G_jobmgr.get_output(user,jobid,taskid,vnodeid,issue)
}
return json.dumps(result)
@app.route("/batch/task/info/", methods=['POST'])
@login_required
def info_task(user,beans,form):
pass
@app.route("/batch/vnodes/list/", methods=['POST'])
@login_required
def batch_vnodes_list(user,beans,form):
global G_taskmgr
result = {
'success': 'true',
'data': G_taskmgr.get_user_batch_containers(user)
}
return json.dumps(result)
# @app.route("/inside/cluster/scaleout/", methods=['POST'])
# @inside_ip_required
# def inside_cluster_scalout(cur_user, cluster_info, form):
# global G_usermgr
# global G_vclustermgr
# clustername = cluster_info['name']
# logger.info("handle request : scale out %s" % clustername)
# image = {}
# image['name'] = form.get("imagename", None)
# image['type'] = form.get("imagetype", None)
# image['owner'] = form.get("imageowner", None)
# user_info = G_usermgr.selfQuery(cur_user = cur_user)
# user = user_info['data']['username']
# user_info = json.dumps(user_info)
# setting = {
# 'cpu': form.get('cpuSetting'),
# 'memory': form.get('memorySetting'),
# 'disk': form.get('diskSetting')
# }
# [status, result] = G_usermgr.usageInc(cur_user = cur_user, modification = setting)
# if not status:
# return json.dumps({'success':'false', 'action':'scale out', 'message': result})
# [status, result] = G_vclustermgr.scale_out_cluster(clustername, user, image, user_info, setting)
# if status:
# return json.dumps({'success':'true', 'action':'scale out', 'message':result})
# else:
# G_usermgr.usageRecover(cur_user = cur_user, modification = setting)
# return json.dumps({'success':'false', 'action':'scale out', 'message':result})
@app.errorhandler(500)
def internal_server_error(error):
logger.debug("An internal server error occurred")
logger.error(traceback.format_exc())
return json.dumps({'success':'false', 'message':'500 Internal Server Error', 'Unauthorized': 'True'})
if __name__ == '__main__':
logger.info('Start Flask...:')
try:
secret_key_file = open(env.getenv('FS_PREFIX') + '/local/httprest_secret_key.txt')
app.secret_key = secret_key_file.read()
secret_key_file.close()
except:
from base64 import b64encode
from os import urandom
secret_key = urandom(24)
secret_key = b64encode(secret_key).decode('utf-8')
app.secret_key = secret_key
secret_key_file = open(env.getenv('FS_PREFIX') + '/local/httprest_secret_key.txt', 'w')
secret_key_file.write(secret_key)
secret_key_file.close()
os.environ['APP_KEY'] = app.secret_key
runcmd = sys.argv[0]
app.runpath = runcmd.rsplit('/', 1)[0]
global G_nodemgr
global G_vclustermgr
global G_notificationmgr
global etcdclient
global G_networkmgr
global G_clustername
global G_sysmgr
global G_historymgr
global G_applicationmgr
global G_ulockmgr
global G_cloudmgr
global G_jobmgr
global G_taskmgr
# move 'tools.loadenv' to the beginning of this file
fs_path = env.getenv("FS_PREFIX")
logger.info("using FS_PREFIX %s" % fs_path)
etcdaddr = env.getenv("ETCD")
logger.info("using ETCD %s" % etcdaddr)
G_clustername = env.getenv("CLUSTER_NAME")
logger.info("using CLUSTER_NAME %s" % G_clustername)
# get network interface
net_dev = env.getenv("NETWORK_DEVICE")
logger.info("using NETWORK_DEVICE %s" % net_dev)
ipaddr = network.getip(net_dev)
if ipaddr==False:
logger.error("network device is not correct")
sys.exit(1)
else:
logger.info("using ipaddr %s" % ipaddr)
# init etcdlib client
try:
etcdclient = etcdlib.Client(etcdaddr, prefix = G_clustername)
except Exception:
logger.error("failed to connect to etcd, maybe the etcd address is not correct...")
sys.exit(1)
mode = 'recovery'
if len(sys.argv) > 1 and sys.argv[1] == "new":
mode = 'new'
# get public IP and set public Ip in etcd
public_IP = env.getenv("PUBLIC_IP")
etcdclient.setkey("machines/publicIP/"+ipaddr, public_IP)
# do some initialization for mode: new/recovery
if mode == 'new':
# clean and initialize the etcd table
if etcdclient.isdir(""):
etcdclient.clean()
else:
etcdclient.createdir("")
# token is saved at fs_path/global/token
token = tools.gen_token()
tokenfile = open(fs_path+"/global/token", 'w')
tokenfile.write(token)
tokenfile.write("\n")
tokenfile.close()
etcdclient.setkey("token", token)
etcdclient.setkey("service/master", ipaddr)
etcdclient.setkey("service/mode", mode)
etcdclient.createdir("machines/allnodes")
etcdclient.createdir("machines/runnodes")
etcdclient.setkey("vcluster/nextid", "1")
# clean all users vclusters files : FS_PREFIX/global/users/<username>/clusters/<clusterid>
usersdir = fs_path+"/global/users/"
for user in os.listdir(usersdir):
shutil.rmtree(usersdir+user+"/clusters")
shutil.rmtree(usersdir+user+"/hosts")
os.mkdir(usersdir+user+"/clusters")
os.mkdir(usersdir+user+"/hosts")
else:
# check whether cluster exists
if not etcdclient.isdir("")[0]:
logger.error("cluster does not exist, you should use mode: new")
sys.exit(1)
# initialize the etcd table for recovery
token = tools.gen_token()
tokenfile = open(fs_path+"/global/token", 'w')
tokenfile.write(token)
tokenfile.write("\n")
tokenfile.close()
etcdclient.setkey("token", token)
etcdclient.setkey("service/master", ipaddr)
etcdclient.setkey("service/mode", mode)
if etcdclient.isdir("_lock")[0]:
etcdclient.deldir("_lock")
#init portcontrol
portcontrol.init_new()
G_ulockmgr = lockmgr.LockMgr()
clusternet = env.getenv("CLUSTER_NET")
logger.info("using CLUSTER_NET %s" % clusternet)
G_sysmgr = sysmgr.SystemManager()
G_networkmgr = network.NetworkMgr(clusternet, etcdclient, mode, ipaddr)
G_networkmgr.printpools()
G_cloudmgr = cloudmgr.CloudMgr()
# start NodeMgr and NodeMgr will wait for all nodes to start ...
G_nodemgr = nodemgr.NodeMgr(G_networkmgr, etcdclient, addr = ipaddr, mode=mode)
logger.info("nodemgr started")
distributedgw = env.getenv("DISTRIBUTED_GATEWAY")
G_vclustermgr = vclustermgr.VclusterMgr(G_nodemgr, G_networkmgr, etcdclient, ipaddr, mode, distributedgw)
logger.info("vclustermgr started")
G_imagemgr = imagemgr.ImageMgr()
logger.info("imagemgr started")
G_releasemgr = releasemgr.ReleaseMgr(G_vclustermgr,G_ulockmgr)
G_releasemgr.start()
logger.info("releasemgr started")
logger.info("starting to listen on: ")
masterip = env.getenv('MASTER_IP')
logger.info("using MASTER_IP %s", masterip)
masterport = env.getenv('MASTER_PORT')
logger.info("using MASTER_PORT %d", int(masterport))
G_historymgr = History_Manager()
master_collector = monitor.Master_Collector(G_nodemgr,ipaddr+":"+str(masterport))
master_collector.start()
logger.info("master_collector started")
# server = http.server.HTTPServer((masterip, masterport), DockletHttpHandler)
logger.info("starting master server")
G_taskmgr = taskmgr.TaskMgr(G_nodemgr, monitor.Fetcher, ipaddr)
G_jobmgr = jobmgr.JobMgr(G_taskmgr)
G_taskmgr.set_jobmgr(G_jobmgr)
G_taskmgr.start()
app.run(host = masterip, port = masterport, threaded=True)
================================================
FILE: src/master/jobmgr.py
================================================
import time, threading, random, string, os, traceback, requests
import master.monitor
import subprocess,json
from functools import wraps
from datetime import datetime
from utils.log import initlogging, logger
from utils.model import db, Batchjob, Batchtask
from utils import env
def db_commit():
try:
db.session.commit()
except Exception as err:
db.session.rollback()
logger.error(traceback.format_exc())
raise
class BatchJob(object):
def __init__(self, jobid, user, job_info, old_job_db=None):
if old_job_db is None:
self.job_db = Batchjob(jobid,user,job_info['jobName'],int(job_info['jobPriority']))
else:
self.job_db = old_job_db
self.job_db.clear()
job_info = {}
job_info['jobName'] = self.job_db.name
job_info['jobPriority'] = self.job_db.priority
all_tasks = self.job_db.tasks.all()
job_info['tasks'] = {}
for t in all_tasks:
job_info['tasks'][t.idx] = json.loads(t.config)
self.user = user
#self.raw_job_info = job_info
self.job_id = jobid
self.job_name = job_info['jobName']
self.job_priority = int(job_info['jobPriority'])
self.lock = threading.Lock()
self.tasks = {}
self.dependency_out = {}
self.tasks_cnt = {'pending':0, 'scheduling':0, 'running':0, 'retrying':0, 'failed':0, 'finished':0, 'stopped':0}
#init self.tasks & self.dependency_out & self.tasks_cnt
logger.debug("Init BatchJob user:%s job_name:%s create_time:%s" % (self.job_db.username, self.job_db.name, str(self.job_db.create_time)))
raw_tasks = job_info["tasks"]
self.tasks_cnt['pending'] = len(raw_tasks.keys())
for task_idx in raw_tasks.keys():
task_info = raw_tasks[task_idx]
if old_job_db is None:
task_db = Batchtask(jobid+"_"+task_idx, task_idx, task_info)
self.job_db.tasks.append(task_db)
else:
task_db = Batchtask.query.get(jobid+"_"+task_idx)
task_db.clear()
self.tasks[task_idx] = {}
self.tasks[task_idx]['id'] = jobid+"_"+task_idx
self.tasks[task_idx]['config'] = task_info
self.tasks[task_idx]['db'] = task_db
self.tasks[task_idx]['status'] = 'pending'
self.tasks[task_idx]['dependency'] = []
dependency = task_info['dependency'].strip().replace(' ', '').split(',')
if len(dependency) == 1 and dependency[0] == '':
continue
for d in dependency:
if not d in raw_tasks.keys():
raise ValueError('task %s is not defined in the dependency of task %s' % (d, task_idx))
self.tasks[task_idx]['dependency'].append(d)
if not d in self.dependency_out.keys():
self.dependency_out[d] = []
self.dependency_out[d].append(task_idx)
if old_job_db is None:
db.session.add(self.job_db)
db_commit()
self.log_status()
logger.debug("BatchJob(id:%s) dependency_out: %s" % (self.job_db.id, json.dumps(self.dependency_out, indent=3)))
def data_lock(f):
@wraps(f)
def new_f(self, *args, **kwargs):
self.lock.acquire()
try:
result = f(self, *args, **kwargs)
except Exception as err:
self.lock.release()
raise err
self.lock.release()
return result
return new_f
# return the tasks without dependencies
@data_lock
def get_tasks_no_dependency(self,update_status=False):
logger.debug("Get tasks without dependencies of BatchJob(id:%s)" % self.job_db.id)
ret_tasks = []
for task_idx in self.tasks.keys():
if (self.tasks[task_idx]['status'] == 'pending' and
len(self.tasks[task_idx]['dependency']) == 0):
if update_status:
self.tasks_cnt['pending'] -= 1
self.tasks_cnt['scheduling'] += 1
self.tasks[task_idx]['db'] = Batchtask.query.get(self.tasks[task_idx]['id'])
self.tasks[task_idx]['db'].status = 'scheduling'
self.tasks[task_idx]['status'] = 'scheduling'
task_name = self.tasks[task_idx]['db'].id
ret_tasks.append([task_name, self.tasks[task_idx]['config'], self.job_priority])
self.log_status()
db_commit()
return ret_tasks
@data_lock
def stop_job(self):
self.job_db = Batchjob.query.get(self.job_id)
self.job_db.status = 'stopping'
db_commit()
# update the status of this job based on its task counts
def _update_job_status(self):
allcnt = len(self.tasks.keys())
if self.tasks_cnt['failed'] != 0:
self.job_db.status = 'failed'
self.job_db.end_time = datetime.now()
elif self.tasks_cnt['finished'] == allcnt:
self.job_db.status = 'done'
self.job_db.end_time = datetime.now()
elif self.job_db.status == 'stopping':
if self.tasks_cnt['running'] == 0 and self.tasks_cnt['scheduling'] == 0 and self.tasks_cnt['retrying'] == 0:
self.job_db.status = 'stopped'
self.job_db.end_time = datetime.now()
elif self.tasks_cnt['running'] != 0 or self.tasks_cnt['retrying'] != 0:
self.job_db.status = 'running'
else:
self.job_db.status = 'pending'
db_commit()
# a task starts running; update its status
@data_lock
def update_task_running(self, task_idx):
logger.debug("Update status of task(idx:%s) of BatchJob(id:%s) running." % (task_idx, self.job_id))
old_status = self.tasks[task_idx]['status']
if old_status == 'stopping':
logger.info("Task(idx:%s) of BatchJob(id:%s) has been stopped."% (task_idx, self.job_id))
return
self.tasks_cnt[old_status] -= 1
self.tasks[task_idx]['status'] = 'running'
self.tasks[task_idx]['db'] = Batchtask.query.get(self.tasks[task_idx]['id'])
self.tasks[task_idx]['db'].status = 'running'
self.tasks[task_idx]['db'].start_time = datetime.now()
self.tasks_cnt['running'] += 1
self.job_db = Batchjob.query.get(self.job_id)
self._update_job_status()
self.log_status()
# a task has finished, update dependency and return tasks without dependencies
@data_lock
def finish_task(self, task_idx, running_time, billing):
if task_idx not in self.tasks.keys():
logger.error('Task_idx %s not in job. user:%s job_name:%s job_id:%s'%(task_idx, self.user, self.job_name, self.job_id))
return []
logger.debug("Task(idx:%s) of BatchJob(id:%s) has finished(running_time=%d,billing=%d). Update dependency..." % (task_idx, self.job_id, running_time, billing))
old_status = self.tasks[task_idx]['status']
if old_status == 'stopping':
logger.info("Task(idx:%s) of BatchJob(id:%s) has been stopped."% (task_idx, self.job_id))
return []
self.tasks_cnt[old_status] -= 1
self.tasks[task_idx]['status'] = 'finished'
self.tasks[task_idx]['db'] = Batchtask.query.get(self.tasks[task_idx]['id'])
self.tasks[task_idx]['db'].status = 'finished'
self.tasks[task_idx]['db'].tried_times += 1
self.tasks[task_idx]['db'].running_time = running_time
self.tasks[task_idx]['db'].end_time = datetime.now()
self.tasks[task_idx]['db'].billing = billing
self.tasks[task_idx]['db'].failed_reason = ""
self.job_db = Batchjob.query.get(self.job_id)
self.job_db.billing += billing
self.tasks_cnt['finished'] += 1
if task_idx not in self.dependency_out.keys():
self._update_job_status()
self.log_status()
return []
ret_tasks = []
for out_idx in self.dependency_out[task_idx]:
try:
self.tasks[out_idx]['dependency'].remove(task_idx)
except Exception as err:
logger.warning(traceback.format_exc())
continue
if (self.tasks[out_idx]['status'] == 'pending' and
len(self.tasks[out_idx]['dependency']) == 0):
self.tasks_cnt['pending'] -= 1
self.tasks_cnt['scheduling'] += 1
self.tasks[out_idx]['status'] = 'scheduling'
self.tasks[out_idx]['db'] = Batchtask.query.get(self.tasks[out_idx]['id'])
self.tasks[out_idx]['db'].status = 'scheduling'
task_name = self.job_id + '_' + out_idx
ret_tasks.append([task_name, self.tasks[out_idx]['config'], self.job_priority])
self._update_job_status()
self.log_status()
return ret_tasks
# update retrying status of task
@data_lock
def update_task_retrying(self, task_idx, reason, tried_times):
logger.debug("Update status of task(idx:%s) of BatchJob(id:%s) retrying. reason:%s tried_times:%d" % (task_idx, self.job_id, reason, int(tried_times)))
old_status = self.tasks[task_idx]['status']
if old_status == 'stopping':
logger.info("Task(idx:%s) of BatchJob(id:%s) has been stopped."% (task_idx, self.job_id))
return
self.tasks_cnt[old_status] -= 1
self.tasks_cnt['retrying'] += 1
self.tasks[task_idx]['db'] = Batchtask.query.get(self.tasks[task_idx]['id'])
self.tasks[task_idx]['db'].status = 'retrying'
self.tasks[task_idx]['db'].failed_reason = reason
self.tasks[task_idx]['db'].tried_times += 1
self.tasks[task_idx]['status'] = 'retrying'
self.job_db = Batchjob.query.get(self.job_id)
self._update_job_status()
self.log_status()
# update failed status of task
@data_lock
def update_task_failed(self, task_idx, reason, tried_times, running_time, billing):
logger.debug("Update status of task(idx:%s) of BatchJob(id:%s) failed. reason:%s tried_times:%d" % (task_idx, self.job_id, reason, int(tried_times)))
old_status = self.tasks[task_idx]['status']
self.tasks_cnt[old_status] -= 1
self.tasks_cnt['failed'] += 1
self.tasks[task_idx]['status'] = 'failed'
self.tasks[task_idx]['db'] = Batchtask.query.get(self.tasks[task_idx]['id'])
self.tasks[task_idx]['db'].status = 'failed'
self.tasks[task_idx]['db'].failed_reason = reason
self.tasks[task_idx]['db'].tried_times += 1
self.tasks[task_idx]['db'].end_time = datetime.now()
self.tasks[task_idx]['db'].running_time = running_time
self.tasks[task_idx]['db'].billing = billing
self.job_db = Batchjob.query.get(self.job_id)
self.job_db.billing += billing
self._update_job_status()
self.log_status()
@data_lock
def update_task_stopped(self, task_idx, running_time, billing):
logger.debug("Update status of task(idx:%s) of
SYMBOL INDEX (1213 symbols across 84 files)
FILE: meter/connector/master.py
class master_connector (line 5) | class master_connector:
method establish_vswitch (line 13) | def establish_vswitch(ovsname):
method build_gre_conn (line 20) | def build_gre_conn(ovsname, ipaddr):
method break_gre_conn (line 24) | def break_gre_conn(ovsname, ipaddr):
method close_connection (line 28) | def close_connection(fd):
method do_message_response (line 35) | def do_message_response(input_buffer):
method start (line 39) | def start():
method run_forever (line 45) | def run_forever():
FILE: meter/connector/minion.py
class minion_connector (line 5) | class minion_connector:
method connect (line 7) | def connect(server_ip):
method start (line 39) | def start(server_ip):
FILE: meter/daemon/http.py
class base_http_handler (line 4) | class base_http_handler(BaseHTTPRequestHandler):
method load_module (line 6) | def load_module(self):
method do_POST (line 9) | def do_POST(self):
class master_http_handler (line 58) | class master_http_handler(base_http_handler):
method load_module (line 62) | def load_module(self):
class minion_http_handler (line 66) | class minion_http_handler(base_http_handler):
method load_module (line 70) | def load_module(self):
class http_daemon_listener (line 74) | class http_daemon_listener:
method __init__ (line 76) | def __init__(self, handler_class, args = None):
method listen (line 80) | def listen(self):
FILE: meter/daemon/master_v1.py
function http_client_post (line 3) | def http_client_post(ip, port, url, entries = {}):
class case_handler (line 11) | class case_handler:
method minions_list (line 15) | def minions_list(form, args):
method resource_allocation (line 22) | def resource_allocation(form, args):
method user_live_add (line 41) | def user_live_add(form, args):
method user_live_remove (line 48) | def user_live_remove(form, args):
method user_live_list (line 53) | def user_live_list(form, args):
FILE: meter/daemon/minion_v1.py
class case_handler (line 7) | class case_handler:
method billing_increment (line 11) | def billing_increment(form, args):
method cgroup_container_list (line 15) | def cgroup_container_list(form, args):
method smart_quota_policy (line 19) | def smart_quota_policy(form, args):
method cgroup_container_limit (line 28) | def cgroup_container_limit(form, args):
method cgroup_container_sample (line 32) | def cgroup_container_sample(form, args):
method system_loads (line 36) | def system_loads(form, args):
method system_memsw_available (line 40) | def system_memsw_available(form, args):
method system_swap_extend (line 44) | def system_swap_extend(form, args):
method system_swap_clear (line 48) | def system_swap_clear(form, args):
method system_total_physical_memory (line 52) | def system_total_physical_memory(form, args):
FILE: meter/intra/billing.py
class billing_manager (line 5) | class billing_manager:
method on_lxc_acct_usage (line 9) | def on_lxc_acct_usage(uuid, prev, curr, interval):
method add_usage_sample (line 19) | def add_usage_sample(uuid, sample, interval):
method clean_dead_node (line 24) | def clean_dead_node(uuid):
method fetch_increment_and_clean (line 28) | def fetch_increment_and_clean(uuid):
FILE: meter/intra/cgroup.py
class cgroup_controller (line 3) | class cgroup_controller:
method read_value (line 5) | def read_value(group, uuid, item):
method write_value (line 13) | def write_value(group, uuid, item, value):
class cgroup_manager (line 23) | class cgroup_manager:
method set_default_memory_limit (line 29) | def set_default_memory_limit(limit):
method set_cgroup_prefix (line 32) | def set_cgroup_prefix(prefix = __prefix_lxc__):
method auto_detect_prefix (line 35) | def auto_detect_prefix():
method get_cgroup_containers (line 47) | def get_cgroup_containers():
method get_container_pid (line 57) | def get_container_pid(uuid):
method get_container_sample (line 60) | def get_container_sample(uuid):
method get_container_limit (line 71) | def get_container_limit(uuid):
method get_container_oom_status (line 82) | def get_container_oom_status(uuid):
method set_container_oom_idle (line 86) | def set_container_oom_idle(uuid, idle):
method protect_container_oom (line 89) | def protect_container_oom(uuid):
method set_container_physical_memory_limit (line 102) | def set_container_physical_memory_limit(uuid, Mbytes, freeze = False):
method set_container_cpu_priority_limit (line 110) | def set_container_cpu_priority_limit(uuid, ceof):
FILE: meter/intra/smart.py
class smart_controller (line 7) | class smart_controller:
method set_policy (line 9) | def set_policy(policy):
method start (line 12) | def start(interval = 4):
method smart_control_forever (line 18) | def smart_control_forever(interval):
FILE: meter/intra/system.py
class system_manager (line 5) | class system_manager:
method set_db_prefix (line 9) | def set_db_prefix(prefix):
method clear_all_swaps (line 16) | def clear_all_swaps():
method extend_swap (line 20) | def extend_swap(size):
method get_cpu_sample (line 41) | def get_cpu_sample():
method get_memory_sample (line 46) | def get_memory_sample():
method get_swap_sample (line 51) | def get_swap_sample():
method get_system_loads (line 56) | def get_system_loads():
method get_proc_etime (line 69) | def get_proc_etime(pid):
method get_available_memsw (line 83) | def get_available_memsw():
method get_total_physical_memory_for_containers (line 105) | def get_total_physical_memory_for_containers():
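`system_manager.get_proc_etime(pid)` presumably converts a process's elapsed time (the `ps -o etime` format, `[[dd-]hh:]mm:ss`) into seconds. A standalone sketch of that conversion, not the project's verbatim code:

```python
def parse_etime(etime):
    # parse the ps(1) etime format "[[dd-]hh:]mm:ss" into total seconds
    days = 0
    if "-" in etime:
        d, etime = etime.split("-")
        days = int(d)
    parts = [int(p) for p in etime.split(":")]
    while len(parts) < 3:          # pad missing hours (and minutes)
        parts.insert(0, 0)
    hours, minutes, seconds = parts
    return ((days * 24 + hours) * 60 + minutes) * 60 + seconds
```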
FILE: meter/main.py
function signal_handler (line 29) | def signal_handler(signal, frame):
FILE: meter/policy/allocate.py
class candidates_selector (line 1) | class candidates_selector:
method select (line 3) | def select(candidates):
FILE: meter/policy/quota.py
class identify_policy (line 5) | class identify_policy:
method get_score_by_uuid (line 7) | def get_score_by_uuid(uuid):
class etime_rev_policy (line 10) | class etime_rev_policy(identify_policy):
method get_score_by_uuid (line 12) | def get_score_by_uuid(uuid):
class mem_usage_policy (line 17) | class mem_usage_policy(identify_policy):
method get_score_by_uuid (line 19) | def get_score_by_uuid(uuid):
class mem_quota_policy (line 23) | class mem_quota_policy(identify_policy):
method get_score_by_uuid (line 25) | def get_score_by_uuid(uuid):
class cpu_usage_policy (line 29) | class cpu_usage_policy(identify_policy):
method get_score_by_uuid (line 31) | def get_score_by_uuid(uuid):
class cpu_usage_rev_policy (line 35) | class cpu_usage_rev_policy(identify_policy):
method get_score_by_uuid (line 37) | def get_score_by_uuid(uuid):
class cpu_speed_policy (line 41) | class cpu_speed_policy(identify_policy):
method get_score_by_uuid (line 43) | def get_score_by_uuid(uuid):
class user_state_policy (line 49) | class user_state_policy(identify_policy):
method get_score_by_uuid (line 51) | def get_score_by_uuid(uuid):
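The `meter/policy/quota.py` hierarchy is a classic strategy pattern: `identify_policy` gives every container the same score, and each subclass ranks containers by one metric so the meter can pick which container to throttle or reclaim first. A stripped-down sketch, where the metric source (`usage` dict) is a stand-in for whatever the real policies query:

```python
class identify_policy:
    def get_score_by_uuid(self, uuid):
        return 1.0                      # neutral score: no container preferred

class mem_usage_policy(identify_policy):
    def __init__(self, usage):
        self.usage = usage              # uuid -> memory usage (stand-in source)

    def get_score_by_uuid(self, uuid):
        return self.usage.get(uuid, 0)  # heavier users score higher

def pick_victim(policy, uuids):
    # e.g. reclaim memory from the highest-scoring container first
    return max(uuids, key=policy.get_score_by_uuid)
```

The `_rev_` variants in the listing likely just invert the ordering, so the lightest user scores highest instead.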
FILE: src/master/beansapplicationmgr.py
function send_beans_email (line 25) | def send_beans_email(to_address, username, beans):
class ApplicationMgr (line 55) | class ApplicationMgr:
method __init__ (line 57) | def __init__(self):
method apply (line 65) | def apply(self,username,number,reason):
method query (line 82) | def query(self,username):
method queryUnRead (line 91) | def queryUnRead(self,*,cur_user):
method agree (line 100) | def agree(self,msgid,*,cur_user):
method reject (line 114) | def reject(self,msgid,*,cur_user):
class ApprovalRobot (line 123) | class ApprovalRobot(threading.Thread):
method __init__ (line 125) | def __init__(self,maxtime=3600):
method stop (line 131) | def stop(self):
method run (line 134) | def run(self):
FILE: src/master/bugreporter.py
function send_bug_mail (line 11) | def send_bug_mail(username, bugmessage):
FILE: src/master/cloudmgr.py
class AliyunMgr (line 15) | class AliyunMgr():
method __init__ (line 16) | def __init__(self):
method loadClient (line 28) | def loadClient(self):
method createInstance (line 46) | def createInstance(self):
method startInstance (line 67) | def startInstance(self, instanceid):
method createEIP (line 75) | def createEIP(self):
method associateEIP (line 89) | def associateEIP(self, instanceid, eipid):
method getInnerIP (line 98) | def getInnerIP(self, instanceid):
method isStarted (line 108) | def isStarted(self, instanceids):
method rentServers (line 119) | def rentServers(self,number):
method addNode (line 146) | def addNode(self):
method addNodeAsync (line 160) | def addNodeAsync(self):
class EmptyMgr (line 165) | class EmptyMgr():
method addNodeAsync (line 166) | def addNodeAsync(self):
class CloudMgr (line 170) | class CloudMgr():
method getSettingFile (line 172) | def getSettingFile(self):
method modifySettingFile (line 182) | def modifySettingFile(self, setting):
method __init__ (line 192) | def __init__(self):
FILE: src/master/deploy.py
function myexec (line 7) | def myexec(ssh,command):
function deploy (line 22) | def deploy(ipaddr,masterip,account,password,volumename):
FILE: src/master/httprest.py
function post_to_user (line 46) | def post_to_user(url = '/', data={}):
function login_required (line 54) | def login_required(func):
function auth_key_required (line 74) | def auth_key_required(func):
function beans_check (line 87) | def beans_check(func):
function isalive (line 100) | def isalive(user, beans, form):
function logs_list (line 107) | def logs_list(user, beans, form):
function logs_get (line 113) | def logs_get(user, beans, form):
function create_cluster (line 121) | def create_cluster(user, beans, form):
function scaleout_cluster (line 161) | def scaleout_cluster(user, beans, form):
function scalein_cluster (line 201) | def scalein_cluster(user, beans, form):
function start_cluster (line 228) | def start_cluster(user, beans, form):
function stop_cluster (line 251) | def stop_cluster(user, beans, form):
function delete_cluster (line 273) | def delete_cluster(user, beans, form):
function info_cluster (line 300) | def info_cluster(user, beans, form):
function list_cluster (line 315) | def list_cluster(user, beans, form):
function stopall_cluster (line 326) | def stopall_cluster():
function flush_cluster (line 350) | def flush_cluster(user, beans, form):
function save_cluster (line 361) | def save_cluster(user, beans, form):
function release_ulock (line 394) | def release_ulock(user, beans, form):
function migrate_cluster (line 411) | def migrate_cluster():
function migrate_host (line 454) | def migrate_host(user, beans, form):
function list_image (line 472) | def list_image(user, beans, form):
function update_base (line 479) | def update_base(user, beans, form):
function description_image (line 487) | def description_image(user, beans, form):
function share_image (line 498) | def share_image(user, beans, form):
function unshare_image (line 513) | def unshare_image(user, beans, form):
function delete_image (line 528) | def delete_image(user, beans, form):
function copy_image (line 543) | def copy_image(user, beans, form):
function copytarget_image (line 562) | def copytarget_image(user, beans, form):
function query_account_cloud (line 579) | def query_account_cloud(cur_user, user, form):
function modify_account_cloud (line 587) | def modify_account_cloud(cur_user, user, form):
function add_node_cloud (line 595) | def add_node_cloud(user, beans, form):
function addproxy (line 604) | def addproxy(user, beans, form):
function deleteproxy (line 618) | def deleteproxy(user, beans, form):
function add_port_mapping (line 627) | def add_port_mapping(user, beans, form):
function delete_port_mapping (line 653) | def delete_port_mapping(user, beans, form):
function hosts_monitor (line 677) | def hosts_monitor(user, beans, form, com_id, issue):
function vnodes_monitor (line 721) | def vnodes_monitor(user, beans, form, con_id, issue):
function user_quotainfo_monitor (line 757) | def user_quotainfo_monitor(user, beans, form, issue):
function listphynodes_monitor (line 777) | def listphynodes_monitor(user, beans, form):
function pending_gpu_tasks_monitor (line 786) | def pending_gpu_tasks_monitor(user, beans, form):
function billing_beans (line 795) | def billing_beans():
function parmList_system (line 804) | def parmList_system(user, beans, form):
function modify_system (line 812) | def modify_system(user, beans, form):
function clear_system (line 827) | def clear_system(user, beans, form):
function add_system (line 841) | def add_system(user, beans, form):
function delete_system (line 856) | def delete_system(user, beans, form):
function resetall_system (line 870) | def resetall_system(user, beans, form):
function add_job (line 884) | def add_job(user,beans,form):
function list_job (line 963) | def list_job(user,beans,form):
function list_all_job (line 973) | def list_all_job(user,beans,form):
function info_job (line 983) | def info_job(user,beans,form):
function stop_job (line 994) | def stop_job(user,beans,form):
function get_output (line 1005) | def get_output(user,beans,form):
function info_task (line 1019) | def info_task(user,beans,form):
function batch_vnodes_list (line 1024) | def batch_vnodes_list(user,beans,form):
function internal_server_error (line 1062) | def internal_server_error(error):
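`httprest.py` guards its REST handlers with decorators (`login_required`, `auth_key_required`, `beans_check`) that run before the handler and pass the resolved user through. A framework-free sketch of that pattern; `check_token` is an invented stand-in for the real token lookup:

```python
from functools import wraps

def check_token(token):
    # stand-in: the real service would validate the token and load the user
    return {"valid": "user"}.get(token)

def login_required(func):
    @wraps(func)
    def wrapper(token, form):
        user = check_token(token)
        if user is None:
            return {"success": "false", "message": "please login"}
        return func(user, form)        # handlers then receive the user
    return wrapper

@login_required
def list_cluster(user, form):
    # a handler only runs once the decorator has resolved a valid user
    return {"success": "true", "user": user}
```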
FILE: src/master/jobmgr.py
function db_commit (line 11) | def db_commit():
class BatchJob (line 19) | class BatchJob(object):
method __init__ (line 20) | def __init__(self, jobid, user, job_info, old_job_db=None):
method data_lock (line 79) | def data_lock(f):
method get_tasks_no_dependency (line 94) | def get_tasks_no_dependency(self,update_status=False):
method stop_job (line 113) | def stop_job(self):
method _update_job_status (line 119) | def _update_job_status(self):
method update_task_running (line 139) | def update_task_running(self, task_idx):
method finish_task (line 157) | def finish_task(self, task_idx, running_time, billing):
method update_task_retrying (line 205) | def update_task_retrying(self, task_idx, reason, tried_times):
method update_task_failed (line 224) | def update_task_failed(self, task_idx, reason, tried_times, running_ti...
method update_task_stopped (line 243) | def update_task_stopped(self, task_idx, running_time, billing):
method log_status (line 264) | def log_status(self):
class JobMgr (line 275) | class JobMgr():
method __init__ (line 278) | def __init__(self, taskmgr):
method recover_jobs (line 293) | def recover_jobs(self):
method charge_beans (line 307) | def charge_beans(self,username,billing):
method add_lock (line 313) | def add_lock(f):
method create_job (line 327) | def create_job(self, user, job_info):
method add_job (line 335) | def add_job(self, user, job_info):
method stop_job (line 351) | def stop_job(self, user, job_id):
method list_jobs (line 373) | def list_jobs(self,user):
method list_all_jobs (line 388) | def list_all_jobs(self):
method get_job (line 405) | def get_job(self, user, job_id):
method is_job_exist (line 424) | def is_job_exist(self, job_id):
method gen_jobid (line 428) | def gen_jobid(self):
method add_task_taskmgr (line 436) | def add_task_taskmgr(self, user, tasks):
method process_job (line 447) | def process_job(self, job):
method report (line 456) | def report(self, user, task_name, status, reason="", tried_times=1, ru...
method get_output (line 502) | def get_output(self, username, jobid, taskid, vnodeid, issue):
FILE: src/master/lockmgr.py
class LockMgr (line 11) | class LockMgr:
method __init__ (line 13) | def __init__(self):
method acquire (line 20) | def acquire(self, lock_name):
method release (line 29) | def release(self, lock_name):
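`src/master/lockmgr.py`'s `LockMgr` exposes `acquire`/`release` by lock name, i.e. a registry of named locks created on demand. A plausible sketch; the internal dict and the guard lock protecting it are assumptions:

```python
import threading

class LockMgr:
    def __init__(self):
        self._locks = {}
        self._guard = threading.Lock()   # protects the registry itself

    def acquire(self, lock_name):
        with self._guard:
            if lock_name not in self._locks:
                self._locks[lock_name] = threading.Lock()
            lock = self._locks[lock_name]
        lock.acquire()                   # block outside the guard

    def release(self, lock_name):
        with self._guard:
            lock = self._locks.get(lock_name)
        if lock is not None:
            lock.release()
```

Acquiring the named lock outside the guard matters: holding the registry lock while blocking on a busy named lock would stall every other caller.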
FILE: src/master/monitor.py
function get_owner (line 35) | def get_owner(container_name):
class Master_Collector (line 40) | class Master_Collector(threading.Thread):
method __init__ (line 42) | def __init__(self,nodemgr,master_ip):
method net_billings (line 51) | def net_billings(self, username, now_bytes_total):
method run (line 68) | def run(self):
method stop (line 103) | def stop(self):
class Container_Fetcher (line 108) | class Container_Fetcher:
method __init__ (line 109) | def __init__(self,container_name):
method get_info (line 114) | def get_info(self):
method get_cpu_use (line 123) | def get_cpu_use(self):
method get_mem_use (line 134) | def get_mem_use(self):
method get_disk_use (line 145) | def get_disk_use(self):
method get_net_stats (line 155) | def get_net_stats(self):
method get_user_net_stats (line 167) | def get_user_net_stats(owner):
method get_basic_info (line 177) | def get_basic_info(self):
class Fetcher (line 188) | class Fetcher:
method __init__ (line 190) | def __init__(self,host):
method get_meminfo (line 204) | def get_meminfo(self):
method get_gpuinfo (line 213) | def get_gpuinfo(self):
method get_cpuinfo (line 222) | def get_cpuinfo(self):
method get_cpuconfig (line 231) | def get_cpuconfig(self):
method get_diskinfo (line 240) | def get_diskinfo(self):
method get_osinfo (line 249) | def get_osinfo(self):
method get_concpuinfo (line 258) | def get_concpuinfo(self):
method get_containers (line 267) | def get_containers(self):
method get_status (line 276) | def get_status(self):
method get_containerslist (line 288) | def get_containerslist(self):
FILE: src/master/network.py
function getip (line 10) | def getip(ifname):
function ip_to_int (line 20) | def ip_to_int(addr):
function int_to_ip (line 24) | def int_to_ip(num):
function fix_ip (line 28) | def fix_ip(addr, cidr):
function next_interval (line 33) | def next_interval(addr, cidr):
function before_interval (line 38) | def before_interval(addr, cidr):
class IntervalPool (line 57) | class IntervalPool(object):
method __init__ (line 59) | def __init__(self, addr_cidr=None, copy=None):
method __str__ (line 81) | def __str__(self):
method printpool (line 84) | def printpool(self):
method allocate (line 92) | def allocate(self, thiscidr):
method overlap (line 113) | def overlap(self, addr, cidr):
method inrange (line 130) | def inrange(self, addr, cidr):
method free (line 138) | def free(self, addr, cidr):
class EnumPool (line 177) | class EnumPool(object):
method __init__ (line 178) | def __init__(self, addr_cidr=None, copy=None):
method __str__ (line 196) | def __str__(self):
method printpool (line 199) | def printpool(self):
method acquire (line 202) | def acquire(self, num=1):
method acquire_cidr (line 210) | def acquire_cidr(self, num=1):
method inrange (line 216) | def inrange(self, ip):
method release (line 224) | def release(self, ip_or_ips):
class UserPool (line 241) | class UserPool(EnumPool):
method __init__ (line 242) | def __init__(self, addr_cidr=None, copy=None):
method get_gateway (line 256) | def get_gateway(self):
method get_gateway_cidr (line 259) | def get_gateway_cidr(self):
method inrange (line 262) | def inrange(self, ip):
method printpool (line 270) | def printpool(self):
class NetworkMgr (line 278) | class NetworkMgr(object):
method __init__ (line 279) | def __init__(self, addr_cidr, etcdclient, mode, masterip):
method load_center (line 370) | def load_center(self):
method dump_center (line 375) | def dump_center(self):
method load_system (line 378) | def load_system(self):
method dump_system (line 383) | def dump_system(self):
method load_user (line 386) | def load_user(self, username):
method dump_user (line 393) | def dump_user(self, username):
method load_usrgw (line 397) | def load_usrgw(self,username):
method dump_usrgw (line 402) | def dump_usrgw(self, username):
method printpools (line 405) | def printpools(self):
method has_usrgw (line 452) | def has_usrgw(self, username):
method setup_usrgw (line 456) | def setup_usrgw(self, input_rate_limit, output_rate_limit, username, u...
method add_user (line 480) | def add_user(self, username, cidr, isshared = False):
method del_usrgwbr (line 509) | def del_usrgwbr(self, username, uid, nodemgr):
method del_user (line 526) | def del_user(self, username):
method check_usergw (line 548) | def check_usergw(self, input_rate_limit, output_rate_limit, username, ...
method check_usergre (line 571) | def check_usergre(self, username, uid, remote, nodemgr, distributedgw=...
method has_user (line 586) | def has_user(self, username):
method acquire_userips (line 590) | def acquire_userips(self, username, num=1):
method acquire_userips_cidr (line 600) | def acquire_userips_cidr(self, username, num=1):
method release_userips (line 612) | def release_userips(self, username, ip_or_ips):
method get_usergw (line 622) | def get_usergw(self, username):
method get_usergw_cidr (line 631) | def get_usergw_cidr(self, username):
method acquire_sysips (line 649) | def acquire_sysips(self, num=1):
method acquire_sysips_cidr (line 655) | def acquire_sysips_cidr(self, num=1):
method release_sysips (line 661) | def release_sysips(self, ip_or_ips):
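The IP-pool classes in `src/master/network.py` build on the `ip_to_int`/`int_to_ip`/`fix_ip` helpers listed at the top of the file. These are natural implementations of those signatures (a sketch, not the project's verbatim code):

```python
def ip_to_int(addr):
    # "10.0.0.1" -> 167772161
    a, b, c, d = (int(x) for x in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def int_to_ip(num):
    return "%d.%d.%d.%d" % (num >> 24 & 0xFF, num >> 16 & 0xFF,
                            num >> 8 & 0xFF, num & 0xFF)

def fix_ip(addr, cidr):
    # zero out the host bits so the address aligns to its /cidr network
    mask = (0xFFFFFFFF << (32 - cidr)) & 0xFFFFFFFF
    return int_to_ip(ip_to_int(addr) & mask)
```

With addresses as integers, `IntervalPool`/`EnumPool` allocation reduces to range arithmetic instead of string manipulation.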
FILE: src/master/nodemgr.py
class NodeMgr (line 19) | class NodeMgr(object):
method __init__ (line 20) | def __init__(self, networkmgr, etcdclient, addr, mode):
method _nodelist_etcd (line 84) | def _nodelist_etcd(self, which):
method _watchnewnode (line 95) | def _watchnewnode(self):
method recover_node (line 158) | def recover_node(self,ip,tasks):
method get_nodeips (line 168) | def get_nodeips(self):
method get_batch_nodeips (line 171) | def get_batch_nodeips(self):
method get_base_nodeips (line 174) | def get_base_nodeips(self):
method get_allnodes (line 177) | def get_allnodes(self):
method ip_to_rpc (line 180) | def ip_to_rpc(self,ip):
method call_rpc_function (line 190) | def call_rpc_function(self, worker, function, args):
FILE: src/master/notificationmgr.py
class NotificationMgr (line 14) | class NotificationMgr:
method __init__ (line 15) | def __init__(self):
method query_user_notifications (line 31) | def query_user_notifications(self, user):
method mail_notification (line 39) | def mail_notification(self, notify_id):
method create_notification (line 88) | def create_notification(self, *args, **kwargs):
method list_notifications (line 123) | def list_notifications(self, *args, **kwargs):
method modify_notification (line 142) | def modify_notification(self, *args, **kwargs):
method delete_notification (line 166) | def delete_notification(self, *args, **kwargs):
method query_self_notification_simple_infos (line 183) | def query_self_notification_simple_infos(self, *args, **kwargs):
method query_self_notifications_infos (line 209) | def query_self_notifications_infos(self, *args, **kwargs):
method query_notification (line 236) | def query_notification(self, *args, **kwargs):
FILE: src/master/parser.py
function parse (line 6) | def parse(job_data):
FILE: src/master/releasemgr.py
function post_to_user (line 12) | def post_to_user(url = '/', data={}):
class ReleaseMgr (line 17) | class ReleaseMgr(threading.Thread):
method __init__ (line 19) | def __init__(self, vclustermgr, ulockmgr, check_interval=_ONE_DAY_IN_S...
method _send_email (line 31) | def _send_email(self, to_address, username, vcluster, days, is_release...
method run (line 74) | def run(self):
method stop (line 136) | def stop(self):
FILE: src/master/settings.py
class settingsClass (line 9) | class settingsClass:
method __init__ (line 11) | def __init__(self):
method get (line 24) | def get(self, arg):
method list (line 27) | def list(*args, **kwargs):
method update (line 35) | def update(*args, **kwargs):
FILE: src/master/sysmgr.py
function parse_line (line 13) | def parse_line(line):
class SystemManager (line 29) | class SystemManager():
method getParmList (line 31) | def getParmList(*args, **kwargs):
method modify (line 83) | def modify(self, field, parm, val):
method clear (line 118) | def clear(self, field, parm):
method add (line 134) | def add(self, field, parm, val):
method delete (line 140) | def delete(self, field, parm):
method reset_all (line 154) | def reset_all(self, field):
FILE: src/master/taskmgr.py
function ip_to_int (line 20) | def ip_to_int(addr):
function int_to_ip (line 24) | def int_to_ip(num):
class Task (line 27) | class Task():
method __init__ (line 28) | def __init__(self, taskmgr, task_id, username, at_same_time, priority,...
method get_billing (line 54) | def get_billing(self):
method __lt__ (line 78) | def __lt__(self, other):
method gen_ips_from_base (line 81) | def gen_ips_from_base(self,base_ip):
method gen_hosts (line 88) | def gen_hosts(self):
class SubTask (line 106) | class SubTask():
method __init__ (line 107) | def __init__(self, idx, root_task, vnode_info, command_info, max_retry...
method waiting_for_retry (line 127) | def waiting_for_retry(self,reason=""):
class TaskReporter (line 135) | class TaskReporter(MasterServicer):
method __init__ (line 137) | def __init__(self, taskmgr):
method report (line 140) | def report(self, request, context):
class TaskMgr (line 146) | class TaskMgr(threading.Thread):
method __init__ (line 151) | def __init__(self, nodemgr, monitor_fetcher, master_ip, scheduler_inte...
method data_lock (line 196) | def data_lock(lockname):
method subtask_lock (line 212) | def subtask_lock(f):
method run (line 225) | def run(self):
method serve (line 235) | def serve(self):
method stop (line 242) | def stop(self):
method sort_out_task_queue (line 250) | def sort_out_task_queue(self):
method start_vnode (line 295) | def start_vnode(self, subtask):
method stop_vnode (line 314) | def stop_vnode(self, subtask):
method start_subtask (line 335) | def start_subtask(self, subtask):
method stop_subtask (line 350) | def stop_subtask(self, subtask):
method acquire_task_ips (line 367) | def acquire_task_ips(self, task):
method release_task_ips (line 374) | def release_task_ips(self, task):
method setup_tasknet (line 382) | def setup_tasknet(self, task, workers=None):
method remove_tasknet (line 399) | def remove_tasknet(self, task):
method task_processor (line 405) | def task_processor(self, task, sub_task_list):
method clear_sub_tasks (line 468) | def clear_sub_tasks(self, sub_task_list):
method clear_sub_task (line 472) | def clear_sub_task(self, sub_task):
method lazy_stop_task (line 481) | def lazy_stop_task(self, taskid):
method stop_remove_task (line 484) | def stop_remove_task(self, task):
method check_task_completed (line 492) | def check_task_completed(self, task):
method on_task_report (line 510) | def on_task_report(self, report):
method task_scheduler (line 548) | def task_scheduler(self):
method has_waiting (line 604) | def has_waiting(self, sub_task_list):
method find_proper_workers (line 610) | def find_proper_workers(self, sub_task_list, all_res=False):
method get_all_nodes (line 665) | def get_all_nodes(self):
method is_alive (line 674) | def is_alive(self, worker):
method get_worker_resource_info (line 678) | def get_worker_resource_info(self, worker_ip):
method get_cpu_usage (line 690) | def get_cpu_usage(self, worker_ip):
method get_gpu_usage (line 698) | def get_gpu_usage(self, worker_ip):
method add_task (line 708) | def add_task(self, username, taskid, json_task, task_priority=1):
method get_task_list (line 793) | def get_task_list(self):
method get_pending_gpu_tasks_info (line 797) | def get_pending_gpu_tasks_info(self):
method get_task_order (line 800) | def get_task_order(self, taskid):
method get_task (line 807) | def get_task(self, taskid):
method set_jobmgr (line 814) | def set_jobmgr(self, jobmgr):
method get_user_batch_containers (line 819) | def get_user_batch_containers(self,username):
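`TaskMgr` keeps a priority-ordered queue, and `Task.__lt__` (line 78) suggests tasks are made comparable so a heap or sort can order them. A stripped-down sketch of that scheduling idea; the "higher priority first" ordering is an assumption:

```python
import heapq

class Task:
    def __init__(self, task_id, priority):
        self.id = task_id
        self.priority = priority

    def __lt__(self, other):
        # heapq pops the "smallest" item, so invert the comparison to make
        # the highest-priority task come out first (assumed ordering)
        return self.priority > other.priority

queue = []
for t in [Task("a", 1), Task("b", 3), Task("c", 2)]:
    heapq.heappush(queue, t)
```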
FILE: src/master/testTaskCtrler.py
function run (line 10) | def run():
function stop_task (line 27) | def stop_task():
FILE: src/master/testTaskMgr.py
class SimulatedNodeMgr (line 10) | class SimulatedNodeMgr():
method get_batch_nodeips (line 11) | def get_batch_nodeips(self):
class SimulatedMonitorFetcher (line 15) | class SimulatedMonitorFetcher():
method __init__ (line 16) | def __init__(self, ip):
class SimulatedTaskController (line 29) | class SimulatedTaskController(WorkerServicer):
method __init__ (line 31) | def __init__(self, worker):
method start_vnode (line 34) | def start_vnode(self, vnodeinfo, context):
method stop_vnode (line 38) | def stop_vnode(self, vnodeinfo, context):
method start_task (line 42) | def start_task(self, taskinfo, context):
method stop_task (line 47) | def stop_task(self, taskinfo, context):
class SimulatedWorker (line 52) | class SimulatedWorker(threading.Thread):
method __init__ (line 54) | def __init__(self):
method run (line 59) | def run(self):
method stop (line 83) | def stop(self):
method process (line 86) | def process(self, task):
class SimulatedJobMgr (line 90) | class SimulatedJobMgr(threading.Thread):
method __init__ (line 92) | def __init__(self):
method run (line 96) | def run(self):
method stop (line 101) | def stop(self):
method report (line 104) | def report(self, task):
method assignTask (line 107) | def assignTask(self, taskmgr, taskid, instance_count, retry_count, tim...
class SimulatedLogger (line 129) | class SimulatedLogger():
method info (line 130) | def info(self, msg):
method warning (line 133) | def warning(self, msg):
method error (line 136) | def error(self, msg):
function test (line 140) | def test():
function test2 (line 157) | def test2():
function add (line 171) | def add(taskid, instance_count, retry_count, timeout, cpu, memory, disk,...
function report (line 177) | def report(taskid, instanceid, status, token):
function stop (line 186) | def stop():
FILE: src/master/testTaskWorker.py
function run (line 10) | def run():
function stop_task (line 29) | def stop_task():
function stop_vnode (line 39) | def stop_vnode():
function start_task (line 48) | def start_task():
FILE: src/master/userManager.py
function administration_required (line 36) | def administration_required(func):
function administration_or_self_required (line 49) | def administration_or_self_required(func):
function token_required (line 63) | def token_required(func):
function send_activated_email (line 72) | def send_activated_email(to_address, username):
function send_remind_activating_email (line 99) | def send_remind_activating_email(username):
class userManager (line 139) | class userManager:
method __init__ (line 140) | def __init__(self, username = 'root', password = None):
method auth_local (line 209) | def auth_local(self, username, password):
method auth_pam (line 230) | def auth_pam(self, username, password):
method auth_external (line 259) | def auth_external(self, form, userip=""):
method auth (line 325) | def auth(self, username, password, userip=""):
method auth_token (line 374) | def auth_token(self, token):
method set_nfs_quota_bygroup (line 383) | def set_nfs_quota_bygroup(self,groupname, quota):
method set_nfs_quota (line 390) | def set_nfs_quota(self, username, quota):
method query (line 402) | def query(*args, **kwargs):
method selfQuery (line 466) | def selfQuery(*args, **kwargs):
method selfModify (line 510) | def selfModify(*args, **kwargs):
method usageQuery (line 546) | def usageQuery(self, *args, **kwargs):
method usageInc (line 579) | def usageInc(self, *args, **kwargs):
method usageRecover (line 616) | def usageRecover(self, *args, **kwargs):
method usageRelease (line 647) | def usageRelease(self, *args, **kwargs):
method initUsage (line 673) | def initUsage(*args, **kwargs):
method userList (line 686) | def userList(*args, **kwargs):
method groupList (line 713) | def groupList(*args, **kwargs):
method change_default_group (line 733) | def change_default_group(*args, **kwargs):
method groupQuery (line 746) | def groupQuery(self, *args, **kwargs):
method groupListName (line 765) | def groupListName(*args, **kwargs):
method groupModify (line 781) | def groupModify(self, *args, **kwargs):
method modify (line 815) | def modify(self, *args, **kwargs):
method chpassword (line 869) | def chpassword(*args, **kwargs):
method newuser (line 878) | def newuser(*args, **kwargs):
method register (line 892) | def register(self, *args, **kwargs):
method quotaadd (line 923) | def quotaadd(*args, **kwargs):
method groupadd (line 950) | def groupadd(*args, **kwargs):
method groupdel (line 979) | def groupdel(*args, **kwargs):
method lxcsettingList (line 996) | def lxcsettingList(*args, **kwargs):
method chlxcsetting (line 1003) | def chlxcsetting(*args, **kwargs):
method queryForDisplay (line 1015) | def queryForDisplay(*args, **kwargs):
FILE: src/master/vclustermgr.py
function post_to_user (line 14) | def post_to_user(url = '/', data={}):
function db_commit (line 23) | def db_commit():
class VclusterMgr (line 32) | class VclusterMgr(object):
method __init__ (line 33) | def __init__(self, nodemgr, networkmgr, etcdclient, addr, mode, distri...
method _watchrecovering (line 78) | def _watchrecovering(self):
method recover_allclusters (line 85) | def recover_allclusters(self):
method mount_allclusters (line 93) | def mount_allclusters(self):
method stop_allclusters (line 102) | def stop_allclusters(self):
method detach_allclusters (line 111) | def detach_allclusters(self):
method create_cluster (line 120) | def create_cluster(self, clustername, username, image, user_info, sett...
method scale_out_cluster (line 208) | def scale_out_cluster(self,clustername,username, image,user_info, sett...
method addproxy (line 264) | def addproxy(self,username,clustername,ip,port):
method deleteproxy (line 280) | def deleteproxy(self, username, clustername):
method count_port_mapping (line 295) | def count_port_mapping(self, username):
method add_port_mapping (line 298) | def add_port_mapping(self,username,clustername,node_name,node_ip,port,...
method recover_port_mapping (line 323) | def recover_port_mapping(self,username,clustername):
method delete_all_port_mapping (line 335) | def delete_all_port_mapping(self, username, clustername, node_name):
method delete_port_mapping (line 365) | def delete_port_mapping(self, username, clustername, node_name, node_p...
method flush_cluster (line 386) | def flush_cluster(self,username,clustername,containername):
method image_check (line 424) | def image_check(self,username,imagename):
method create_image (line 431) | def create_image(self,username,clustername,containername,imagename,des...
method delete_cluster (line 452) | def delete_cluster(self, clustername, username, user_info):
method scale_in_cluster (line 486) | def scale_in_cluster(self, clustername, username, containername):
method get_clustersetting (line 525) | def get_clustersetting(self, clustername, username, containername, all...
method update_cluster_baseurl (line 546) | def update_cluster_baseurl(self, clustername, username, oldip, newip):
method check_public_ip (line 558) | def check_public_ip(self, clustername, username):
method start_cluster (line 570) | def start_cluster(self, clustername, username, user_info):
method mount_cluster (line 626) | def mount_cluster(self, clustername, username):
method recover_cluster_on (line 637) | def recover_cluster_on(self, host):
method recover_clusters (line 646) | def recover_clusters(self, clusters_users):
method recover_cluster (line 704) | def recover_cluster(self, clustername, username, uid, input_rate_limit...
method stop_cluster (line 760) | def stop_cluster(self, clustername, username):
method detach_cluster (line 785) | def detach_cluster(self, clustername, username):
method list_clusters (line 798) | def list_clusters(self, user):
method migrate_container (line 813) | def migrate_container(self, clustername, username, containername, new_...
method migrate_cluster (line 881) | def migrate_cluster(self, clustername, username, src_host, new_host_li...
method migrate_host (line 908) | def migrate_host(self, src_host, new_host_list, ulockmgr):
method is_cluster (line 937) | def is_cluster(self, clustername, username):
method get_clusterid (line 945) | def get_clusterid(self, clustername, username):
method update_proxy_ipAndurl (line 954) | def update_proxy_ipAndurl(self, clustername, username, proxy_server_ip):
method get_clusterinfo (line 970) | def get_clusterinfo(self, clustername, username):
method get_vcluster (line 977) | def get_vcluster(self, clustername, username):
method get_all_clusterinfo (line 984) | def get_all_clusterinfo(self):
method _acquire_id (line 993) | def _acquire_id(self):
FILE: src/protos/rpc_pb2_grpc.py
class MasterStub (line 7) | class MasterStub(object):
method __init__ (line 11) | def __init__(self, channel):
class MasterServicer (line 24) | class MasterServicer(object):
method report (line 28) | def report(self, request, context):
function add_MasterServicer_to_server (line 36) | def add_MasterServicer_to_server(servicer, server):
class WorkerStub (line 49) | class WorkerStub(object):
method __init__ (line 53) | def __init__(self, channel):
class WorkerServicer (line 81) | class WorkerServicer(object):
method start_vnode (line 85) | def start_vnode(self, request, context):
method start_task (line 92) | def start_task(self, request, context):
method stop_task (line 99) | def stop_task(self, request, context):
method stop_vnode (line 106) | def stop_vnode(self, request, context):
function add_WorkerServicer_to_server (line 114) | def add_WorkerServicer_to_server(servicer, server):
FILE: src/utils/env.py
function getenv (line 3) | def getenv(key):
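`src/utils/env.py` exposes a single `getenv` helper that centralizes configuration lookup. A minimal sketch of that pattern; the default table here is illustrative, not the repo's actual defaults:

```python
import os

# Illustrative defaults only; the real src/utils/env.py keeps its own table.
_DEFAULTS = {
    "FS_PREFIX": "/opt/docklet",
    "CLUSTER_NAME": "docklet-vc",
}

def getenv(key):
    """Return the environment value for key, falling back to a default,
    then to the empty string."""
    return os.environ.get(key, _DEFAULTS.get(key, ""))
```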
FILE: src/utils/etcdlib.py
function dorequest (line 17) | def dorequest(url, data = "", method = 'GET'):
class Client (line 40) | class Client(object):
method __init__ (line 42) | def __init__(self, server, prefix = ""):
method getmembers (line 52) | def getmembers(self):
method listmembers (line 60) | def listmembers(self):
method clean (line 63) | def clean(self):
method getkey (line 81) | def getkey(self, key):
method setkey (line 89) | def setkey(self, key, value, ttl=0):
method delkey (line 100) | def delkey(self, key):
method isdir (line 108) | def isdir(self, dirname):
method createdir (line 117) | def createdir(self, dirname):
method listdir (line 127) | def listdir(self, dirname):
method deldir (line 146) | def deldir(self, dirname):
method watch (line 156) | def watch(self, key):
method atomiccreate (line 165) | def atomiccreate(self, key, value='atom'):
method lockref (line 181) | def lockref(self, key):
method acquire (line 185) | def acquire(self, lockref):
method release (line 192) | def release(self, lockref):
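`src/utils/etcdlib.py` wraps the etcd v2 HTTP API by hand (`dorequest` plus key/dir helpers). How such a request is composed can be sketched with pure URL-building; the parameter names (`value`, `ttl`, `prevExist`) are the public etcd v2 API, while the helper functions themselves are illustrative:

```python
from urllib.parse import urlencode

def build_set_request(server, prefix, key, value, ttl=0):
    """Compose PUT /v2/keys/<prefix>/<key>, with an optional TTL."""
    url = "%s/v2/keys%s/%s" % (server.rstrip("/"), prefix, key.lstrip("/"))
    data = {"value": value}
    if ttl:
        data["ttl"] = ttl
    return "PUT", url, urlencode(data)

def build_atomic_create(server, prefix, key, value="atom"):
    """prevExist=false makes the PUT fail if the key already exists --
    that compare-and-set is what gives atomiccreate()/acquire() their
    lock-like semantics."""
    method, url, body = build_set_request(server, prefix, key, value)
    return method, url + "?prevExist=false", body
```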
FILE: src/utils/gputools.py
function add_device (line 10) | def add_device(container_name, device_path):
function remove_device (line 15) | def remove_device(container_name, device_path):
function nvidia_smi (line 42) | def nvidia_smi(args=[]):
function get_gpu_driver_version (line 56) | def get_gpu_driver_version():
function get_gpu_names (line 66) | def get_gpu_names():
function get_gpu_status (line 80) | def get_gpu_status():
function get_gpu_processes (line 98) | def get_gpu_processes():
function get_container_name_by_pid (line 118) | def get_container_name_by_pid(pid):
function clean_up_processes_in_gpu (line 128) | def clean_up_processes_in_gpu(gpu_id):
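The `get_gpu_*` functions in `src/utils/gputools.py` all work by shelling out to `nvidia-smi` and parsing its text output. A self-contained sketch of that kind of parsing, here assuming CSV-style `--query-gpu` output (the sample text is illustrative; real output varies by driver version):

```python
def parse_gpu_query(csv_text):
    """Parse `nvidia-smi --query-gpu=name,memory.total,memory.used
    --format=csv,noheader` style output into one dict per GPU."""
    gpus = []
    for line in csv_text.strip().splitlines():
        name, mem_total, mem_used = [f.strip() for f in line.split(",")]
        gpus.append({"name": name,
                     "memory.total": mem_total,
                     "memory.used": mem_used})
    return gpus
```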
FILE: src/utils/imagemgr.py
class ImageMgr (line 32) | class ImageMgr():
method sys_return (line 37) | def sys_return(self,command):
method __init__ (line 41) | def __init__(self):
method datetime_toString (line 47) | def datetime_toString(self,dt):
method string_toDatetime (line 50) | def string_toDatetime(self,string):
method updateinfo (line 53) | def updateinfo(self,user,imagename,description):
method dealpath (line 67) | def dealpath(self,fspath):
method createImage (line 73) | def createImage(self,user,image,lxc,description="Not thing", imagenum=...
method prepareImage (line 103) | def prepareImage(self,user,image,fspath):
method prepareFS (line 130) | def prepareFS(self,user,image,lxc,size="1000",vgname="docklet-group"):
method deleteFS (line 175) | def deleteFS(self,lxc,vgname="docklet-group"):
method detachFS (line 197) | def detachFS(self, lxc, vgname="docklet-group"):
method checkFS (line 205) | def checkFS(self, lxc, vgname="docklet-group"):
method removeImage (line 220) | def removeImage(self,user,imagename):
method shareImage (line 234) | def shareImage(self,user,imagename):
method unshareImage (line 260) | def unshareImage(self,user,imagename):
method copyImage (line 284) | def copyImage(self,user,image,token,target):
method update_basefs (line 317) | def update_basefs(self,imagename):
method update_base_image (line 333) | def update_base_image(self, user, vclustermgr, image):
method get_image_info (line 350) | def get_image_info(self, user, imagename, imagetype):
method get_image_description (line 370) | def get_image_description(self, user, image):
method get_image_size (line 383) | def get_image_size(self, image):
method format_size (line 396) | def format_size(self, size_in_byte):
method list_images (line 406) | def list_images(self,user):
method isshared (line 463) | def isshared(self,user,imagename):
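`ImageMgr.format_size` turns raw byte counts into human-readable sizes for the image list. A stand-alone version of that conversion (unit names and rounding here are illustrative, not necessarily the repo's exact output):

```python
def format_size(size_in_byte):
    """Render a byte count with binary units, e.g. 2048 -> '2.0 KiB'."""
    units = ["B", "KiB", "MiB", "GiB", "TiB"]
    size = float(size_in_byte)
    for unit in units:
        if size < 1024 or unit == units[-1]:
            return "%.1f %s" % (size, unit)
        size /= 1024
```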
FILE: src/utils/log.py
function initlogging (line 14) | def initlogging(name='docklet'):
class RedirectLogger (line 55) | class RedirectLogger(object):
method __init__ (line 56) | def __init__(self, logger, level):
method write (line 61) | def write(self, message):
method flush (line 66) | def flush(self):
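`RedirectLogger` in `src/utils/log.py` is a file-like object handed to stdout/stderr so that stray `print` output lands in the docklet log. The pattern, sketched self-contained:

```python
import logging

class RedirectLogger(object):
    """File-like object that forwards writes to a logger at a fixed level."""
    def __init__(self, logger, level):
        self.logger = logger
        self.level = level

    def write(self, message):
        message = message.rstrip()
        if message:                      # skip the bare "\n" print() emits
            self.logger.log(self.level, message)

    def flush(self):
        pass                             # nothing is buffered here
```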
FILE: src/utils/logs.py
class logsClass (line 10) | class logsClass:
method list (line 13) | def list(*args, **kwargs):
method get (line 26) | def get(*args, **kwargs):
FILE: src/utils/lvmtool.py
function sys_run (line 7) | def sys_run(command,check=False):
function new_group (line 11) | def new_group(group_name, size = "5000", file_path = "/opt/docklet/local...
function recover_group (line 73) | def recover_group(group_name,file_path="/opt/docklet/local/docklet-stora...
function new_volume (line 110) | def new_volume(group_name,volume_name,size):
function check_group (line 125) | def check_group(group_name):
function check_volume (line 132) | def check_volume(group_name,volume_name):
function delete_group (line 139) | def delete_group(group_name):
function delete_volume (line 153) | def delete_volume(group_name, volume_name):
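Every function in `src/utils/lvmtool.py` funnels through `sys_run`, a thin wrapper that shells out to `vgcreate`/`lvcreate` and friends. A sketch of that wrapper (the real LVM invocations need root, so the usage below demonstrates with a harmless command):

```python
import subprocess

def sys_run(command, check=False):
    """Run a shell command, capturing stdout and stderr together."""
    return subprocess.run(command, shell=True, check=check,
                          stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
```

With `check=True`, a nonzero exit status raises `CalledProcessError`, which is how a caller can distinguish "volume group already exists" from success.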
FILE: src/utils/model.py
class User (line 66) | class User(db.Model):
method __init__ (line 84) | def __init__(self, username, password, avatar="default.png", nickname ...
method __repr__ (line 110) | def __repr__(self):
method generate_auth_token (line 114) | def generate_auth_token(self, expiration = 3600):
method verify_auth_token (line 120) | def verify_auth_token(token):
class UserGroup (line 131) | class UserGroup(db.Model):
method __init__ (line 139) | def __init__(self, name):
method __repr__ (line 146) | def __repr__(self):
class UserUsage (line 149) | class UserUsage(db.Model):
method __init__ (line 156) | def __init__(self, name):
method __repr__ (line 162) | def __repr__(self):
class Notification (line 165) | class Notification(db.Model):
method __init__ (line 173) | def __init__(self, title, content=''):
method __repr__ (line 179) | def __repr__(self):
class NotificationGroups (line 183) | class NotificationGroups(db.Model):
method __init__ (line 189) | def __init__(self, notification_id, group_name):
method __repr__ (line 193) | def __repr__(self):
class UserNotificationPair (line 196) | class UserNotificationPair(db.Model):
method __init__ (line 202) | def __init__(self, username, notifyid):
method __repr__ (line 207) | def __repr__(self):
class LoginMsg (line 210) | class LoginMsg(db.Model):
method __init__ (line 217) | def __init__(self, username, userip):
method __repr__ (line 222) | def __repr__(self):
class LoginFailMsg (line 225) | class LoginFailMsg(db.Model):
method __init__ (line 231) | def __init__(self, username):
method __repr__ (line 236) | def __repr__(self):
class VNode (line 239) | class VNode(db.Model):
method __init__ (line 247) | def __init__(self, vnode_name):
method __repr__ (line 253) | def __repr__(self):
class History (line 256) | class History(db.Model):
method __init__ (line 266) | def __init__(self, action, runningtime, cputime, billing):
method __repr__ (line 273) | def __repr__(self):
class ApplyMsg (line 276) | class ApplyMsg(db.Model):
method __init__ (line 285) | def __init__(self,username, number, reason):
method ch2dict (line 292) | def ch2dict(self):
method __repr__ (line 302) | def __repr__(self):
class Container (line 305) | class Container(db.Model):
method __init__ (line 318) | def __init__(self, containername, hostname, ip, host, image, lastsave,...
method __repr__ (line 329) | def __repr__(self):
class PortMapping (line 332) | class PortMapping(db.Model):
method __init__ (line 341) | def __init__(self, node_name, node_ip, node_port, host_port):
method __repr__ (line 347) | def __repr__(self):
class BillingHistory (line 350) | class BillingHistory(db.Model):
method __init__ (line 359) | def __init__(self,node_name,cpu,mem,disk,port):
method __repr__ (line 366) | def __repr__(self):
class VCluster (line 370) | class VCluster(db.Model):
method __init__ (line 388) | def __init__(self, clusterid, clustername, ownername, status, size, ne...
method __repr__ (line 405) | def __repr__(self):
class Image (line 428) | class Image(db.Model):
method __init__ (line 438) | def __init__(self,imagename,hasPrivate,hasPublic,ownername,description):
method __repr__ (line 446) | def __repr__(self):
class Batchjob (line 449) | class Batchjob(db.Model):
method __init__ (line 462) | def __init__(self,id,username,name,priority):
method clear (line 473) | def clear(self):
method __repr__ (line 479) | def __repr__(self):
class Batchtask (line 495) | class Batchtask(db.Model):
method __init__ (line 509) | def __init__(self, id, idx, config):
method clear (line 521) | def clear(self):
method __repr__ (line 530) | def __repr__(self):
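The `User.generate_auth_token` / `verify_auth_token` pair in `src/utils/model.py` is built on itsdangerous' signed, expiring serializer. The same shape can be shown with only the stdlib; this HMAC-based version is a stand-in, not the repo's actual token format, and the secret here is illustrative (docklet derives its key from configuration):

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-secret"   # illustrative; never hard-code a real key

def generate_auth_token(username, expiration=3600):
    payload = json.dumps({"user": username,
                          "exp": int(time.time()) + expiration}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return base64.urlsafe_b64encode(payload + b"." + sig).decode()

def verify_auth_token(token):
    """Return the username, or None if the token is forged or expired."""
    raw = base64.urlsafe_b64decode(token.encode())
    payload, _, sig = raw.rpartition(b".")   # hex sig contains no "."
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(expected, sig):
        return None
    data = json.loads(payload)
    return data["user"] if data["exp"] > time.time() else None
```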
FILE: src/utils/nettools.py
class ipcontrol (line 7) | class ipcontrol(object):
method parse (line 9) | def parse(cmdout):
method list_links (line 38) | def list_links():
method link_exist (line 47) | def link_exist(linkname):
method link_info (line 55) | def link_info(linkname):
method link_state (line 63) | def link_state(linkname):
method link_ips (line 71) | def link_ips(linkname):
method up_link (line 82) | def up_link(linkname):
method down_link (line 90) | def down_link(linkname):
method add_addr (line 98) | def add_addr(linkname, address):
method del_addr (line 106) | def del_addr(linkname, address):
class ovscontrol (line 126) | class ovscontrol(object):
method list_bridges (line 128) | def list_bridges():
method bridge_exist (line 136) | def bridge_exist(bridge):
method port_tobridge (line 144) | def port_tobridge(port):
method port_exists (line 152) | def port_exists(port):
method add_bridge (line 156) | def add_bridge(bridge):
method del_bridge (line 164) | def del_bridge(bridge):
method list_ports (line 172) | def list_ports(bridge):
method del_port (line 180) | def del_port(bridge, port):
method add_port (line 188) | def add_port(bridge, port):
method add_port_internal (line 196) | def add_port_internal(bridge, port):
method add_port_internal_withtag (line 204) | def add_port_internal_withtag(bridge, port, tag):
method add_port_gre (line 212) | def add_port_gre(bridge, port, remote):
method add_port_gre_withkey (line 220) | def add_port_gre_withkey(bridge, port, remote, key):
method set_port_tag (line 228) | def set_port_tag(port, tag):
method set_port_input_qos (line 236) | def set_port_input_qos(port, input_rate_limit):
method del_port_input_qos (line 248) | def del_port_input_qos(port):
method set_port_output_qos (line 258) | def set_port_output_qos(port, output_rate_limit):
method del_port_output_qos (line 267) | def del_port_output_qos(port):
method destroy_all_qos (line 276) | def destroy_all_qos():
class netcontrol (line 283) | class netcontrol(object):
method bridge_exists (line 285) | def bridge_exists(bridge):
method del_bridge (line 289) | def del_bridge(bridge):
method new_bridge (line 293) | def new_bridge(bridge):
method gre_exists (line 297) | def gre_exists(bridge, remote):
method setup_gre (line 302) | def setup_gre(bridge, remote):
method gw_exists (line 306) | def gw_exists(bridge, gwport):
method setup_gw (line 310) | def setup_gw(bridge, gwport, addr, input_rate_limit, output_rate_limit):
method del_gw (line 326) | def del_gw(bridge, gwport):
method check_gw (line 336) | def check_gw(bridge, gwport, uid, addr, input_rate_limit, output_rate_...
method recover_usernet (line 354) | def recover_usernet(portname, uid, GatewayHost, isGatewayHost):
class portcontrol (line 369) | class portcontrol(object):
method init_new (line 372) | def init_new():
method init_recovery (line 393) | def init_recovery(Free_Ports_str):
method acquire_port_mapping (line 398) | def acquire_port_mapping(container_name, container_ip, container_port,...
method release_port_mapping (line 442) | def release_port_mapping(container_name, container_ip, container_port):
FILE: src/utils/proxytool.py
function get_routes (line 9) | def get_routes():
function set_route (line 16) | def set_route(path, target):
function delete_route (line 26) | def delete_route(path):
FILE: src/utils/tools.py
function loadenv (line 7) | def loadenv(configpath):
function gen_token (line 22) | def gen_token():
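`gen_token` in `src/utils/tools.py` mints the opaque tokens used across the REST API. A sketch of the common recipe of hashing fresh entropy plus the clock into a fixed-width hex string (the exact recipe in `tools.py` may differ):

```python
import hashlib, os, time

def gen_token():
    """Return a 32-hex-character token from 16 random bytes and the clock."""
    seed = os.urandom(16) + str(time.time()).encode()
    return hashlib.md5(seed).hexdigest()
```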
FILE: src/utils/updatebase.py
function aufs_remove (line 6) | def aufs_remove(basefs):
function aufs_clean (line 15) | def aufs_clean(basefs):
function aufs_merge (line 22) | def aufs_merge(image, basefs):
function aufs_update_base (line 70) | def aufs_update_base(image, basefs):
FILE: src/worker/container.py
class Container (line 11) | class Container(object):
method __init__ (line 12) | def __init__(self, addr, etcdclient):
method prepare_hook_conf (line 27) | def prepare_hook_conf(self, conf_path, env_dict):
method create_container (line 38) | def create_container(self, lxc_name, proxy_server_ip, username, uid, s...
method delete_container (line 156) | def delete_container(self, lxc_name):
method start_container (line 175) | def start_container(self, lxc_name):
method start_services (line 192) | def start_services(self, lxc_name, services=[]):
method mount_container (line 213) | def mount_container(self,lxc_name):
method recover_container (line 222) | def recover_container(self, lxc_name):
method update_baseurl (line 246) | def update_baseurl(self, lxc_name, old_ip, new_ip):
method stop_container (line 260) | def stop_container(self, lxc_name):
method detach_container (line 278) | def detach_container(self, lxc_name):
method check_container (line 289) | def check_container(self, lxc_name):
method is_container (line 299) | def is_container(self, lxc_name):
method container_status (line 305) | def container_status(self, lxc_name):
method list_containers (line 314) | def list_containers(self):
method delete_allcontainers (line 320) | def delete_allcontainers(self):
method diff_containers (line 338) | def diff_containers(self):
method create_image (line 358) | def create_image(self,username,imagename,containername,description="no...
method update_basefs (line 361) | def update_basefs(self,imagename):
method check_allcontainers (line 365) | def check_allcontainers(self):
FILE: src/worker/monitor.py
function request_master (line 68) | def request_master(url,data):
class Container_Collector (line 76) | class Container_Collector(threading.Thread):
method __init__ (line 78) | def __init__(self,test=False):
method list_container (line 113) | def list_container(self):
method get_proc_etime (line 120) | def get_proc_etime(self,pid):
method billing_increment (line 138) | def billing_increment(cls,vnode_name,isreal=True):
method collect_net_stats (line 229) | def collect_net_stats(self):
method collect_containerinfo (line 256) | def collect_containerinfo(self,container_name):
method run (line 393) | def run(self):
method stop (line 429) | def stop(self):
class Collector (line 433) | class Collector(threading.Thread):
method __init__ (line 435) | def __init__(self,test=False):
method collect_meminfo (line 447) | def collect_meminfo(self):
method collect_cpuinfo (line 461) | def collect_cpuinfo(self):
method collect_gpuinfo (line 487) | def collect_gpuinfo(self):
method collect_diskinfo (line 513) | def collect_diskinfo(self):
method collect_osinfo (line 547) | def collect_osinfo(self):
method run (line 560) | def run(self):
method stop (line 577) | def stop(self):
function workerFetchInfo (line 581) | def workerFetchInfo(master_ip):
function get_owner (line 591) | def get_owner(container_name):
function get_cluster (line 596) | def get_cluster(container_name):
function count_port_mapping (line 600) | def count_port_mapping(vnode_name):
function save_billing_history (line 604) | def save_billing_history(vnode_name, billing_history):
function get_billing_history (line 624) | def get_billing_history(vnode_name):
class History_Manager (line 637) | class History_Manager:
method __init__ (line 639) | def __init__(self):
method getAll (line 646) | def getAll(self):
method log (line 651) | def log(self,vnode_name,action):
method getHistory (line 696) | def getHistory(self,vnode_name):
method getCreatedVNodes (line 705) | def getCreatedVNodes(self,owner):
FILE: src/worker/ossmounter.py
class OssMounter (line 5) | class OssMounter(object):
method execute_cmd (line 9) | def execute_cmd(cmd):
method mount_oss (line 20) | def mount_oss(datapath, mount_info):
method umount_oss (line 26) | def umount_oss(datapath, mount_info):
class AliyunOssMounter (line 30) | class AliyunOssMounter(OssMounter):
method mount_oss (line 33) | def mount_oss(datapath, mount_info):
method umount_oss (line 60) | def umount_oss(datapath, mount_info):
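`AliyunOssMounter.mount_oss` shells out to `ossfs` after writing a credentials file. The command composition can be sketched as pure string building; the bucket/endpoint values, passwd-file path, and exact `ossfs` flags below are assumptions for illustration:

```python
def build_ossfs_cmd(datapath, bucket, access_key, secret_key, endpoint):
    """Return (passwd-file line, mountpoint, mount command) for an ossfs
    mount of `bucket` under `datapath`."""
    passwd_line = "%s:%s:%s" % (bucket, access_key, secret_key)
    mountpoint = "%s/%s" % (datapath.rstrip("/"), bucket)
    cmd = ("ossfs %s %s -ourl=%s -opasswd_file=/etc/ossfs-passwd"
           % (bucket, mountpoint, endpoint))
    return passwd_line, mountpoint, cmd
```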
FILE: src/worker/taskcontroller.py
function ip_to_int (line 26) | def ip_to_int(addr):
function int_to_ip (line 30) | def int_to_ip(num):
class TaskController (line 33) | class TaskController(rpc_pb2_grpc.WorkerServicer):
method __init__ (line 35) | def __init__(self):
method acquire_ip (line 94) | def acquire_ip(self):
method release_ip (line 105) | def release_ip(self,ipstr):
method add_gpu_device (line 112) | def add_gpu_device(self, lxcname, gpu_need):
method release_gpu_device (line 142) | def release_gpu_device(self, lxcname):
method mount_oss (line 150) | def mount_oss(self, datapath, mount_info):
method umount_oss (line 172) | def umount_oss(self, datapath, mount_info):
method create_container (line 186) | def create_container(self,instanceid,username,image,lxcname,quota):
method process_task (line 232) | def process_task(self, request, context):
method write_output (line 303) | def write_output(self,lxcname,tmplogpath,filepath):
method execute_task (line 315) | def execute_task(self,username,taskid,instanceid,envs,lxcname,pkgpath,...
method stop_tasks (line 404) | def stop_tasks(self, request, context):
method add_msg (line 411) | def add_msg(self,taskid,username,instanceid,status,token,errmsg):
method report_msg (line 420) | def report_msg(self):
method start_report (line 435) | def start_report(self):
function TaskControllerServe (line 442) | def TaskControllerServe():
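The `ip_to_int` / `int_to_ip` pair at the top of `taskcontroller.py` converts dotted-quad addresses to integers so `acquire_ip`/`release_ip` can hand out vnode IPs sequentially. A self-contained version of that conversion:

```python
def ip_to_int(addr):
    """'10.0.0.1' -> 167772161"""
    a, b, c, d = (int(x) for x in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def int_to_ip(num):
    """167772161 -> '10.0.0.1'"""
    return "%d.%d.%d.%d" % (num >> 24 & 255, num >> 16 & 255,
                            num >> 8 & 255, num & 255)
```

Allocating "the next IP" then reduces to integer arithmetic, e.g. `int_to_ip(ip_to_int(base) + n)`.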
FILE: src/worker/taskworker.py
class TaskWorker (line 28) | class TaskWorker(rpc_pb2_grpc.WorkerServicer):
method __init__ (line 30) | def __init__(self):
method stop_and_rm_containers (line 82) | def stop_and_rm_containers(self,lxcname):
method rm_all_batch_containers (line 97) | def rm_all_batch_containers(self):
method add_gpu_device (line 107) | def add_gpu_device(self, lxcname, gpu_need):
method release_gpu_device (line 137) | def release_gpu_device(self, lxcname):
method mount_oss (line 145) | def mount_oss(self, datapath, mount_info):
method umount_oss (line 167) | def umount_oss(self, datapath, mount_info):
method start_vnode (line 181) | def start_vnode(self, request, context):
method start_task (line 259) | def start_task(self, request, context):
method stop_task (line 282) | def stop_task(self, request, context):
method stop_vnode (line 293) | def stop_vnode(self, request, context):
method prepare_hook_conf (line 327) | def prepare_hook_conf(self, conf_path, env_dict):
method create_container (line 339) | def create_container(self,taskid,vnodeid,username,image,lxcname,quota,...
method write_output (line 385) | def write_output(self,lxcname,tmplogpath,filepath):
method execute_task (line 397) | def execute_task(self,username,taskid,vnodeid,envs,lxcname,pkgpath,com...
method add_msg (line 468) | def add_msg(self,taskid,username,vnodeid,status,token,errmsg):
method report_msg (line 476) | def report_msg(self):
method start_report (line 491) | def start_report(self):
function TaskWorkerServe (line 497) | def TaskWorkerServe():
FILE: src/worker/worker.py
function generatekey (line 42) | def generatekey(path):
class ThreadXMLRPCServer (line 46) | class ThreadXMLRPCServer(ThreadingMixIn,xmlrpc.server.SimpleXMLRPCServer):
class Worker (line 49) | class Worker(object):
method __init__ (line 50) | def __init__(self, etcdclient, addr, port):
method start (line 197) | def start(self):
method sendheartbeat (line 214) | def sendheartbeat(self):
FILE: tools/clean-usage.py
function clean_usage (line 7) | def clean_usage(username,alluser=False):
FILE: tools/update_con_network.py
function post_to_user (line 12) | def post_to_user(url = '/', data={}):
FILE: tools/update_v0.3.2.py
function isexist (line 3) | def isexist(quotas, key):
FILE: tools/upgrade.py
function update_quotainfo (line 9) | def update_quotainfo():
function name_error (line 70) | def name_error():
function allquota (line 96) | def allquota():
function quotaquery (line 106) | def quotaquery(quotaname,quotas):
function enable_gluster_quota (line 112) | def enable_gluster_quota():
function update_image (line 153) | def update_image():
FILE: user/stopreqmgr.py
function request_master (line 14) | def request_master(url,data):
class StopAllReqMgr (line 23) | class StopAllReqMgr(threading.Thread):
method __init__ (line 24) | def __init__(self, maxsize=100, interval=1):
method add_request (line 30) | def add_request(self,username):
method run (line 33) | def run(self):
method stop (line 44) | def stop(self):
FILE: user/user.py
function login_required (line 43) | def login_required(func):
function auth_key_required (line 58) | def auth_key_required(func):
function login (line 72) | def login():
function external_login (line 88) | def external_login():
function register (line 100) | def register():
function auth_token (line 150) | def auth_token(cur_user, user, form):
function modify_user (line 159) | def modify_user(cur_user, user, form):
function groupModify_user (line 167) | def groupModify_user(cur_user, user, form):
function query_user (line 178) | def query_user(cur_user, user, form):
function add_user (line 195) | def add_user(cur_user, user, form):
function groupadd_user (line 209) | def groupadd_user(cur_user, user, form):
function chdefault (line 220) | def chdefault(cur_user, user, form):
function quotaadd_user (line 231) | def quotaadd_user(cur_user, user, form):
function groupdel_user (line 242) | def groupdel_user(cur_user, user, form):
function data_user (line 253) | def data_user(cur_user, user, form):
function groupNameList_user (line 262) | def groupNameList_user(cur_user, user, form):
function groupList_user (line 271) | def groupList_user(cur_user, user, form):
function groupQuery_user (line 280) | def groupQuery_user(cur_user, user, form):
function selfQuery_user (line 289) | def selfQuery_user(cur_user, user, form):
function get_master_recoverinfo (line 297) | def get_master_recoverinfo():
function get_master_groupinfo (line 307) | def get_master_groupinfo():
function usageRelease_master (line 316) | def usageRelease_master():
function selfModify_user (line 331) | def selfModify_user(cur_user, user, form):
function usageQuery_user (line 339) | def usageQuery_user(cur_user, user, form):
function usageInc_user (line 347) | def usageInc_user(cur_user, user, form):
function usageRelease_user (line 358) | def usageRelease_user(cur_user, user, form):
function usageRecover_user (line 368) | def usageRecover_user(cur_user, user, form):
function lxcsettingList_user (line 378) | def lxcsettingList_user(cur_user, user, form):
function chlxcsetting_user (line 386) | def chlxcsetting_user(cur_user, user, form):
function settings_list (line 396) | def settings_list(cur_user, user, form):
function settings_update (line 401) | def settings_update(user, beans, form):
function list_notifications (line 411) | def list_notifications(cur_user, user, form):
function create_notification (line 420) | def create_notification(cur_user, user, form):
function modify_notification (line 431) | def modify_notification(cur_user, user, form):
function delete_notification (line 442) | def delete_notification(cur_user, user, form):
function query_self_notification_simple_infos (line 453) | def query_self_notification_simple_infos(cur_user, user, form):
function query_notification (line 462) | def query_notification(cur_user, user, form):
function query_self_notifications_infos (line 471) | def query_self_notifications_infos(cur_user, user, form):
function report_bug (line 479) | def report_bug(cur_user, user, form):
function billing_beans (line 486) | def billing_beans():
function beans_apply (line 526) | def beans_apply(cur_user,user,form,issue):
function beans_admin (line 551) | def beans_admin(cur_user,user,form,issue):
function internal_server_error (line 580) | def internal_server_error(error):
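`login_required` and `auth_key_required` in `user/user.py` are decorators that gate the Flask views listed above. The control flow can be shown without Flask, using a plain dict as a stand-in for `flask.session`; the return values here are illustrative markers, where the real decorator would redirect or abort:

```python
from functools import wraps

session = {}   # stand-in for flask.session

def login_required(func):
    """Reject anonymous callers; otherwise pass the current user through."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        if "username" not in session:
            return ("redirect", "/login/")   # Flask would redirect here
        return func(session["username"], *args, **kwargs)
    return wrapper

@login_required
def dashboard(cur_user):
    return ("ok", cur_user)
```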
FILE: web/static/dist/js/app.js
function _init (line 229) | function _init() {
function start (line 657) | function start(box) {
function done (line 664) | function done(box) {
FILE: web/static/js/plot_monitor.js
function processMemData (line 9) | function processMemData(data)
function getMemY (line 12) | function getMemY()
function processCpuData (line 16) | function processCpuData(data)
function getCpuY (line 19) | function getCpuY()
function processRate (line 24) | function processRate(data)
function getIngressRateP (line 27) | function getIngressRateP()
function getEgressRateP (line 32) | function getEgressRateP()
function plot_graph (line 37) | function plot_graph(container,url,processData,getY,fetchdata=true, maxy=...
function num2human (line 164) | function num2human(data)
function processInfo (line 179) | function processInfo()
function plot_net (line 278) | function plot_net(host,monitorurl)
FILE: web/static/js/plot_monitorReal.js
function processMemData (line 10) | function processMemData(data)
function getMemY (line 31) | function getMemY()
function processCpuData (line 38) | function processCpuData(data)
function getCpuY (line 60) | function getCpuY()
function processDiskData (line 69) | function processDiskData(data)
function getDiskY (line 87) | function getDiskY()
function plot_graph (line 92) | function plot_graph(container,url,processData,getY) {
function processStatus (line 216) | function processStatus()
FILE: web/web.py
function home (line 70) | def home():
function login (line 74) | def login():
function external_login_func (line 79) | def external_login_func():
function external_login_callback (line 86) | def external_login_callback():
function logout (line 94) | def logout():
function register (line 101) | def register():
function activate (line 108) | def activate():
function dashboard (line 113) | def dashboard():
function redirect_dochome (line 117) | def redirect_dochome():
function config_view (line 122) | def config_view():
function reportBug (line 127) | def reportBug():
function batch_admin_job (line 133) | def batch_admin_job():
function batch_job (line 138) | def batch_job():
function create_batch_job (line 143) | def create_batch_job():
function add_batch_job (line 148) | def add_batch_job(masterip):
function stop_batch_job (line 155) | def stop_batch_job(masterip,jobid):
function admin_stop_batch_job (line 162) | def admin_stop_batch_job(masterip,jobid):
function info_batch_job (line 169) | def info_batch_job(masterip,jobid):
function output_batch_job (line 176) | def output_batch_job(masterip, jobid, taskid, vnodeid, issue):
function output_batch_job_request (line 186) | def output_batch_job_request(masterip, jobid, taskid, vnodeid, issue):
function addCluster (line 198) | def addCluster():
function listCluster (line 203) | def listCluster(masterip):
function createCluster (line 209) | def createCluster(masterip):
function scaleout (line 217) | def scaleout(clustername,masterip):
function scalein (line 225) | def scalein(clustername,containername,masterip):
function startClustet (line 233) | def startClustet(clustername,masterip):
function stopClustet (line 240) | def stopClustet(clustername,masterip):
function deleteClustet (line 247) | def deleteClustet(clustername,masterip):
function detailCluster (line 254) | def detailCluster(clustername,masterip):
function flushCluster (line 261) | def flushCluster(clustername,containername):
function saveImage (line 268) | def saveImage(clustername,containername,masterip):
function saveImage_force (line 279) | def saveImage_force(clustername,containername,masterip):
function addPortMapping (line 306) | def addPortMapping(masterip):
function delPortMapping (line 312) | def delPortMapping(masterip,clustername,node_name,node_port):
function getmasterdesc (line 321) | def getmasterdesc(mastername):
function masterdesc (line 326) | def masterdesc(mastername):
function image_list (line 333) | def image_list(masterip):
function descriptionImage (line 345) | def descriptionImage(image,masterip):
function shareImage (line 352) | def shareImage(image,masterip):
function unshareImage (line 359) | def unshareImage(image,masterip):
function deleteImage (line 366) | def deleteImage(image,masterip):
function copyImage (line 373) | def copyImage(image,masterip):
function updatebaseImage (line 382) | def updatebaseImage(image,masterip):
function hosts (line 389) | def hosts():
function hostMigrate (line 394) | def hostMigrate(hostip, masterip):
function hostsRealtime (line 402) | def hostsRealtime(com_ip,masterip):
function hostsConAll (line 409) | def hostsConAll(com_ip,masterip):
function hostsConRealtime (line 416) | def hostsConRealtime(com_ip,node_name,masterip):
function status (line 423) | def status():
function statusRealtime (line 428) | def statusRealtime(vcluster_name,node_name,masterip):
function history (line 435) | def history():
function historyVNode (line 441) | def historyVNode(vnode_name,masterip):
function monitor_request (line 449) | def monitor_request(comid,infotype,masterip):
function monitor_user_request (line 463) | def monitor_user_request(issue,masterip):
function beansapplication (line 475) | def beansapplication():
function beansapply (line 480) | def beansapply():
function beansadmin (line 486) | def beansadmin(username,msgid,cmd):
function logs (line 503) | def logs():
function logs_get (line 508) | def logs_get(filename):
function userlist (line 519) | def userlist():
function userLockRelease (line 524) | def userLockRelease(ulockname):
function grouplist (line 534) | def grouplist():
function groupdetail (line 539) | def groupdetail():
function groupquery (line 544) | def groupquery():
function groupmodify (line 549) | def groupmodify(groupname):
function userdata (line 554) | def userdata():
function useradd (line 559) | def useradd():
function usermodify (line 564) | def usermodify():
function userchange (line 569) | def userchange():
function quotaadd (line 574) | def quotaadd():
function chdefault (line 579) | def chdefault():
function chlxcsetting (line 584) | def chlxcsetting():
function groupadd (line 589) | def groupadd():
function groupdel (line 594) | def groupdel(groupname):
function userinfo (line 600) | def userinfo():
function userselfQuery (line 605) | def userselfQuery():
function userquery (line 611) | def userquery():
function cloud (line 616) | def cloud():
function cloud_setting_modify (line 621) | def cloud_setting_modify(masterip):
function cloud_node_add (line 627) | def cloud_node_add(masterip):
function notification_list (line 634) | def notification_list():
function create_notification (line 640) | def create_notification():
function modify_notification (line 646) | def modify_notification():
function delete_notification (line 652) | def delete_notification():
function query_self_notifications (line 658) | def query_self_notifications():
function query_notification_detail (line 664) | def query_notification_detail(notify_id):
function systemmodify (line 670) | def systemmodify():
function systemclearhistory (line 675) | def systemclearhistory():
function systemadd (line 680) | def systemadd():
function systemdelete (line 685) | def systemdelete():
function systemresetall (line 690) | def systemresetall():
function adminpage (line 695) | def adminpage():
function updatesettings (line 700) | def updatesettings():
function jupyter_control (line 704) | def jupyter_control():
function jupyter_prefix (line 717) | def jupyter_prefix():
function jupyter_home (line 724) | def jupyter_home():
function jupyter_login (line 728) | def jupyter_login():
function jupyter_logout (line 732) | def jupyter_logout():
function jupyter_auth (line 736) | def jupyter_auth(cookie_name, cookie_content):
function not_authorized (line 745) | def not_authorized(error):
function internal_server_error (line 757) | def internal_server_error(error):
FILE: web/webViews/admin.py
class adminView (line 7) | class adminView(normalView):
method get (line 11) | def get(self):
class updatesettingsView (line 22) | class updatesettingsView(normalView):
method post (line 25) | def post(self):
class groupaddView (line 30) | class groupaddView(normalView):
method post (line 32) | def post(self):
class systemmodifyView (line 36) | class systemmodifyView(normalView):
method post (line 38) | def post(self):
class systemclearView (line 42) | class systemclearView(normalView):
method post (line 44) | def post(self):
class systemaddView (line 48) | class systemaddView(normalView):
method post (line 50) | def post(self):
class systemdeleteView (line 54) | class systemdeleteView(normalView):
method post (line 56) | def post(self):
class systemresetallView (line 60) | class systemresetallView(normalView):
method post (line 62) | def post(self):
class quotaaddView (line 66) | class quotaaddView(normalView):
method post (line 68) | def post(self):
class chdefaultView (line 72) | class chdefaultView(normalView):
method post (line 74) | def post(self):
class chlxcsettingView (line 78) | class chlxcsettingView(normalView):
method post (line 80) | def post(self):
class groupdelView (line 84) | class groupdelView(normalView):
method post (line 86) | def post(self):
method get (line 94) | def get(self):
class chparmView (line 97) | class chparmView(normalView):
method post (line 99) | def post(self):
class historydelView (line 102) | class historydelView(normalView):
method post (line 104) | def post(self):
class updatebaseImageView (line 108) | class updatebaseImageView(normalView):
method get (line 110) | def get(self):
class hostMigrateView (line 117) | class hostMigrateView(normalView):
method post (line 119) | def post(self):
FILE: web/webViews/authenticate/auth.py
function login_required (line 5) | def login_required(func):
function administration_required (line 21) | def administration_required(func):
function activated_required (line 32) | def activated_required(func):
function is_authenticated (line 43) | def is_authenticated():
function is_admin (line 48) | def is_admin():
function is_activated (line 55) | def is_activated():
FILE: web/webViews/authenticate/login.py
function refreshInfo (line 22) | def refreshInfo():
class loginView (line 36) | class loginView(normalView):
method get (line 40) | def get(self):
method post (line 53) | def post(self):
class logoutView (line 84) | class logoutView(normalView):
method get (line 87) | def get(self):
class external_login_callbackView (line 100) | class external_login_callbackView(normalView):
method get (line 102) | def get(self):
method post (line 125) | def post(self):
class external_loginView (line 147) | class external_loginView(normalView):
method post (line 152) | def post(self):
method get (line 156) | def get(self):
FILE: web/webViews/authenticate/register.py
class registerView (line 5) | class registerView(normalView):
method post (line 9) | def post(self):
method get (line 17) | def get(self):
FILE: web/webViews/batch.py
class batchAdminListView (line 9) | class batchAdminListView(normalView):
method get (line 13) | def get(self):
class batchJobListView (line 26) | class batchJobListView(normalView):
method get (line 30) | def get(self):
class createBatchJobView (line 43) | class createBatchJobView(normalView):
method get (line 47) | def get(self):
class infoBatchJobView (line 74) | class infoBatchJobView(normalView):
method get (line 81) | def get(self):
class addBatchJobView (line 94) | class addBatchJobView(normalView):
method post (line 99) | def post(self):
class stopBatchJobView (line 107) | class stopBatchJobView(normalView):
method get (line 112) | def get(self):
class adminStopBatchJobView (line 121) | class adminStopBatchJobView(normalView):
method get (line 126) | def get(self):
class outputBatchJobView (line 135) | class outputBatchJobView(normalView):
method get (line 144) | def get(self):
FILE: web/webViews/beansapplication.py
class beansapplicationView (line 6) | class beansapplicationView(normalView):
method get (line 10) | def get(self):
method post (line 15) | def post(self):
class beansapplyView (line 18) | class beansapplyView(normalView):
method post (line 22) | def post(self):
method get (line 32) | def get(self):
class beansadminView (line 35) | class beansadminView(normalView):
method get (line 42) | def get(self):
FILE: web/webViews/checkname.py
function checkname (line 9) | def checkname(str):
FILE: web/webViews/cloud.py
class cloudView (line 7) | class cloudView(normalView):
method post (line 11) | def post(self):
method get (line 16) | def get(self):
class cloudSettingModifyView (line 19) | class cloudSettingModifyView(normalView):
method post (line 21) | def post(self):
class cloudNodeAddView (line 25) | class cloudNodeAddView(normalView):
method post (line 27) | def post(self):
method get (line 33) | def get(self):
FILE: web/webViews/cluster.py
class addClusterView (line 8) | class addClusterView(normalView):
method get (line 12) | def get(self):
class createClusterView (line 54) | class createClusterView(normalView):
method post (line 59) | def post(self):
class descriptionMasterView (line 77) | class descriptionMasterView(normalView):
method get (line 81) | def get(self):
class descriptionImageView (line 84) | class descriptionImageView(normalView):
method get (line 88) | def get(self):
class scaleoutView (line 104) | class scaleoutView(normalView):
method post (line 108) | def post(self):
class scaleinView (line 124) | class scaleinView(normalView):
method get (line 128) | def get(self):
class listClusterView (line 140) | class listClusterView(normalView):
method get (line 144) | def get(self):
class startClusterView (line 153) | class startClusterView(normalView):
method get (line 158) | def get(self):
class stopClusterView (line 170) | class stopClusterView(normalView):
method get (line 175) | def get(self):
class flushClusterView (line 186) | class flushClusterView(normalView):
method get (line 191) | def get(self):
class deleteClusterView (line 206) | class deleteClusterView(normalView):
method get (line 211) | def get(self):
class detailClusterView (line 222) | class detailClusterView(normalView):
method get (line 226) | def get(self):
class saveImageView (line 240) | class saveImageView(normalView):
method post (line 246) | def post(self):
class shareImageView (line 271) | class shareImageView(normalView):
method get (line 275) | def get(self):
class unshareImageView (line 286) | class unshareImageView(normalView):
method get (line 290) | def get(self):
class copyImageView (line 301) | class copyImageView(normalView):
method post (line 305) | def post(self):
class deleteImageView (line 320) | class deleteImageView(normalView):
method get (line 324) | def get(self):
class addproxyView (line 335) | class addproxyView(normalView):
method post (line 338) | def post(self):
class deleteproxyView (line 351) | class deleteproxyView(normalView):
method get (line 354) | def get(self):
method post (line 366) | def post(self):
class configView (line 369) | class configView(normalView):
method get (line 371) | def get(self):
method post (line 424) | def post(self):
class addPortMappingView (line 427) | class addPortMappingView(normalView):
method post (line 431) | def post(self):
method get (line 441) | def get(self):
class delPortMappingView (line 444) | class delPortMappingView(normalView):
method post (line 448) | def post(self):
method get (line 458) | def get(self):
FILE: web/webViews/cookie_tool.py
function generate_cookie (line 22) | def generate_cookie(name, securekey):
function parse_cookie (line 33) | def parse_cookie(cookie, securekey):
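The `cookie_tool` helpers pair a cookie generator with a parser keyed on a shared secret. The implementation itself is not shown in this index; a minimal sketch of one common approach — HMAC-signing the payload, which is an assumption here, not taken from the source — looks like:

```python
import hashlib
import hmac

def generate_cookie(name, securekey):
    # Sign the payload with an HMAC keyed on the shared secret,
    # then join "payload|signature" into a single cookie string.
    sig = hmac.new(securekey.encode(), name.encode(), hashlib.sha256).hexdigest()
    return name + "|" + sig

def parse_cookie(cookie, securekey):
    # Split the cookie apart and recompute the signature;
    # return the payload only if the signature verifies.
    payload, _, sig = cookie.rpartition("|")
    expected = hmac.new(securekey.encode(), payload.encode(), hashlib.sha256).hexdigest()
    return payload if hmac.compare_digest(sig, expected) else None
```

Using `hmac.compare_digest` rather than `==` avoids timing side channels when comparing signatures; whether Docklet's real implementation does this cannot be determined from the index alone.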
FILE: web/webViews/dashboard.py
class dashboardView (line 6) | class dashboardView(normalView):
method get (line 10) | def get(self):
method post (line 37) | def post(self):
FILE: web/webViews/dockletrequest.py
function getip (line 18) | def getip(masterip):
function getname (line 21) | def getname(masterip):
class dockletRequest (line 25) | class dockletRequest():
method post (line 28) | def post(self, url = '/', data = {}, endpoint = "http://0.0.0.0:9000"):
method getdesc (line 66) | def getdesc(self,mastername):
method getalldesc (line 70) | def getalldesc(self):
method post_to_all (line 79) | def post_to_all(self, url = '/', data={}):
method unauthorizedpost (line 109) | def unauthorizedpost(self, url = '/', data = None):
FILE: web/webViews/log.py
function initlogging (line 20) | def initlogging(name='docklet'):
class RedirectLogger (line 62) | class RedirectLogger(object):
method __init__ (line 63) | def __init__(self, logger, level):
method write (line 68) | def write(self, message):
method flush (line 73) | def flush(self):
FILE: web/webViews/monitor.py
class statusView (line 6) | class statusView(normalView):
method get (line 10) | def get(self):
class statusRealtimeView (line 50) | class statusRealtimeView(normalView):
method get (line 55) | def get(self):
class historyView (line 64) | class historyView(normalView):
method get (line 68) | def get(self):
class historyVNodeView (line 78) | class historyVNodeView(normalView):
method get (line 83) | def get(self):
class hostsRealtimeView (line 92) | class hostsRealtimeView(normalView):
method get (line 97) | def get(self):
class hostsConAllView (line 111) | class hostsConAllView(normalView):
method get (line 116) | def get(self):
class hostsView (line 133) | class hostsView(normalView):
method get (line 137) | def get(self):
class monitorUserAllView (line 156) | class monitorUserAllView(normalView):
method get (line 160) | def get(self):
FILE: web/webViews/notification/notification.py
class NotificationView (line 8) | class NotificationView(normalView):
method get (line 12) | def get(cls):
class CreateNotificationView (line 19) | class CreateNotificationView(normalView):
method get (line 23) | def get(cls):
method post (line 28) | def post(cls):
class QuerySelfNotificationsView (line 34) | class QuerySelfNotificationsView(normalView):
method post (line 36) | def post(cls):
class QueryNotificationView (line 41) | class QueryNotificationView(normalView):
method get_by_id (line 45) | def get_by_id(cls, notify_id):
class ModifyNotificationView (line 54) | class ModifyNotificationView(normalView):
method post (line 56) | def post(cls):
class DeleteNotificationView (line 61) | class DeleteNotificationView(normalView):
method post (line 63) | def post(cls):
FILE: web/webViews/reportbug.py
class reportBugView (line 6) | class reportBugView(normalView):
method get (line 10) | def get(self):
method post (line 15) | def post(self):
FILE: web/webViews/syslogs.py
class logsView (line 5) | class logsView(normalView):
method get (line 9) | def get(self):
FILE: web/webViews/user/grouplist.py
class grouplistView (line 6) | class grouplistView(normalView):
class groupdetailView (line 9) | class groupdetailView(normalView):
method post (line 11) | def post(self):
class groupqueryView (line 14) | class groupqueryView(normalView):
method post (line 16) | def post(self):
class groupmodifyView (line 19) | class groupmodifyView(normalView):
method post (line 21) | def post(self):
FILE: web/webViews/user/userActivate.py
class userActivateView (line 6) | class userActivateView(normalView):
method get (line 10) | def get(self):
method post (line 18) | def post(self):
FILE: web/webViews/user/userinfo.py
class userinfoView (line 7) | class userinfoView(normalView):
method get (line 11) | def get(self):
method post (line 17) | def post(self):
FILE: web/webViews/user/userlist.py
class userlistView (line 6) | class userlistView(normalView):
method get (line 10) | def get(self):
method post (line 16) | def post(self):
class useraddView (line 20) | class useraddView(normalView):
method post (line 22) | def post(self):
class userdataView (line 26) | class userdataView(normalView):
method get (line 28) | def get(self):
method post (line 32) | def post(self):
class userqueryView (line 35) | class userqueryView(normalView):
method get (line 37) | def get(self):
method post (line 41) | def post(self):
class usermodifyView (line 44) | class usermodifyView(normalView):
method post (line 46) | def post(self):
FILE: web/webViews/view.py
class normalView (line 11) | class normalView():
method get (line 15) | def get(self):
method post (line 19) | def post(self):
method error (line 23) | def error(self):
method as_view (line 27) | def as_view(self):
method render (line 36) | def render(self, *args, **kwargs):
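`normalView` in `web/webViews/view.py` is the base class that nearly every view class in this index inherits from, exposing `get`, `post`, `error`, `as_view`, and `render` hooks. The real class wraps Flask request handling and template rendering; as a rough, framework-free sketch of the dispatch pattern (method names taken from the index, everything else assumed):

```python
class normalView:
    # Subclasses override get/post; the base versions signal
    # that the HTTP method is unsupported for this view.
    def get(self):
        raise NotImplementedError("GET not supported")

    def post(self):
        raise NotImplementedError("POST not supported")

    @classmethod
    def as_view(cls):
        # Return a plain function that instantiates the view and
        # dispatches on the HTTP method name, mirroring how a
        # Flask url rule would invoke a class-based view.
        def view(method="GET"):
            instance = cls()
            handler = instance.get if method.upper() == "GET" else instance.post
            return handler()
        return view

class pingView(normalView):
    # Hypothetical subclass, for illustration only.
    def get(self):
        return "pong"
```

Registering `someView.as_view()` as a route handler is the same convention Flask's own `MethodView` uses; the per-method view classes above (`loginView`, `adminView`, and so on) follow that shape.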
Condensed preview — 185 files, each showing path, character count, and a content snippet (1,398K chars of structured content in total).
[
{
"path": ".gitignore",
"chars": 125,
"preview": "__pycache__\n*.pyc\n*.swp\n__temp\n*~\n.DS_Store\ndocklet.conf\nhome.html\nsrc/utils/migrations/\ncontainer.conf\ncontainer.batch."
},
{
"path": "CHANGES",
"chars": 4162,
"preview": "v0.4.0, May 26, 2019\n--------------------\n\n**Bug Fix**\n * Fix a bug of update base image.\n * Fix a bug of port contro"
},
{
"path": "LICENSE",
"chars": 1512,
"preview": "Copyright (c) 2016, Peking University (PKU).\nAll rights reserved.\n\nRedistribution and use in source and binary forms, wi"
},
{
"path": "README.md",
"chars": 7441,
"preview": "# Docklet\n\nhttps://unias.github.io/docklet\n\n## Intro\n\nDocklet is an operating system for virtual private cloud. Its goal"
},
{
"path": "VERSION",
"chars": 6,
"preview": "0.4.0\n"
},
{
"path": "bin/docklet-master",
"chars": 7749,
"preview": "#!/bin/sh\n\n[ $(id -u) != '0' ] && echo \"root is needed\" && exit 1\n\n# get some path of docklet\nbindir=${0%/*}\n# $bindir m"
},
{
"path": "bin/docklet-supermaster",
"chars": 8312,
"preview": "#!/bin/sh\n\n[ $(id -u) != '0' ] && echo \"root is needed\" && exit 1\n\n# get some path of docklet\nbindir=${0%/*}\n# $bindir m"
},
{
"path": "bin/docklet-worker",
"chars": 7217,
"preview": "#!/bin/sh\n\n[ $(id -u) != '0' ] && echo \"root is needed\" && exit 1\n\n# get some path of docklet\n\nbindir=${0%/*}\n# $bindir "
},
{
"path": "cloudsdk-installer.sh",
"chars": 199,
"preview": "#!/bin/bash\n\nif [[ \"`whoami`\" != \"root\" ]]; then\n\techo \"FAILED: Require root previledge !\" > /dev/stderr\n\texit 1\nfi\n\npip"
},
{
"path": "conf/container/lxc2.container.batch.conf",
"chars": 2249,
"preview": "# This is the common container.conf for all containers.\n# If want set custom settings, you have two choices:\n# 1. Direct"
},
{
"path": "conf/container/lxc2.container.conf",
"chars": 2246,
"preview": "# This is the common container.conf for all containers.\n# If want set custom settings, you have two choices:\n# 1. Direct"
},
{
"path": "conf/container/lxc3.container.batch.conf",
"chars": 2249,
"preview": "# This is the common container.conf for all containers.\n# If want set custom settings, you have two choices:\n# 1. Direct"
},
{
"path": "conf/container/lxc3.container.conf",
"chars": 2246,
"preview": "# This is the common container.conf for all containers.\n# If want set custom settings, you have two choices:\n# 1. Direct"
},
{
"path": "conf/docklet.conf.template",
"chars": 8250,
"preview": "\n# ==================================================\n#\n# [Local config example]\n#\n# ==================================="
},
{
"path": "conf/lxc-script/lxc-ifdown",
"chars": 467,
"preview": "#!/bin/sh\n\n# $1 : name of container ( name in lxc-start with -n)\n# $2 : net\n# $3 : network flags, up or down\n# $4 : netw"
},
{
"path": "conf/lxc-script/lxc-ifup",
"chars": 299,
"preview": "#!/bin/sh\n\n\n# $1 : name of container ( name in lxc-start with -n)\n# $2 : net\n# $3 : network flags, up or down\n# $4 : net"
},
{
"path": "conf/lxc-script/lxc-mount",
"chars": 172,
"preview": "#!/bin/sh\n\n# $1 Container name.\n# $2 Section (always 'lxc').\n# $3 The hook type (i.e. 'clone' or 'pre-mount').\n\n#cd $LXC"
},
{
"path": "conf/lxc-script/lxc-prestart",
"chars": 695,
"preview": "#!/bin/sh\n\n# $1 Container id\n# $2 Container name.\n# $3 Section (always 'lxc').\n# $4 The hook type (i.e. 'clone' or 'pre-"
},
{
"path": "conf/nginx_docklet.conf",
"chars": 1895,
"preview": "server\n{\n listen %NGINX_PORT;\n #ssl on;\n #ssl_certificate /etc/nginx/ssl/server.crt;\n #s"
},
{
"path": "doc/devdoc/coding.md",
"chars": 5049,
"preview": "# NOTE\n\n## here is some thinking and notes in coding\n\n* path : scripts' path should be known by scripts to call/import o"
},
{
"path": "doc/devdoc/config_info.md",
"chars": 2940,
"preview": "# Info of docklet\n\n## container info\n container name : username-clusterid-nodeid\n hostname : host-nodeid \n lxc"
},
{
"path": "doc/devdoc/network-arch.md",
"chars": 2143,
"preview": "# Architecture of Network\n\n## Architecture of containers networks\nIn current version, to avoid VLAN ID using up, docklet"
},
{
"path": "doc/devdoc/networkmgr.md",
"chars": 1443,
"preview": "# Network Manager\n\n## About\n网络管理是为docklet提供网络管理的模块。\n\n关于需求,主要有两点:\n* 一个中心管理池,按 网络段(IP/CIDR) 给用户分配网络池\n* 很多用户网络池,按 一个或者几个网络地"
},
{
"path": "doc/devdoc/openvswitch-vlan.md",
"chars": 8473,
"preview": "# Test of VLAN on openvswitch\n\n## Note 1\n基本操作,建网桥,配置地址,启动网桥\n\n ovs-vsctl add-br br0\n ip address add 172.0.0.1/8 dev"
},
{
"path": "doc/devdoc/proxy-control.md",
"chars": 1205,
"preview": "# Some Note for configurable-http-proxy usage\n\n## intsall\n sudo apt-get install nodejs nodejs-legacy npm\n sudo npm"
},
{
"path": "doc/devdoc/startup.md",
"chars": 1786,
"preview": "# startup mode\n\n## new mode\n#### step 1 : data\n <Master>\n clean etcd table\n write token\n init etcd ta"
},
{
"path": "doc/devguide/devguide.md",
"chars": 3529,
"preview": "# Docklet Development Guide on GitHub\nThis document is intended for GitHubers to contribute for Docklet System.\n\n## Intr"
},
{
"path": "doc/example/example-LogisticRegression.py",
"chars": 1228,
"preview": "# import package\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import linear_model, datasets\n%matplotl"
},
{
"path": "meter/connector/master.py",
"chars": 3243,
"preview": "#!/usr/bin/python3\n\nimport socket, select, errno, threading, os\n\nclass master_connector:\n\t\n\ttcp_port = 1727\n\tmax_minions"
},
{
"path": "meter/connector/minion.py",
"chars": 1131,
"preview": "#!/usr/bin/python3\n\nimport socket, time, threading, os\n\nclass minion_connector:\n\t\n\tdef connect(server_ip):\n\t\tfrom connec"
},
{
"path": "meter/daemon/http.py",
"chars": 2192,
"preview": "import json, cgi, threading\nfrom http.server import BaseHTTPRequestHandler, HTTPServer\n\nclass base_http_handler(BaseHTTP"
},
{
"path": "meter/daemon/master_v1.py",
"chars": 2033,
"preview": "import subprocess, os\n\ndef http_client_post(ip, port, url, entries = {}):\n\timport urllib.request, urllib.parse, json\n\tur"
},
{
"path": "meter/daemon/minion_v1.py",
"chars": 2669,
"preview": "from intra.system import system_manager\nfrom intra.billing import billing_manager\nfrom intra.cgroup import cgroup_manage"
},
{
"path": "meter/intra/billing.py",
"chars": 1470,
"preview": "import subprocess, time, os\n\nfrom intra.system import system_manager\n\nclass billing_manager:\n\t\n\thistory_book = {}\n\t\n\tdef"
},
{
"path": "meter/intra/cgroup.py",
"chars": 5158,
"preview": "import subprocess, os\n\nclass cgroup_controller:\n\t\n\tdef read_value(group, uuid, item):\n\t\tpath = cgroup_manager.__default_"
},
{
"path": "meter/intra/smart.py",
"chars": 3177,
"preview": "import subprocess, time, os, threading, math\n\nfrom intra.system import system_manager\nfrom intra.cgroup import cgroup_ma"
},
{
"path": "meter/intra/system.py",
"chars": 4694,
"preview": "import subprocess, time, os\n\nfrom intra.cgroup import cgroup_manager\n\nclass system_manager:\n\t\n\tdb_prefix = '.'\n\t\n\tdef se"
},
{
"path": "meter/main.py",
"chars": 2101,
"preview": "#!/usr/bin/python3\n\n########################################\n# Boot for Local:\n# sudo ./main (or: sudo ./main [master-"
},
{
"path": "meter/policy/allocate.py",
"chars": 127,
"preview": "class candidates_selector:\n\t\n\tdef select(candidates):\n\t\treturn max(candidates, key=lambda addr: candidates[addr]['cpu_fr"
},
{
"path": "meter/policy/quota.py",
"chars": 1527,
"preview": "from intra.system import system_manager\nfrom intra.cgroup import cgroup_manager\nimport subprocess\n\nclass identify_policy"
},
{
"path": "prepare.sh",
"chars": 3897,
"preview": "#!/bin/bash\n\n##################################################\n# before-start.sh\n# when you first use do"
},
{
"path": "src/master/beansapplicationmgr.py",
"chars": 5927,
"preview": "#!/usr/bin/python3\n\n'''\nThis module consists of three parts:\n1.send_beans_email: a function to send email to remind user"
},
{
"path": "src/master/bugreporter.py",
"chars": 2008,
"preview": "from master.settings import settings\nimport smtplib\nfrom utils.log import logger\nfrom utils import env\nfrom email.mime.t"
},
{
"path": "src/master/cloudmgr.py",
"chars": 7957,
"preview": "#!/usr/bin/python3\nfrom io import StringIO\nimport os,sys,subprocess,time,re,datetime,threading,random,shutil\nfrom utils."
},
{
"path": "src/master/deploy.py",
"chars": 1744,
"preview": "#!/usr/bin/python3\n\nimport paramiko, time, os\nfrom utils.log import logger\nfrom utils import env\n\ndef myexec(ssh,command"
},
{
"path": "src/master/httprest.py",
"chars": 46389,
"preview": "#!/usr/bin/python3\n\n# load environment variables in the beginning\n# because some modules need variables when import\n# fo"
},
{
"path": "src/master/jobmgr.py",
"chars": 22792,
"preview": "import time, threading, random, string, os, traceback, requests\nimport master.monitor\nimport subprocess,json\nfrom functo"
},
{
"path": "src/master/lockmgr.py",
"chars": 890,
"preview": "#!/usr/bin/python3\n\n'''\nThis module is the manager of threadings locks.\nA LockMgr manages multiple threadings locks.\n'''"
},
{
"path": "src/master/monitor.py",
"chars": 10319,
"preview": "import threading, time, traceback\nfrom utils import env\nfrom utils.log import logger\nfrom httplib2 import Http\nfrom urll"
},
{
"path": "src/master/network.py",
"chars": 27555,
"preview": "#!/usr/bin/python3\n\nimport json, sys, netifaces, threading, traceback\nfrom utils.nettools import netcontrol,ovscontrol\n\n"
},
{
"path": "src/master/nodemgr.py",
"chars": 8251,
"preview": "#!/usr/bin/python3\n\nimport threading, random, time, xmlrpc.client, sys\n#import network\nfrom utils.nettools import netcon"
},
{
"path": "src/master/notificationmgr.py",
"chars": 10691,
"preview": "import json\n\nfrom utils.log import logger\nfrom utils.model import db, Notification, NotificationGroups, User, UserNotifi"
},
{
"path": "src/master/parser.py",
"chars": 3047,
"preview": "#!/user/bin/python3\nimport json\n\njob_data = {'image_1': 'base_base_base', 'mappingRemoteDir_2_2': 'sss', 'dependency_1':"
},
{
"path": "src/master/releasemgr.py",
"chars": 7841,
"preview": "import threading, time, requests, json, traceback\nfrom utils import env\nfrom utils.log import logger\nfrom utils.model im"
},
{
"path": "src/master/settings.py",
"chars": 1940,
"preview": "#!/usr/bin/python3\n\nfrom utils import env\nimport json, os\nfrom functools import wraps\nfrom utils.log import logger\n\n\ncla"
},
{
"path": "src/master/sysmgr.py",
"chars": 7806,
"preview": "import re, string, os\r\n\r\n\r\neditableParms = [\"LOG_LEVEL\",\"ADMIN_EMAIL_ADDRESS\",\"EMAIL_FROM_ADDRESS\",\"OPEN_REGISTRY\",\"APPR"
},
{
"path": "src/master/taskmgr.py",
"chars": 35210,
"preview": "import threading\nimport time\nimport string\nimport os\nimport random, copy, subprocess\nimport json, math\nfrom functools im"
},
{
"path": "src/master/testTaskCtrler.py",
"chars": 1808,
"preview": "import sys\nif sys.path[0].endswith(\"master\"):\n sys.path[0] = sys.path[0][:-6]\n\nimport grpc,time\n\nfrom protos import r"
},
{
"path": "src/master/testTaskMgr.py",
"chars": 5417,
"preview": "import master.taskmgr\nfrom concurrent import futures\nimport grpc\nfrom protos.rpc_pb2 import *\nfrom protos.rpc_pb2_grpc i"
},
{
"path": "src/master/testTaskWorker.py",
"chars": 3158,
"preview": "import sys\nif sys.path[0].endswith(\"master\"):\n sys.path[0] = sys.path[0][:-6]\n\nimport grpc,time\n\nfrom protos import r"
},
{
"path": "src/master/userManager.py",
"chars": 43042,
"preview": "'''\nuserManager for Docklet\nprovide a class for managing users and usergroups in Docklet\nWarning: in some early versions"
},
{
"path": "src/master/userinit.sh",
"chars": 810,
"preview": "#!/bin/bash\n\n# initialize for a new user\n# initialize directory : clusters, data, ssh\n# generate ssh keys "
},
{
"path": "src/master/vclustermgr.py",
"chars": 50507,
"preview": "#!/usr/bin/python3\n\nimport os, random, json, sys\nimport datetime, math, time\n\nfrom utils.log import logger\nfrom utils im"
},
{
"path": "src/protos/rpc.proto",
"chars": 2052,
"preview": "syntax = \"proto3\";\n\nservice Master {\n\trpc report (ReportMsg) returns (Reply) {}\n}\n\nservice Worker {\n\trpc start_vnode (VN"
},
{
"path": "src/protos/rpc_pb2.py",
"chars": 34343,
"preview": "# Generated by the protocol buffer compiler. DO NOT EDIT!\n# source: rpc.proto\n\nimport sys\n_b=sys.version_info[0]<3 and "
},
{
"path": "src/protos/rpc_pb2_grpc.py",
"chars": 4666,
"preview": "# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!\nimport grpc\n\nfrom protos import rpc_pb2 as rpc__pb"
},
{
"path": "src/utils/env.py",
"chars": 4256,
"preview": "import os,netifaces\n\ndef getenv(key):\n if key == \"CLUSTER_NAME\":\n return os.environ.get(\"CLUSTER_NAME\", \"dockl"
},
{
"path": "src/utils/etcdlib.py",
"chars": 7437,
"preview": "#!/usr/bin/python3\n\n############################################################\n# etcdlib.py -- etcdlib provides a pyth"
},
{
"path": "src/utils/gputools.py",
"chars": 5150,
"preview": "import lxc\nimport subprocess\nimport os\nimport signal\nfrom utils.log import logger\n\n\n# Note: keep physical device id alwa"
},
{
"path": "src/utils/imagemgr.py",
"chars": 21448,
"preview": "#!/usr/bin/python3\n\n\"\"\"\ndesign:\n 1. When user create an image, it will upload to an image server, at the same time, l"
},
{
"path": "src/utils/log.py",
"chars": 2419,
"preview": "#!/usr/bin/env python\n\nimport logging\nimport logging.handlers\nimport argparse\nimport sys\nimport time # this is only bei"
},
{
"path": "src/utils/logs.py",
"chars": 1535,
"preview": "#!/usr/bin/python3\n\nfrom utils import env\nimport json, os\nfrom utils.log import logger\nfrom werkzeug.utils import secure"
},
{
"path": "src/utils/lvmtool.py",
"chars": 6715,
"preview": "#!/usr/bin/python3\n\nimport subprocess,os,time\nfrom utils.log import logger\nfrom utils import env\n\ndef sys_run(command,ch"
},
{
"path": "src/utils/manage.py",
"chars": 349,
"preview": "import sys\nif sys.path[0].endswith(\"utils\"):\n sys.path[0] = sys.path[0][:-5]\nfrom flask_migrate import Migrate,Migrat"
},
{
"path": "src/utils/model.py",
"chars": 20100,
"preview": "#coding=utf-8\n'''\n2 tables: users, usergroup\nUser:\n id\n username\n password\n avatar\n nickname\n descript"
},
{
"path": "src/utils/nettools.py",
"chars": 22081,
"preview": "#!/usr/bin/python3\n\nimport subprocess, threading\nfrom utils.log import logger\nfrom utils import env\n\nclass ipcontrol(obj"
},
{
"path": "src/utils/proxytool.py",
"chars": 915,
"preview": "#!/usr/bin/python3\n\nimport requests, json\nfrom utils import env\n\nproxy_api_port = env.getenv(\"PROXY_API_PORT\")\nproxy_con"
},
{
"path": "src/utils/tools.py",
"chars": 646,
"preview": "#!/usr/bin/python3\n\nimport os, random\n\n#from log import logger\n\ndef loadenv(configpath):\n configfile = open(configpat"
},
{
"path": "src/utils/updatebase.py",
"chars": 3495,
"preview": "#!/usr/bin/python3\n\nimport os, shutil\nfrom utils.log import logger\n\ndef aufs_remove(basefs):\n try:\n if os.path"
},
{
"path": "src/worker/container.py",
"chars": 16476,
"preview": "#!/usr/bin/python3\n\nimport subprocess, os, json, traceback\nfrom utils.log import logger\nfrom utils import env, imagemgr\n"
},
{
"path": "src/worker/monitor.py",
"chars": 30138,
"preview": "#!/usr/bin/python3\n\n'''\nMonitor for Docklet\nDescription:Monitor system for docklet will collect data on resources usages"
},
{
"path": "src/worker/ossmounter.py",
"chars": 2455,
"preview": "import abc\nimport subprocess, os\nfrom utils.log import logger\n\nclass OssMounter(object):\n __metaclass__ = abc.ABCMeta"
},
{
"path": "src/worker/taskcontroller.py",
"chars": 19328,
"preview": "#!/usr/bin/python3\nimport sys\nif sys.path[0].endswith(\"worker\"):\n sys.path[0] = sys.path[0][:-6]\nfrom utils import en"
},
{
"path": "src/worker/taskworker.py",
"chars": 22078,
"preview": "#!/usr/bin/python3\nimport sys\nif sys.path[0].endswith(\"worker\"):\n sys.path[0] = sys.path[0][:-6]\nfrom utils import en"
},
{
"path": "src/worker/worker.py",
"chars": 11363,
"preview": "#!/usr/bin/python3\n\n# first init env\nimport sys\nif sys.path[0].endswith(\"worker\"):\n sys.path[0] = sys.path[0][:-6]\nfr"
},
{
"path": "tools/DOCKLET_NOTES.txt",
"chars": 576,
"preview": "** MUST READ **\n\n1. Please keep your important data in ~/nfs directory. It will not be\ndestroyed even if the workspace i"
},
{
"path": "tools/R_demo.ipynb",
"chars": 4883,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"metadata\": {},\n \"source\": [\n \"# 一个R语言实现的爬虫,爬取拉手网美食信息\"\n ]\n },\n "
},
{
"path": "tools/alterUserTable.py",
"chars": 838,
"preview": "import sys\nsys.path.append(\"../src/\")\nfrom model import db,User\n\nprint(\"Query all users:\")\nusers = User.query.all()\nprin"
},
{
"path": "tools/clean-usage.py",
"chars": 730,
"preview": "#!/usr/bin/python3\n\nimport os, json, sys\nsys.path.append(\"../src/\")\nfrom model import db, User, UserUsage\n\ndef clean_usa"
},
{
"path": "tools/cloudsetting.aliyun.template.json",
"chars": 350,
"preview": "{\n\t\"CloudName\": \"aliyun\", \n\t\"AccessKeyId\": \"your-key\", \n\t\"AccessKeySecret\": \"your-secret\", \n\t\"RegionId\": \"cn-beijing\", \t"
},
{
"path": "tools/dl_start_spark.sh",
"chars": 517,
"preview": "#!/bin/sh\n\n# a naive script to fast start spark cluster, assuming host-0 master,\n# others slaves.\n# used with dl_stop_sp"
},
{
"path": "tools/dl_stop_spark.sh",
"chars": 469,
"preview": "#!/bin/sh\n\n# a naive script to stop spark cluster, assuming host-0 master\n# others slaves\n# used with dl_start_spark.sh\n"
},
{
"path": "tools/docklet-deploy.sh",
"chars": 836,
"preview": "apt-get update\n\napt-get install -y git\n\ngit clone http://github.com/unias/docklet.git /home/docklet\n\n/home/docklet/prepa"
},
{
"path": "tools/etcd-multi-nodes.sh",
"chars": 1678,
"preview": "#!/bin/bash\n\n# more details for https://coreos.com/etcd/docs/latest\n\nwhich etcd &>/dev/null || { echo \"etcd not installe"
},
{
"path": "tools/etcd-one-node.sh",
"chars": 1532,
"preview": "#!/bin/sh\n\n# more details for https://coreos.com/etcd/docs/latest\n\n#which etcd &>/dev/null || { echo \"etcd not installed"
},
{
"path": "tools/nginx_config.sh",
"chars": 876,
"preview": "#!/bin/sh\n\nMASTER_IP=0.0.0.0\nNGINX_PORT=8080\nPROXY_PORT=8000\nWEB_PORT=8888\nNGINX_CONF=/etc/nginx\n\ntoolsdir=${0%/*}\nDOCKL"
},
{
"path": "tools/npmrc",
"chars": 43,
"preview": "registry = https://registry.npm.taobao.org\n"
},
{
"path": "tools/pip.conf",
"chars": 60,
"preview": "[global]\nindex-url=https://pypi.mirrors.ustc.edu.cn/simple/\n"
},
{
"path": "tools/python_demo.ipynb",
"chars": 11224,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"metadata\": {},\n \"source\": [\n \"# 用Python分析《美女与野兽》\"\n ]\n },\n {\n "
},
{
"path": "tools/resolv.conf",
"chars": 52,
"preview": "nameserver 162.105.129.26\nnameserver 162.105.129.27\n"
},
{
"path": "tools/sources.list",
"chars": 83,
"preview": "deb https://mirrors.ustc.edu.cn/ubuntu/ xenial main restricted universe multiverse\n"
},
{
"path": "tools/start_jupyter.sh",
"chars": 1694,
"preview": "#!/bin/sh \n\n#\n# this script should be placed in basefs/home/jupyter\n# \n\n# This next line determines what user the script"
},
{
"path": "tools/update-UserTable.sh",
"chars": 320,
"preview": "#!/bin/bash\n\necho \"Backup UserTable...\"\ncp /opt/docklet/global/sys/UserTable.db /opt/docklet/global/sys/UserTable.db.bac"
},
{
"path": "tools/update-basefs.sh",
"chars": 3458,
"preview": "#!/bin/sh\n\n## WARNING\n## This sript is just for my own convenience . my image is\n## based on Ubuntu xenial. I did not te"
},
{
"path": "tools/update_baseurl.sh",
"chars": 754,
"preview": "#!/bin/sh\n\ntoolsdir=${0%/*}\nDOCKLET_TOOLS=$(cd $toolsdir; pwd)\nDOCKLET_HOME=${DOCKLET_TOOLS%/*}\nDOCKLET_CONF=$DOCKLET_HO"
},
{
"path": "tools/update_con_network.py",
"chars": 919,
"preview": "import sys,os\nsys.path.append(\"../src/\")\nimport env,requests\n\nif len(sys.argv) < 2:\n print(\"Please enter USER_IP\")\n "
},
{
"path": "tools/update_v0.3.2.py",
"chars": 1362,
"preview": "import json\n\ndef isexist(quotas, key):\n flag = False\n for quota in quotas:\n if quota['name'] == key:\n "
},
{
"path": "tools/upgrade.py",
"chars": 7029,
"preview": "#!/usr/bin/python3\n\nimport os, json, sys\nsys.path.append(\"../src/\")\nfrom model import db, User\nfrom lvmtool import sys_r"
},
{
"path": "tools/upgrade_file2db.py",
"chars": 3353,
"preview": "import sys\nsys.path.append(\"../src/\")\nimport os,json\nfrom datetime import datetime\nfrom model import db, VCluster, Conta"
},
{
"path": "tools/vimrc.local",
"chars": 213,
"preview": "syntax on\n\nset smarttab expandtab sw=4 ts=4\n\nset sm ai\n\nset hlsearch\n\nset wildchar=<Tab> wildmenu wildmode=full\n\nset enc"
},
{
"path": "user/stopreqmgr.py",
"chars": 1591,
"preview": "import threading, time\nfrom httplib2 import Http\nfrom urllib.parse import urlencode\nfrom queue import Queue\nfrom utils i"
},
{
"path": "user/user.py",
"chars": 25141,
"preview": "#!/usr/bin/python3\nimport json\nimport os\nimport getopt\n\nimport sys, inspect\n\n\n\n\nthis_folder = os.path.realpath(os.path.a"
},
{
"path": "web/static/css/docklet.css",
"chars": 1109,
"preview": ".btn-outline, .btn-outline-default, .badge-outline, .badge-outline-default, .label-outline, .label-outline-default {\n\tbo"
},
{
"path": "web/static/dist/css/AdminLTE.css",
"chars": 109366,
"preview": "/*!\n * AdminLTE v2.3.2\n * Author: Almsaeed Studio\n *\t Website: Almsaeed Studio <http://almsaeedstudio.com>\n * Lice"
},
{
"path": "web/static/dist/css/filebox.css",
"chars": 685,
"preview": ".file-box {\n float: left;\n width: 220px;\n}\n.file {\n border: 1px solid #e7eaec;\n padding: 0;\n background-color: #fff"
},
{
"path": "web/static/dist/css/flotconfig.css",
"chars": 1022,
"preview": "/* FLOT CHART */\n.flot-chart {\n display: block;\n height: 200px;\n}\n.widget .flot-chart.dashboard-chart {\n display: bl"
},
{
"path": "web/static/dist/css/modalconfig.css",
"chars": 711,
"preview": "/* MODAL */\n.modal-content {\n background-clip: padding-box;\n background-color: #FFFFFF;\n border: 1px solid rgba(0, 0,"
},
{
"path": "web/static/dist/css/skins/_all-skins.css",
"chars": 49027,
"preview": "/*\n * Skin: Blue\n * ----------\n */\n.skin-blue .main-header .navbar {\n background-color: #3c8dbc;\n}\n.skin-blue .main-hea"
},
{
"path": "web/static/dist/css/skins/skin-blue.css",
"chars": 3675,
"preview": "/*\n * Skin: Blue\n * ----------\n */\n.skin-blue .main-header .navbar {\n background-color: #3c8dbc;\n}\n.skin-blue .main-hea"
},
{
"path": "web/static/dist/js/app.js",
"chars": 22744,
"preview": "/*! AdminLTE app.js\n * ================\n * Main JS application file for AdminLTE v2. This file\n * should be included in "
},
{
"path": "web/static/js/plot_monitor.js",
"chars": 8523,
"preview": "var mem_usedp = 0;\nvar cpu_usedp = 0;\nvar is_running = true;\nvar ingress_rate = 0;\nvar egress_rate = 0;\nvar ingress_rate"
},
{
"path": "web/static/js/plot_monitorReal.js",
"chars": 5423,
"preview": "\nvar used = 0;\nvar total = 0;\nvar idle = 0;\nvar disk_usedp = 0;\nvar count = 0;\nvar Ki = 1024;\nvar is_running = true;\n\nfu"
},
{
"path": "web/templates/addCluster.html",
"chars": 10215,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | Create Workspace{% endblock %}\n\n{% block css_src %}\n<!--<"
},
{
"path": "web/templates/base_AdminLTE.html",
"chars": 14488,
"preview": "<!DOCTYPE html>\n<html>\n<head>\n <meta charset=\"utf-8\">\n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\n <title>"
},
{
"path": "web/templates/batch/batch_admin_list.html",
"chars": 7477,
"preview": "{% extends \"base_AdminLTE.html\"%}\r\n{% block title %}Docklet | Batch Job{% endblock %}\r\n\r\n{% block panel_title %}Batch Jo"
},
{
"path": "web/templates/batch/batch_create.html",
"chars": 22099,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | Create Batch Job{% endblock %}\n\n{% block css_src %}\n<!--<"
},
{
"path": "web/templates/batch/batch_info.html",
"chars": 12094,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | Batch Job Info{% endblock %}\n\n{% block panel_title %}Info"
},
{
"path": "web/templates/batch/batch_list.html",
"chars": 7523,
"preview": "{% extends \"base_AdminLTE.html\"%}\r\n{% block title %}Docklet | Batch Job{% endblock %}\r\n\r\n{% block panel_title %}Batch Jo"
},
{
"path": "web/templates/batch/batch_output.html",
"chars": 2374,
"preview": "<!DOCTYPE html>\n<html>\n<head>\n <meta charset=\"utf-8\">\n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\n <title>"
},
{
"path": "web/templates/beansapplication.html",
"chars": 5615,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | Beans Application{% endblock %}\n\n{% block panel_title %}B"
},
{
"path": "web/templates/cloud.html",
"chars": 2843,
"preview": "{% extends \"base_AdminLTE.html\"%}\n{% block title %}Docklet | Cloud{% endblock %}\n\n{% block panel_title %}Cloud{% endbloc"
},
{
"path": "web/templates/config.html",
"chars": 25999,
"preview": "{% extends \"base_AdminLTE.html\"%}\n\n<!--\n\tConfig Page :\n\t\t1. images\n\t\t2. workspace templates\n\n-->\n\n{% block title %}Dockl"
},
{
"path": "web/templates/create_notification.html",
"chars": 3434,
"preview": "{% extends \"base_AdminLTE.html\" %}\n{% block title %}Docklet | Create Notification{% endblock %}\n\n{% block panel_title %}"
},
{
"path": "web/templates/dashboard.html",
"chars": 4456,
"preview": "{% extends \"base_AdminLTE.html\"%}\r\n{% block title %}Docklet | Workspace{% endblock %}\r\n\r\n{% block panel_title %}Workspac"
},
{
"path": "web/templates/description.html",
"chars": 248,
"preview": "{% extends \"base_AdminLTE.html\"%}\r\n{% block title %}Docklet | Description{% endblock %}\r\n\r\n{% block panel_title %}Descri"
},
{
"path": "web/templates/error/401.html",
"chars": 769,
"preview": "{% extends \"base_AdminLTE.html\"%}\r\n\r\n\r\n{% block title %}Docklet | Error{% endblock %}\r\n\r\n{% block panel_title %}401 Erro"
},
{
"path": "web/templates/error/500.html",
"chars": 614,
"preview": "{% extends \"base_AdminLTE.html\"%}\r\n\r\n\r\n{% block title %}Docklet | Error{% endblock %}\r\n\r\n{% block panel_title %}500 Erro"
},
{
"path": "web/templates/error.html",
"chars": 232,
"preview": "{% extends \"base_AdminLTE.html\"%}\r\n{% block title %}Docklet | Error{% endblock %}\r\n\r\n{% block panel_title %}Error{% endb"
},
{
"path": "web/templates/home.template",
"chars": 4954,
"preview": "<!DOCTYPE html>\n<html lang=\"en\">\n\t<head>\n\t\t<meta charset=\"utf-8\">\n\t\t<meta name=\"viewport\" content=\"width=device-width, i"
},
{
"path": "web/templates/listcontainer.html",
"chars": 5195,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | Container{% endblock %}\n\n{% block panel_title %}Container"
},
{
"path": "web/templates/login.html",
"chars": 3313,
"preview": "\r\n<!DOCTYPE html>\r\n<html>\r\n<head>\r\n <meta charset=\"utf-8\">\r\n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\r\n "
},
{
"path": "web/templates/logs.html",
"chars": 1260,
"preview": "{% extends \"base_AdminLTE.html\"%}\n{% block title %}Docklet | Logs{% endblock %}\n\n{% block panel_title %}Logs{% endblock "
},
{
"path": "web/templates/monitor/history.html",
"chars": 2523,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | History{% endblock %}\n\n{% block panel_title %}History of "
},
{
"path": "web/templates/monitor/historyVNode.html",
"chars": 2675,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | History{% endblock %}\n\n{% block panel_title %}History of "
},
{
"path": "web/templates/monitor/hosts.html",
"chars": 9131,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | Hosts{% endblock %}\n\n{% block panel_title %}Hosts Info{% "
},
{
"path": "web/templates/monitor/hostsConAll.html",
"chars": 5781,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | Hosts{% endblock %}\n\n{% block panel_title %}Node list for"
},
{
"path": "web/templates/monitor/hostsRealtime.html",
"chars": 11619,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | Hosts{% endblock %}\n\n{% block panel_title %}Summary for <"
},
{
"path": "web/templates/monitor/monitorUserAll.html",
"chars": 2248,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | MonitorUser{% endblock %}\n\n{% block panel_title %}Users I"
},
{
"path": "web/templates/monitor/monitorUserCluster.html",
"chars": 2627,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | Monitor{% endblock %}\n\n{% block panel_title %}NodeInfo fo"
},
{
"path": "web/templates/monitor/status.html",
"chars": 23570,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | Status{% endblock %}\n\n{% block panel_title %}Workspace VC"
},
{
"path": "web/templates/monitor/statusRealtime.html",
"chars": 8693,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | Node Summary{% endblock %}\n\n{% block panel_title %}Summar"
},
{
"path": "web/templates/notification.html",
"chars": 14759,
"preview": "{% extends \"base_AdminLTE.html\" %}\n{% block title %}Docklet | Notification{% endblock %}\n\n{% block panel_title %}Notific"
},
{
"path": "web/templates/notification_info.html",
"chars": 1664,
"preview": "{% extends \"base_AdminLTE.html\" %}\n{% block title %}Docklet | Notification{% endblock %}\n\n{% block panel_title %}Notific"
},
{
"path": "web/templates/opfailed.html",
"chars": 558,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | Failed{% endblock %}\n\n{% block panel_title %}Failed{% end"
},
{
"path": "web/templates/opsuccess.html",
"chars": 597,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | Success{% endblock %}\n\n{% block panel_title %}Success{% e"
},
{
"path": "web/templates/register.html",
"chars": 3078,
"preview": "<!DOCTYPE html>\n<html>\n<head>\n <meta charset=\"utf-8\">\n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\n <title>"
},
{
"path": "web/templates/saveconfirm.html",
"chars": 1108,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | Confirm{% endblock %}\n\n{% block panel_title %}Confirm{% e"
},
{
"path": "web/templates/settings.html",
"chars": 35627,
"preview": "{% extends \"base_AdminLTE.html\"%}\n{% block title %}Docklet | Settings{% endblock %}\n\n{% block panel_title %}Settings{% e"
},
{
"path": "web/templates/user/activate.html",
"chars": 3562,
"preview": "<!DOCTYPE html>\n<html>\n<head>\n <meta charset=\"utf-8\">\n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\n <title>"
},
{
"path": "web/templates/user/info.html",
"chars": 11304,
"preview": "{% extends 'base_AdminLTE.html' %}\n\n{% block title %}Docklet | Information Modify{% endblock %}\n\n{% block css_src %}\n<li"
},
{
"path": "web/templates/user/mailservererror.html",
"chars": 757,
"preview": "{% extends \"base_AdminLTE.html\"%}\r\n\r\n\r\n{% block title %}Docklet | Error{% endblock %}\r\n\r\n{% block panel_title %}500 Erro"
},
{
"path": "web/templates/user_list.html",
"chars": 17850,
"preview": "{% extends \"base_AdminLTE.html\"%}\n{% block title %}Docklet | UserList{% endblock %}\n\n{% block panel_title %}UserList{% e"
},
{
"path": "web/web.py",
"chars": 26615,
"preview": "#!/usr/bin/python3\nimport json\nimport os\nimport getopt\n\nimport sys, inspect\n\n\nthis_folder = os.path.realpath(os.path.abs"
},
{
"path": "web/webViews/admin.py",
"chars": 3888,
"preview": "from flask import session, render_template, redirect, request\nfrom webViews.view import normalView\nfrom webViews.docklet"
},
{
"path": "web/webViews/authenticate/auth.py",
"chars": 1405,
"preview": "from flask import session, request, abort, redirect\nfrom functools import wraps\n\n\ndef login_required(func):\n @wraps(f"
},
{
"path": "web/webViews/authenticate/login.py",
"chars": 6790,
"preview": "from webViews.view import normalView\nfrom webViews.authenticate.auth import is_authenticated\nfrom webViews.dockletreques"
},
{
"path": "web/webViews/authenticate/register.py",
"chars": 740,
"preview": "from webViews.view import normalView\nfrom webViews.dockletrequest import dockletRequest\nfrom flask import redirect, requ"
},
{
"path": "web/webViews/batch.py",
"chars": 5705,
"preview": "from flask import session, redirect, request\nfrom webViews.view import normalView\nfrom webViews.log import logger\nfrom w"
},
{
"path": "web/webViews/beansapplication.py",
"chars": 1504,
"preview": "from flask import session,render_template,request,redirect\nfrom webViews.view import normalView\nfrom webViews.dockletreq"
},
{
"path": "web/webViews/checkname.py",
"chars": 736,
"preview": "import re\nfrom flask import abort, session\n\npattern = re.compile(r'[a-zA-Z_][a-zA-Z0-9_]*')\nerror_msg = ''' Your name ma"
},
{
"path": "web/webViews/cloud.py",
"chars": 933,
"preview": "from flask import session, render_template, redirect, request\nfrom webViews.view import normalView\nfrom webViews.docklet"
},
{
"path": "web/webViews/cluster.py",
"chars": 15450,
"preview": "from flask import session, redirect, request\nfrom webViews.view import normalView\nfrom webViews.dockletrequest import do"
},
{
"path": "web/webViews/cookie_tool.py",
"chars": 2057,
"preview": "#!/usr/bin/python3\n\nimport json, hashlib, base64, time\nimport sys\nfrom webViews.log import logger\n\n# generate cookie :\n#"
},
{
"path": "web/webViews/dashboard.py",
"chars": 1408,
"preview": "from flask import session,render_template\nfrom webViews.view import normalView\nfrom webViews.dockletrequest import dockl"
},
{
"path": "web/webViews/dockletrequest.py",
"chars": 4226,
"preview": "import requests\nfrom flask import abort, session\nfrom webViews.log import logger\nimport os,sys,inspect,traceback\n\n\nthis_"
},
{
"path": "web/webViews/log.py",
"chars": 2667,
"preview": "#!/usr/bin/env python\n\nimport logging\nimport logging.handlers\nimport argparse\nimport sys\nimport time # this is only bei"
},
{
"path": "web/webViews/monitor.py",
"chars": 7161,
"preview": "from flask import session\nfrom webViews.view import normalView\nfrom webViews.dockletrequest import dockletRequest\n\n\nclas"
},
{
"path": "web/webViews/notification/notification.py",
"chars": 2008,
"preview": "import json\n\nfrom flask import session, render_template, redirect, request\nfrom webViews.view import normalView\nfrom web"
},
{
"path": "web/webViews/reportbug.py",
"chars": 466,
"preview": "from flask import session,render_template,request,redirect\nfrom webViews.view import normalView\nfrom webViews.dockletreq"
},
{
"path": "web/webViews/syslogs.py",
"chars": 415,
"preview": "from flask import session,render_template,redirect, request\nfrom webViews.view import normalView\nfrom webViews.dockletre"
},
{
"path": "web/webViews/user/grouplist.py",
"chars": 703,
"preview": "from flask import redirect, request\nfrom webViews.dockletrequest import dockletRequest\nfrom webViews.view import normalV"
},
{
"path": "web/webViews/user/userActivate.py",
"chars": 667,
"preview": "from flask import render_template, redirect, request\nfrom webViews.dockletrequest import dockletRequest\nfrom webViews.vi"
},
{
"path": "web/webViews/user/userinfo.py",
"chars": 615,
"preview": "from flask import redirect, request\nfrom webViews.dockletrequest import dockletRequest\nfrom webViews.authenticate import"
},
{
"path": "web/webViews/user/userlist.py",
"chars": 1560,
"preview": "from flask import render_template, redirect, request\nfrom webViews.dockletrequest import dockletRequest\nfrom webViews.vi"
},
{
"path": "web/webViews/view.py",
"chars": 1192,
"preview": "from flask import render_template, request, abort, session\nfrom webViews.dockletrequest import dockletRequest\n\nimport os"
}
]
About this extraction
This file contains the full source code of the unias/docklet GitHub repository, extracted and formatted as plain text: 185 files (1.3 MB, approximately 330.9k tokens), plus a symbol index of 1,213 extracted functions, classes, methods, constants, and types.
Extracted by GitExtract (Nikandr Surkov).