Repository: burmilla/os
Branch: master
Commit: 76d5ad24b897
Files: 1572
Total size: 10.2 MB

Directory structure:

gitextract_wmx6zcva/ ├── .dockerignore ├── .github/ │ ├── ISSUE_TEMPLATE.md │ └── workflows/ │ ├── create-release.yml │ └── pull-request-validation.yml ├── .gitignore ├── Dockerfile.dapper ├── LICENSE ├── Makefile ├── README.md ├── assets/ │ ├── rancher.key │ ├── rancher.key.pub │ └── scripts_ssh_config ├── cmd/ │ ├── cloudinitexecute/ │ │ ├── authorize_ssh_keys.go │ │ └── cloudinitexecute.go │ ├── cloudinitsave/ │ │ └── cloudinitsave.go │ ├── control/ │ │ ├── autologin.go │ │ ├── bootstrap.go │ │ ├── cli.go │ │ ├── config.go │ │ ├── config_test.go │ │ ├── console.go │ │ ├── console_init.go │ │ ├── dev.go │ │ ├── docker_init.go │ │ ├── engine.go │ │ ├── entrypoint.go │ │ ├── env.go │ │ ├── install/ │ │ │ ├── grub.go │ │ │ ├── install.go │ │ │ ├── service.go │ │ │ └── syslinux.go │ │ ├── install.go │ │ ├── os.go │ │ ├── preload.go │ │ ├── recovery_init.go │ │ ├── service/ │ │ │ ├── app/ │ │ │ │ └── app.go │ │ │ ├── command/ │ │ │ │ └── command.go │ │ │ └── service.go │ │ ├── switch_console.go │ │ ├── tlsconf.go │ │ ├── udevsettle.go │ │ ├── user_docker.go │ │ └── util.go │ ├── init/ │ │ └── init.go │ ├── network/ │ │ └── network.go │ ├── power/ │ │ ├── power.go │ │ └── shutdown.go │ ├── respawn/ │ │ └── respawn.go │ ├── sysinit/ │ │ └── sysinit.go │ └── wait/ │ └── wait.go ├── config/ │ ├── cloudinit/ │ │ ├── .gitignore │ │ ├── .travis.yml │ │ ├── CONTRIBUTING.md │ │ ├── DCO │ │ ├── Documentation/ │ │ │ ├── cloud-config-deprecated.md │ │ │ ├── cloud-config-locations.md │ │ │ ├── cloud-config-oem.md │ │ │ ├── cloud-config.md │ │ │ ├── config-drive.md │ │ │ ├── debian-interfaces.md │ │ │ └── vmware-guestinfo.md │ │ ├── LICENSE │ │ ├── MAINTAINERS │ │ ├── NOTICE │ │ ├── README.md │ │ ├── build │ │ ├── config/ │ │ │ ├── config.go │ │ │ ├── config_test.go │ │ │ ├── decode.go │ │ │ ├── etc_hosts.go │ │ │ ├── etcd.go │ │ │ ├── etcd2.go │ │ │ ├── file.go │ │ │
├── file_test.go │ │ │ ├── flannel.go │ │ │ ├── fleet.go │ │ │ ├── ignition.go │ │ │ ├── locksmith.go │ │ │ ├── locksmith_test.go │ │ │ ├── oem.go │ │ │ ├── script.go │ │ │ ├── unit.go │ │ │ ├── unit_test.go │ │ │ ├── update.go │ │ │ ├── update_test.go │ │ │ ├── user.go │ │ │ └── validate/ │ │ │ ├── context.go │ │ │ ├── context_test.go │ │ │ ├── node.go │ │ │ ├── node_test.go │ │ │ ├── report.go │ │ │ ├── report_test.go │ │ │ ├── rules.go │ │ │ ├── rules_test.go │ │ │ ├── validate.go │ │ │ └── validate_test.go │ │ ├── cover │ │ ├── datasource/ │ │ │ ├── configdrive/ │ │ │ │ ├── configdrive.go │ │ │ │ └── configdrive_test.go │ │ │ ├── datasource.go │ │ │ ├── file/ │ │ │ │ └── file.go │ │ │ ├── metadata/ │ │ │ │ ├── aliyun/ │ │ │ │ │ ├── metadata.go │ │ │ │ │ └── metadata_test.go │ │ │ │ ├── azure/ │ │ │ │ │ ├── metadata.go │ │ │ │ │ └── metadata_test.go │ │ │ │ ├── cloudstack/ │ │ │ │ │ ├── metadata.go │ │ │ │ │ └── metadata_test.go │ │ │ │ ├── digitalocean/ │ │ │ │ │ ├── metadata.go │ │ │ │ │ └── metadata_test.go │ │ │ │ ├── ec2/ │ │ │ │ │ ├── metadata.go │ │ │ │ │ └── metadata_test.go │ │ │ │ ├── exoscale/ │ │ │ │ │ ├── metadata.go │ │ │ │ │ └── metadata_test.go │ │ │ │ ├── gce/ │ │ │ │ │ ├── metadata.go │ │ │ │ │ └── metadata_test.go │ │ │ │ ├── metadata.go │ │ │ │ ├── metadata_test.go │ │ │ │ ├── packet/ │ │ │ │ │ └── metadata.go │ │ │ │ └── test/ │ │ │ │ └── test.go │ │ │ ├── proccmdline/ │ │ │ │ ├── proc_cmdline.go │ │ │ │ └── proc_cmdline_test.go │ │ │ ├── proxmox/ │ │ │ │ ├── proxmox.go │ │ │ │ └── proxmox_test.go │ │ │ ├── test/ │ │ │ │ ├── filesystem.go │ │ │ │ └── filesystem_test.go │ │ │ ├── tftp/ │ │ │ │ ├── tftp.go │ │ │ │ └── tftp_test.go │ │ │ ├── url/ │ │ │ │ └── url.go │ │ │ └── vmware/ │ │ │ ├── vmware.go │ │ │ ├── vmware_amd64.go │ │ │ ├── vmware_test.go │ │ │ └── vmware_unsupported.go │ │ ├── initialize/ │ │ │ ├── env.go │ │ │ ├── env_test.go │ │ │ ├── github.go │ │ │ ├── ssh_keys.go │ │ │ ├── ssh_keys_test.go │ │ │ ├── user_data.go │ │ │ ├── 
user_data_test.go │ │ │ └── workspace.go │ │ ├── network/ │ │ │ ├── debian.go │ │ │ ├── debian_test.go │ │ │ ├── interface.go │ │ │ ├── interface_test.go │ │ │ ├── is_go15_false_test.go │ │ │ ├── is_go15_true_test.go │ │ │ ├── packet.go │ │ │ ├── stanza.go │ │ │ ├── stanza_test.go │ │ │ ├── vmware.go │ │ │ └── vmware_test.go │ │ ├── pkg/ │ │ │ ├── http_client.go │ │ │ └── http_client_test.go │ │ ├── system/ │ │ │ ├── env.go │ │ │ ├── env_file.go │ │ │ ├── env_file_test.go │ │ │ ├── env_test.go │ │ │ ├── etc_hosts.go │ │ │ ├── etc_hosts_test.go │ │ │ ├── etcd.go │ │ │ ├── etcd2.go │ │ │ ├── etcd_test.go │ │ │ ├── file.go │ │ │ ├── file_test.go │ │ │ ├── flannel.go │ │ │ ├── flannel_test.go │ │ │ ├── fleet.go │ │ │ ├── fleet_test.go │ │ │ ├── locksmith.go │ │ │ ├── locksmith_test.go │ │ │ ├── oem.go │ │ │ ├── oem_test.go │ │ │ ├── ssh_key.go │ │ │ ├── unit.go │ │ │ ├── unit_test.go │ │ │ ├── update.go │ │ │ ├── update_test.go │ │ │ └── user.go │ │ ├── test │ │ ├── units/ │ │ │ ├── 90-configdrive.rules │ │ │ ├── 90-ovfenv.rules │ │ │ ├── media-configdrive.mount │ │ │ ├── media-configvirtfs.mount │ │ │ ├── media-ovfenv.mount │ │ │ ├── system-cloudinit@.service │ │ │ ├── system-config.target │ │ │ ├── user-cloudinit-proc-cmdline.service │ │ │ ├── user-cloudinit@.path │ │ │ ├── user-cloudinit@.service │ │ │ ├── user-config-ovfenv.service │ │ │ ├── user-config.target │ │ │ ├── user-configdrive.service │ │ │ └── user-configvirtfs.service │ │ └── vendor.manifest │ ├── cmdline/ │ │ └── cmdline.go │ ├── config.go │ ├── config_test.go │ ├── data_funcs.go │ ├── disk.go │ ├── docker_config.go │ ├── docker_config_test.go │ ├── metadata_test.go │ ├── schema.go │ ├── types.go │ ├── validate.go │ ├── validate_test.go │ └── yaml/ │ └── command.go ├── images/ │ ├── 00-rootfs/ │ │ ├── .dockerignore │ │ ├── Dockerfile │ │ └── prebuild.sh │ ├── 01-base/ │ │ ├── .dockerignore │ │ ├── Dockerfile │ │ ├── etc/ │ │ │ ├── dhcpcd.conf.tpl │ │ │ ├── dhcpcd.enter-hook │ │ │ ├── inputrc │ │ │ └── 
wpa_supplicant.conf.tpl │ │ └── usr/ │ │ ├── bin/ │ │ │ ├── growpart │ │ │ └── start_ntp.sh │ │ ├── lib/ │ │ │ ├── dhcpcd/ │ │ │ │ ├── dhcpcd-hooks/ │ │ │ │ │ └── 10-mtu │ │ │ │ └── dhcpcd-run-hooks │ │ │ └── udev/ │ │ │ └── rules-extras.d/ │ │ │ ├── 50-firmware.rules │ │ │ ├── 70-uaccess.rules │ │ │ ├── 73-special-net-names.rules │ │ │ ├── 73-usb-net-by-mac.rules │ │ │ ├── 77-mm-cinterion-port-types.rules │ │ │ ├── 77-mm-dell-port-types.rules │ │ │ ├── 77-mm-ericsson-mbm.rules │ │ │ ├── 77-mm-haier-port-types.rules │ │ │ ├── 77-mm-huawei-net-port-types.rules │ │ │ ├── 77-mm-longcheer-port-types.rules │ │ │ ├── 77-mm-mtk-port-types.rules │ │ │ ├── 77-mm-nokia-port-types.rules │ │ │ ├── 77-mm-pcmcia-device-blacklist.rules │ │ │ ├── 77-mm-platform-serial-whitelist.rules │ │ │ ├── 77-mm-qdl-device-blacklist.rules │ │ │ ├── 77-mm-simtech-port-types.rules │ │ │ ├── 77-mm-telit-port-types.rules │ │ │ ├── 77-mm-usb-device-blacklist.rules │ │ │ ├── 77-mm-usb-serial-adapters-greylist.rules │ │ │ ├── 77-mm-x22x-port-types.rules │ │ │ ├── 77-mm-zte-port-types.rules │ │ │ └── 80-mm-candidate.rules │ │ └── share/ │ │ └── logrotate/ │ │ └── logrotate.d/ │ │ └── dhcpcd.debug │ ├── 02-acpid/ │ │ ├── .dockerignore │ │ ├── Dockerfile │ │ └── etc/ │ │ └── acpi/ │ │ ├── events/ │ │ │ └── lid │ │ └── suspend.sh │ ├── 02-bootstrap/ │ │ ├── .dockerignore │ │ ├── Dockerfile │ │ ├── od-1m0 │ │ └── usr/ │ │ └── sbin/ │ │ └── auto-format.sh │ ├── 02-console/ │ │ ├── Dockerfile │ │ ├── iscsid.conf │ │ ├── prebuild.sh │ │ └── sshd_config.append.tpl │ ├── 02-logrotate/ │ │ ├── .dockerignore │ │ ├── Dockerfile │ │ ├── etc/ │ │ │ └── logrotate.conf │ │ └── usr/ │ │ ├── bin/ │ │ │ └── entrypoint.sh │ │ └── share/ │ │ └── logrotate/ │ │ └── logrotate.d/ │ │ └── docker │ └── 02-syslog/ │ ├── .dockerignore │ ├── Dockerfile │ └── usr/ │ ├── bin/ │ │ └── entrypoint.sh │ └── share/ │ └── logrotate/ │ └── logrotate.d/ │ └── syslog ├── main.go ├── os-config.tpl.yml ├── pkg/ │ ├── compose/ │ │ ├── 
project.go │ │ └── reload.go │ ├── dfs/ │ │ └── scratch.go │ ├── docker/ │ │ ├── auth.go │ │ ├── client.go │ │ ├── client_factory.go │ │ ├── env.go │ │ ├── service.go │ │ ├── service_factory.go │ │ └── util.go │ ├── hostname/ │ │ └── hostname.go │ ├── init/ │ │ ├── b2d/ │ │ │ └── b2d.go │ │ ├── bootstrap/ │ │ │ └── bootstrap.go │ │ ├── cloudinit/ │ │ │ └── cloudinit.go │ │ ├── configfiles/ │ │ │ └── configfiles.go │ │ ├── debug/ │ │ │ └── debug.go │ │ ├── docker/ │ │ │ └── docker.go │ │ ├── env/ │ │ │ └── env.go │ │ ├── fsmount/ │ │ │ └── fsmount.go │ │ ├── hypervisor/ │ │ │ └── hypervisor.go │ │ ├── modules/ │ │ │ └── modules.go │ │ ├── one/ │ │ │ └── one.go │ │ ├── prepare/ │ │ │ └── prepare.go │ │ ├── recovery/ │ │ │ └── recovery.go │ │ ├── sharedroot/ │ │ │ └── sharedroot.go │ │ └── switchroot/ │ │ └── switchroot.go │ ├── log/ │ │ ├── log.go │ │ └── showuserlog.go │ ├── netconf/ │ │ ├── bonding.go │ │ ├── bridge.go │ │ ├── ipv4ll_linux.go │ │ ├── netconf_linux.go │ │ ├── netconf_linux_test.go │ │ ├── types.go │ │ └── vlan.go │ ├── sysinit/ │ │ └── sysinit.go │ └── util/ │ ├── backoff.go │ ├── cutil.go │ ├── network/ │ │ ├── cache.go │ │ ├── network.go │ │ ├── network_test.go │ │ └── route.go │ ├── term.go │ ├── util.go │ ├── util_linux.go │ ├── util_test.go │ └── versions/ │ ├── compare.go │ └── compare_test.go ├── scripts/ │ ├── build │ ├── build-host │ ├── build-images │ ├── build-moby │ ├── build-target │ ├── checksums │ ├── ci │ ├── clean │ ├── copy-latest.sh │ ├── copy-release.sh │ ├── create-installed │ ├── default │ ├── dev │ ├── entry │ ├── global.cfg │ ├── hash-initrd │ ├── help │ ├── hosting/ │ │ ├── burmillaos.ipxe │ │ ├── digitalocean/ │ │ │ ├── cloud-config.yml │ │ │ ├── fedora-symbiote.yml │ │ │ └── host.sh │ │ └── packet/ │ │ ├── packet.sh │ │ ├── test.expect │ │ └── test.sh │ ├── images/ │ │ └── raspberry-pi-hypriot64/ │ │ ├── .dockerignore │ │ ├── .gitignore │ │ ├── README.md │ │ └── scripts/ │ │ └── build.sh │ ├── inline_schema.go │ ├── 
installer/ │ │ ├── BaseDockerfile.amd64 │ │ ├── BaseDockerfile.arm64 │ │ ├── Dockerfile.amd64 │ │ ├── Dockerfile.arm64 │ │ ├── README.md │ │ ├── cache-services.sh │ │ ├── conf/ │ │ │ ├── cloud-config-local.yml │ │ │ ├── empty.yml │ │ │ └── vagrant.yml │ │ └── kexec/ │ │ └── Dockerfile.dapper │ ├── isolinux.cfg │ ├── isolinux_label.cfg │ ├── layout │ ├── layout-initrd │ ├── layout-kernel │ ├── moby/ │ │ ├── Dockerfile │ │ ├── README.md │ │ └── rancheros.yml │ ├── package │ ├── package-initrd │ ├── package-installer │ ├── package-iso │ ├── package-rootfs │ ├── prepare │ ├── release │ ├── release-4glte │ ├── release-amd64 │ ├── release-arm64 │ ├── release-build │ ├── release-rpi64 │ ├── ros │ ├── run │ ├── run-common │ ├── run-install │ ├── run-moby │ ├── schema.json │ ├── schema_template │ ├── shell │ ├── ssh │ ├── tar-images │ ├── template │ ├── test │ ├── tools/ │ │ ├── collect_rancheros_info.sh │ │ ├── flush_crt_iso.sh │ │ └── flush_crt_nbd.sh │ ├── validate │ └── version ├── trash.conf └── vendor/ ├── github.com/ │ ├── Microsoft/ │ │ └── go-winio/ │ │ ├── LICENSE │ │ ├── README.md │ │ ├── backup.go │ │ ├── file.go │ │ ├── fileinfo.go │ │ ├── pipe.go │ │ ├── privilege.go │ │ ├── reparse.go │ │ ├── sd.go │ │ ├── syscall.go │ │ └── zsyscall.go │ ├── SvenDowideit/ │ │ └── cpuid/ │ │ ├── .gitignore │ │ ├── .travis.yml │ │ ├── LICENSE │ │ ├── README.md │ │ ├── cpuid.go │ │ ├── cpuid_386.s │ │ ├── cpuid_amd64.s │ │ ├── detect_intel.go │ │ ├── detect_ref.go │ │ ├── generate.go │ │ └── private-gen.go │ ├── cloudfoundry-incubator/ │ │ └── candiedyaml/ │ │ ├── .gitignore │ │ ├── .travis.yml │ │ ├── LICENSE │ │ ├── README.md │ │ ├── api.go │ │ ├── decode.go │ │ ├── emitter.go │ │ ├── encode.go │ │ ├── libyaml-LICENSE │ │ ├── parser.go │ │ ├── reader.go │ │ ├── resolver.go │ │ ├── run_parser.go │ │ ├── scanner.go │ │ ├── tags.go │ │ ├── writer.go │ │ ├── yaml_definesh.go │ │ ├── yaml_privateh.go │ │ └── yamlh.go │ ├── codegangsta/ │ │ └── cli/ │ │ ├── .gitignore │ │ ├── 
.travis.yml │ │ ├── CHANGELOG.md │ │ ├── LICENSE │ │ ├── README.md │ │ ├── app.go │ │ ├── appveyor.yml │ │ ├── category.go │ │ ├── cli.go │ │ ├── command.go │ │ ├── context.go │ │ ├── errors.go │ │ ├── flag.go │ │ ├── funcs.go │ │ ├── help.go │ │ └── runtests │ ├── coreos/ │ │ └── yaml/ │ │ ├── LICENSE │ │ ├── LICENSE.libyaml │ │ ├── README.md │ │ ├── apic.go │ │ ├── decode.go │ │ ├── emitterc.go │ │ ├── encode.go │ │ ├── parserc.go │ │ ├── readerc.go │ │ ├── resolve.go │ │ ├── scannerc.go │ │ ├── sorter.go │ │ ├── writerc.go │ │ ├── yaml.go │ │ ├── yamlh.go │ │ └── yamlprivateh.go │ ├── davecgh/ │ │ └── go-spew/ │ │ ├── .gitignore │ │ ├── .travis.yml │ │ ├── LICENSE │ │ ├── README.md │ │ ├── cov_report.sh │ │ ├── spew/ │ │ │ ├── bypass.go │ │ │ ├── bypasssafe.go │ │ │ ├── common.go │ │ │ ├── config.go │ │ │ ├── doc.go │ │ │ ├── dump.go │ │ │ ├── format.go │ │ │ └── spew.go │ │ └── test_coverage.txt │ ├── docker/ │ │ ├── containerd/ │ │ │ ├── .gitignore │ │ │ ├── CONTRIBUTING.md │ │ │ ├── Dockerfile │ │ │ ├── LICENSE.code │ │ │ ├── LICENSE.docs │ │ │ ├── MAINTAINERS │ │ │ ├── Makefile │ │ │ ├── NOTICE │ │ │ ├── README.md │ │ │ ├── osutils/ │ │ │ │ ├── fds.go │ │ │ │ ├── prctl.go │ │ │ │ ├── prctl_solaris.go │ │ │ │ └── reaper.go │ │ │ ├── subreaper/ │ │ │ │ ├── exec/ │ │ │ │ │ ├── copy.go │ │ │ │ │ └── wrapper.go │ │ │ │ └── reaper.go │ │ │ └── trash.conf │ │ ├── distribution/ │ │ │ ├── .gitignore │ │ │ ├── .mailmap │ │ │ ├── AUTHORS │ │ │ ├── CONTRIBUTING.md │ │ │ ├── Dockerfile │ │ │ ├── LICENSE │ │ │ ├── MAINTAINERS │ │ │ ├── Makefile │ │ │ ├── README.md │ │ │ ├── ROADMAP.md │ │ │ ├── blobs.go │ │ │ ├── circle.yml │ │ │ ├── context/ │ │ │ │ ├── context.go │ │ │ │ ├── doc.go │ │ │ │ ├── http.go │ │ │ │ ├── logger.go │ │ │ │ ├── trace.go │ │ │ │ ├── util.go │ │ │ │ └── version.go │ │ │ ├── coverpkg.sh │ │ │ ├── digest/ │ │ │ │ ├── digest.go │ │ │ │ ├── digester.go │ │ │ │ ├── doc.go │ │ │ │ ├── set.go │ │ │ │ └── verifiers.go │ │ │ ├── doc.go │ │ │ ├── errors.go │ 
│ │ ├── manifests.go │ │ │ ├── reference/ │ │ │ │ ├── reference.go │ │ │ │ └── regexp.go │ │ │ ├── registry/ │ │ │ │ ├── api/ │ │ │ │ │ ├── errcode/ │ │ │ │ │ │ ├── errors.go │ │ │ │ │ │ ├── handler.go │ │ │ │ │ │ └── register.go │ │ │ │ │ └── v2/ │ │ │ │ │ ├── descriptors.go │ │ │ │ │ ├── doc.go │ │ │ │ │ ├── errors.go │ │ │ │ │ ├── routes.go │ │ │ │ │ └── urls.go │ │ │ │ ├── client/ │ │ │ │ │ ├── auth/ │ │ │ │ │ │ ├── api_version.go │ │ │ │ │ │ ├── authchallenge.go │ │ │ │ │ │ └── session.go │ │ │ │ │ ├── blob_writer.go │ │ │ │ │ ├── errors.go │ │ │ │ │ ├── repository.go │ │ │ │ │ └── transport/ │ │ │ │ │ ├── http_reader.go │ │ │ │ │ └── transport.go │ │ │ │ └── storage/ │ │ │ │ └── cache/ │ │ │ │ ├── cache.go │ │ │ │ ├── cachedblobdescriptorstore.go │ │ │ │ └── memory/ │ │ │ │ └── memory.go │ │ │ ├── registry.go │ │ │ ├── tags.go │ │ │ └── uuid/ │ │ │ └── uuid.go │ │ ├── docker/ │ │ │ ├── .dockerignore │ │ │ ├── .gitignore │ │ │ ├── Dockerfile.dapper │ │ │ ├── LICENSE │ │ │ ├── Makefile │ │ │ ├── NOTICE │ │ │ ├── builder/ │ │ │ │ ├── builder.go │ │ │ │ ├── context.go │ │ │ │ ├── context_unix.go │ │ │ │ ├── dockerignore/ │ │ │ │ │ └── dockerignore.go │ │ │ │ ├── dockerignore.go │ │ │ │ ├── git.go │ │ │ │ ├── remote.go │ │ │ │ └── tarsum.go │ │ │ ├── cliconfig/ │ │ │ │ └── config.go │ │ │ ├── daemon/ │ │ │ │ └── graphdriver/ │ │ │ │ ├── counter.go │ │ │ │ ├── driver.go │ │ │ │ ├── driver_freebsd.go │ │ │ │ ├── driver_linux.go │ │ │ │ ├── driver_unsupported.go │ │ │ │ ├── fsdiff.go │ │ │ │ ├── plugin.go │ │ │ │ ├── plugin_unsupported.go │ │ │ │ └── proxy.go │ │ │ ├── image/ │ │ │ │ ├── fs.go │ │ │ │ ├── image.go │ │ │ │ ├── rootfs.go │ │ │ │ ├── rootfs_unix.go │ │ │ │ ├── store.go │ │ │ │ └── v1/ │ │ │ │ └── imagev1.go │ │ │ ├── layer/ │ │ │ │ ├── empty.go │ │ │ │ ├── filestore.go │ │ │ │ ├── layer.go │ │ │ │ ├── layer_store.go │ │ │ │ ├── layer_unix.go │ │ │ │ ├── migration.go │ │ │ │ ├── mounted_layer.go │ │ │ │ └── ro_layer.go │ │ │ ├── opts/ │ │ │ │ ├── 
hosts.go │ │ │ │ ├── hosts_unix.go │ │ │ │ ├── ip.go │ │ │ │ ├── opts.go │ │ │ │ └── opts_unix.go │ │ │ ├── pkg/ │ │ │ │ ├── README.md │ │ │ │ ├── archive/ │ │ │ │ │ ├── README.md │ │ │ │ │ ├── archive.go │ │ │ │ │ ├── archive_unix.go │ │ │ │ │ ├── changes.go │ │ │ │ │ ├── changes_linux.go │ │ │ │ │ ├── changes_other.go │ │ │ │ │ ├── changes_unix.go │ │ │ │ │ ├── copy.go │ │ │ │ │ ├── copy_unix.go │ │ │ │ │ ├── diff.go │ │ │ │ │ ├── example_changes.go │ │ │ │ │ ├── time_linux.go │ │ │ │ │ ├── time_unsupported.go │ │ │ │ │ ├── whiteouts.go │ │ │ │ │ └── wrap.go │ │ │ │ ├── chrootarchive/ │ │ │ │ │ ├── archive.go │ │ │ │ │ ├── archive_unix.go │ │ │ │ │ ├── diff.go │ │ │ │ │ ├── diff_unix.go │ │ │ │ │ └── init_unix.go │ │ │ │ ├── fileutils/ │ │ │ │ │ ├── fileutils.go │ │ │ │ │ └── fileutils_unix.go │ │ │ │ ├── gitutils/ │ │ │ │ │ └── gitutils.go │ │ │ │ ├── homedir/ │ │ │ │ │ └── homedir.go │ │ │ │ ├── httputils/ │ │ │ │ │ ├── httputils.go │ │ │ │ │ ├── mimetype.go │ │ │ │ │ └── resumablerequestreader.go │ │ │ │ ├── idtools/ │ │ │ │ │ ├── idtools.go │ │ │ │ │ ├── idtools_unix.go │ │ │ │ │ ├── usergroupadd_linux.go │ │ │ │ │ └── usergroupadd_unsupported.go │ │ │ │ ├── ioutils/ │ │ │ │ │ ├── bytespipe.go │ │ │ │ │ ├── fmt.go │ │ │ │ │ ├── multireader.go │ │ │ │ │ ├── readers.go │ │ │ │ │ ├── scheduler.go │ │ │ │ │ ├── scheduler_gccgo.go │ │ │ │ │ ├── temp_unix.go │ │ │ │ │ ├── writeflusher.go │ │ │ │ │ └── writers.go │ │ │ │ ├── jsonlog/ │ │ │ │ │ ├── jsonlog.go │ │ │ │ │ ├── jsonlog_marshalling.go │ │ │ │ │ ├── jsonlogbytes.go │ │ │ │ │ └── time_marshalling.go │ │ │ │ ├── jsonmessage/ │ │ │ │ │ └── jsonmessage.go │ │ │ │ ├── mflag/ │ │ │ │ │ ├── LICENSE │ │ │ │ │ ├── README.md │ │ │ │ │ └── flag.go │ │ │ │ ├── mount/ │ │ │ │ │ ├── flags.go │ │ │ │ │ ├── flags_freebsd.go │ │ │ │ │ ├── flags_linux.go │ │ │ │ │ ├── flags_unsupported.go │ │ │ │ │ ├── mount.go │ │ │ │ │ ├── mounter_freebsd.go │ │ │ │ │ ├── mounter_linux.go │ │ │ │ │ ├── mounter_unsupported.go │ │ │ │ │ ├── 
mountinfo.go │ │ │ │ │ ├── mountinfo_freebsd.go │ │ │ │ │ ├── mountinfo_linux.go │ │ │ │ │ ├── mountinfo_unsupported.go │ │ │ │ │ └── sharedsubtree_linux.go │ │ │ │ ├── plugins/ │ │ │ │ │ ├── client.go │ │ │ │ │ ├── discovery.go │ │ │ │ │ ├── errors.go │ │ │ │ │ ├── plugins.go │ │ │ │ │ └── transport/ │ │ │ │ │ ├── http.go │ │ │ │ │ └── transport.go │ │ │ │ ├── pools/ │ │ │ │ │ └── pools.go │ │ │ │ ├── progress/ │ │ │ │ │ ├── progress.go │ │ │ │ │ └── progressreader.go │ │ │ │ ├── promise/ │ │ │ │ │ └── promise.go │ │ │ │ ├── random/ │ │ │ │ │ └── random.go │ │ │ │ ├── reexec/ │ │ │ │ │ ├── README.md │ │ │ │ │ ├── command_freebsd.go │ │ │ │ │ ├── command_linux.go │ │ │ │ │ ├── command_unsupported.go │ │ │ │ │ └── reexec.go │ │ │ │ ├── signal/ │ │ │ │ │ ├── README.md │ │ │ │ │ ├── signal.go │ │ │ │ │ ├── signal_darwin.go │ │ │ │ │ ├── signal_freebsd.go │ │ │ │ │ ├── signal_linux.go │ │ │ │ │ ├── signal_unix.go │ │ │ │ │ ├── signal_unsupported.go │ │ │ │ │ └── trap.go │ │ │ │ ├── stdcopy/ │ │ │ │ │ └── stdcopy.go │ │ │ │ ├── streamformatter/ │ │ │ │ │ └── streamformatter.go │ │ │ │ ├── stringid/ │ │ │ │ │ ├── README.md │ │ │ │ │ └── stringid.go │ │ │ │ ├── symlink/ │ │ │ │ │ ├── LICENSE.APACHE │ │ │ │ │ ├── LICENSE.BSD │ │ │ │ │ ├── README.md │ │ │ │ │ ├── fs.go │ │ │ │ │ └── fs_unix.go │ │ │ │ ├── system/ │ │ │ │ │ ├── chtimes.go │ │ │ │ │ ├── chtimes_unix.go │ │ │ │ │ ├── errors.go │ │ │ │ │ ├── filesys.go │ │ │ │ │ ├── lstat.go │ │ │ │ │ ├── meminfo.go │ │ │ │ │ ├── meminfo_linux.go │ │ │ │ │ ├── meminfo_unsupported.go │ │ │ │ │ ├── mknod.go │ │ │ │ │ ├── path_unix.go │ │ │ │ │ ├── stat.go │ │ │ │ │ ├── stat_freebsd.go │ │ │ │ │ ├── stat_linux.go │ │ │ │ │ ├── stat_openbsd.go │ │ │ │ │ ├── stat_solaris.go │ │ │ │ │ ├── stat_unsupported.go │ │ │ │ │ ├── syscall_unix.go │ │ │ │ │ ├── umask.go │ │ │ │ │ ├── utimes_darwin.go │ │ │ │ │ ├── utimes_freebsd.go │ │ │ │ │ ├── utimes_linux.go │ │ │ │ │ ├── utimes_unsupported.go │ │ │ │ │ ├── xattrs_linux.go │ │ │ │ │ └── 
xattrs_unsupported.go │ │ │ │ ├── tarsum/ │ │ │ │ │ ├── builder_context.go │ │ │ │ │ ├── fileinfosums.go │ │ │ │ │ ├── tarsum.go │ │ │ │ │ ├── tarsum_spec.md │ │ │ │ │ ├── versioning.go │ │ │ │ │ └── writercloser.go │ │ │ │ ├── term/ │ │ │ │ │ ├── ascii.go │ │ │ │ │ ├── tc_linux_cgo.go │ │ │ │ │ ├── tc_other.go │ │ │ │ │ ├── term.go │ │ │ │ │ ├── termios_darwin.go │ │ │ │ │ ├── termios_freebsd.go │ │ │ │ │ ├── termios_linux.go │ │ │ │ │ └── termios_openbsd.go │ │ │ │ ├── urlutil/ │ │ │ │ │ └── urlutil.go │ │ │ │ └── version/ │ │ │ │ └── version.go │ │ │ ├── reference/ │ │ │ │ ├── reference.go │ │ │ │ └── store.go │ │ │ ├── registry/ │ │ │ │ ├── auth.go │ │ │ │ ├── config.go │ │ │ │ ├── config_unix.go │ │ │ │ ├── endpoint_v1.go │ │ │ │ ├── reference.go │ │ │ │ ├── registry.go │ │ │ │ ├── service.go │ │ │ │ ├── service_v1.go │ │ │ │ ├── service_v2.go │ │ │ │ ├── session.go │ │ │ │ └── types.go │ │ │ ├── runconfig/ │ │ │ │ └── opts/ │ │ │ │ ├── envfile.go │ │ │ │ ├── opts.go │ │ │ │ ├── parse.go │ │ │ │ ├── throttledevice.go │ │ │ │ ├── ulimit.go │ │ │ │ └── weightdevice.go │ │ │ └── trash.conf │ │ ├── engine-api/ │ │ │ ├── .travis.yml │ │ │ ├── CHANGELOG.md │ │ │ ├── CONTRIBUTING.md │ │ │ ├── LICENSE │ │ │ ├── MAINTAINERS │ │ │ ├── Makefile │ │ │ ├── README.md │ │ │ ├── appveyor.yml │ │ │ ├── client/ │ │ │ │ ├── client.go │ │ │ │ ├── client_darwin.go │ │ │ │ ├── client_unix.go │ │ │ │ ├── client_windows.go │ │ │ │ ├── container_attach.go │ │ │ │ ├── container_commit.go │ │ │ │ ├── container_copy.go │ │ │ │ ├── container_create.go │ │ │ │ ├── container_diff.go │ │ │ │ ├── container_exec.go │ │ │ │ ├── container_export.go │ │ │ │ ├── container_inspect.go │ │ │ │ ├── container_kill.go │ │ │ │ ├── container_list.go │ │ │ │ ├── container_logs.go │ │ │ │ ├── container_pause.go │ │ │ │ ├── container_remove.go │ │ │ │ ├── container_rename.go │ │ │ │ ├── container_resize.go │ │ │ │ ├── container_restart.go │ │ │ │ ├── container_start.go │ │ │ │ ├── container_stats.go │ │ │ │ 
├── container_stop.go │ │ │ │ ├── container_top.go │ │ │ │ ├── container_unpause.go │ │ │ │ ├── container_update.go │ │ │ │ ├── container_wait.go │ │ │ │ ├── errors.go │ │ │ │ ├── events.go │ │ │ │ ├── hijack.go │ │ │ │ ├── image_build.go │ │ │ │ ├── image_create.go │ │ │ │ ├── image_history.go │ │ │ │ ├── image_import.go │ │ │ │ ├── image_inspect.go │ │ │ │ ├── image_list.go │ │ │ │ ├── image_load.go │ │ │ │ ├── image_pull.go │ │ │ │ ├── image_push.go │ │ │ │ ├── image_remove.go │ │ │ │ ├── image_save.go │ │ │ │ ├── image_search.go │ │ │ │ ├── image_tag.go │ │ │ │ ├── info.go │ │ │ │ ├── interface.go │ │ │ │ ├── login.go │ │ │ │ ├── network_connect.go │ │ │ │ ├── network_create.go │ │ │ │ ├── network_disconnect.go │ │ │ │ ├── network_inspect.go │ │ │ │ ├── network_list.go │ │ │ │ ├── network_remove.go │ │ │ │ ├── privileged.go │ │ │ │ ├── request.go │ │ │ │ ├── transport/ │ │ │ │ │ ├── cancellable/ │ │ │ │ │ │ ├── canceler.go │ │ │ │ │ │ ├── canceler_go14.go │ │ │ │ │ │ └── cancellable.go │ │ │ │ │ ├── client.go │ │ │ │ │ └── transport.go │ │ │ │ ├── version.go │ │ │ │ ├── volume_create.go │ │ │ │ ├── volume_inspect.go │ │ │ │ ├── volume_list.go │ │ │ │ └── volume_remove.go │ │ │ └── types/ │ │ │ ├── auth.go │ │ │ ├── blkiodev/ │ │ │ │ └── blkio.go │ │ │ ├── client.go │ │ │ ├── configs.go │ │ │ ├── container/ │ │ │ │ ├── config.go │ │ │ │ ├── host_config.go │ │ │ │ ├── hostconfig_unix.go │ │ │ │ └── hostconfig_windows.go │ │ │ ├── filters/ │ │ │ │ └── parse.go │ │ │ ├── network/ │ │ │ │ └── network.go │ │ │ ├── registry/ │ │ │ │ └── registry.go │ │ │ ├── seccomp.go │ │ │ ├── stats.go │ │ │ ├── strslice/ │ │ │ │ └── strslice.go │ │ │ ├── time/ │ │ │ │ └── timestamp.go │ │ │ └── types.go │ │ ├── go-connections/ │ │ │ ├── CONTRIBUTING.md │ │ │ ├── LICENSE │ │ │ ├── MAINTAINERS │ │ │ ├── README.md │ │ │ ├── circle.yml │ │ │ ├── nat/ │ │ │ │ ├── nat.go │ │ │ │ ├── parse.go │ │ │ │ └── sort.go │ │ │ ├── sockets/ │ │ │ │ ├── README.md │ │ │ │ ├── inmem_socket.go │ │ │ │ 
├── proxy.go │ │ │ │ ├── sockets.go │ │ │ │ ├── sockets_unix.go │ │ │ │ ├── sockets_windows.go │ │ │ │ ├── tcp_socket.go │ │ │ │ └── unix_socket.go │ │ │ └── tlsconfig/ │ │ │ ├── config.go │ │ │ ├── config_client_ciphers.go │ │ │ └── config_legacy_client_ciphers.go │ │ ├── go-units/ │ │ │ ├── LICENSE │ │ │ ├── README.md │ │ │ ├── circle.yml │ │ │ ├── duration.go │ │ │ ├── size.go │ │ │ └── ulimit.go │ │ ├── libcompose/ │ │ │ ├── .dockerignore │ │ │ ├── .gitignore │ │ │ ├── CHANGELOG.md │ │ │ ├── CONTRIBUTING.md │ │ │ ├── Dockerfile │ │ │ ├── Jenkinsfile │ │ │ ├── LICENSE │ │ │ ├── MAINTAINERS │ │ │ ├── Makefile │ │ │ ├── README.md │ │ │ ├── cli/ │ │ │ │ ├── app/ │ │ │ │ │ ├── app.go │ │ │ │ │ ├── types.go │ │ │ │ │ └── version.go │ │ │ │ ├── docker/ │ │ │ │ │ └── app/ │ │ │ │ │ ├── commands.go │ │ │ │ │ └── factory.go │ │ │ │ └── logger/ │ │ │ │ ├── color_logger.go │ │ │ │ └── colors.go │ │ │ ├── config/ │ │ │ │ ├── convert.go │ │ │ │ ├── hash.go │ │ │ │ ├── interpolation.go │ │ │ │ ├── merge.go │ │ │ │ ├── merge_v1.go │ │ │ │ ├── merge_v2.go │ │ │ │ ├── schema.go │ │ │ │ ├── schema_helpers.go │ │ │ │ ├── types.go │ │ │ │ ├── utils.go │ │ │ │ └── validation.go │ │ │ ├── docker/ │ │ │ │ ├── auth.go │ │ │ │ ├── builder/ │ │ │ │ │ └── builder.go │ │ │ │ ├── client/ │ │ │ │ │ └── client.go │ │ │ │ ├── container.go │ │ │ │ ├── context.go │ │ │ │ ├── convert.go │ │ │ │ ├── functions.go │ │ │ │ ├── image.go │ │ │ │ ├── name.go │ │ │ │ ├── project.go │ │ │ │ ├── service.go │ │ │ │ └── service_factory.go │ │ │ ├── labels/ │ │ │ │ └── labels.go │ │ │ ├── logger/ │ │ │ │ ├── null.go │ │ │ │ └── types.go │ │ │ ├── lookup/ │ │ │ │ ├── composable.go │ │ │ │ ├── envfile.go │ │ │ │ ├── file.go │ │ │ │ └── simple_env.go │ │ │ ├── project/ │ │ │ │ ├── client_factory.go │ │ │ │ ├── container.go │ │ │ │ ├── context.go │ │ │ │ ├── empty.go │ │ │ │ ├── events/ │ │ │ │ │ └── events.go │ │ │ │ ├── info.go │ │ │ │ ├── interface.go │ │ │ │ ├── listener.go │ │ │ │ ├── options/ │ │ │ │ │ └── 
types.go │ │ │ │ ├── project.go │ │ │ │ ├── service-wrapper.go │ │ │ │ ├── service.go │ │ │ │ └── utils.go │ │ │ ├── utils/ │ │ │ │ └── util.go │ │ │ ├── version/ │ │ │ │ └── version.go │ │ │ └── yaml/ │ │ │ └── types_yaml.go │ │ ├── libnetwork/ │ │ │ ├── .dockerignore │ │ │ ├── .gitignore │ │ │ ├── CHANGELOG.md │ │ │ ├── Dockerfile.build │ │ │ ├── LICENSE │ │ │ ├── MAINTAINERS │ │ │ ├── Makefile │ │ │ ├── README.md │ │ │ ├── ROADMAP.md │ │ │ ├── Vagrantfile │ │ │ ├── circle.yml │ │ │ ├── machines │ │ │ ├── resolvconf/ │ │ │ │ ├── README.md │ │ │ │ ├── dns/ │ │ │ │ │ └── resolvconf.go │ │ │ │ └── resolvconf.go │ │ │ └── wrapmake.sh │ │ └── machine/ │ │ ├── .dockerignore │ │ ├── .gitignore │ │ ├── .godir │ │ ├── .travis.yml │ │ ├── CHANGELOG.md │ │ ├── CONTRIBUTING.md │ │ ├── Dockerfile │ │ ├── LICENSE │ │ ├── MAINTAINERS │ │ ├── Makefile │ │ ├── README.md │ │ ├── ROADMAP.md │ │ ├── log/ │ │ │ ├── log.go │ │ │ └── terminal.go │ │ └── utils/ │ │ ├── b2d.go │ │ ├── certs.go │ │ └── utils.go │ ├── fatih/ │ │ └── structs/ │ │ ├── .gitignore │ │ ├── .travis.yml │ │ ├── LICENSE │ │ ├── README.md │ │ ├── field.go │ │ ├── structs.go │ │ └── tags.go │ ├── flynn/ │ │ └── go-shlex/ │ │ ├── COPYING │ │ ├── Makefile │ │ ├── README.md │ │ └── shlex.go │ ├── gorilla/ │ │ ├── context/ │ │ │ ├── .travis.yml │ │ │ ├── LICENSE │ │ │ ├── README.md │ │ │ ├── context.go │ │ │ └── doc.go │ │ └── mux/ │ │ ├── .travis.yml │ │ ├── LICENSE │ │ ├── README.md │ │ ├── doc.go │ │ ├── mux.go │ │ ├── regexp.go │ │ └── route.go │ ├── opencontainers/ │ │ └── runc/ │ │ ├── .gitignore │ │ ├── CONTRIBUTING.md │ │ ├── Dockerfile │ │ ├── LICENSE │ │ ├── MAINTAINERS │ │ ├── MAINTAINERS_GUIDE.md │ │ ├── Makefile │ │ ├── NOTICE │ │ ├── PRINCIPLES.md │ │ ├── README.md │ │ └── libcontainer/ │ │ ├── README.md │ │ ├── SPEC.md │ │ └── user/ │ │ ├── MAINTAINERS │ │ ├── lookup.go │ │ ├── lookup_unix.go │ │ ├── lookup_unsupported.go │ │ └── user.go │ ├── packethost/ │ │ └── packngo/ │ │ ├── .drone.yml │ │ ├── 
.gitignore │ │ ├── LICENSE.txt │ │ ├── README.md │ │ └── metadata/ │ │ └── metadata.go │ ├── pin/ │ │ └── tftp/ │ │ ├── .gitignore │ │ ├── .travis.yml │ │ ├── CONTRIBUTORS │ │ ├── LICENSE │ │ ├── README.md │ │ ├── backoff.go │ │ ├── client.go │ │ ├── netascii/ │ │ │ └── netascii.go │ │ ├── packet.go │ │ ├── receiver.go │ │ ├── sender.go │ │ └── server.go │ ├── pkg/ │ │ └── errors/ │ │ ├── .gitignore │ │ ├── .travis.yml │ │ ├── LICENSE │ │ ├── README.md │ │ ├── appveyor.yml │ │ ├── errors.go │ │ └── stack.go │ ├── pmezard/ │ │ └── go-difflib/ │ │ ├── .travis.yml │ │ ├── LICENSE │ │ ├── README.md │ │ └── difflib/ │ │ └── difflib.go │ ├── ryanuber/ │ │ └── go-glob/ │ │ ├── .travis.yml │ │ ├── LICENSE │ │ ├── README.md │ │ └── glob.go │ ├── sigma/ │ │ ├── bdoor/ │ │ │ ├── LICENSE │ │ │ ├── README │ │ │ └── check.go │ │ ├── vmw-guestinfo/ │ │ │ ├── CDDL.txt │ │ │ ├── LGPL.txt │ │ │ ├── LICENSE │ │ │ ├── README │ │ │ ├── rpcvmx/ │ │ │ │ └── rpcvmx.go │ │ │ └── vmcheck/ │ │ │ ├── vmcheck.go │ │ │ ├── vmcheck_386.s │ │ │ ├── vmcheck_amd64.s │ │ │ ├── vmcheck_general.go │ │ │ └── vmcheck_linux.go │ │ └── vmw-ovflib/ │ │ ├── LICENSE │ │ ├── README │ │ └── ovf.go │ ├── stretchr/ │ │ └── testify/ │ │ ├── .gitignore │ │ ├── .travis.yml │ │ ├── LICENCE.txt │ │ ├── README.md │ │ ├── assert/ │ │ │ ├── assertions.go │ │ │ ├── doc.go │ │ │ ├── errors.go │ │ │ ├── forward_assertions.go │ │ │ └── http_assertions.go │ │ └── require/ │ │ ├── doc.go │ │ ├── forward_requirements.go │ │ └── requirements.go │ ├── tredoe/ │ │ └── term/ │ │ ├── AUTHORS.md │ │ ├── CONTRIBUTORS.md │ │ ├── LICENSE-MPL.txt │ │ ├── README.md │ │ ├── doc.go │ │ ├── sys/ │ │ │ ├── doc.go │ │ │ ├── key_unix.go │ │ │ ├── sys_bsd.go │ │ │ ├── sys_linux.go │ │ │ ├── sys_unix.go │ │ │ ├── z-sys_darwin_386.go │ │ │ ├── z-sys_darwin_amd64.go │ │ │ ├── z-sys_freebsd.go │ │ │ ├── z-sys_linux.go │ │ │ ├── z-sys_netbsd.go │ │ │ └── z-sys_openbsd.go │ │ ├── term.go │ │ ├── term_unix.go │ │ └── util_unix.go │ ├── vbatts/ │ │ └── 
tar-split/ │ │ ├── .travis.yml │ │ ├── LICENSE │ │ ├── README.md │ │ ├── archive/ │ │ │ └── tar/ │ │ │ ├── common.go │ │ │ ├── reader.go │ │ │ ├── stat_atim.go │ │ │ ├── stat_atimespec.go │ │ │ ├── stat_unix.go │ │ │ └── writer.go │ │ └── tar/ │ │ ├── asm/ │ │ │ ├── README.md │ │ │ ├── assemble.go │ │ │ ├── disassemble.go │ │ │ └── doc.go │ │ └── storage/ │ │ ├── doc.go │ │ ├── entry.go │ │ ├── getter.go │ │ └── packer.go │ ├── vishvananda/ │ │ ├── netlink/ │ │ │ ├── .travis.yml │ │ │ ├── LICENSE │ │ │ ├── Makefile │ │ │ ├── README.md │ │ │ ├── addr.go │ │ │ ├── addr_linux.go │ │ │ ├── bpf_linux.go │ │ │ ├── bridge_linux.go │ │ │ ├── class.go │ │ │ ├── class_linux.go │ │ │ ├── conntrack_linux.go │ │ │ ├── conntrack_unspecified.go │ │ │ ├── filter.go │ │ │ ├── filter_linux.go │ │ │ ├── genetlink_linux.go │ │ │ ├── genetlink_unspecified.go │ │ │ ├── gtp_linux.go │ │ │ ├── handle_linux.go │ │ │ ├── handle_unspecified.go │ │ │ ├── link.go │ │ │ ├── link_linux.go │ │ │ ├── link_tuntap_linux.go │ │ │ ├── neigh.go │ │ │ ├── neigh_linux.go │ │ │ ├── netlink.go │ │ │ ├── netlink_linux.go │ │ │ ├── netlink_unspecified.go │ │ │ ├── nl/ │ │ │ │ ├── addr_linux.go │ │ │ │ ├── bridge_linux.go │ │ │ │ ├── conntrack_linux.go │ │ │ │ ├── genetlink_linux.go │ │ │ │ ├── link_linux.go │ │ │ │ ├── mpls_linux.go │ │ │ │ ├── nl_linux.go │ │ │ │ ├── nl_unspecified.go │ │ │ │ ├── route_linux.go │ │ │ │ ├── syscall.go │ │ │ │ ├── tc_linux.go │ │ │ │ ├── xfrm_linux.go │ │ │ │ ├── xfrm_monitor_linux.go │ │ │ │ ├── xfrm_policy_linux.go │ │ │ │ └── xfrm_state_linux.go │ │ │ ├── order.go │ │ │ ├── protinfo.go │ │ │ ├── protinfo_linux.go │ │ │ ├── qdisc.go │ │ │ ├── qdisc_linux.go │ │ │ ├── route.go │ │ │ ├── route_linux.go │ │ │ ├── route_unspecified.go │ │ │ ├── rule.go │ │ │ ├── rule_linux.go │ │ │ ├── socket.go │ │ │ ├── socket_linux.go │ │ │ ├── xfrm.go │ │ │ ├── xfrm_monitor_linux.go │ │ │ ├── xfrm_policy.go │ │ │ ├── xfrm_policy_linux.go │ │ │ ├── xfrm_state.go │ │ │ └── xfrm_state_linux.go 
│ │ └── netns/ │ │ ├── LICENSE │ │ ├── README.md │ │ ├── netns.go │ │ ├── netns_linux.go │ │ └── netns_unspecified.go │ ├── vmware/ │ │ └── vmw-guestinfo/ │ │ ├── LICENSE │ │ ├── README │ │ ├── bdoor/ │ │ │ ├── bdoor.go │ │ │ ├── bdoor_386.go │ │ │ ├── bdoor_386.s │ │ │ ├── bdoor_amd64.go │ │ │ ├── bdoor_amd64.s │ │ │ └── word.go │ │ ├── go.mod │ │ ├── message/ │ │ │ ├── log.go │ │ │ └── message.go │ │ └── rpcout/ │ │ └── rpcout.go │ └── xeipuuv/ │ ├── gojsonpointer/ │ │ ├── LICENSE-APACHE-2.0.txt │ │ ├── README.md │ │ └── pointer.go │ ├── gojsonreference/ │ │ ├── LICENSE-APACHE-2.0.txt │ │ ├── README.md │ │ └── reference.go │ └── gojsonschema/ │ ├── .gitignore │ ├── .travis.yml │ ├── LICENSE-APACHE-2.0.txt │ ├── README.md │ ├── errors.go │ ├── format_checkers.go │ ├── glide.yaml │ ├── internalLog.go │ ├── jsonContext.go │ ├── jsonLoader.go │ ├── locales.go │ ├── result.go │ ├── schema.go │ ├── schemaPool.go │ ├── schemaReferencePool.go │ ├── schemaType.go │ ├── subSchema.go │ ├── types.go │ ├── utils.go │ └── validation.go └── golang.org/ └── x/ ├── crypto/ │ ├── .gitattributes │ ├── .gitignore │ ├── AUTHORS │ ├── CONTRIBUTING.md │ ├── CONTRIBUTORS │ ├── LICENSE │ ├── PATENTS │ ├── README │ ├── codereview.cfg │ └── ssh/ │ └── terminal/ │ ├── terminal.go │ ├── util.go │ ├── util_bsd.go │ ├── util_linux.go │ └── util_windows.go ├── net/ │ ├── .gitattributes │ ├── .gitignore │ ├── AUTHORS │ ├── CONTRIBUTING.md │ ├── CONTRIBUTORS │ ├── LICENSE │ ├── PATENTS │ ├── README.md │ ├── codereview.cfg │ ├── context/ │ │ ├── context.go │ │ ├── go17.go │ │ ├── go19.go │ │ ├── pre_go17.go │ │ └── pre_go19.go │ ├── go.mod │ ├── go.sum │ ├── internal/ │ │ └── socks/ │ │ ├── client.go │ │ └── socks.go │ └── proxy/ │ ├── dial.go │ ├── direct.go │ ├── per_host.go │ ├── proxy.go │ └── socks5.go └── sys/ ├── .gitattributes ├── .gitignore ├── AUTHORS ├── CONTRIBUTING.md ├── CONTRIBUTORS ├── LICENSE ├── PATENTS ├── README.md ├── codereview.cfg ├── go.mod ├── internal/ │ └── unsafeheader/ 
│ └── unsafeheader.go ├── unix/ │ ├── .gitignore │ ├── README.md │ ├── affinity_linux.go │ ├── aliases.go │ ├── asm_aix_ppc64.s │ ├── asm_darwin_386.s │ ├── asm_darwin_amd64.s │ ├── asm_darwin_arm.s │ ├── asm_darwin_arm64.s │ ├── asm_dragonfly_amd64.s │ ├── asm_freebsd_386.s │ ├── asm_freebsd_amd64.s │ ├── asm_freebsd_arm.s │ ├── asm_freebsd_arm64.s │ ├── asm_linux_386.s │ ├── asm_linux_amd64.s │ ├── asm_linux_arm.s │ ├── asm_linux_arm64.s │ ├── asm_linux_mips64x.s │ ├── asm_linux_mipsx.s │ ├── asm_linux_ppc64x.s │ ├── asm_linux_riscv64.s │ ├── asm_linux_s390x.s │ ├── asm_netbsd_386.s │ ├── asm_netbsd_amd64.s │ ├── asm_netbsd_arm.s │ ├── asm_netbsd_arm64.s │ ├── asm_openbsd_386.s │ ├── asm_openbsd_amd64.s │ ├── asm_openbsd_arm.s │ ├── asm_openbsd_arm64.s │ ├── asm_solaris_amd64.s │ ├── bluetooth_linux.go │ ├── cap_freebsd.go │ ├── constants.go │ ├── dev_aix_ppc.go │ ├── dev_aix_ppc64.go │ ├── dev_darwin.go │ ├── dev_dragonfly.go │ ├── dev_freebsd.go │ ├── dev_linux.go │ ├── dev_netbsd.go │ ├── dev_openbsd.go │ ├── dirent.go │ ├── endian_big.go │ ├── endian_little.go │ ├── env_unix.go │ ├── errors_freebsd_386.go │ ├── errors_freebsd_amd64.go │ ├── errors_freebsd_arm.go │ ├── errors_freebsd_arm64.go │ ├── fcntl.go │ ├── fcntl_darwin.go │ ├── fcntl_linux_32bit.go │ ├── fdset.go │ ├── gccgo.go │ ├── gccgo_c.c │ ├── gccgo_linux_amd64.go │ ├── ioctl.go │ ├── mkall.sh │ ├── mkasm_darwin.go │ ├── mkerrors.sh │ ├── mkmerge.go │ ├── mkpost.go │ ├── mksyscall.go │ ├── mksyscall_aix_ppc.go │ ├── mksyscall_aix_ppc64.go │ ├── mksyscall_solaris.go │ ├── mksysctl_openbsd.go │ ├── mksysnum.go │ ├── pagesize_unix.go │ ├── pledge_openbsd.go │ ├── race.go │ ├── race0.go │ ├── readdirent_getdents.go │ ├── readdirent_getdirentries.go │ ├── sockcmsg_dragonfly.go │ ├── sockcmsg_linux.go │ ├── sockcmsg_unix.go │ ├── sockcmsg_unix_other.go │ ├── str.go │ ├── syscall.go │ ├── syscall_aix.go │ ├── syscall_aix_ppc.go │ ├── syscall_aix_ppc64.go │ ├── syscall_bsd.go │ ├── syscall_darwin.1_12.go 
│ ├── syscall_darwin.1_13.go │ ├── syscall_darwin.go │ ├── syscall_darwin_386.1_11.go │ ├── syscall_darwin_386.go │ ├── syscall_darwin_amd64.1_11.go │ ├── syscall_darwin_amd64.go │ ├── syscall_darwin_arm.1_11.go │ ├── syscall_darwin_arm.go │ ├── syscall_darwin_arm64.1_11.go │ ├── syscall_darwin_arm64.go │ ├── syscall_darwin_libSystem.go │ ├── syscall_dragonfly.go │ ├── syscall_dragonfly_amd64.go │ ├── syscall_freebsd.go │ ├── syscall_freebsd_386.go │ ├── syscall_freebsd_amd64.go │ ├── syscall_freebsd_arm.go │ ├── syscall_freebsd_arm64.go │ ├── syscall_illumos.go │ ├── syscall_linux.go │ ├── syscall_linux_386.go │ ├── syscall_linux_amd64.go │ ├── syscall_linux_amd64_gc.go │ ├── syscall_linux_arm.go │ ├── syscall_linux_arm64.go │ ├── syscall_linux_gc.go │ ├── syscall_linux_gc_386.go │ ├── syscall_linux_gccgo_386.go │ ├── syscall_linux_gccgo_arm.go │ ├── syscall_linux_mips64x.go │ ├── syscall_linux_mipsx.go │ ├── syscall_linux_ppc64x.go │ ├── syscall_linux_riscv64.go │ ├── syscall_linux_s390x.go │ ├── syscall_linux_sparc64.go │ ├── syscall_netbsd.go │ ├── syscall_netbsd_386.go │ ├── syscall_netbsd_amd64.go │ ├── syscall_netbsd_arm.go │ ├── syscall_netbsd_arm64.go │ ├── syscall_openbsd.go │ ├── syscall_openbsd_386.go │ ├── syscall_openbsd_amd64.go │ ├── syscall_openbsd_arm.go │ ├── syscall_openbsd_arm64.go │ ├── syscall_solaris.go │ ├── syscall_solaris_amd64.go │ ├── syscall_unix.go │ ├── syscall_unix_gc.go │ ├── syscall_unix_gc_ppc64x.go │ ├── timestruct.go │ ├── types_aix.go │ ├── types_darwin.go │ ├── types_dragonfly.go │ ├── types_freebsd.go │ ├── types_netbsd.go │ ├── types_openbsd.go │ ├── types_solaris.go │ ├── unveil_openbsd.go │ ├── xattr_bsd.go │ ├── zerrors_aix_ppc.go │ ├── zerrors_aix_ppc64.go │ ├── zerrors_darwin_386.go │ ├── zerrors_darwin_amd64.go │ ├── zerrors_darwin_arm.go │ ├── zerrors_darwin_arm64.go │ ├── zerrors_dragonfly_amd64.go │ ├── zerrors_freebsd_386.go │ ├── zerrors_freebsd_amd64.go │ ├── zerrors_freebsd_arm.go │ ├── zerrors_freebsd_arm64.go 
│ ├── zerrors_linux.go │ ├── zerrors_linux_386.go │ ├── zerrors_linux_amd64.go │ ├── zerrors_linux_arm.go │ ├── zerrors_linux_arm64.go │ ├── zerrors_linux_mips.go │ ├── zerrors_linux_mips64.go │ ├── zerrors_linux_mips64le.go │ ├── zerrors_linux_mipsle.go │ ├── zerrors_linux_ppc64.go │ ├── zerrors_linux_ppc64le.go │ ├── zerrors_linux_riscv64.go │ ├── zerrors_linux_s390x.go │ ├── zerrors_linux_sparc64.go │ ├── zerrors_netbsd_386.go │ ├── zerrors_netbsd_amd64.go │ ├── zerrors_netbsd_arm.go │ ├── zerrors_netbsd_arm64.go │ ├── zerrors_openbsd_386.go │ ├── zerrors_openbsd_amd64.go │ ├── zerrors_openbsd_arm.go │ ├── zerrors_openbsd_arm64.go │ ├── zerrors_solaris_amd64.go │ ├── zptrace_armnn_linux.go │ ├── zptrace_linux_arm64.go │ ├── zptrace_mipsnn_linux.go │ ├── zptrace_mipsnnle_linux.go │ ├── zptrace_x86_linux.go │ ├── zsyscall_aix_ppc.go │ ├── zsyscall_aix_ppc64.go │ ├── zsyscall_aix_ppc64_gc.go │ ├── zsyscall_aix_ppc64_gccgo.go │ ├── zsyscall_darwin_386.1_11.go │ ├── zsyscall_darwin_386.1_13.go │ ├── zsyscall_darwin_386.1_13.s │ ├── zsyscall_darwin_386.go │ ├── zsyscall_darwin_386.s │ ├── zsyscall_darwin_amd64.1_11.go │ ├── zsyscall_darwin_amd64.1_13.go │ ├── zsyscall_darwin_amd64.1_13.s │ ├── zsyscall_darwin_amd64.go │ ├── zsyscall_darwin_amd64.s │ ├── zsyscall_darwin_arm.1_11.go │ ├── zsyscall_darwin_arm.1_13.go │ ├── zsyscall_darwin_arm.1_13.s │ ├── zsyscall_darwin_arm.go │ ├── zsyscall_darwin_arm.s │ ├── zsyscall_darwin_arm64.1_11.go │ ├── zsyscall_darwin_arm64.1_13.go │ ├── zsyscall_darwin_arm64.1_13.s │ ├── zsyscall_darwin_arm64.go │ ├── zsyscall_darwin_arm64.s │ ├── zsyscall_dragonfly_amd64.go │ ├── zsyscall_freebsd_386.go │ ├── zsyscall_freebsd_amd64.go │ ├── zsyscall_freebsd_arm.go │ ├── zsyscall_freebsd_arm64.go │ ├── zsyscall_illumos_amd64.go │ ├── zsyscall_linux.go │ ├── zsyscall_linux_386.go │ ├── zsyscall_linux_amd64.go │ ├── zsyscall_linux_arm.go │ ├── zsyscall_linux_arm64.go │ ├── zsyscall_linux_mips.go │ ├── zsyscall_linux_mips64.go │ ├── 
zsyscall_linux_mips64le.go │ ├── zsyscall_linux_mipsle.go │ ├── zsyscall_linux_ppc64.go │ ├── zsyscall_linux_ppc64le.go │ ├── zsyscall_linux_riscv64.go │ ├── zsyscall_linux_s390x.go │ ├── zsyscall_linux_sparc64.go │ ├── zsyscall_netbsd_386.go │ ├── zsyscall_netbsd_amd64.go │ ├── zsyscall_netbsd_arm.go │ ├── zsyscall_netbsd_arm64.go │ ├── zsyscall_openbsd_386.go │ ├── zsyscall_openbsd_amd64.go │ ├── zsyscall_openbsd_arm.go │ ├── zsyscall_openbsd_arm64.go │ ├── zsyscall_solaris_amd64.go │ ├── zsysctl_openbsd_386.go │ ├── zsysctl_openbsd_amd64.go │ ├── zsysctl_openbsd_arm.go │ ├── zsysctl_openbsd_arm64.go │ ├── zsysnum_darwin_386.go │ ├── zsysnum_darwin_amd64.go │ ├── zsysnum_darwin_arm.go │ ├── zsysnum_darwin_arm64.go │ ├── zsysnum_dragonfly_amd64.go │ ├── zsysnum_freebsd_386.go │ ├── zsysnum_freebsd_amd64.go │ ├── zsysnum_freebsd_arm.go │ ├── zsysnum_freebsd_arm64.go │ ├── zsysnum_linux_386.go │ ├── zsysnum_linux_amd64.go │ ├── zsysnum_linux_arm.go │ ├── zsysnum_linux_arm64.go │ ├── zsysnum_linux_mips.go │ ├── zsysnum_linux_mips64.go │ ├── zsysnum_linux_mips64le.go │ ├── zsysnum_linux_mipsle.go │ ├── zsysnum_linux_ppc64.go │ ├── zsysnum_linux_ppc64le.go │ ├── zsysnum_linux_riscv64.go │ ├── zsysnum_linux_s390x.go │ ├── zsysnum_linux_sparc64.go │ ├── zsysnum_netbsd_386.go │ ├── zsysnum_netbsd_amd64.go │ ├── zsysnum_netbsd_arm.go │ ├── zsysnum_netbsd_arm64.go │ ├── zsysnum_openbsd_386.go │ ├── zsysnum_openbsd_amd64.go │ ├── zsysnum_openbsd_arm.go │ ├── zsysnum_openbsd_arm64.go │ ├── ztypes_aix_ppc.go │ ├── ztypes_aix_ppc64.go │ ├── ztypes_darwin_386.go │ ├── ztypes_darwin_amd64.go │ ├── ztypes_darwin_arm.go │ ├── ztypes_darwin_arm64.go │ ├── ztypes_dragonfly_amd64.go │ ├── ztypes_freebsd_386.go │ ├── ztypes_freebsd_amd64.go │ ├── ztypes_freebsd_arm.go │ ├── ztypes_freebsd_arm64.go │ ├── ztypes_linux.go │ ├── ztypes_linux_386.go │ ├── ztypes_linux_amd64.go │ ├── ztypes_linux_arm.go │ ├── ztypes_linux_arm64.go │ ├── ztypes_linux_mips.go │ ├── ztypes_linux_mips64.go │ ├── 
ztypes_linux_mips64le.go │ ├── ztypes_linux_mipsle.go │ ├── ztypes_linux_ppc64.go │ ├── ztypes_linux_ppc64le.go │ ├── ztypes_linux_riscv64.go │ ├── ztypes_linux_s390x.go │ ├── ztypes_linux_sparc64.go │ ├── ztypes_netbsd_386.go │ ├── ztypes_netbsd_amd64.go │ ├── ztypes_netbsd_arm.go │ ├── ztypes_netbsd_arm64.go │ ├── ztypes_openbsd_386.go │ ├── ztypes_openbsd_amd64.go │ ├── ztypes_openbsd_arm.go │ ├── ztypes_openbsd_arm64.go │ └── ztypes_solaris_amd64.go └── windows/ ├── aliases.go ├── dll_windows.go ├── empty.s ├── env_windows.go ├── eventlog.go ├── exec_windows.go ├── memory_windows.go ├── mkerrors.bash ├── mkknownfolderids.bash ├── mksyscall.go ├── race.go ├── race0.go ├── security_windows.go ├── service.go ├── str.go ├── syscall.go ├── syscall_windows.go ├── types_windows.go ├── types_windows_386.go ├── types_windows_amd64.go ├── types_windows_arm.go ├── zerrors_windows.go ├── zknownfolderids_windows.go └── zsyscall_windows.go ================================================ FILE CONTENTS ================================================ ================================================ FILE: .dockerignore ================================================ .DS_Store .idea .trash-cache bin state build images/*/build scripts/images/*/dist/ dist tests/integration/.venv* tests/integration/.tox */*/*/*.pyc */*/*/__pycache__ .trash-cache vendor/*/*/*/.git tmp docs/_site ================================================ FILE: .github/ISSUE_TEMPLATE.md ================================================ **BurmillaOS Version: (ros os version)** **Where are you running BurmillaOS? (docker-machine, AWS, GCE, baremetal, etc.)** **Which processor architecture you are using?** **Do you use some extra hardware? 
(GPU, etc)?**

**Which console do you use (default, ubuntu, centos, etc.)?**

**Do you use some service(s) which are not enabled by default?**

**Have you installed some extra tools to the console?**

**Do you use some other customizations?**

**Please share a copy of your cloud-init (remember to remove all sensitive data first)**

```yaml
```

================================================
FILE: .github/workflows/create-release.yml
================================================
name: release

on:
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: '0'
      - name: Install github-release
        run: |
          sudo wget https://github.com/github-release/github-release/releases/download/v0.9.0/linux-amd64-github-release.bz2 -O /usr/local/bin/github-release.bz2
          sudo bunzip2 /usr/local/bin/github-release.bz2
          sudo chmod 0755 /usr/local/bin/github-release
      - name: Build OS
        run: |
          export VERSION=$(git describe --exact-match --tags $(git log -n1 --pretty='%h'))
          if [ -z "$VERSION" ]; then
            echo "Build is not started from tag. Will exit..."
            exit 1
          fi
          export OS_FIRMWARE=${{ github.event.inputs.firmware }}
          export ARCH=amd64
          make release
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Publish release
        run: ${PWD}/dist/publish.sh
        env:
          GITHUB_TOKEN: ${{ secrets.OS_RELEASE_TOKEN }}

================================================
FILE: .github/workflows/pull-request-validation.yml
================================================
name: PR

on: pull_request

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Test with dapper
        run: |
          make pr-validation

================================================
FILE: .gitignore
================================================
.DS_Store
/assets/ca.crt
/state
/bin
/build
/dist
/gopath
/images/*/build
/scripts/images/vmware/assets
.dockerfile
*.swp
/tests/integration/MANIFEST
/tests/integration/.venv*
/tests/integration/.tox
/tests/integration/.idea
*.pyc
__pycache__
/.dapper
/.trash-cache
/trash.lock
.idea
.trash-conf
/Dockerfile.dapper*
!/Dockerfile.dapper
scripts/images/raspberry-pi-hypriot64/Dockerfile.dapper*

================================================
FILE: Dockerfile.dapper
================================================
FROM ubuntu:bionic
# FROM arm64=arm64v8/ubuntu:bionic

# get the apt-cacher proxy set
ARG APTPROXY=
ARG APT_ARCHIVE_SOURCE="archive.ubuntu.com"
RUN echo "Acquire::http { Proxy \"$APTPROXY\"; };" >> /etc/apt/apt.conf.d/01proxy \
    && cat /etc/apt/apt.conf.d/01proxy \
    && sed -i "s|archive.ubuntu.com|${APT_ARCHIVE_SOURCE}|" /etc/apt/sources.list \
    && cat /etc/apt/sources.list \
    && apt-get update \
    && apt-get install -y --no-install-recommends \
        build-essential \
        ca-certificates \
        cpio \
        curl \
        dosfstools \
        gccgo \
        genisoimage \
        gettext \
        git \
        isolinux \
        less \
        libblkid-dev \
        libmount-dev \
        libselinux1-dev \
        locales \
        module-init-tools \
        mtools \
        openssh-client \
        pkg-config \
        qemu \
        qemu-kvm \
        rsync \
        sudo \
        syslinux-common
        \
        vim \
        wget \
        xorriso \
        xz-utils \
        telnet

########## Dapper Configuration #####################

ENV DAPPER_ENV VERSION DEV_BUILD RUNTEST DEBUG APTPROXY ENGINE_REGISTRY_MIRROR KERNEL_CHECK APPEND_SYSTEM_IMAGES APPEND_USER_IMAGES
ENV DAPPER_DOCKER_SOCKET true
ENV DAPPER_SOURCE /go/src/github.com/burmilla/os
ENV DAPPER_OUTPUT ./bin ./dist ./build/initrd ./build/kernel
ENV DAPPER_RUN_ARGS --privileged
ENV TRASH_CACHE ${DAPPER_SOURCE}/.trash-cache
ENV SHELL /bin/bash
WORKDIR ${DAPPER_SOURCE}

########## General Configuration #####################
ARG DAPPER_HOST_ARCH=amd64
ARG HOST_ARCH=${DAPPER_HOST_ARCH}
ARG ARCH=${HOST_ARCH}
ARG OS_REPO=burmilla
ARG HOSTNAME_DEFAULT=burmilla
ARG DISTRIB_ID=BurmillaOS

ARG KERNEL_VERSION=5.10.248-burmilla
ARG KERNEL_URL_amd64=https://github.com/burmilla/os-kernel/releases/download/v${KERNEL_VERSION}/linux-${KERNEL_VERSION}-x86.tar.gz
ARG KERNEL_URL_arm64=https://github.com/burmilla/os-kernel/releases/download/v${KERNEL_VERSION}/linux-${KERNEL_VERSION}-arm64.tar.gz

ARG BUILD_DOCKER_URL_amd64=https://download.docker.com/linux/static/stable/x86_64/docker-26.1.4.tgz
ARG BUILD_DOCKER_URL_arm64=https://download.docker.com/linux/static/stable/aarch64/docker-26.1.4.tgz

ARG OS_RELEASES_YML=https://raw.githubusercontent.com/burmilla/releases/v2.0.x
ARG OS_SERVICES_REPO=https://raw.githubusercontent.com/${OS_REPO}/os-services
ARG IMAGE_NAME=${OS_REPO}/os
ARG OS_CONSOLE=default
ARG OS_AUTOFORMAT=false
ARG OS_FIRMWARE=true

ARG OS_BASE_URL_amd64=https://github.com/burmilla/os-base/releases/download/v2023.05-1/os-base_amd64.tar.xz
ARG OS_BASE_URL_arm64=https://github.com/burmilla/os-base/releases/download/v2023.05-1/os-base_arm64.tar.xz
ARG OS_INITRD_BASE_URL_amd64=https://github.com/burmilla/os-initrd-base/releases/download/v2023.02.10-2/os-initrd-base-amd64.tar.gz
ARG OS_INITRD_BASE_URL_arm64=https://github.com/burmilla/os-initrd-base/releases/download/v2023.02.10-2/os-initrd-base-arm64.tar.gz

ARG SYSTEM_DOCKER_VERSION=17.06.107
ARG SYSTEM_DOCKER_URL_amd64=https://github.com/burmilla/os-system-docker/releases/download/${SYSTEM_DOCKER_VERSION}/docker-amd64-${SYSTEM_DOCKER_VERSION}.tgz
ARG SYSTEM_DOCKER_URL_arm64=https://github.com/burmilla/os-system-docker/releases/download/${SYSTEM_DOCKER_VERSION}/docker-arm64-${SYSTEM_DOCKER_VERSION}.tgz

ARG AZURE_SERVICE=false
ARG PROXMOXVE_SERVICE=false
ARG SKIP_BUILD=false

######################################################

# Set up environment and export all ARGS as ENV
ENV ARCH=${ARCH} \
    HOST_ARCH=${HOST_ARCH} \
    XZ_DEFAULTS="-T0"
ENV BUILD_DOCKER_URL=BUILD_DOCKER_URL_${ARCH} \
    BUILD_DOCKER_URL_amd64=${BUILD_DOCKER_URL_amd64} \
    BUILD_DOCKER_URL_arm64=${BUILD_DOCKER_URL_arm64} \
    DAPPER_HOST_ARCH=${DAPPER_HOST_ARCH} \
    DISTRIB_ID=${DISTRIB_ID} \
    DOWNLOADS=/usr/src/downloads \
    GOPATH=/go \
    GO_VERSION=1.19.5 \
    GO111MODULE=off \
    GOARCH=$ARCH \
    HOSTNAME_DEFAULT=${HOSTNAME_DEFAULT} \
    IMAGE_NAME=${IMAGE_NAME} \
    KERNEL_VERSION=${KERNEL_VERSION} \
    KERNEL_URL=KERNEL_URL_${ARCH} \
    KERNEL_URL_amd64=${KERNEL_URL_amd64} \
    KERNEL_URL_arm64=${KERNEL_URL_arm64} \
    OS_BASE_URL=OS_BASE_URL_${ARCH} \
    OS_BASE_URL_amd64=${OS_BASE_URL_amd64} \
    OS_BASE_URL_arm64=${OS_BASE_URL_arm64} \
    OS_INITRD_BASE_URL=OS_INITRD_BASE_URL_${ARCH} \
    OS_INITRD_BASE_URL_amd64=${OS_INITRD_BASE_URL_amd64} \
    OS_INITRD_BASE_URL_arm64=${OS_INITRD_BASE_URL_arm64} \
    OS_RELEASES_YML=${OS_RELEASES_YML} \
    OS_REPO=${OS_REPO} \
    OS_SERVICES_REPO=${OS_SERVICES_REPO} \
    OS_CONSOLE=${OS_CONSOLE} \
    OS_AUTOFORMAT=${OS_AUTOFORMAT} \
    OS_FIRMWARE=${OS_FIRMWARE} \
    REPO_VERSION=master \
    SYSTEM_DOCKER_URL=SYSTEM_DOCKER_URL_${ARCH} \
    SYSTEM_DOCKER_URL_amd64=${SYSTEM_DOCKER_URL_amd64} \
    SYSTEM_DOCKER_URL_arm64=${SYSTEM_DOCKER_URL_arm64} \
    AZURE_SERVICE=${AZURE_SERVICE} \
    PROXMOXVE_SERVICE=${PROXMOXVE_SERVICE} \
    SKIP_BUILD=${SKIP_BUILD}
ENV PATH=${GOPATH}/bin:/usr/local/go/bin:$PATH
ENV GO111MODULE=off

RUN mkdir -p ${DOWNLOADS}

# Download kernel
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN echo "... Downloading ${!KERNEL_URL}"; \
    if [ "${!KERNEL_URL}" != "skip" ]; then \
        curl -fL ${!KERNEL_URL} > ${DOWNLOADS}/kernel.tar.gz \
    ;fi

# Install Go
RUN curl -L https://dl.google.com/go/go${GO_VERSION}.linux-${HOST_ARCH}.tar.gz | tar -xzf - -C /usr/local && \
    go get github.com/burmilla/trash

# Install Host Docker
RUN curl -fL ${!BUILD_DOCKER_URL} > /tmp/docker.tgz && \
    tar zxvf /tmp/docker.tgz --strip-components=1 -C /usr/bin/ && \
    chmod +x /usr/bin/docker

# Install dapper
RUN curl -sL https://releases.rancher.com/dapper/v0.5.4/dapper-`uname -s`-`uname -m | sed 's/arm.*/arm/'` > /usr/bin/dapper && \
    chmod +x /usr/bin/dapper

RUN cd ${DOWNLOADS} && \
    curl -pfL ${!OS_BASE_URL} | tar xvJf -

ENTRYPOINT ["./scripts/entry"]
CMD ["ci"]

================================================
FILE: LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. 
Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. 
Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. 
END OF TERMS AND CONDITIONS

================================================
FILE: Makefile
================================================
TARGETS := $(shell ls scripts | grep -vE 'clean|run|help|release*|build-moby|run-moby')

.dapper:
	@echo Downloading dapper
	@curl -sL https://releases.rancher.com/dapper/latest/dapper-`uname -s`-`uname -m|sed 's/v7l//'` > .dapper.tmp
	@@chmod +x .dapper.tmp
	@./.dapper.tmp -v
	@mv .dapper.tmp .dapper

$(TARGETS): .dapper
	./.dapper $@

pr-validation: .dapper
	ARCH="amd64" \
	KERNEL_URL_amd64="skip" \
	SKIP_BUILD="true" \
	./.dapper ci

trash: .dapper
	./.dapper -m bind trash

trash-keep: .dapper
	./.dapper -m bind trash -k

deps: trash

build/initrd/.id: .dapper
	./.dapper prepare

run: build/initrd/.id .dapper
	./.dapper -m bind build-target
	./scripts/run

build-moby:
	./scripts/build-moby

run-moby:
	./scripts/run-moby

shell-bind: .dapper
	./.dapper -m bind -s

clean:
	@./scripts/clean

release: .dapper release-build

release-build:
	mkdir -p dist
	./.dapper release

rpi64: .dapper
	./scripts/release-rpi64

vmware: .dapper
	mkdir -p dist
	OS_FIRMWARE="false" \
	APPEND_SYSTEM_IMAGES="burmilla/os-openvmtools:11.2.0-5" \
	./.dapper release-vmware

hyperv: .dapper
	mkdir -p dist
	OS_FIRMWARE="false" \
	APPEND_SYSTEM_IMAGES="burmilla/os-hypervvmtools:v4.14.206-burmilla-1" \
	./.dapper release-hyperv

azurebase: .dapper
	mkdir -p dist
	AZURE_SERVICE="true" \
	OS_FIRMWARE="false" \
	APPEND_SYSTEM_IMAGES="burmilla/os-hypervvmtools:v4.14.206-burmilla-1 burmilla/os-waagent:v2.2.49.2-1" \
	./.dapper release-azurebase

4glte: .dapper
	mkdir -p dist
	APPEND_SYSTEM_IMAGES="burmilla/os-modemmanager:v1.6.4-1" \
	./.dapper release-4glte

proxmoxve: .dapper
	mkdir -p dist
	PROXMOXVE_SERVICE="true" \
	OS_FIRMWARE="false" \
	APPEND_SYSTEM_IMAGES="burmilla/os-qemuguestagent:v3.1.0-1" \
	./.dapper release-proxmoxve

help:
	@./scripts/help

.DEFAULT_GOAL := default

.PHONY: $(TARGETS)

================================================
FILE: README.md
================================================
# BurmillaOS
BurmillaOS is the successor of [RancherOS](//github.com/rancher/os), which reached its end of life.

![GitHub release](https://img.shields.io/github/v/release/burmilla/os.svg)
[![Docker Pulls](https://img.shields.io/docker/pulls/burmilla/os.svg)](https://store.docker.com/community/images/burmilla/os)
[![Go Report Card](https://goreportcard.com/badge/github.com/burmilla/os)](https://goreportcard.com/report/github.com/burmilla/os)

The smallest, easiest way to run Docker in production at scale. Everything in BurmillaOS is a container managed by Docker. This includes system services such as udev and rsyslog. BurmillaOS includes only the bare minimum amount of software needed to run Docker. This keeps the binary download of BurmillaOS very small. Everything else can be pulled in dynamically through Docker.

## How this works

Everything in BurmillaOS is a Docker container. We accomplish this by launching two instances of Docker. One is what we call the system Docker, which runs as the first process. System Docker then launches a container that runs the user Docker. The user Docker is then the instance that is primarily used to create containers. We created this separation because it seemed logical, and also because it would be really bad if somebody ran `docker rm -f $(docker ps -qa)` and deleted the entire OS.

![How it works](./howitworks.png "How it works")

## Documentation for BurmillaOS

Please refer to our [BurmillaOS Documentation](https://burmilla.github.io) website to read all about BurmillaOS. It has detailed information on how BurmillaOS works, getting started, and other details.

Please submit any **BurmillaOS** bugs, issues, and feature requests to [burmilla/os](//github.com/burmilla/os/issues).

## License

Copyright (c) 2020-2024 [BurmillaOS community](https://burmillaos.org)
Copyright (c) 2014-2020 [Rancher Labs, Inc.](http://rancher.com)

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0) Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ================================================ FILE: assets/rancher.key ================================================ -----BEGIN RSA PRIVATE KEY----- MIIEowIBAAKCAQEAvOcPbLWcoi0Kfw5FTun6sIkoWHI9QpSnbQqoB7X/jy6SZBUX khbvMttcvnr9PYLEjEUa4xe8rdKVB1es53EIVXqrGbYHOVmxC2NgzmFBpkZ/wgrz 216L6Aa0I0qK6pQZqbj8LErWC6/dl5/lVbDDUlCHoB2Ntg6YRmwyhvOb6ygfB8VM RiA8RQXbP7hPBYUsvsbKMk/41GQyuqYKth5xpeg/NVYiJSnnKTqVpVtwfn3mvfQA zcmTVbw82xxOaXCN1UcPLufVpqVjlIE+qmnDVQApfBqqQb0JPLMOzv1/Q0mRwimn 3g/RuZPhmGwFM/dnylq8f5Tl31Fz0t/nPHUSBQIDAQABAoIBAGS4EhpVTvmNaF5M PpoP2TFNQCzAZHdeiVJzbxoFaQhvvXANau7iuZD1MyMAsouccK2VnvtcSaaoc/th PPh95QKmkBn6Wymx79rxlskTRAyi5DWS32ikpZYGFQAIG79tTa2XyyTWlf/POihB AedJgysdcuLlPwzGBVzvDZW0x/p+Ejs+etW0QBb1swcqheM9cc6RBoF/aLPyUK4i 1ztVuzJTvTTV16xTNF93XFS177Y6toXZEaCBpuMg/XX0y0Fj4iAkSIqJoEV6MKeI SqQad/sVsLTwCsW+/so7jfRWtm5xRJOtNxpSUrGrNYuRBUr4VlXNZT4TOOS9BFEF AyTSBcECgYEA+9wnSZjEFgz/8x/xsyPCsKBhCu6nfV/mxVgTrRXuMtpzujAnKXsf jEh1vtKH9UpbwM+gRYxZL+ZVjB2uE523hoDxqDfpVnWfItNs4OU4qougqUHBdsZe 0t0Xjyl/17f7g7BWMXSWhTcoDlirGHtjvIDh0CXVfWvtcLaiQj0dAfUCgYEAwAH6 JPZotxue8bUyglCrOyg3P0G9QgeQSabbCCKDyiGzKYTXqx1vxEZ+0RCKSg0D/spK 2x7V0wearEOX/rCuQVw2r4oltUmbmq+BHYnXXz0hM0TMJs1BPZhvMhPGWq0lm1WL NKfAOU64hZQAPwf3Z+3B5jywQfOmwssOfAXr9dECgYBroKrRUo0I90kxNkdtTCzY mdCegVnlw+O0FW1jG+oMpTmrKQSzP0A+DID0qLcc5UfMX22YCt/aDk4kcFKBY3aX 7eZXAn2eSulUUpFGke3jQ4PGkKkB/sdqyLxWm19caez7W5GZ1L618toVN2L2NVRr q4/UCTbwP/zZm9I/CCqrOQKBgQClQk5hd/BDAbP5B+L0RKhMX13FxTg217m5mrJU uxhBZmYFK0BRGCH1hlNqb9kGyVMR/l0VYeHaI2ZeNENjRACHYu3ygm3YLgWOytXP 
ba+AWmXz8ZfhIbKwaD30lQ6ZRwPiQWtyI5wP9xBccDkSBzJLMlk8aCmwahyy9gB+ gL5JsQKBgBT1ILAQvK2IGmwHFvUApXruARIvU4lrMQJ9tpYiPtpSNAPP73jUac68 thh3zQYJfYDQbdbwnF41X6WPvbYwb7uH5PG/T2A78YSZQyMpCk+enaf8o0dOZEAH jhnrn3KQFCRQXRUfm9O6N+S04S4uXT8++vlZucW7jEq8GW0nYxj/ -----END RSA PRIVATE KEY----- ================================================ FILE: assets/rancher.key.pub ================================================ ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC85w9stZyiLQp/DkVO6fqwiShYcj1ClKdtCqgHtf+PLpJkFReSFu8y21y+ev09gsSMRRrjF7yt0pUHV6zncQhVeqsZtgc5WbELY2DOYUGmRn/CCvPbXovoBrQjSorqlBmpuPwsStYLr92Xn+VVsMNSUIegHY22DphGbDKG85vrKB8HxUxGIDxFBds/uE8FhSy+xsoyT/jUZDK6pgq2HnGl6D81ViIlKecpOpWlW3B+fea99ADNyZNVvDzbHE5pcI3VRw8u59WmpWOUgT6qacNVACl8GqpBvQk8sw7O/X9DSZHCKafeD9G5k+GYbAUz92fKWrx/lOXfUXPS3+c8dRIF ================================================ FILE: assets/scripts_ssh_config ================================================ StrictHostKeyChecking no UserKnownHostsFile /dev/null LogLevel quiet ================================================ FILE: cmd/cloudinitexecute/authorize_ssh_keys.go ================================================ package cloudinitexecute import ( "io/ioutil" "os" "path" "strconv" "strings" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/util" ) const ( sshDirName = ".ssh" authorizedKeysFileName = "authorized_keys" ) func authorizeSSHKeys(username string, authorizedKeys []string, name string) error { var uid int var gid int var homeDir string bytes, err := ioutil.ReadFile("/etc/passwd") if err != nil { return err } for _, line := range strings.Split(string(bytes), "\n") { if strings.HasPrefix(line, username) { split := strings.Split(line, ":") if len(split) < 6 { break } uid, err = strconv.Atoi(split[2]) if err != nil { return err } gid, err = strconv.Atoi(split[3]) if err != nil { return err } homeDir = split[5] } } sshDir := path.Join(homeDir, sshDirName) authorizedKeysFile := path.Join(sshDir, authorizedKeysFileName) if _, err := os.Stat(sshDir); 
os.IsNotExist(err) { if err = os.Mkdir(sshDir, 0700); err != nil { return err } } else if err != nil { return err } if err = os.Chown(sshDir, uid, gid); err != nil { return err } for _, authorizedKey := range authorizedKeys { if err = authorizeSSHKey(authorizedKey, authorizedKeysFile, uid, gid); err != nil { log.Errorf("Failed to authorize SSH key %s: %v", authorizedKey, err) } } return nil } func authorizeSSHKey(authorizedKey, authorizedKeysFile string, uid, gid int) error { authorizedKeysFileInfo, err := os.Stat(authorizedKeysFile) if os.IsNotExist(err) { keysFile, err := os.Create(authorizedKeysFile) if err != nil { return err } if err = keysFile.Chmod(0600); err != nil { return err } if err = keysFile.Close(); err != nil { return err } authorizedKeysFileInfo, err = os.Stat(authorizedKeysFile) if err != nil { return err } } else if err != nil { return err } bytes, err := ioutil.ReadFile(authorizedKeysFile) if err != nil { return err } if !strings.Contains(string(bytes), authorizedKey) { bytes = append(bytes, []byte(authorizedKey)...) 
bytes = append(bytes, '\n') } perm := authorizedKeysFileInfo.Mode().Perm() if err = util.WriteFileAtomic(authorizedKeysFile, bytes, perm); err != nil { return err } return os.Chown(authorizedKeysFile, uid, gid) } ================================================ FILE: cmd/cloudinitexecute/cloudinitexecute.go ================================================ package cloudinitexecute import ( "flag" "fmt" "io/ioutil" "os" "os/exec" "path" "strings" rancherConfig "github.com/burmilla/os/config" "github.com/burmilla/os/config/cloudinit/config" "github.com/burmilla/os/config/cloudinit/system" "github.com/burmilla/os/pkg/docker" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/util" "golang.org/x/net/context" ) const ( resizeStamp = "/var/lib/rancher/resizefs.done" sshKeyName = "rancheros-cloud-config" ) var ( console bool preConsole bool flags *flag.FlagSet ) func init() { flags = flag.NewFlagSet(os.Args[0], flag.ContinueOnError) flags.BoolVar(&console, "console", false, "apply console configuration") flags.BoolVar(&preConsole, "pre-console", false, "apply pre-console configuration") } func Main() { flags.Parse(os.Args[1:]) log.InitLogger() log.Infof("Running cloud-init-execute: pre-console=%v, console=%v", preConsole, console) cfg := rancherConfig.LoadConfig() if !console && !preConsole { console = true preConsole = true } if console { ApplyConsole(cfg) } if preConsole { applyPreConsole(cfg) } } func ApplyConsole(cfg *rancherConfig.CloudConfig) { if len(cfg.SSHAuthorizedKeys) > 0 { if err := authorizeSSHKeys("rancher", cfg.SSHAuthorizedKeys, sshKeyName); err != nil { log.Error(err) } if err := authorizeSSHKeys("docker", cfg.SSHAuthorizedKeys, sshKeyName); err != nil { log.Error(err) } } WriteFiles(cfg, "console") for _, mount := range cfg.Mounts { if len(mount) != 4 { // skip malformed entries instead of falling through and indexing past the end of the slice log.Errorf("Unable to mount %v: must specify exactly four arguments", mount) continue } if mount[2] == "nfs" || mount[2] == "nfs4" { if err := os.MkdirAll(mount[1], 0755); err != nil {
log.Errorf("Unable to create mount point %s: %v", mount[1], err) continue } cmdArgs := []string{mount[0], mount[1], "-t", mount[2]} if mount[3] != "" { cmdArgs = append(cmdArgs, "-o", mount[3]) } cmd := exec.Command("mount", cmdArgs...) cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr if err := cmd.Run(); err != nil { log.Errorf("Failed to mount %s: %v", mount[1], err) } continue } device := util.ResolveDevice(mount[0]) if mount[2] == "swap" { cmd := exec.Command("swapon", device) cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr err := cmd.Run() if err != nil { log.Errorf("Unable to swapon %s: %v", device, err) } continue } if err := util.Mount(device, mount[1], mount[2], mount[3]); err != nil { log.Errorf("Failed to mount %s: %v", mount[1], err) } } err := util.RunCommandSequence(cfg.Runcmd) if err != nil { log.Error(err) } } func WriteFiles(cfg *rancherConfig.CloudConfig, container string) { for _, file := range cfg.WriteFiles { fileContainer := file.Container if fileContainer == "" { fileContainer = "console" } if fileContainer != container { continue } content, err := config.DecodeContent(file.File.Content, file.File.Encoding) if err != nil { continue } file.File.Content = string(content) file.File.Encoding = "" f := system.File{ File: file.File, } fullPath, err := system.WriteFile(&f, "/") if err != nil { log.WithFields(log.Fields{"err": err, "path": fullPath}).Error("Error writing file") continue } log.Printf("Wrote file %s to filesystem", fullPath) } } func applyPreConsole(cfg *rancherConfig.CloudConfig) { if cfg.Rancher.ResizeDevice != "" { if _, err := os.Stat(resizeStamp); os.IsNotExist(err) { if err := resizeDevice(cfg); err == nil { os.Create(resizeStamp) } else { log.Errorf("Failed to resize %s: %s", cfg.Rancher.ResizeDevice, err) } } else { log.Infof("Skipped resizing %s because %s exists", cfg.Rancher.ResizeDevice, resizeStamp) } } for k, v := range cfg.Rancher.Sysctl { elems := []string{"/proc", "sys"} elems = append(elems, strings.Split(k, ".")...) 
path := path.Join(elems...) if err := ioutil.WriteFile(path, []byte(v), 0644); err != nil { log.Errorf("Failed to set sysctl key %s: %s", k, err) } } client, err := docker.NewSystemClient() if err != nil { // without a system Docker client the restart loop below would dereference nil log.Error(err) return } for _, restart := range cfg.Rancher.RestartServices { if err = client.ContainerRestart(context.Background(), restart, 10); err != nil { log.Error(err) } } } func resizeDevice(cfg *rancherConfig.CloudConfig) error { partition := "1" targetPartition := fmt.Sprintf("%s%s", cfg.Rancher.ResizeDevice, partition) if strings.Contains(cfg.Rancher.ResizeDevice, "mmcblk") { partition = "2" targetPartition = fmt.Sprintf("%sp%s", cfg.Rancher.ResizeDevice, partition) } else if strings.Contains(cfg.Rancher.ResizeDevice, "nvme") { targetPartition = fmt.Sprintf("%sp%s", cfg.Rancher.ResizeDevice, partition) } cmd := exec.Command("growpart", cfg.Rancher.ResizeDevice, partition) cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr cmd.Run() // growpart exits non-zero when there is nothing to resize, so its exit status is ignored cmd = exec.Command("partprobe", cfg.Rancher.ResizeDevice) cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr err := cmd.Run() if err != nil { return err } cmd = exec.Command("resize2fs", targetPartition) cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr err = cmd.Run() if err != nil { return err } return nil } ================================================ FILE: cmd/cloudinitsave/cloudinitsave.go ================================================ // Copyright 2015 CoreOS, Inc. // Copyright 2015-2017 Rancher Labs, Inc. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and // limitations under the License. package cloudinitsave import ( "bytes" "errors" "os" "path" "strings" "sync" "time" "github.com/burmilla/os/cmd/control" "github.com/burmilla/os/cmd/network" rancherConfig "github.com/burmilla/os/config" "github.com/burmilla/os/config/cloudinit/config" "github.com/burmilla/os/config/cloudinit/datasource" "github.com/burmilla/os/config/cloudinit/datasource/configdrive" "github.com/burmilla/os/config/cloudinit/datasource/file" "github.com/burmilla/os/config/cloudinit/datasource/metadata/aliyun" "github.com/burmilla/os/config/cloudinit/datasource/metadata/azure" "github.com/burmilla/os/config/cloudinit/datasource/metadata/cloudstack" "github.com/burmilla/os/config/cloudinit/datasource/metadata/digitalocean" "github.com/burmilla/os/config/cloudinit/datasource/metadata/ec2" "github.com/burmilla/os/config/cloudinit/datasource/metadata/exoscale" "github.com/burmilla/os/config/cloudinit/datasource/metadata/gce" "github.com/burmilla/os/config/cloudinit/datasource/metadata/packet" "github.com/burmilla/os/config/cloudinit/datasource/proccmdline" "github.com/burmilla/os/config/cloudinit/datasource/proxmox" "github.com/burmilla/os/config/cloudinit/datasource/tftp" "github.com/burmilla/os/config/cloudinit/datasource/url" "github.com/burmilla/os/config/cloudinit/datasource/vmware" "github.com/burmilla/os/config/cloudinit/pkg" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/netconf" "github.com/burmilla/os/pkg/util" yaml "github.com/cloudfoundry-incubator/candiedyaml" ) const ( datasourceInterval = 100 * time.Millisecond datasourceMaxInterval = 30 * time.Second datasourceTimeout = 5 * time.Minute ) func Main() { log.InitLogger() log.Info("Running cloud-init-save") if err := control.UdevSettle(); err != nil { log.Errorf("Failed to run udev settle: %v", err) } if err := saveCloudConfig(); err != nil { log.Errorf("Failed to save cloud-config: %v", err) } // exit 
wpa_supplicant netconf.StopWpaSupplicant() // exit dhcpcd netconf.StopDhcpcd() } func saveCloudConfig() error { log.Infof("SaveCloudConfig") cfg := rancherConfig.LoadConfig() log.Debugf("init: SaveCloudConfig(pre ApplyNetworkConfig): %#v", cfg.Rancher.Network) network.ApplyNetworkConfig(cfg) log.Infof("datasources that will be considered: %#v", cfg.Rancher.CloudInit.Datasources) dss := getDatasources(cfg.Rancher.CloudInit.Datasources) if len(dss) == 0 { log.Errorf("currentDatasource - none found") return nil } foundDs := selectDatasource(dss) log.Infof("Cloud-init datasource that was used: %s", foundDs) // Apply any newly detected network config. cfg = rancherConfig.LoadConfig() log.Debugf("init: SaveCloudConfig(post ApplyNetworkConfig): %#v", cfg.Rancher.Network) network.ApplyNetworkConfig(cfg) return nil } func saveFiles(cloudConfigBytes, scriptBytes []byte, metadata datasource.Metadata) error { os.MkdirAll(rancherConfig.CloudConfigDir, os.ModeDir|0700) if len(scriptBytes) > 0 { log.Infof("Writing to %s", rancherConfig.CloudConfigScriptFile) if err := util.WriteFileAtomic(rancherConfig.CloudConfigScriptFile, scriptBytes, 0500); err != nil { log.Errorf("Error while writing file %s: %v", rancherConfig.CloudConfigScriptFile, err) return err } } if len(cloudConfigBytes) > 0 { if err := util.WriteFileAtomic(rancherConfig.CloudConfigBootFile, cloudConfigBytes, 0400); err != nil { return err } log.Infof("Wrote to %s", rancherConfig.CloudConfigBootFile) } metaDataBytes, err := yaml.Marshal(metadata) if err != nil { return err } if err = util.WriteFileAtomic(rancherConfig.MetaDataFile, metaDataBytes, 0400); err != nil { return err } log.Infof("Wrote to %s", rancherConfig.MetaDataFile) // if we write the empty meta yml, the merge fails.
// TODO: the problem is that a partially filled one will still have merge issues, so that needs fixing - presumably by making merge more clever, and making more fields optional emptyMeta, err := yaml.Marshal(datasource.Metadata{}) if err != nil { return err } if bytes.Compare(metaDataBytes, emptyMeta) == 0 { log.Infof("not writing %s: its all defaults.", rancherConfig.CloudConfigNetworkFile) return nil } type nonRancherCfg struct { Network netconf.NetworkConfig `yaml:"network,omitempty"` } type nonCfg struct { Rancher nonRancherCfg `yaml:"rancher,omitempty"` } // write the network.yml file from metadata cc := nonCfg{ Rancher: nonRancherCfg{ Network: metadata.NetworkConfig, }, } if err := os.MkdirAll(path.Dir(rancherConfig.CloudConfigNetworkFile), 0700); err != nil { log.Errorf("Failed to create directory for file %s: %v", rancherConfig.CloudConfigNetworkFile, err) } if err := rancherConfig.WriteToFile(cc, rancherConfig.CloudConfigNetworkFile); err != nil { log.Errorf("Failed to save config file %s: %v", rancherConfig.CloudConfigNetworkFile, err) } log.Infof("Wrote to %s", rancherConfig.CloudConfigNetworkFile) return nil } func fetchAndSave(ds datasource.Datasource) error { var metadata datasource.Metadata log.Infof("Fetching user-data from datasource %s", ds) userDataBytes, err := ds.FetchUserdata() if err != nil { log.Errorf("Failed fetching user-data from datasource: %v", err) return err } userDataBytes, err = decompressIfGzip(userDataBytes) if err != nil { log.Errorf("Failed decompressing user-data from datasource: %v", err) return err } log.Infof("Fetching meta-data from datasource of type %v", ds.Type()) metadata, err = ds.FetchMetadata() if err != nil { log.Errorf("Failed fetching meta-data from datasource: %v", err) return err } userData := string(userDataBytes) scriptBytes := []byte{} if config.IsScript(userData) { scriptBytes = userDataBytes userDataBytes = []byte{} } else if isCompose(userData) { if userDataBytes, err = 
composeToCloudConfig(userDataBytes); err != nil { log.Errorf("Failed to convert compose to cloud-config syntax: %v", err) return err } } else if config.IsCloudConfig(userData) { if _, err := rancherConfig.ReadConfig(userDataBytes, false); err != nil { log.WithFields(log.Fields{"cloud-config": userData, "err": err}).Warn("Failed to parse cloud-config, not saving.") userDataBytes = []byte{} } } else { log.Errorf("Unrecognized user-data\n(%s)", userData) userDataBytes = []byte{} } if _, err := rancherConfig.ReadConfig(userDataBytes, false); err != nil { log.WithFields(log.Fields{"cloud-config": userData, "err": err}).Warn("Failed to parse cloud-config") return errors.New("Failed to parse cloud-config") } return saveFiles(userDataBytes, scriptBytes, metadata) } // getDatasources creates a slice of possible Datasources for cloudinit based // on the different source command-line flags. func getDatasources(datasources []string) []datasource.Datasource { dss := make([]datasource.Datasource, 0, 5) for _, ds := range datasources { parts := strings.SplitN(ds, ":", 2) root := "" if len(parts) > 1 { root = parts[1] } switch parts[0] { case "*": dss = append(dss, getDatasources([]string{"configdrive", "vmware", "ec2", "digitalocean", "packet", "gce", "cloudstack", "exoscale", "proxmox"})...) 
case "proxmox": if root == "" { root = "/media/pve-config" } dss = append(dss, proxmox.NewDataSource(root)) case "exoscale": dss = append(dss, exoscale.NewDatasource(root)) case "cloudstack": for _, source := range cloudstack.NewDatasource(root) { dss = append(dss, source) } case "ec2": dss = append(dss, ec2.NewDatasource(root)) case "file": if root != "" { dss = append(dss, file.NewDatasource(root)) } case "tftp": dss = append(dss, tftp.NewDatasource(root)) case "url": if root != "" { dss = append(dss, url.NewDatasource(root)) } case "cmdline": if len(parts) == 1 { dss = append(dss, proccmdline.NewDatasource()) } case "configdrive": if root == "" { root = "/media/config-2" } dss = append(dss, configdrive.NewDatasource(root)) case "digitalocean": // TODO: should we enableDoLinkLocal() - to avoid the need for the other kernel/oem options? dss = append(dss, digitalocean.NewDatasource(root)) case "gce": dss = append(dss, gce.NewDatasource(root)) case "packet": dss = append(dss, packet.NewDatasource(root)) case "vmware": // made vmware datasource dependent on detecting vmware independently, as it crashes things otherwise v := vmware.NewDatasource(root) if v != nil { dss = append(dss, v) } case "aliyun": dss = append(dss, aliyun.NewDatasource(root)) case "azure": dss = append(dss, azure.NewDatasource(root)) } } return dss } func enableDoLinkLocal() { cfg := rancherConfig.LoadConfig() dhcpTimeout := cfg.Rancher.Defaults.Network.DHCPTimeout if cfg.Rancher.Network.DHCPTimeout > 0 { dhcpTimeout = cfg.Rancher.Network.DHCPTimeout } _, err := netconf.ApplyNetworkConfigs(&netconf.NetworkConfig{ Interfaces: map[string]netconf.InterfaceConfig{ "eth0": { IPV4LL: true, }, }, DHCPTimeout: dhcpTimeout, }, false, false) if err != nil { log.Errorf("Failed to apply link local on eth0: %v", err) } } // selectDatasource attempts to choose a valid Datasource to use based on its // current availability. The first Datasource to report to be available is // returned. 
Datasources will be retried if possible if they are not // immediately available. If all Datasources are permanently unavailable or // datasourceTimeout is reached before one becomes available, nil is returned. func selectDatasource(sources []datasource.Datasource) datasource.Datasource { ds := make(chan datasource.Datasource) stop := make(chan struct{}) var wg sync.WaitGroup for _, s := range sources { wg.Add(1) go func(s datasource.Datasource) { defer wg.Done() duration := datasourceInterval for { log.Infof("cloud-init: Checking availability of %q", s.Type()) if s.IsAvailable() { log.Infof("cloud-init: Datasource available: %s", s) ds <- s return } if !s.AvailabilityChanges() { log.Infof("cloud-init: Datasource unavailable, skipping: %s", s) return } log.Errorf("cloud-init: Datasource not ready, will retry: %s", s) select { case <-stop: return case <-time.After(duration): duration = pkg.ExpBackoff(duration, datasourceMaxInterval) } } }(s) } done := make(chan struct{}) go func() { wg.Wait() close(done) }() var s datasource.Datasource select { case s = <-ds: err := fetchAndSave(s) if err != nil { log.Errorf("Error fetching cloud-init datasource(%s): %s", s, err) } case <-done: case <-time.After(datasourceTimeout): } close(stop) return s } func isCompose(content string) bool { return strings.HasPrefix(content, "#compose\n") } func composeToCloudConfig(bytes []byte) ([]byte, error) { compose := make(map[interface{}]interface{}) err := yaml.Unmarshal(bytes, &compose) if err != nil { return nil, err } return yaml.Marshal(map[interface{}]interface{}{ "rancher": map[interface{}]interface{}{ "services": compose, }, }) } const gzipMagicBytes = "\x1f\x8b" func decompressIfGzip(userdataBytes []byte) ([]byte, error) { if !bytes.HasPrefix(userdataBytes, []byte(gzipMagicBytes)) { return userdataBytes, nil } return config.DecompressGzip(userdataBytes) } ================================================ FILE: cmd/control/autologin.go 
================================================ package control import ( "fmt" "os" "os/exec" "path/filepath" "strings" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/log" "github.com/codegangsta/cli" ) func AutologinMain() { log.InitLogger() app := cli.NewApp() app.Name = os.Args[0] app.Usage = "autologin console" app.Version = config.Version app.Author = "Project Burmilla\n\tRancher Labs, Inc." app.Email = "burmilla@localhost.local" app.EnableBashCompletion = true app.Action = autologinAction app.HideHelp = true app.Run(os.Args) } func autologinAction(c *cli.Context) error { cmd := exec.Command("/bin/stty", "sane") cmd.Stderr = os.Stderr cmd.Stdout = os.Stdout cmd.Stdin = os.Stdin if err := cmd.Run(); err != nil { log.Error(err) } usertty := "" user := "root" if c.NArg() > 0 { usertty = c.Args().Get(0) s := strings.SplitN(usertty, ":", 2) user = s[0] } mode := filepath.Base(os.Args[0]) console := CurrentConsole() cfg := config.LoadConfig() loginBin := "" args := []string{} if console == "centos" || console == "fedora" || mode == "recovery" { // For some reason, centos and fedora ttyS0 and tty1 don't work with `login -f rancher` // until I make time to read their source, let's just give us a way to get work done loginBin = "bash" args = append(args, "--login") if mode == "recovery" { os.Setenv("PROMPT_COMMAND", `echo "[`+fmt.Sprintf("Recovery console %s@%s:${PWD}", user, cfg.Hostname)+`]"`) } } else { loginBin = "login" args = append(args, "-f", user) // TODO: add a PROMPT_COMMAND if we haven't switch-rooted } loginBinPath, err := exec.LookPath(loginBin) if err != nil { fmt.Printf("error finding %s in path: %s\n", loginBin, err) return err } os.Setenv("TERM", "linux") // Causes all sorts of issues //return syscall.Exec(loginBinPath, args, os.Environ()) cmd = exec.Command(loginBinPath, args...)
cmd.Env = os.Environ() cmd.Stderr = os.Stderr cmd.Stdout = os.Stdout cmd.Stdin = os.Stdin if err := cmd.Run(); err != nil { log.Errorf("\nError starting %s: %s", cmd.Args[0], err) } return nil } ================================================ FILE: cmd/control/bootstrap.go ================================================ package control import ( "fmt" "io/ioutil" "os" "os/exec" "path/filepath" "strings" "time" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/util" ) func BootstrapMain() { log.InitLogger() log.Debugf("bootstrapAction") if err := UdevSettle(); err != nil { log.Errorf("Failed to run udev settle: %v", err) } log.Debugf("bootstrapAction: loadingConfig") cfg := config.LoadConfig() log.Debugf("bootstrapAction: Rngd(%v)", cfg.Rancher.State.Rngd) if cfg.Rancher.State.Rngd { if err := runRngd(); err != nil { log.Errorf("Failed to run rngd: %v", err) } } log.Debugf("bootstrapAction: MdadmScan(%v)", cfg.Rancher.State.MdadmScan) if cfg.Rancher.State.MdadmScan { if err := mdadmScan(); err != nil { log.Errorf("Failed to run mdadm scan: %v", err) } } log.Debugf("bootstrapAction: cryptsetup(%v)", cfg.Rancher.State.Cryptsetup) if cfg.Rancher.State.Cryptsetup { if err := cryptsetup(); err != nil { log.Errorf("Failed to run cryptsetup: %v", err) } } log.Debugf("bootstrapAction: LvmScan(%v)", cfg.Rancher.State.LvmScan) if cfg.Rancher.State.LvmScan { if err := vgchange(); err != nil { log.Errorf("Failed to run vgchange: %v", err) } } stateScript := cfg.Rancher.State.Script log.Debugf("bootstrapAction: stateScript(%v)", stateScript) if stateScript != "" { if err := runStateScript(stateScript); err != nil { log.Errorf("Failed to run state script: %v", err) } } log.Debugf("bootstrapAction: RunCommandSequence(%v)", cfg.Bootcmd) err := util.RunCommandSequence(cfg.Bootcmd) if err != nil { log.Error(err) } if cfg.Rancher.State.Dev != "" && cfg.Rancher.State.Wait { waitForRoot(cfg) } if len(cfg.Rancher.State.Autoformat) > 0 { 
log.Infof("bootstrap container: Autoformat(%v) as %s", cfg.Rancher.State.Autoformat, "ext4") if err := autoformat(cfg.Rancher.State.Autoformat); err != nil { log.Errorf("Failed to run autoformat: %v", err) } } log.Debugf("bootstrapAction: udev settle2") if err := UdevSettle(); err != nil { log.Errorf("Failed to run udev settle: %v", err) } } func mdadmScan() error { cmd := exec.Command("mdadm", "--assemble", "--scan") cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr return cmd.Run() } func vgchange() error { cmd := exec.Command("vgchange", "--activate", "ay") cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr return cmd.Run() } func cryptsetup() error { devices, err := util.BlkidType("crypto_LUKS") if err != nil { return err } for _, cryptdevice := range devices { fdRead, err := os.Open("/dev/console") if err != nil { return err } defer fdRead.Close() fdWrite, err := os.OpenFile("/dev/console", os.O_WRONLY|os.O_APPEND, 0) if err != nil { return err } defer fdWrite.Close() cmd := exec.Command("cryptsetup", "luksOpen", cryptdevice, fmt.Sprintf("luks-%s", filepath.Base(cryptdevice))) cmd.Stdout = fdWrite cmd.Stderr = fdWrite cmd.Stdin = fdRead if err := cmd.Run(); err != nil { log.Errorf("Failed to run cryptsetup for %s: %v", cryptdevice, err) } } return nil } func runRngd() error { // use /dev/urandom as the random number input for rngd // this is a really bad idea, since it simply refills the kernel entropy pool with entropy coming from the kernel itself! // but it avoids having to consider the user's hardware RNG drivers.
cmd := exec.Command("rngd", "-r", "/dev/urandom", "-q") cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr return cmd.Run() } func runStateScript(script string) error { f, err := ioutil.TempFile("", "") if err != nil { return err } if _, err := f.WriteString(script); err != nil { return err } if err := f.Chmod(os.ModePerm); err != nil { return err } if err := f.Close(); err != nil { return err } return util.RunScript(f.Name()) } func waitForRoot(cfg *config.CloudConfig) { var dev string for i := 0; i < 30; i++ { dev = util.ResolveDevice(cfg.Rancher.State.Dev) if dev != "" { break } time.Sleep(time.Millisecond * 1000) } if dev == "" { return } for i := 0; i < 30; i++ { if _, err := os.Stat(dev); err == nil { break } time.Sleep(time.Millisecond * 1000) } } func autoformat(autoformatDevices []string) error { cmd := exec.Command("/usr/sbin/auto-format.sh") cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr cmd.Env = []string{ "AUTOFORMAT=" + strings.Join(autoformatDevices, " "), } return cmd.Run() } ================================================ FILE: cmd/control/cli.go ================================================ package control import ( "fmt" "os" "github.com/burmilla/os/cmd/control/service" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/log" "github.com/codegangsta/cli" ) func Main() { log.InitLogger() cli.VersionPrinter = func(c *cli.Context) { cfg := config.LoadConfig() runningName := cfg.Rancher.Upgrade.Image + ":" + config.Version fmt.Fprintf(c.App.Writer, "version %s from os image %s\n", c.App.Version, runningName) } app := cli.NewApp() app.Name = os.Args[0] app.Usage = fmt.Sprintf("Control and configure BurmillaOS\nbuilt: %s", config.BuildDate) app.Version = config.Version app.Author = "Project Burmilla\n\tRancher Labs, Inc." 
app.EnableBashCompletion = true app.Before = func(c *cli.Context) error { if os.Geteuid() != 0 { log.Fatalf("%s: Need to be root", os.Args[0]) } return nil } app.Commands = []cli.Command{ { Name: "config", ShortName: "c", Usage: "configure settings", HideHelp: true, Subcommands: configSubcommands(), }, { Name: "console", Usage: "manage which console container is used", HideHelp: true, Subcommands: consoleSubcommands(), }, { Name: "console-init", Hidden: true, HideHelp: true, SkipFlagParsing: true, Action: consoleInitAction, }, { Name: "dev", Hidden: true, HideHelp: true, SkipFlagParsing: true, Action: devAction, }, { Name: "docker-init", Hidden: true, HideHelp: true, SkipFlagParsing: true, Action: dockerInitAction, }, { Name: "engine", Usage: "manage which Docker engine is used", HideHelp: true, Subcommands: engineSubcommands(), }, { Name: "entrypoint", Hidden: true, HideHelp: true, SkipFlagParsing: true, Action: entrypointAction, }, { Name: "env", Hidden: true, HideHelp: true, SkipFlagParsing: true, Action: envAction, }, service.Commands(), { Name: "os", Usage: "operating system upgrade/downgrade", HideHelp: true, Subcommands: osSubcommands(), }, { Name: "preload-images", Hidden: true, HideHelp: true, SkipFlagParsing: true, Action: preloadImagesAction, }, { Name: "recovery-init", Hidden: true, HideHelp: true, SkipFlagParsing: true, Action: recoveryInitAction, }, { Name: "switch-console", Hidden: true, HideHelp: true, SkipFlagParsing: true, Action: switchConsoleAction, }, { Name: "tls", Usage: "setup tls configuration", HideHelp: true, Subcommands: tlsConfCommands(), }, { Name: "udev-settle", Hidden: true, HideHelp: true, SkipFlagParsing: true, Action: udevSettleAction, }, { Name: "user-docker", Hidden: true, HideHelp: true, SkipFlagParsing: true, Action: userDockerAction, }, installCommand, } app.Run(os.Args) } ================================================ FILE: cmd/control/config.go ================================================ package control import ( 
"bufio" "bytes" "fmt" "io" "io/ioutil" "os" "os/exec" "sort" "strings" "text/template" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/util" yaml "github.com/cloudfoundry-incubator/candiedyaml" "github.com/codegangsta/cli" "github.com/pkg/errors" ) func configSubcommands() []cli.Command { return []cli.Command{ { Name: "get", Usage: "get value", Action: configGet, }, { Name: "set", Usage: "set a value", Action: configSet, }, { Name: "images", Usage: "List Docker images for a configuration from a file", Action: runImages, Flags: []cli.Flag{ cli.StringFlag{ Name: "input, i", Usage: "File from which to read config", }, }, }, { Name: "generate", Usage: "Generate a configuration file from a template", Action: runGenerate, HideHelp: true, }, { Name: "export", Usage: "export configuration", Flags: []cli.Flag{ cli.StringFlag{ Name: "output, o", Usage: "File to which to save", }, cli.BoolFlag{ Name: "private, p", Usage: "Include the generated private keys", }, cli.BoolFlag{ Name: "full, f", Usage: "Export full configuration, including internal and default settings", }, }, Action: export, }, { Name: "merge", Usage: "merge configuration from stdin", Action: merge, Flags: []cli.Flag{ cli.StringFlag{ Name: "input, i", Usage: "File from which to read", }, }, }, { Name: "syslinux", Usage: "edit Syslinux boot global.cfg", Action: editSyslinux, }, { Name: "validate", Usage: "validate configuration from stdin", Action: validate, Flags: []cli.Flag{ cli.StringFlag{ Name: "input, i", Usage: "File from which to read", }, }, }, } } func imagesFromConfig(cfg *config.CloudConfig) []string { imagesMap := map[string]int{} for _, service := range cfg.Rancher.BootstrapContainers { imagesMap[service.Image] = 1 } for _, service := range cfg.Rancher.Services { imagesMap[service.Image] = 1 } images := make([]string, len(imagesMap)) i := 0 for image := range imagesMap { images[i] = image i++ } sort.Strings(images) return images } func runImages(c 
*cli.Context) error { configFile := c.String("input") cfg, err := config.ReadConfig(nil, false, configFile) if err != nil { log.WithFields(log.Fields{"err": err, "file": configFile}).Fatalf("Could not read config from file") } images := imagesFromConfig(cfg) fmt.Println(strings.Join(images, " ")) return nil } func runGenerate(c *cli.Context) error { if err := genTpl(os.Stdin, os.Stdout); err != nil { log.Fatalf("Failed to generate config, err: '%s'", err) } return nil } func genTpl(in io.Reader, out io.Writer) error { bytes, err := ioutil.ReadAll(in) if err != nil { log.Fatal("Could not read from stdin") } tpl := template.Must(template.New("osconfig").Parse(string(bytes))) return tpl.Execute(out, env2map(os.Environ())) } func env2map(env []string) map[string]string { m := make(map[string]string, len(env)) for _, s := range env { d := strings.SplitN(s, "=", 2) // split on the first "=" only, so values that themselves contain "=" survive intact m[d[0]] = d[1] } return m } func editSyslinux(c *cli.Context) error { // check whether this is a Raspberry Pi bytes, err := ioutil.ReadFile("/proc/device-tree/model") if err == nil && strings.Contains(strings.ToLower(string(bytes)), "raspberry") { buf := bufio.NewWriter(os.Stdout) fmt.Fprintln(buf, "raspberry pi can not use this command") buf.Flush() return errors.New("raspberry pi can not use this command") } if isExist := checkGlobalCfg(); !isExist { buf := bufio.NewWriter(os.Stdout) fmt.Fprintln(buf, "global.cfg can not be found") buf.Flush() return errors.New("global.cfg can not be found") } cmd := exec.Command("system-docker", "run", "--rm", "-it", "-v", "/:/host", "-w", "/host", "--entrypoint=nano", "burmilla/os-console:"+config.Version, "boot/global.cfg") cmd.Stdout, cmd.Stderr, cmd.Stdin = os.Stdout, os.Stderr, os.Stdin return cmd.Run() } func configSet(c *cli.Context) error { if c.NArg() < 2 { return nil } key := c.Args().Get(0) value := c.Args().Get(1) if key == "" { return nil } err := config.Set(key, value) if err != nil { log.Fatal(err) } return nil } func configGet(c *cli.Context) error { arg :=
c.Args().Get(0) if arg == "" { return nil } val, err := config.Get(arg) if err != nil { log.WithFields(log.Fields{"key": arg, "val": val, "err": err}).Fatal("config get: failed to retrieve value") } printYaml := false switch val.(type) { case []interface{}: printYaml = true case map[interface{}]interface{}: printYaml = true } if printYaml { bytes, err := yaml.Marshal(val) if err != nil { log.Fatal(err) } fmt.Println(string(bytes)) } else { fmt.Println(val) } return nil } func merge(c *cli.Context) error { bytes, err := inputBytes(c) if err != nil { log.Fatal(err) } if err = config.Merge(bytes); err != nil { log.Error(err) validationErrors, err := config.ValidateBytes(bytes) if err != nil { log.Fatal(err) } for _, validationError := range validationErrors.Errors() { log.Error(validationError) } log.Fatal("EXITING: Failed to parse configuration") } return nil } func export(c *cli.Context) error { content, err := config.Export(c.Bool("private"), c.Bool("full")) if err != nil { log.Fatal(err) } output := c.String("output") if output == "" { fmt.Println(content) } else { err := util.WriteFileAtomic(output, []byte(content), 0400) if err != nil { log.Fatal(err) } } return nil } func validate(c *cli.Context) error { bytes, err := inputBytes(c) if err != nil { log.Fatal(err) } validationErrors, err := config.ValidateBytes(bytes) if err != nil { log.Fatal(err) } for _, validationError := range validationErrors.Errors() { log.Error(validationError) } return nil } func inputBytes(c *cli.Context) ([]byte, error) { input := os.Stdin inputFile := c.String("input") if inputFile != "" { var err error input, err = os.Open(inputFile) if err != nil { return nil, err } defer input.Close() } content, err := ioutil.ReadAll(input) if err != nil { return nil, err } if bytes.Contains(content, []byte{13, 10}) { return nil, errors.New("file format shouldn't contain CRLF characters") } return content, nil } ================================================ FILE: cmd/control/config_test.go 
================================================ package control import ( "bytes" "os" "strings" "testing" "github.com/stretchr/testify/require" ) func TestGenTpl(t *testing.T) { assert := require.New(t) tpl := ` services: {{if eq "amd64" .ARCH -}} acpid: image: burmilla/os-acpid:0.x.x labels: io.rancher.os.scope: system net: host uts: host privileged: true volumes_from: - command-volumes - system-volumes {{end -}} all-volumes:` for _, tc := range []struct { arch string expected string }{ {"amd64", ` services: acpid: image: burmilla/os-acpid:0.x.x labels: io.rancher.os.scope: system net: host uts: host privileged: true volumes_from: - command-volumes - system-volumes all-volumes:`}, {"arm", ` services: all-volumes:`}, } { out := &bytes.Buffer{} os.Setenv("ARCH", tc.arch) genTpl(strings.NewReader(tpl), out) assert.Equal(tc.expected, out.String(), tc.arch) } } ================================================ FILE: cmd/control/console.go ================================================ package control import ( "fmt" "sort" "strings" "github.com/burmilla/os/cmd/control/service" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/compose" "github.com/burmilla/os/pkg/docker" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/util" "github.com/burmilla/os/pkg/util/network" "github.com/codegangsta/cli" "github.com/docker/docker/reference" composeConfig "github.com/docker/libcompose/config" "github.com/docker/libcompose/project/options" "golang.org/x/net/context" ) func consoleSubcommands() []cli.Command { return []cli.Command{ { Name: "switch", Usage: "switch console without a reboot", Action: consoleSwitch, Flags: []cli.Flag{ cli.BoolFlag{ Name: "force, f", Usage: "do not prompt for input", }, cli.BoolFlag{ Name: "no-pull", Usage: "don't pull console image", }, }, }, { Name: "enable", Usage: "set console to be switched on next reboot", Action: consoleEnable, }, { Name: "list", Usage: "list available consoles", Flags: []cli.Flag{ cli.BoolFlag{ Name: 
"update, u", Usage: "update console cache", }, }, Action: consoleList, }, } } func consoleSwitch(c *cli.Context) error { if len(c.Args()) != 1 { log.Fatal("Must specify exactly one console to switch to") } newConsole := c.Args()[0] cfg := config.LoadConfig() validateConsole(newConsole, cfg) if newConsole == CurrentConsole() { log.Warnf("Console is already set to %s", newConsole) } if !c.Bool("force") { fmt.Println(`Switching consoles will 1. destroy the current console container 2. log you out 3. restart Docker`) if !yes("Continue") { return nil } } if !c.Bool("no-pull") && newConsole != "default" { if err := compose.StageServices(cfg, newConsole); err != nil { return err } } service, err := compose.CreateService(nil, "switch-console", &composeConfig.ServiceConfigV1{ LogDriver: "json-file", Privileged: true, Net: "host", Pid: "host", Image: config.OsBase, Labels: map[string]string{ config.ScopeLabel: config.System, }, Command: []string{"/usr/bin/ros", "switch-console", newConsole}, VolumesFrom: []string{"all-volumes"}, }) if err != nil { return err } if err = service.Delete(context.Background(), options.Delete{}); err != nil { return err } if err = service.Up(context.Background(), options.Up{}); err != nil { return err } return service.Log(context.Background(), true) } func consoleEnable(c *cli.Context) error { if len(c.Args()) != 1 { log.Fatal("Must specify exactly one console to enable") } newConsole := c.Args()[0] cfg := config.LoadConfig() validateConsole(newConsole, cfg) if newConsole != "default" { if err := compose.StageServices(cfg, newConsole); err != nil { return err } } if err := config.Set("rancher.console", newConsole); err != nil { log.Errorf("Failed to update 'rancher.console': %v", err) } return nil } func consoleList(c *cli.Context) error { cfg := config.LoadConfig() consoles := availableConsoles(cfg, c.Bool("update")) currentConsole := CurrentConsole() for _, console := range consoles { if console == currentConsole { fmt.Printf("current %s\n",
console) } else if console == cfg.Rancher.Console { fmt.Printf("enabled %s\n", console) } else { fmt.Printf("disabled %s\n", console) } } return nil } func validateConsole(console string, cfg *config.CloudConfig) { consoles := availableConsoles(cfg, false) if !service.IsLocalOrURL(console) && !util.Contains(consoles, console) { log.Fatalf("%s is not a valid console", console) } } func availableConsoles(cfg *config.CloudConfig, update bool) []string { if update { err := network.UpdateCaches(cfg.Rancher.Repositories.ToArray(), "consoles") if err != nil { log.Debugf("Failed to update console caches: %v", err) } } consoles, err := network.GetConsoles(cfg.Rancher.Repositories.ToArray()) if err != nil { log.Fatal(err) } consoles = append(consoles, "default") sort.Strings(consoles) return consoles } // CurrentConsole gets the name of the console that's running func CurrentConsole() (console string) { // TODO: replace this docker container look up with a libcompose service lookup? // sudo system-docker inspect --format "{{.Config.Image}}" console client, err := docker.NewSystemClient() if err != nil { log.Warnf("Failed to detect current console: %v", err) return } info, err := client.ContainerInspect(context.Background(), "console") if err != nil { log.Warnf("Failed to detect current console: %v", err) return } // parse image name, then remove os- prefix and the console suffix image, err := reference.ParseNamed(info.Config.Image) if err != nil { log.Warnf("Failed to detect current console(%s): %v", info.Config.Image, err) return } if strings.Contains(image.Name(), "os-console") { console = "default" return } console = strings.TrimPrefix(strings.TrimSuffix(image.Name(), "console"), "burmilla/os-") return } ================================================ FILE: cmd/control/console_init.go ================================================ package control import ( "bytes" "fmt" "io/ioutil" "os" "os/exec" "path" "strconv" "strings" "syscall" "text/template" 
"github.com/burmilla/os/cmd/cloudinitexecute" "github.com/burmilla/os/config" "github.com/burmilla/os/config/cmdline" "github.com/burmilla/os/pkg/compose" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/util" "github.com/codegangsta/cli" "golang.org/x/crypto/ssh/terminal" "golang.org/x/sys/unix" ) const ( consoleDone = "/run/console-done" dockerHome = "/home/docker" gettyCmd = "/sbin/agetty" rancherHome = "/home/rancher" startScript = "/opt/rancher/bin/start.sh" runLockDir = "/run/lock" sshdFile = "/etc/ssh/sshd_config" sshdTplFile = "/etc/ssh/sshd_config.tpl" ) type symlink struct { oldname, newname string } func consoleInitAction(c *cli.Context) error { return consoleInitFunc() } func createHomeDir(homedir string, uid, gid int) { if _, err := os.Stat(homedir); os.IsNotExist(err) { if err := os.MkdirAll(homedir, 0755); err != nil { log.Error(err) } if err := os.Chown(homedir, uid, gid); err != nil { log.Error(err) } } } func enableBashRC(homedir string, uid, gid int) { if _, err := os.Stat(homedir + "/.bash_logout"); os.IsNotExist(err) { if err := util.FileCopy("/etc/skel/.bash_logout", homedir+"/.bash_logout"); err != nil { log.Error(err) } if err := os.Chown(homedir+"/.bash_logout", uid, gid); err != nil { log.Error(err) } } if _, err := os.Stat(homedir + "/.bashrc"); os.IsNotExist(err) { if err := util.FileCopy("/etc/skel/.bashrc", homedir+"/.bashrc"); err != nil { log.Error(err) } if err := os.Chown(homedir+"/.bashrc", uid, gid); err != nil { log.Error(err) } } if _, err := os.Stat(homedir + "/.profile"); os.IsNotExist(err) { if err := util.FileCopy("/etc/skel/.profile", homedir+"/.profile"); err != nil { log.Error(err) } if err := os.Chown(homedir+"/.profile", uid, gid); err != nil { log.Error(err) } } } func consoleInitFunc() error { cfg := config.LoadConfig() // Now that we're booted, stop writing debug messages to the console cmd := exec.Command("sudo", "dmesg", "--console-off") if err := cmd.Run(); err != nil { log.Error(err) } 
createHomeDir(rancherHome, 1100, 1100) createHomeDir(dockerHome, 1101, 1101) // the who & w commands need this file if _, err := os.Stat("/run/utmp"); os.IsNotExist(err) { f, err := os.OpenFile("/run/utmp", os.O_RDWR|os.O_CREATE, 0644) if err != nil { log.Error(err) } defer f.Close() } // the last command needs this file if _, err := os.Stat("/var/log/wtmp"); os.IsNotExist(err) { f, err := os.OpenFile("/var/log/wtmp", os.O_RDWR|os.O_CREATE, 0644) if err != nil { log.Error(err) } defer f.Close() } // some software needs this dir, e.g. open-iscsi if _, err := os.Stat(runLockDir); os.IsNotExist(err) { if err = os.Mkdir(runLockDir, 0755); err != nil { log.Error(err) } } ignorePassword := false for _, d := range cfg.Rancher.Disable { if d == "password" { ignorePassword = true break } } password := cmdline.GetCmdline("rancher.password") if !ignorePassword && password != "" { cmd := exec.Command("chpasswd") cmd.Stdin = strings.NewReader(fmt.Sprint("rancher:", password)) if err := cmd.Run(); err != nil { log.Error(err) } cmd = exec.Command("bash", "-c", `sed -E -i 's/(rancher:.*:).*(:.*:.*:.*:.*:.*:.*)$/\1\2/' /etc/shadow`) if err := cmd.Run(); err != nil { log.Error(err) } } const pollInfo = `#!/bin/sh export TERM=xterm-256color echo " $(tput setaf 3) -------------------------------------------------- | Dear Burmilla OS user, | | Please answer the poll at $(tput setaf 4)burmillaos.org/poll$(tput setaf 3) | | about your main Burmilla OS use case. | | | | Thank you in advance.
| | | | You can disable this message with command: | | $(tput setaf 5)sudo chmod a-x /etc/update-motd.d/1-burmillaos-1$(tput setaf 3) | -------------------------------------------------- $(tput sgr0) " ` if _, err := os.Stat("/etc/update-motd.d/1-burmillaos-1"); os.IsNotExist(err) { if err := ioutil.WriteFile("/etc/update-motd.d/1-burmillaos-1", []byte(pollInfo), 0755); err != nil { log.Error(err) } } if err := setupSSH(cfg); err != nil { log.Error(err) } if err := writeRespawn("rancher", cfg.Rancher.SSH.Daemon, false); err != nil { log.Error(err) } if err := modifySshdConfig(cfg); err != nil { log.Error(err) } p, err := compose.GetProject(cfg, false, true) if err != nil { log.Error(err) } // check the multi engine service & generate the multi engine script for _, key := range p.ServiceConfigs.Keys() { serviceConfig, ok := p.ServiceConfigs.Get(key) if !ok { log.Errorf("Failed to get service config from the project") continue } if _, ok := serviceConfig.Labels[config.UserDockerLabel]; ok { err = util.GenerateDindEngineScript(serviceConfig.Labels[config.UserDockerLabel]) if err != nil { log.Errorf("Failed to generate engine script: %v", err) continue } } } // create Docker CLI plugins folder if _, err := os.Stat("/usr/libexec/docker/cli-plugins"); os.IsNotExist(err) { if err = os.MkdirAll("/usr/libexec/docker/cli-plugins", 0755); err != nil { log.Error(err) } } baseSymlink := symLinkEngineBinary() if _, err := os.Stat(dockerCompletionFile); err == nil { baseSymlink = append(baseSymlink, symlink{ dockerCompletionFile, dockerCompletionLinkFile, }) } for _, link := range baseSymlink { syscall.Unlink(link.newname) if err := os.Symlink(link.oldname, link.newname); err != nil { log.Error(err) } } // mount systemd cgroups if err := os.MkdirAll("/sys/fs/cgroup/systemd", 0555); err != nil { log.Error(err) } if err := unix.Mount("cgroup", "/sys/fs/cgroup/systemd", "cgroup", 0, "none,name=systemd"); err != nil { log.Error(err) } // font backslashes need to be escaped for when 
issue is output! (but not the others..) if err := ioutil.WriteFile("/etc/issue", []byte(config.Banner), 0644); err != nil { log.Error(err) } // write out a profile.d file for the proxy settings. // maybe write these on the host and bindmount into everywhere? proxyLines := []string{} for _, k := range []string{"http_proxy", "HTTP_PROXY", "https_proxy", "HTTPS_PROXY", "no_proxy", "NO_PROXY"} { if v, ok := cfg.Rancher.Environment[k]; ok { proxyLines = append(proxyLines, fmt.Sprintf("export %s=%q", k, v)) } } if len(proxyLines) > 0 { proxyString := strings.Join(proxyLines, "\n") proxyString = fmt.Sprintf("#!/bin/sh\n%s\n", proxyString) if err := ioutil.WriteFile("/etc/profile.d/proxy.sh", []byte(proxyString), 0755); err != nil { log.Error(err) } } // write out a profile.d file for the PATH settings. pathLines := []string{} for _, k := range []string{"PATH", "path"} { if v, ok := cfg.Rancher.Environment[k]; ok { for _, p := range strings.Split(v, ",") { pathLines = append(pathLines, fmt.Sprintf("export PATH=$PATH:%s", strings.TrimSpace(p))) } } } if len(pathLines) > 0 { pathString := strings.Join(pathLines, "\n") pathString = fmt.Sprintf("#!/bin/sh\n%s\n", pathString) if err := ioutil.WriteFile("/etc/profile.d/path.sh", []byte(pathString), 0755); err != nil { log.Error(err) } } cmd = exec.Command("bash", "-c", `echo $(/sbin/ifconfig | grep -B1 "inet" |awk '{ if ( $1 == "inet" ) { print $2 } else if ( $3 == "mtu" ) { printf "%s:" ,$1 } }' |awk -F: '{ print $1 ": " $3}') >> /etc/issue`) if err := cmd.Run(); err != nil { log.Error(err) } cloudinitexecute.ApplyConsole(cfg) if err := util.RunScript(config.CloudConfigScriptFile); err != nil { log.Error(err) } if err := util.RunScript(startScript); err != nil { log.Error(err) } if err := ioutil.WriteFile(consoleDone, []byte(CurrentConsole()), 0644); err != nil { log.Error(err) } // Check if user Docker has ever run in this installation yet and switch to latest/user defined version if not if _, err := 
os.Stat("/var/lib/docker/engine-id"); os.IsNotExist(err) { dockerVersion := "latest" if cfg.Rancher.Docker.Engine != dockerVersion { dockerVersion = cfg.Rancher.Docker.Engine } log.Warn("User Docker does not exist, switching to " + dockerVersion) cmd := exec.Command("/usr/bin/ros", "engine", "switch", dockerVersion) cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr if err := cmd.Run(); err != nil { log.Error(err) } } if err := util.RunScript("/etc/rc.local"); err != nil { log.Error(err) } if err := util.RunScript("/etc/init.d/apparmor", "start"); err != nil { log.Error(err) } // Enable Bash colors enableBashRC("/root", 0, 0) enableBashRC(rancherHome, 1100, 1100) enableBashRC(dockerHome, 1101, 1101) os.Setenv("TERM", "linux") respawnBinPath, err := exec.LookPath("respawn") if err != nil { return err } return syscall.Exec(respawnBinPath, []string{"respawn", "-f", "/etc/respawn.conf"}, os.Environ()) } func generateRespawnConf(cmdline, user string, sshd, recovery bool) string { var respawnConf bytes.Buffer autologinBin := "/usr/bin/autologin" if recovery { autologinBin = "/usr/bin/recovery" } config := config.LoadConfig() allowAutoLogin := true for _, d := range config.Rancher.Disable { if d == "autologin" { allowAutoLogin = false break } } for i := 1; i < 7; i++ { tty := fmt.Sprintf("tty%d", i) if !istty(tty) { continue } respawnConf.WriteString(gettyCmd) if allowAutoLogin && strings.Contains(cmdline, fmt.Sprintf("rancher.autologin=%s", tty)) { respawnConf.WriteString(fmt.Sprintf(" -n -l %s -o %s:tty%d", autologinBin, user, i)) } respawnConf.WriteString(fmt.Sprintf(" --noclear %s linux\n", tty)) } for _, tty := range []string{"ttyS0", "ttyS1", "ttyS2", "ttyS3", "ttyAMA0"} { if !strings.Contains(cmdline, fmt.Sprintf("console=%s", tty)) { continue } if !istty(tty) { continue } respawnConf.WriteString(gettyCmd) if allowAutoLogin && strings.Contains(cmdline, fmt.Sprintf("rancher.autologin=%s", tty)) { respawnConf.WriteString(fmt.Sprintf(" -n -l %s -o %s:%s", autologinBin, 
user, tty)) } respawnConf.WriteString(fmt.Sprintf(" %s\n", tty)) } if sshd { respawnConf.WriteString("/usr/sbin/sshd -D") } return respawnConf.String() } func writeRespawn(user string, sshd, recovery bool) error { cmdline, err := ioutil.ReadFile("/proc/cmdline") if err != nil { return err } respawn := generateRespawnConf(string(cmdline), user, sshd, recovery) files, err := ioutil.ReadDir("/etc/respawn.conf.d") if err == nil { for _, f := range files { p := path.Join("/etc/respawn.conf.d", f.Name()) content, err := ioutil.ReadFile(p) if err != nil { log.Errorf("Failed to read %s: %v", p, err) continue } respawn += fmt.Sprintf("\n%s", string(content)) } } else if !os.IsNotExist(err) { log.Error(err) } return ioutil.WriteFile("/etc/respawn.conf", []byte(respawn), 0644) } func modifySshdConfig(cfg *config.CloudConfig) error { _, err := os.Stat(sshdTplFile) if err == nil { os.Remove(sshdFile) sshdTpl, err := template.ParseFiles(sshdTplFile) if err != nil { return err } f, err := os.OpenFile(sshdFile, os.O_WRONLY|os.O_CREATE, 0644) if err != nil { return err } defer f.Close() sshdConf := map[string]string{} if cfg.Rancher.SSH.Port > 0 && cfg.Rancher.SSH.Port < 65536 { sshdConf["Port"] = strconv.Itoa(cfg.Rancher.SSH.Port) } if cfg.Rancher.SSH.ListenAddress != "" { sshdConf["ListenAddress"] = cfg.Rancher.SSH.ListenAddress } return sshdTpl.Execute(f, sshdConf) } else if os.IsNotExist(err) { return nil } return err } func setupSSH(cfg *config.CloudConfig) error { for _, keyType := range []string{"rsa", "ed25519"} { outputFile := fmt.Sprintf("/etc/ssh/ssh_host_%s_key", keyType) outputFilePub := fmt.Sprintf("/etc/ssh/ssh_host_%s_key.pub", keyType) if _, err := os.Stat(outputFile); err == nil { continue } saved, savedExists := cfg.Rancher.SSH.Keys[keyType] pub, pubExists := cfg.Rancher.SSH.Keys[keyType+"-pub"] if savedExists && pubExists { // TODO check permissions if err := util.WriteFileAtomic(outputFile, []byte(saved), 0600); err != nil { return err } if err :=
util.WriteFileAtomic(outputFilePub, []byte(pub), 0600); err != nil { return err } continue } cmd := exec.Command("bash", "-c", fmt.Sprintf("ssh-keygen -f %s -N '' -t %s", outputFile, keyType)) if err := cmd.Run(); err != nil { return err } savedBytes, err := ioutil.ReadFile(outputFile) if err != nil { return err } pubBytes, err := ioutil.ReadFile(outputFilePub) if err != nil { return err } config.Set(fmt.Sprintf("rancher.ssh.keys.%s", keyType), string(savedBytes)) config.Set(fmt.Sprintf("rancher.ssh.keys.%s-pub", keyType), string(pubBytes)) } return os.MkdirAll("/var/run/sshd", 0644) } func istty(name string) bool { if f, err := os.Open(fmt.Sprintf("/dev/%s", name)); err == nil { return terminal.IsTerminal(int(f.Fd())) } return false } ================================================ FILE: cmd/control/dev.go ================================================ package control import ( "fmt" "github.com/burmilla/os/pkg/util" "github.com/codegangsta/cli" ) func devAction(c *cli.Context) error { if len(c.Args()) > 0 { fmt.Println(util.ResolveDevice(c.Args()[0])) } return nil } ================================================ FILE: cmd/control/docker_init.go ================================================ package control import ( "fmt" "io/ioutil" "os" "path" "strings" "syscall" "time" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/util" "github.com/codegangsta/cli" ) const ( dockerConf = "/var/lib/rancher/conf/docker" dockerDone = "/run/docker-done" dockerLog = "/var/log/docker.log" dockerCompletionLinkFile = "/usr/share/bash-completion/completions/docker" dockerCompletionFile = "/var/lib/rancher/engine/completion" ) func dockerInitAction(c *cli.Context) error { // TODO: this should be replaced by a "Console ready event watcher" for { if _, err := os.Stat(consoleDone); err == nil { break } time.Sleep(200 * time.Millisecond) } if _, err := os.Stat(dockerCompletionFile); err != nil { if _, err := 
os.Readlink(dockerCompletionLinkFile); err == nil { syscall.Unlink(dockerCompletionLinkFile) } } dockerBin := "" dockerPaths := []string{ "/usr/bin", "/opt/bin", "/usr/local/bin", "/var/lib/rancher/docker", } for _, binPath := range dockerPaths { if util.ExistsAndExecutable(path.Join(binPath, "dockerd")) { dockerBin = path.Join(binPath, "dockerd") break } } if dockerBin == "" { for _, binPath := range dockerPaths { if util.ExistsAndExecutable(path.Join(binPath, "docker")) { dockerBin = path.Join(binPath, "docker") break } } } if dockerBin == "" { err := fmt.Errorf("Failed to find either dockerd or docker binaries") log.Error(err) return err } log.Infof("Found %s", dockerBin) if err := syscall.Mount("", "/", "", syscall.MS_SHARED|syscall.MS_REC, ""); err != nil { log.Error(err) } if err := syscall.Mount("", "/run", "", syscall.MS_SHARED|syscall.MS_REC, ""); err != nil { log.Error(err) } mountInfo, err := ioutil.ReadFile("/proc/self/mountinfo") if err != nil { return err } for _, mount := range strings.Split(string(mountInfo), "\n") { if strings.Contains(mount, "/var/lib/user-docker /var/lib/docker") && strings.Contains(mount, "rootfs") { os.Setenv("DOCKER_RAMDISK", "true") } } cfg := config.LoadConfig() for _, link := range symLinkEngineBinary() { syscall.Unlink(link.newname) if _, err := os.Stat(link.oldname); err == nil { if err := os.Symlink(link.oldname, link.newname); err != nil { log.Error(err) } } } err = checkZfsBackingFS(cfg.Rancher.Docker.StorageDriver, cfg.Rancher.Docker.DataRoot) if err != nil { log.Fatal(err) } args := []string{ "bash", "-c", fmt.Sprintf(`[ -e %s ] && source %s; exec /usr/bin/dockerlaunch %s %s $DOCKER_OPTS >> %s 2>&1`, dockerConf, dockerConf, dockerBin, strings.Join(c.Args(), " "), dockerLog), } // TODO: this should be replaced by a "Docker ready event watcher" if err := ioutil.WriteFile(dockerDone, []byte(CurrentEngine()), 0644); err != nil { log.Error(err) } return syscall.Exec("/bin/bash", args, os.Environ()) } 
================================================ FILE: cmd/control/engine.go ================================================ package control import ( "fmt" "io/ioutil" "net" "os" "path" "sort" "strconv" "strings" "github.com/burmilla/os/cmd/control/service" "github.com/burmilla/os/cmd/control/service/app" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/compose" "github.com/burmilla/os/pkg/docker" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/util" "github.com/burmilla/os/pkg/util/network" "github.com/burmilla/os/pkg/util/versions" yaml "github.com/cloudfoundry-incubator/candiedyaml" "github.com/codegangsta/cli" "github.com/docker/docker/reference" "github.com/docker/engine-api/types" "github.com/docker/engine-api/types/filters" composeConfig "github.com/docker/libcompose/config" "github.com/docker/libcompose/project/options" composeYaml "github.com/docker/libcompose/yaml" "github.com/pkg/errors" "golang.org/x/net/context" ) func engineSubcommands() []cli.Command { return []cli.Command{ { Name: "switch", Usage: "switch user Docker engine without a reboot", Action: engineSwitch, Flags: []cli.Flag{ cli.BoolFlag{ Name: "force, f", Usage: "do not prompt for input", }, cli.BoolFlag{ Name: "no-pull", Usage: "don't pull console image", }, }, }, { Name: "create", Usage: "create Dind engine without a reboot", Description: "must switch user docker to 17.12.1 or earlier if using Dind", ArgsUsage: "", Before: preFlightValidate, Action: engineCreate, Flags: []cli.Flag{ cli.StringFlag{ Name: "version, v", Value: config.DefaultDind, Usage: fmt.Sprintf("set the version for the engine, %s are available", config.SupportedDinds), }, cli.StringFlag{ Name: "network", Usage: "set the network for the engine", }, cli.StringFlag{ Name: "fixed-ip", Usage: "set the fixed ip for the engine", }, cli.StringFlag{ Name: "ssh-port", Usage: "set the ssh port for the engine", }, cli.StringFlag{ Name: "authorized-keys", Usage: "set the ssh authorized_keys absolute path for 
the engine", }, }, }, { Name: "rm", Usage: "remove Dind engine without a reboot", ArgsUsage: "", Before: func(c *cli.Context) error { if len(c.Args()) != 1 { return errors.New("Must specify exactly one Docker engine to remove") } return nil }, Action: dindEngineRemove, Flags: []cli.Flag{ cli.IntFlag{ Name: "timeout,t", Usage: "specify a shutdown timeout in seconds", Value: 10, }, cli.BoolFlag{ Name: "force, f", Usage: "do not prompt for input", }, }, }, { Name: "enable", Usage: "set user Docker engine to be switched on next reboot", Action: engineEnable, }, { Name: "list", Usage: "list available Docker engines (include the Dind engines)", Flags: []cli.Flag{ cli.BoolFlag{ Name: "update, u", Usage: "update engine cache", }, }, Action: engineList, }, } } func engineSwitch(c *cli.Context) error { if len(c.Args()) != 1 { log.Fatal("Must specify exactly one Docker engine to switch to") } newEngine := c.Args()[0] cfg := config.LoadConfig() if newEngine == "latest" { engines := availableEngines(cfg, true) newEngine = engines[len(engines)-1] currentEngine := CurrentEngine() if newEngine == currentEngine { log.Infof("Latest engine %s is already running", newEngine) return nil } log.Infof("Switching to engine %s", newEngine) } else { validateEngine(newEngine, cfg) } project, err := compose.GetProject(cfg, true, false) if err != nil { log.Fatal(err) } if err = project.Stop(context.Background(), 10, "docker"); err != nil { log.Fatal(err) } if err = compose.LoadSpecialService(project, cfg, "docker", newEngine); err != nil { log.Fatal(err) } if err = project.Up(context.Background(), options.Up{}, "docker"); err != nil { log.Fatal(err) } if err := config.Set("rancher.docker.engine", newEngine); err != nil { log.Errorf("Failed to update rancher.docker.engine: %v", err) } return nil } func engineCreate(c *cli.Context) error { name := c.Args()[0] version := c.String("version") sshPort, _ := strconv.Atoi(c.String("ssh-port")) if sshPort <= 0 { sshPort = randomSSHPort() } 
authorizedKeys := c.String("authorized-keys") network := c.String("network") fixedIP := c.String("fixed-ip") // generate & create engine compose err := generateEngineCompose(name, version, sshPort, authorizedKeys, network, fixedIP) if err != nil { return err } // stage engine service cfg := config.LoadConfig() var enabledServices []string if val, ok := cfg.Rancher.ServicesInclude[name]; !ok || !val { cfg.Rancher.ServicesInclude[name] = true enabledServices = append(enabledServices, name) } if len(enabledServices) > 0 { if err := compose.StageServices(cfg, enabledServices...); err != nil { log.Fatal(err) } if err := config.Set("rancher.services_include", cfg.Rancher.ServicesInclude); err != nil { log.Fatal(err) } } // generate engine script err = util.GenerateDindEngineScript(name) if err != nil { log.Fatal(err) } return nil } func dindEngineRemove(c *cli.Context) error { if !c.Bool("force") { if !yes("Continue") { return nil } } // app.ProjectDelete needs to use this flag // Allow deletion of the Dind engine c.Set("force", "true") // Remove volumes associated with the Dind engine container c.Set("v", "true") name := c.Args()[0] cfg := config.LoadConfig() p, err := compose.GetProject(cfg, true, false) if err != nil { log.Fatalf("Get project failed: %v", err) } // 1. service stop err = app.ProjectStop(p, c) if err != nil { log.Fatalf("Stop project service failed: %v", err) } // 2. service delete err = app.ProjectDelete(p, c) if err != nil { log.Fatalf("Delete project service failed: %v", err) } // 3. remove service from services_include config if _, ok := cfg.Rancher.ServicesInclude[name]; !ok { log.Fatalf("Failed to find enabled service %s", name) } delete(cfg.Rancher.ServicesInclude, name) if err = config.Set("rancher.services_include", cfg.Rancher.ServicesInclude); err != nil { log.Fatal(err) } // 4. remove service from file err = RemoveEngineFromCompose(name) if err != nil { log.Fatal(err) } // 5.
remove dind engine script err = util.RemoveDindEngineScript(name) if err != nil { return err } return nil } func engineEnable(c *cli.Context) error { if len(c.Args()) != 1 { log.Fatal("Must specify exactly one Docker engine to enable") } newEngine := c.Args()[0] cfg := config.LoadConfig() validateEngine(newEngine, cfg) if err := compose.StageServices(cfg, newEngine); err != nil { return err } if err := config.Set("rancher.docker.engine", newEngine); err != nil { log.Errorf("Failed to update 'rancher.docker.engine': %v", err) } return nil } func engineList(c *cli.Context) error { cfg := config.LoadConfig() engines := availableEngines(cfg, c.Bool("update")) currentEngine := CurrentEngine() i := 1 for _, engine := range engines { if engine == currentEngine { if i == len(engines) { fmt.Printf("current %s (latest)\n", engine) } else { fmt.Printf("current %s\n", engine) } } else if engine == cfg.Rancher.Docker.Engine { if i == len(engines) { fmt.Printf("enabled %s (latest)\n", engine) } else { fmt.Printf("enabled %s\n", engine) } } else { if i == len(engines) { fmt.Printf("disabled %s (latest)\n", engine) } else { fmt.Printf("disabled %s\n", engine) } } i++ } // check the dind container client, err := docker.NewSystemClient() if err != nil { log.Warnf("Failed to detect dind: %v", err) return nil } filter := filters.NewArgs() filter.Add("label", config.UserDockerLabel) opts := types.ContainerListOptions{ All: true, Filter: filter, } containers, err := client.ContainerList(context.Background(), opts) if err != nil { log.Warnf("Failed to detect dind: %v", err) return nil } for _, c := range containers { if c.State == "running" { fmt.Printf("enabled %s\n", c.Labels[config.UserDockerLabel]) } else { fmt.Printf("disabled %s\n", c.Labels[config.UserDockerLabel]) } } return nil } func validateEngine(engine string, cfg *config.CloudConfig) { engines := availableEngines(cfg, false) if !service.IsLocalOrURL(engine) && !util.Contains(engines, engine) { log.Fatalf("%s is not a valid 
engine", engine) } } func availableEngines(cfg *config.CloudConfig, update bool) []string { if update { err := network.UpdateCaches(cfg.Rancher.Repositories.ToArray(), "engines") if err != nil { log.Debugf("Failed to update engine caches: %v", err) } } engines, err := network.GetEngines(cfg.Rancher.Repositories.ToArray()) if err != nil { log.Fatal(err) } sort.Strings(engines) return engines } // CurrentEngine gets the name of the docker that's running func CurrentEngine() (engine string) { // sudo system-docker inspect --format "{{.Config.Image}}" docker client, err := docker.NewSystemClient() if err != nil { log.Warnf("Failed to detect current docker: %v", err) return } info, err := client.ContainerInspect(context.Background(), "docker") if err != nil { log.Warnf("Failed to detect current docker: %v", err) return } // parse image name, then remove os- prefix and the engine suffix image, err := reference.ParseNamed(info.Config.Image) if err != nil { log.Warnf("Failed to detect current docker(%s): %v", info.Config.Image, err) return } if t, ok := image.(reference.NamedTagged); ok { tag := t.Tag() // compatible with some patch image tags, such as 17.12.1-1,17.06.2-1,... 
		tag = strings.SplitN(tag, "-", 2)[0]
		if !strings.HasPrefix(tag, "1.") && versions.LessThan(tag, "18.09.0") {
			// >= 18.09.0, docker-<version>
			// < 18.09.0 and >= 17.03, docker-<version>-ce
			// < 17.03, docker-<version>
			tag = tag + "-ce"
		}
		return "docker-" + tag
	}
	return
}

func preFlightValidate(c *cli.Context) error {
	if len(c.Args()) != 1 {
		return errors.New("Must specify one engine name")
	}
	name := c.Args()[0]
	if name == "" {
		return errors.New("Must specify one engine name")
	}
	version := c.String("version")
	if version == "" {
		return errors.New("Must specify one engine version")
	}
	authorizedKeys := c.String("authorized-keys")
	if authorizedKeys != "" {
		if _, err := os.Stat(authorizedKeys); os.IsNotExist(err) {
			return errors.New("The authorized-keys must be an existing file; it is recommended to put it in the /opt or /var/lib/rancher directory")
		}
	}
	network := c.String("network")
	if network == "" {
		return errors.New("Must specify network")
	}
	userDefineNetwork, err := CheckUserDefineNetwork(network)
	if err != nil {
		return err
	}
	fixedIP := c.String("fixed-ip")
	if fixedIP == "" {
		return errors.New("Must specify fixed-ip")
	}
	err = CheckUserDefineIPv4Address(fixedIP, *userDefineNetwork)
	if err != nil {
		return err
	}
	isVersionMatch := false
	for _, v := range config.SupportedDinds {
		if v == version {
			isVersionMatch = true
			break
		}
	}
	if !isVersionMatch {
		return errors.Errorf("Engine version not supported, only %v are supported", config.SupportedDinds)
	}
	if c.String("ssh-port") != "" {
		port, err := strconv.Atoi(c.String("ssh-port"))
		if err != nil {
			return errors.Wrap(err, "Failed to convert ssh port to int")
		}
		if port > 0 {
			addr, err := net.ResolveTCPAddr("tcp", "localhost:"+strconv.Itoa(port))
			if err != nil {
				return errors.Errorf("Failed to resolve tcp addr: %v", err)
			}
			l, err := net.ListenTCP("tcp", addr)
			if err != nil {
				return errors.Errorf("Failed to listen tcp: %v", err)
			}
			defer l.Close()
		}
	}
	return nil
}

func randomSSHPort() int {
	addr, err := net.ResolveTCPAddr("tcp", "localhost:0")
	if err != nil {
log.Errorf("Failed to resolve tcp addr: %v", err) return 0 } l, err := net.ListenTCP("tcp", addr) if err != nil { return 0 } defer l.Close() return l.Addr().(*net.TCPAddr).Port } func generateEngineCompose(name, version string, sshPort int, authorizedKeys, network, fixedIP string) error { if err := os.MkdirAll(path.Dir(config.MultiDockerConfFile), 0700); err != nil && !os.IsExist(err) { log.Errorf("Failed to create directory for file %s: %v", config.MultiDockerConfFile, err) return err } composeConfigs := map[string]composeConfig.ServiceConfigV1{} if _, err := os.Stat(config.MultiDockerConfFile); err == nil { // read from engine compose bytes, err := ioutil.ReadFile(config.MultiDockerConfFile) if err != nil { return err } err = yaml.Unmarshal(bytes, &composeConfigs) if err != nil { return err } } if err := os.MkdirAll(config.MultiDockerDataDir+"/"+name, 0700); err != nil && !os.IsExist(err) { log.Errorf("Failed to create directory for file %s: %v", config.MultiDockerDataDir+"/"+name, err) return err } volumes := []string{ "/lib/modules:/lib/modules", config.MultiDockerDataDir + "/" + name + ":" + config.MultiDockerDataDir + "/" + name, } if authorizedKeys != "" { volumes = append(volumes, authorizedKeys+":/root/.ssh/authorized_keys") } composeConfigs[name] = composeConfig.ServiceConfigV1{ Image: "${REGISTRY_DOMAIN}/" + version, Restart: "always", Privileged: true, Net: network, Ports: []string{strconv.Itoa(sshPort) + ":22"}, Volumes: volumes, VolumesFrom: []string{}, Command: composeYaml.Command{ "--storage-driver=overlay2", "--data-root=" + config.MultiDockerDataDir + "/" + name, "--host=unix://" + config.MultiDockerDataDir + "/" + name + "/docker-" + name + ".sock", }, Labels: composeYaml.SliceorMap{ "io.rancher.os.scope": "system", "io.rancher.os.after": "console", config.UserDockerLabel: name, config.UserDockerNetLabel: network, config.UserDockerFIPLabel: fixedIP, }, } bytes, err := yaml.Marshal(composeConfigs) if err != nil { return err } return 
	ioutil.WriteFile(config.MultiDockerConfFile, bytes, 0640)
}

func RemoveEngineFromCompose(name string) error {
	composeConfigs := map[string]composeConfig.ServiceConfigV1{}
	if _, err := os.Stat(config.MultiDockerConfFile); err == nil {
		// read from engine compose
		bytes, err := ioutil.ReadFile(config.MultiDockerConfFile)
		if err != nil {
			return err
		}
		err = yaml.Unmarshal(bytes, &composeConfigs)
		if err != nil {
			return err
		}
	}
	delete(composeConfigs, name)
	bytes, err := yaml.Marshal(composeConfigs)
	if err != nil {
		return err
	}
	return ioutil.WriteFile(config.MultiDockerConfFile, bytes, 0640)
}

func CheckUserDefineNetwork(name string) (*types.NetworkResource, error) {
	systemClient, err := docker.NewSystemClient()
	if err != nil {
		return nil, err
	}
	networks, err := systemClient.NetworkList(context.Background(), types.NetworkListOptions{})
	if err != nil {
		return nil, err
	}
	for _, network := range networks {
		if network.Name == name {
			return &network, nil
		}
	}
	return nil, errors.Errorf("Failed to find the user-defined network: %s", name)
}

func CheckUserDefineIPv4Address(ipv4 string, network types.NetworkResource) error {
	for _, config := range network.IPAM.Config {
		_, ipnet, _ := net.ParseCIDR(config.Subnet)
		if ipnet.Contains(net.ParseIP(ipv4)) {
			return nil
		}
	}
	return errors.Errorf("IP %s is not in the specified CIDR", ipv4)
}

================================================
FILE: cmd/control/entrypoint.go
================================================

package control

import (
	"os"
	"os/exec"
	"syscall"

	"github.com/burmilla/os/cmd/cloudinitexecute"
	"github.com/burmilla/os/config"
	"github.com/burmilla/os/pkg/docker"
	"github.com/burmilla/os/pkg/log"
	"github.com/burmilla/os/pkg/util"

	"github.com/codegangsta/cli"
	"golang.org/x/net/context"
)

const (
	ca     = "/etc/ssl/certs/ca-certificates.crt"
	caBase = "/etc/ssl/certs/ca-certificates.crt.rancher"
)

func entrypointAction(c *cli.Context) error {
	if _, err := os.Stat("/host/dev"); err == nil {
		cmd := exec.Command("mount", "--rbind",
"/host/dev", "/dev") if err := cmd.Run(); err != nil { log.Errorf("Failed to mount /dev: %v", err) } } if err := util.FileCopy(caBase, ca); err != nil && !os.IsNotExist(err) { log.Error(err) } cfg := config.LoadConfig() shouldWriteFiles := false for _, file := range cfg.WriteFiles { if file.Container != "" { shouldWriteFiles = true } } if shouldWriteFiles { writeFiles(cfg) } setupCommandSymlinks() if len(os.Args) < 3 { return nil } binary, err := exec.LookPath(os.Args[2]) if err != nil { return err } return syscall.Exec(binary, os.Args[2:], os.Environ()) } func writeFiles(cfg *config.CloudConfig) error { id, err := util.GetCurrentContainerID() if err != nil { return err } client, err := docker.NewSystemClient() if err != nil { return err } info, err := client.ContainerInspect(context.Background(), id) if err != nil { return err } cloudinitexecute.WriteFiles(cfg, info.Name[1:]) return nil } func setupCommandSymlinks() { for _, link := range []symlink{ {config.RosBin, "/usr/bin/autologin"}, {config.RosBin, "/usr/bin/recovery"}, {config.RosBin, "/usr/bin/cloud-init-execute"}, {config.RosBin, "/usr/bin/cloud-init-save"}, {config.RosBin, "/usr/bin/dockerlaunch"}, {config.RosBin, "/usr/bin/respawn"}, {config.RosBin, "/usr/sbin/netconf"}, {config.RosBin, "/usr/sbin/wait-for-docker"}, {config.RosBin, "/usr/sbin/poweroff"}, {config.RosBin, "/usr/sbin/reboot"}, {config.RosBin, "/usr/sbin/halt"}, {config.RosBin, "/usr/sbin/shutdown"}, {config.RosBin, "/sbin/poweroff"}, {config.RosBin, "/sbin/reboot"}, {config.RosBin, "/sbin/halt"}, {config.RosBin, "/sbin/shutdown"}, } { os.Remove(link.newname) if err := os.Symlink(link.oldname, link.newname); err != nil { log.Error(err) } } } ================================================ FILE: cmd/control/env.go ================================================ package control import ( "log" "os" "os/exec" "syscall" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/util" "github.com/codegangsta/cli" ) func envAction(c 
*cli.Context) error { cfg := config.LoadConfig() args := c.Args() if len(args) == 0 { return nil } osEnv := os.Environ() envMap := make(map[string]string, len(cfg.Rancher.Environment)+len(osEnv)) for k, v := range cfg.Rancher.Environment { envMap[k] = v } for k, v := range util.KVPairs2Map(osEnv) { envMap[k] = v } if cmd, err := exec.LookPath(args[0]); err != nil { log.Fatal(err) } else { args[0] = cmd } if err := syscall.Exec(args[0], args, util.Map2KVPairs(envMap)); err != nil { log.Fatal(err) } return nil } ================================================ FILE: cmd/control/install/grub.go ================================================ package install import ( "html/template" "os" "os/exec" "path/filepath" "github.com/burmilla/os/pkg/log" ) func RunGrub(baseName, device string) error { log.Debugf("installGrub") //grub-install --boot-directory=${baseName}/boot ${device} cmd := exec.Command("grub-install", "--boot-directory="+baseName+"/boot", device) if err := cmd.Run(); err != nil { log.Errorf("%s", err) return err } return nil } func grubConfig(menu BootVars) error { log.Debugf("grubConfig") filetmpl, err := template.New("grub2config").Parse(`{{define "grub2menu"}}menuentry "{{.Name}}" { set root=(hd0,msdos1) linux /{{.bootDir}}vmlinuz-{{.Version}}-rancheros {{.KernelArgs}} {{.Append}} initrd /{{.bootDir}}initrd-{{.Version}}-rancheros } {{end}} set default="0" set timeout="{{.Timeout}}" {{if .Fallback}}set fallback={{.Fallback}}{{end}} {{- range .Entries}} {{template "grub2menu" .}} {{- end}} `) if err != nil { log.Errorf("grub2config %s", err) return err } cfgFile := filepath.Join(menu.BaseName, menu.BootDir+"grub/grub.cfg") log.Debugf("grubConfig written to %s", cfgFile) f, err := os.Create(cfgFile) if err != nil { return err } err = filetmpl.Execute(f, menu) if err != nil { return err } return nil } func PvGrubConfig(menu BootVars) error { log.Debugf("pvGrubConfig") filetmpl, err := template.New("grublst").Parse(`{{define "grubmenu"}} title BurmillaOS 
{{.Version}}-({{.Name}}) root (hd0) kernel /${bootDir}vmlinuz-{{.Version}}-rancheros {{.KernelArgs}} {{.Append}} initrd /${bootDir}initrd-{{.Version}}-rancheros {{end}} default 0 timeout {{.Timeout}} {{if .Fallback}}fallback {{.Fallback}}{{end}} hiddenmenu {{- range .Entries}} {{template "grubmenu" .}} {{- end}} `) if err != nil { log.Errorf("pv grublst: %s", err) return err } cfgFile := filepath.Join(menu.BaseName, menu.BootDir+"grub/menu.lst") log.Debugf("grubMenu written to %s", cfgFile) f, err := os.Create(cfgFile) if err != nil { log.Errorf("Create(%s) %s", cfgFile, err) return err } err = filetmpl.Execute(f, menu) if err != nil { log.Errorf("execute %s", err) return err } return nil } ================================================ FILE: cmd/control/install/install.go ================================================ package install import ( "os" "os/exec" "path/filepath" "strings" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/util" ) type MenuEntry struct { Name, BootDir, Version, KernelArgs, Append string } type BootVars struct { BaseName, BootDir string Timeout uint Fallback int Entries []MenuEntry } func MountDevice(baseName, device, partition string, raw bool) (string, string, error) { log.Debugf("mountdevice %s, raw %v", partition, raw) if partition == "" { if raw { log.Debugf("util.Mount (raw) %s, %s", partition, baseName) cmd := exec.Command("lsblk", "-no", "pkname", partition) log.Debugf("Run(%v)", cmd) cmd.Stderr = os.Stderr device := "" // TODO: out can == "" - this is used to "detect software RAID" which is terrible if out, err := cmd.Output(); err == nil { device = "/dev/" + strings.TrimSpace(string(out)) } log.Debugf("mountdevice return -> d: %s, p: %s", device, partition) return device, partition, util.Mount(partition, baseName, "", "") } //rootfs := partition // Don't use ResolveDevice - it can fail, whereas `blkid -L LABEL` works more often d, _, err := util.Blkid("RANCHER_BOOT") if err != nil { 
log.Errorf("Failed to run blkid: %s", err) } if d != "" { partition = d baseName = filepath.Join(baseName, config.BootDir) } else { partition = GetStatePartition() } cmd := exec.Command("lsblk", "-no", "pkname", partition) log.Debugf("Run(%v)", cmd) cmd.Stderr = os.Stderr // TODO: out can == "" - this is used to "detect software RAID" which is terrible if out, err := cmd.Output(); err == nil { device = "/dev/" + strings.TrimSpace(string(out)) } } os.MkdirAll(baseName, 0755) cmd := exec.Command("mount", partition, baseName) //cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr log.Debugf("mountdevice return2 -> d: %s, p: %s", device, partition) return device, partition, cmd.Run() } func GetStatePartition() string { cfg := config.LoadConfig() if dev := util.ResolveDevice(cfg.Rancher.State.Dev); dev != "" { // try the rancher.state.dev setting return dev } d, _, err := util.Blkid("RANCHER_STATE") if err != nil { log.Errorf("Failed to run blkid: %s", err) } return d } func GetDefaultPartition(device string) string { if strings.Contains(device, "nvme") { return device + "p1" } return device + "1" } ================================================ FILE: cmd/control/install/service.go ================================================ package install import ( "io/ioutil" "os" "strings" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/util" "github.com/burmilla/os/pkg/util/network" yaml "github.com/cloudfoundry-incubator/candiedyaml" ) type ImageConfig struct { Image string `yaml:"image,omitempty"` } func GetCacheImageList(cloudconfig string, oldcfg *config.CloudConfig) []string { savedImages := make([]string, 0) bytes, err := readConfigFile(cloudconfig) if err != nil { log.WithFields(log.Fields{"err": err}).Fatal("Failed to read cloud-config") return savedImages } r := make(map[interface{}]interface{}) if err := yaml.Unmarshal(bytes, &r); err != nil { log.WithFields(log.Fields{"err": err}).Fatal("Failed to unmarshal cloud-config") 
return savedImages } newcfg := &config.CloudConfig{} if err := util.Convert(r, newcfg); err != nil { log.WithFields(log.Fields{"err": err}).Fatal("Failed to convert cloud-config") return savedImages } // services_include for key, value := range newcfg.Rancher.ServicesInclude { if value { serviceImage := getServiceImage(key, "", oldcfg, newcfg) if serviceImage != "" { savedImages = append(savedImages, serviceImage) } } } // console newConsole := newcfg.Rancher.Console if newConsole != "" && newConsole != "default" { consoleImage := getServiceImage(newConsole, "console", oldcfg, newcfg) if consoleImage != "" { savedImages = append(savedImages, consoleImage) } } // docker engine newEngine := newcfg.Rancher.Docker.Engine if newEngine != "" && newEngine != oldcfg.Rancher.Docker.Engine { engineImage := getServiceImage(newEngine, "docker", oldcfg, newcfg) if engineImage != "" { savedImages = append(savedImages, engineImage) } } return savedImages } func getServiceImage(service, svctype string, oldcfg, newcfg *config.CloudConfig) string { var ( serviceImage string bytes []byte err error ) if len(newcfg.Rancher.Repositories.ToArray()) > 0 { bytes, err = network.LoadServiceResource(service, true, newcfg) } else { bytes, err = network.LoadServiceResource(service, true, oldcfg) } if err != nil { log.WithFields(log.Fields{"err": err}).Fatal("Failed to load service resource") return serviceImage } imageConfig := map[interface{}]ImageConfig{} if err = yaml.Unmarshal(bytes, &imageConfig); err != nil { log.WithFields(log.Fields{"err": err}).Fatal("Failed to unmarshal service") return serviceImage } switch svctype { case "console": serviceImage = formatImage(imageConfig["console"].Image, oldcfg, newcfg) case "docker": serviceImage = formatImage(imageConfig["docker"].Image, oldcfg, newcfg) default: serviceImage = formatImage(imageConfig[service].Image, oldcfg, newcfg) } return serviceImage } func RunCacheScript(partition string, images []string) error { return 
util.RunScript("/scripts/cache-services.sh", partition, strings.Join(images, " ")) } func readConfigFile(file string) ([]byte, error) { content, err := ioutil.ReadFile(file) if err != nil { if os.IsNotExist(err) { err = nil content = []byte{} } else { return nil, err } } return content, err } func formatImage(image string, oldcfg, newcfg *config.CloudConfig) string { registryDomain := newcfg.Rancher.Environment["REGISTRY_DOMAIN"] if registryDomain == "" { registryDomain = oldcfg.Rancher.Environment["REGISTRY_DOMAIN"] } image = strings.Replace(image, "${REGISTRY_DOMAIN}", registryDomain, -1) image = strings.Replace(image, "${SUFFIX}", config.Suffix, -1) return image } ================================================ FILE: cmd/control/install/syslinux.go ================================================ package install import ( "bufio" "bytes" "html/template" "io/ioutil" "os" "path/filepath" "strings" "github.com/burmilla/os/pkg/log" ) func syslinuxConfig(menu BootVars) error { log.Debugf("syslinuxConfig") filetmpl, err := template.New("syslinuxconfig").Parse(`{{define "syslinuxmenu"}} LABEL {{.Name}} LINUX ../vmlinuz-{{.Version}}-rancheros APPEND {{.KernelArgs}} {{.Append}} INITRD ../initrd-{{.Version}}-rancheros {{end}} TIMEOUT 20 #2 seconds DEFAULT BurmillaOS-current {{- range .Entries}} {{template "syslinuxmenu" .}} {{- end}} `) if err != nil { log.Errorf("syslinuxconfig %s", err) return err } cfgFile := filepath.Join(menu.BaseName, menu.BootDir+"syslinux/syslinux.cfg") log.Debugf("syslinuxConfig written to %s", cfgFile) f, err := os.Create(cfgFile) if err != nil { log.Errorf("Create(%s) %s", cfgFile, err) return err } err = filetmpl.Execute(f, menu) if err != nil { return err } return nil } func ReadGlobalCfg(globalCfg string) (string, error) { append := "" buf, err := ioutil.ReadFile(globalCfg) if err != nil { return append, err } s := bufio.NewScanner(bytes.NewReader(buf)) for s.Scan() { line := strings.TrimSpace(s.Text()) if strings.HasPrefix(line, "APPEND") { 
append = strings.TrimSpace(strings.TrimPrefix(line, "APPEND")) } } return append, nil } func ReadSyslinuxCfg(currentCfg string) (string, string, error) { vmlinuzFile := "" initrdFile := "" // Need to parse currentCfg for the lines: // KERNEL ../vmlinuz-4.9.18-rancher^M // INITRD ../initrd-41e02e6-dirty^M buf, err := ioutil.ReadFile(currentCfg) if err != nil { return vmlinuzFile, initrdFile, err } DIST := filepath.Dir(currentCfg) s := bufio.NewScanner(bytes.NewReader(buf)) for s.Scan() { line := strings.TrimSpace(s.Text()) if strings.HasPrefix(line, "KERNEL") { vmlinuzFile = strings.TrimSpace(strings.TrimPrefix(line, "KERNEL")) vmlinuzFile = filepath.Join(DIST, filepath.Base(vmlinuzFile)) } if strings.HasPrefix(line, "INITRD") { initrdFile = strings.TrimSpace(strings.TrimPrefix(line, "INITRD")) initrdFile = filepath.Join(DIST, filepath.Base(initrdFile)) } } return vmlinuzFile, initrdFile, err } ================================================ FILE: cmd/control/install.go ================================================ package control import ( "bufio" "bytes" "crypto/md5" "fmt" "io" "io/ioutil" "os" "os/exec" "path/filepath" "runtime" "strings" "github.com/burmilla/os/cmd/control/install" "github.com/burmilla/os/cmd/power" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/dfs" // TODO: move CopyFile into util or something. "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/util" "github.com/codegangsta/cli" "github.com/pkg/errors" ) var installCommand = cli.Command{ Name: "install", Usage: "install BurmillaOS to disk", HideHelp: true, Action: installAction, Flags: []cli.Flag{ cli.StringFlag{ // TODO: need to validate ? -i burmilla/os:v0.3.1 just sat there. 
Name: "image, i", Usage: `install from a certain image (e.g., 'rancher/os:v0.7.0') use 'ros os list' to see what versions are available.`, }, cli.StringFlag{ Name: "install-type, t", Usage: `generic: (Default) Creates 1 ext4 partition and installs BurmillaOS (syslinux) amazon-ebs: Installs BurmillaOS and sets up PV-GRUB gptsyslinux: partition and format disk (gpt), then install BurmillaOS and setup Syslinux `, }, cli.StringFlag{ Name: "cloud-config, c", Usage: "cloud-config yml file - needed for SSH authorized keys", }, cli.StringFlag{ Name: "device, d", Usage: "storage device", }, cli.StringFlag{ Name: "partition, p", Usage: "partition to install to", }, cli.StringFlag{ Name: "statedir", Usage: "install to rancher.state.directory", }, cli.BoolFlag{ Name: "force, f", Usage: "[ DANGEROUS! Data loss can happen ] partition/format without prompting", }, cli.BoolFlag{ Name: "no-reboot", Usage: "do not reboot after install", }, cli.StringFlag{ Name: "append, a", Usage: "append additional kernel parameters", }, cli.StringFlag{ Name: "rollback, r", Usage: "rollback version", Hidden: true, }, cli.BoolFlag{ Name: "isoinstallerloaded", Usage: "INTERNAL use only: mount the iso to get kernel and initrd", Hidden: true, }, cli.BoolFlag{ Name: "kexec, k", Usage: "reboot using kexec", }, cli.BoolFlag{ Name: "save, s", Usage: "save services and images for next booting", }, cli.BoolFlag{ Name: "debug", Usage: "Run installer with debug output", }, }, } func installAction(c *cli.Context) error { log.InitLogger() debug := c.Bool("debug") if debug { log.Info("Log level is debug") originalLevel := log.GetLevel() defer log.SetLevel(originalLevel) log.SetLevel(log.DebugLevel) } if runtime.GOARCH != "amd64" { log.Fatalf("ros install / upgrade only supported on 'amd64', not '%s'", runtime.GOARCH) } if c.Args().Present() { log.Fatalf("invalid arguments %v", c.Args()) } kappend := strings.TrimSpace(c.String("append")) force := c.Bool("force") kexec := c.Bool("kexec") reboot := 
!c.Bool("no-reboot")
	isoinstallerloaded := c.Bool("isoinstallerloaded")
	image := c.String("image")
	cfg := config.LoadConfig()
	if image == "" {
		image = fmt.Sprintf("%s:%s%s", cfg.Rancher.Upgrade.Image, config.Version, config.Suffix)
		image = formatImage(image, cfg)
	}
	installType := c.String("install-type")
	if installType == "" {
		log.Info("No install type specified...defaulting to generic")
		installType = "generic"
	}
	if installType == "rancher-upgrade" || installType == "upgrade" {
		installType = "upgrade" // rancher-upgrade is redundant!
		force = true            // the os.go upgrade code already asks
		reboot = false
		isoinstallerloaded = true // OMG this flag is awful - kill it with fire
	}
	device := c.String("device")
	partition := c.String("partition")
	statedir := c.String("statedir")
	if statedir != "" && installType != "noformat" {
		log.Fatalf("--statedir %s requires --install-type noformat", statedir)
	}
	if installType != "noformat" &&
		installType != "raid" &&
		installType != "bootstrap" &&
		installType != "upgrade" {
		// These can use RANCHER_BOOT or RANCHER_STATE labels..
		if device == "" {
			log.Fatal("Cannot proceed without -d specified")
		}
	}
	cloudConfig := c.String("cloud-config")
	if cloudConfig == "" {
		if installType != "upgrade" {
			// TODO: I wonder if it's plausible to merge a new cloud-config into an existing one on upgrade - so for now, I'm only turning off the warning
			log.Warn("Cloud-config not provided: you might need to provide cloud-config on boot with ssh_authorized_keys")
		}
	} else {
		os.MkdirAll("/opt", 0755)
		uc := "/opt/user_config.yml"
		if strings.HasPrefix(cloudConfig, "http://") || strings.HasPrefix(cloudConfig, "https://") {
			if err := util.HTTPDownloadToFile(cloudConfig, uc); err != nil {
				log.WithFields(log.Fields{"cloudConfig": cloudConfig, "error": err}).Fatal("Failed to http get cloud-config")
			}
		} else {
			if err := util.FileCopy(cloudConfig, uc); err != nil {
				log.WithFields(log.Fields{"cloudConfig": cloudConfig, "error": err}).Fatal("Failed to copy cloud-config")
			}
		}
		cloudConfig = uc
	}
	savedImages := []string{}
	if c.Bool("save") && cloudConfig != "" && installType != "upgrade" {
		savedImages = install.GetCacheImageList(cloudConfig, cfg)
		log.Debugf("Will cache these images: %s", savedImages)
	}
	if err := runInstall(image, installType, cloudConfig, device, partition, statedir, kappend, force, kexec, isoinstallerloaded, debug, savedImages); err != nil {
		log.WithFields(log.Fields{"err": err}).Fatal("Failed to run install")
		return err
	}
	if !kexec && reboot && (force || yes("Continue with reboot")) {
		log.Info("Rebooting")
		power.Reboot()
	}
	return nil
}

func runInstall(image, installType, cloudConfig, device, partition, statedir, kappend string, force, kexec, isoinstallerloaded, debug bool, savedImages []string) error {
	fmt.Printf("Installing from %s\n", image)
	if !force {
		if util.IsRunningInTty() && !yes("Continue") {
			log.Infof("Not continuing with installation due to user not saying 'yes'")
			os.Exit(1)
		}
	}
	useIso := false
	// --isoinstallerloaded is used if the ros has created the installer container from an image that was on
the booted iso if !isoinstallerloaded { log.Infof("start !isoinstallerloaded") if _, err := os.Stat("/dist/initrd-" + config.Version); os.IsNotExist(err) { deviceName, deviceType, err := getBootIso() if err != nil { log.Errorf("Failed to get boot iso: %v", err) fmt.Println("There is no boot iso drive, terminate the task") return err } if err = mountBootIso(deviceName, deviceType); err != nil { log.Debugf("Failed to mountBootIso: %v", err) } else { log.Infof("trying to load /bootiso/rancheros/installer.tar.gz") if _, err := os.Stat("/bootiso/rancheros/"); err == nil { cmd := exec.Command("system-docker", "load", "-i", "/bootiso/rancheros/installer.tar.gz") cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr if err := cmd.Run(); err != nil { log.Infof("failed to load images from /bootiso/rancheros: %v", err) } else { log.Infof("Loaded images from /bootiso/rancheros/installer.tar.gz") //TODO: add if os-installer:latest exists - we might have loaded a full installer? useIso = true // now use the installer image cfg := config.LoadConfig() if image == cfg.Rancher.Upgrade.Image+":"+config.Version+config.Suffix { // TODO: fix the fullinstaller Dockerfile to use the ${VERSION}${SUFFIX} image = cfg.Rancher.Upgrade.Image + "-installer" + ":latest" } } } // TODO: also poke around looking for the /boot/vmlinuz and initrd... 
} log.Infof("starting installer container for %s (new)", image) installerCmd := []string{ "run", "--rm", "--net=host", "--privileged", // bind mount host fs to access its ros, vmlinuz, initrd and /dev (udev isn't running in container) "-v", "/:/host", "--volumes-from=all-volumes", image, // "install", "-t", installType, "-d", device, "-i", image, // TODO: this isn't used - I'm just using it to over-ride the defaulting } // Need to call the inner container with force - the outer one does the "are you sure" installerCmd = append(installerCmd, "-f") // The outer container does the reboot (if needed) installerCmd = append(installerCmd, "--no-reboot") if cloudConfig != "" { installerCmd = append(installerCmd, "-c", cloudConfig) } if kappend != "" { installerCmd = append(installerCmd, "-a", kappend) } if useIso { installerCmd = append(installerCmd, "--isoinstallerloaded=1") } if kexec { installerCmd = append(installerCmd, "--kexec") } if debug { installerCmd = append(installerCmd, "--debug") } if partition != "" { installerCmd = append(installerCmd, "--partition", partition) } if statedir != "" { installerCmd = append(installerCmd, "--statedir", statedir) } if len(savedImages) > 0 { installerCmd = append(installerCmd, "--save") } // TODO: mount at /mnt for shared mount? if useIso { util.Unmount("/bootiso") } cmd := exec.Command("system-docker", installerCmd...) 
log.Debugf("Run(%v)", cmd) cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr return cmd.Run() } } log.Debugf("running installation") if partition == "" { if installType == "generic" || installType == "syslinux" || installType == "gptsyslinux" { diskType := "msdos" if installType == "gptsyslinux" { diskType = "gpt" } log.Debugf("running setDiskpartitions") err := setDiskpartitions(device, diskType) if err != nil { log.Errorf("error setDiskpartitions %s", err) return err } // use the bind mounted host filesystem to get access to the /dev/vda1 device that udev on the host sets up (TODO: can we run a udevd inside the container? `mknod b 253 1 /dev/vda1` doesn't work) device = "/host" + device //# TODO: Change this to a number so that users can specify. //# Will need to make it so that our builds and packer APIs remain consistent. partition = install.GetDefaultPartition(device) } } if installType == "upgrade" { isoinstallerloaded = false } if isoinstallerloaded { log.Debugf("running isoinstallerloaded...") // TODO: detect if its not mounted and then optionally mount? 
		deviceName, deviceType, err := getBootIso()
		if err != nil {
			log.Errorf("Failed to get boot iso: %v", err)
			fmt.Println("There is no boot iso drive, terminate the task")
			return err
		}
		if err := mountBootIso(deviceName, deviceType); err != nil {
			log.Errorf("error mountBootIso %s", err)
			//return err
		}
	}
	err := layDownOS(image, installType, cloudConfig, device, partition, statedir, kappend, kexec)
	if err != nil {
		log.Errorf("error layDownOS %s", err)
		return err
	}
	if len(savedImages) > 0 {
		return install.RunCacheScript(partition, savedImages)
	}
	return nil
}

func getDeviceByLabel(label string) (string, string) {
	d, t, err := util.Blkid(label)
	if err != nil {
		log.Warnf("Failed to run blkid for %s", label)
		return "", ""
	}
	return d, t
}

func getBootIso() (string, string, error) {
	deviceName := "/dev/sr0"
	deviceType := "iso9660"
	// Our ISO LABEL is RancherOS
	// But some tools (like Rufus) will change the LABEL to RANCHEROS
	for _, label := range []string{"RancherOS", "RANCHEROS"} {
		d, t := getDeviceByLabel(label)
		if d != "" {
			deviceName = d
			deviceType = t
			continue
		}
	}
	// Check whether the sr device exists
	if _, err := os.Stat(deviceName); os.IsNotExist(err) {
		return "", "", err
	}
	return deviceName, deviceType, nil
}

func mountBootIso(deviceName, deviceType string) error {
	mountsFile, err := os.Open("/proc/mounts")
	if err != nil {
		return errors.Wrap(err, "Failed to read /proc/mounts")
	}
	defer mountsFile.Close()
	if partitionMounted(deviceName, mountsFile) {
		return nil
	}
	os.MkdirAll("/bootiso", 0755)
	cmd := exec.Command("mount", "-t", deviceType, deviceName, "/bootiso")
	log.Debugf("mount (%#v)", cmd)
	var outBuf, errBuf bytes.Buffer
	cmd.Stdout = &outBuf
	cmd.Stderr = &errBuf
	err = cmd.Run()
	if err != nil {
		return errors.Wrapf(err, "Tried and failed to mount %s: stderr output: %s", deviceName, errBuf.String())
	}
	log.Debugf("Mounted %s, output: %s", deviceName, outBuf.String())
	return nil
}

func layDownOS(image, installType, cloudConfig, device, partition, statedir, kappend string, kexec bool)
error { // ENV == installType //[[ "$ARCH" == "arm" && "$ENV" != "upgrade" ]] && ENV=arm // image == burmilla/os:v0.7.0_arm // TODO: remove the _arm suffix (but watch out, its not always there..) VERSION := image[strings.Index(image, ":")+1:] var FILES []string DIST := "/dist" //${DIST:-/dist} //cloudConfig := SCRIPTS_DIR + "/conf/empty.yml" //${cloudConfig:-"${SCRIPTS_DIR}/conf/empty.yml"} CONSOLE := "tty0" baseName := "/mnt/new_img" kernelArgs := "printk.devkmsg=on rancher.state.dev=LABEL=RANCHER_STATE rancher.state.wait transparent_hugepage=madvise iommu=pt intel_iommu=off scsi_mod.use_blk_mq=1 apparmor=1 security=apparmor panic=10" // console="+CONSOLE if statedir != "" { kernelArgs = kernelArgs + " rancher.state.directory=" + statedir } // unmount on trap defer util.Unmount(baseName) diskType := "msdos" if installType == "gptsyslinux" { diskType = "gpt" } switch installType { case "syslinux": fallthrough case "gptsyslinux": fallthrough case "generic": log.Debugf("formatAndMount") var err error device, _, err = formatAndMount(baseName, device, partition) if err != nil { log.Errorf("formatAndMount %s", err) return err } err = installSyslinux(device, baseName, diskType) if err != nil { log.Errorf("installSyslinux %s", err) return err } err = seedData(baseName, cloudConfig, FILES) if err != nil { log.Errorf("seedData %s", err) return err } case "arm": var err error _, _, err = formatAndMount(baseName, device, partition) if err != nil { return err } seedData(baseName, cloudConfig, FILES) case "amazon-ebs-pv": fallthrough case "amazon-ebs-hvm": CONSOLE = "ttyS0" var err error device, _, err = formatAndMount(baseName, device, partition) if err != nil { return err } if installType == "amazon-ebs-hvm" { installSyslinux(device, baseName, diskType) } //# AWS Networking recommends disabling. 
seedData(baseName, cloudConfig, FILES) case "googlecompute": CONSOLE = "ttyS0" var err error device, _, err = formatAndMount(baseName, device, partition) if err != nil { return err } installSyslinux(device, baseName, diskType) seedData(baseName, cloudConfig, FILES) case "noformat": var err error device, _, err = install.MountDevice(baseName, device, partition, false) if err != nil { return err } installSyslinux(device, baseName, diskType) if err := os.MkdirAll(filepath.Join(baseName, statedir), 0755); err != nil { return err } err = seedData(baseName, cloudConfig, FILES) if err != nil { log.Errorf("seedData %s", err) return err } case "raid": var err error device, _, err = install.MountDevice(baseName, device, partition, false) if err != nil { return err } installSyslinux(device, baseName, diskType) case "bootstrap": CONSOLE = "ttyS0" var err error _, _, err = install.MountDevice(baseName, device, partition, true) if err != nil { return err } kernelArgs = kernelArgs + " rancher.cloud_init.datasources=[ec2,gce]" case "rancher-upgrade": installType = "upgrade" // rancher-upgrade is redundant fallthrough case "upgrade": var err error device, _, err = install.MountDevice(baseName, device, partition, false) if err != nil { return err } log.Debugf("upgrading - %s, %s, %s", device, baseName, diskType) // TODO: detect pv-grub, and don't kill it with syslinux upgradeBootloader(device, baseName, diskType) default: return fmt.Errorf("unexpected install type %s", installType) } kernelArgs = kernelArgs + " console=" + CONSOLE if kappend == "" { preservedAppend, _ := ioutil.ReadFile(filepath.Join(baseName, config.BootDir, "append")) kappend = string(preservedAppend) } else { ioutil.WriteFile(filepath.Join(baseName, config.BootDir, "append"), []byte(kappend), 0644) } if installType == "amazon-ebs-pv" { menu := install.BootVars{ BaseName: baseName, BootDir: config.BootDir, Timeout: 0, Fallback: 0, // need to be conditional on there being a 'rollback'? 
Entries: []install.MenuEntry{ install.MenuEntry{ Name: "BurmillaOS-current", BootDir: config.BootDir, Version: VERSION, KernelArgs: kernelArgs, Append: kappend, }, }, } install.PvGrubConfig(menu) } log.Debugf("installRancher") _, err := installRancher(baseName, VERSION, DIST, kernelArgs+" "+kappend) if err != nil { log.Errorf("%s", err) return err } log.Debugf("installRancher done") if kexec { power.Kexec(false, filepath.Join(baseName, config.BootDir), kernelArgs+" "+kappend) } return nil } // files is an array of 'sourcefile:destination' - but i've not seen any examples of it being used. func seedData(baseName, cloudData string, files []string) error { log.Debugf("seedData") _, err := os.Stat(baseName) if err != nil { return err } stateSeedDir := "state_seed" cloudConfigBase := "/var/lib/rancher/conf/cloud-config.d" cloudConfigDir := "" // If there is a separate boot partition, cloud-config should be written to RANCHER_STATE partition. bootPartition, _, err := util.Blkid("RANCHER_BOOT") if err != nil { log.Errorf("Failed to run blkid: %s", err) } if bootPartition != "" { stateSeedFullPath := filepath.Join(baseName, stateSeedDir) if err = os.MkdirAll(stateSeedFullPath, 0700); err != nil { return err } defer util.Unmount(stateSeedFullPath) statePartition := install.GetStatePartition() cmd := exec.Command("mount", statePartition, stateSeedFullPath) //cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr log.Debugf("seedData: mount %s to %s", statePartition, stateSeedFullPath) if err = cmd.Run(); err != nil { return err } cloudConfigDir = filepath.Join(baseName, stateSeedDir, cloudConfigBase) } else { cloudConfigDir = filepath.Join(baseName, cloudConfigBase) } if err = os.MkdirAll(cloudConfigDir, 0700); err != nil { return err } if !strings.HasSuffix(cloudData, "empty.yml") { if err = dfs.CopyFile(cloudData, cloudConfigDir, filepath.Base(cloudData)); err != nil { return err } } for _, f := range files { e := strings.Split(f, ":") if err = dfs.CopyFile(e[0], baseName, e[1]); 
err != nil { return err } } return nil } // set-disk-partitions is called with device == **/dev/sda** func setDiskpartitions(device, diskType string) error { log.Debugf("setDiskpartitions") d := strings.Split(device, "/") if len(d) != 3 { return fmt.Errorf("bad device name (%s)", device) } deviceName := d[2] file, err := os.Open("/proc/partitions") if err != nil { log.Debugf("failed to read /proc/partitions %s", err) return err } defer file.Close() exists := false haspartitions := false scanner := bufio.NewScanner(file) for scanner.Scan() { str := scanner.Text() last := strings.LastIndex(str, " ") if last > -1 { dev := str[last+1:] if strings.HasPrefix(dev, deviceName) { if dev == deviceName { exists = true } else { haspartitions = true } } } } if !exists { return fmt.Errorf("disk %s not found", device) } if haspartitions { log.Debugf("device %s already partitioned - checking if any are mounted", device) file, err := os.Open("/proc/mounts") if err != nil { log.Errorf("failed to read /proc/mounts %s", err) return err } defer file.Close() if partitionMounted(device, file) { err = fmt.Errorf("partition %s mounted, cannot repartition", device) log.Errorf("%s", err) return err } cmd := exec.Command("system-docker", "ps", "-q") var outb bytes.Buffer cmd.Stdout = &outb if err := cmd.Run(); err != nil { log.Printf("ps error: %s", err) return err } for _, image := range strings.Split(outb.String(), "\n") { if image == "" { continue } r, w := io.Pipe() go func() { // TODO: consider a timeout // TODO: some of these containers don't have cat / shell cmd := exec.Command("system-docker", "exec", image, "cat", "/proc/mounts") cmd.Stdout = w if err := cmd.Run(); err != nil { log.Debugf("%s cat %s", image, err) } w.Close() }() if partitionMounted(device, r) { err = fmt.Errorf("partition %s mounted in %s, cannot repartition", device, image) log.Errorf("%s", err) return err } } } //do it! 
log.Debugf("running dd device: %s", device) cmd := exec.Command("dd", "if=/dev/zero", "of="+device, "bs=512", "count=2048") //cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr if err := cmd.Run(); err != nil { log.Errorf("dd error %s", err) return err } log.Debugf("running partprobe: %s", device) cmd = exec.Command("partprobe", device) //cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr if err := cmd.Run(); err != nil { log.Errorf("Failed to partprobe device %s: %v", device, err) return err } log.Debugf("making single RANCHER_STATE partition, device: %s", device) cmd = exec.Command("parted", "-s", "-a", "optimal", device, "mklabel "+diskType, "--", "mkpart primary ext4 1 -1") cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr if err := cmd.Run(); err != nil { log.Errorf("Failed to parted device %s: %v", device, err) return err } return setBootable(device, diskType) } func partitionMounted(device string, file io.Reader) bool { scanner := bufio.NewScanner(file) for scanner.Scan() { str := scanner.Text() // /dev/sdb1 /data ext4 rw,relatime,errors=remount-ro,data=ordered 0 0 ele := strings.Split(str, " ") if len(ele) > 5 { if strings.HasPrefix(ele[0], device) { return true } } } if err := scanner.Err(); err != nil { log.Errorf("scanner %s", err) } return false } func formatdevice(device, partition string) error { log.Debugf("formatdevice %s", partition) //mkfs.ext4 -F -i 4096 -L RANCHER_STATE ${partition} // -O ^64bit: for syslinux: http://www.syslinux.org/wiki/index.php?title=Filesystem#ext cmd := exec.Command("mkfs.ext4", "-F", "-i", "4096", "-O", "^64bit", "-L", "RANCHER_STATE", partition) log.Debugf("Run(%v)", cmd) cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr if err := cmd.Run(); err != nil { log.Errorf("mkfs.ext4: %s", err) return err } return nil } func formatAndMount(baseName, device, partition string) (string, string, error) { log.Debugf("formatAndMount") err := formatdevice(device, partition) if err != nil { log.Errorf("formatdevice %s", err) return device, 
partition, err } device, partition, err = install.MountDevice(baseName, device, partition, false) if err != nil { log.Errorf("mountdevice %s", err) return device, partition, err } return device, partition, nil } func setBootable(device, diskType string) error { // TODO make conditional - if there is a bootable device already, don't break it // TODO: make RANCHER_BOOT bootable - it might not be device 1 bootflag := "boot" if diskType == "gpt" { bootflag = "legacy_boot" } log.Debugf("making device 1 on %s bootable as %s", device, diskType) cmd := exec.Command("parted", "-s", "-a", "optimal", device, "set 1 "+bootflag+" on") cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr if err := cmd.Run(); err != nil { log.Errorf("parted: %s", err) return err } return nil } func upgradeBootloader(device, baseName, diskType string) error { log.Debugf("start upgradeBootloader") grubDir := filepath.Join(baseName, config.BootDir, "grub") if _, err := os.Stat(grubDir); os.IsNotExist(err) { log.Debugf("%s does not exist - no need to upgrade bootloader", grubDir) // we've already upgraded // TODO: in v0.9.0, need to detect what version syslinux we have return nil } // deal with systems which were previously upgraded, then rolled back, and are now being re-upgraded grubBackup := filepath.Join(baseName, config.BootDir, "grub_backup") if err := os.RemoveAll(grubBackup); err != nil { log.Errorf("RemoveAll (%s): %s", grubBackup, err) return err } backupSyslinuxDir := filepath.Join(baseName, config.BootDir, "syslinux_backup") if _, err := os.Stat(backupSyslinuxDir); !os.IsNotExist(err) { backupSyslinuxLdlinuxSys := filepath.Join(backupSyslinuxDir, "ldlinux.sys") if _, err := os.Stat(backupSyslinuxLdlinuxSys); !os.IsNotExist(err) { //need a privileged container that can chattr -i ldlinux.sys cmd := exec.Command("chattr", "-i", backupSyslinuxLdlinuxSys) if err := cmd.Run(); err != nil { log.Errorf("%s", err) return err } } if err := os.RemoveAll(backupSyslinuxDir); err != nil { 
log.Errorf("RemoveAll (%s): %s", backupSyslinuxDir, err) return err } } if err := os.Rename(grubDir, grubBackup); err != nil { log.Errorf("Rename(%s): %s", grubDir, err) return err } syslinuxDir := filepath.Join(baseName, config.BootDir, "syslinux") // it seems that v0.5.0 didn't have a syslinux dir, while 0.7 does if _, err := os.Stat(syslinuxDir); !os.IsNotExist(err) { if err := os.Rename(syslinuxDir, backupSyslinuxDir); err != nil { log.Infof("error Rename(%s, %s): %s", syslinuxDir, backupSyslinuxDir, err) } else { //mv the old syslinux into linux-previous.cfg oldSyslinux, err := ioutil.ReadFile(filepath.Join(backupSyslinuxDir, "syslinux.cfg")) if err != nil { log.Infof("error read(%s / syslinux.cfg): %s", backupSyslinuxDir, err) } else { cfg := string(oldSyslinux) //DEFAULT BurmillaOS-current // //LABEL BurmillaOS-current // LINUX ../vmlinuz-v0.7.1-rancheros // APPEND rancher.state.dev=LABEL=RANCHER_STATE rancher.state.wait console=tty0 rancher.password=rancher // INITRD ../initrd-v0.7.1-rancheros cfg = strings.Replace(cfg, "current", "previous", -1) // TODO consider removing the APPEND line - as the global.cfg should have the same result ioutil.WriteFile(filepath.Join(baseName, config.BootDir, "linux-current.cfg"), []byte(cfg), 0644) lines := strings.Split(cfg, "\n") for _, line := range lines { line = strings.TrimSpace(line) if strings.HasPrefix(line, "APPEND") { log.Errorf("write new (%s) %s", filepath.Join(baseName, config.BootDir, "global.cfg"), err) // TODO: need to append any extra's the user specified ioutil.WriteFile(filepath.Join(baseName, config.BootDir, "global.cfg"), []byte(cfg), 0644) break } } } } } return installSyslinux(device, baseName, diskType) } func installSyslinux(device, baseName, diskType string) error { log.Debugf("installSyslinux(%s)", device) mbrFile := "mbr.bin" if diskType == "gpt" { mbrFile = "gptmbr.bin" } //dd bs=440 count=1 if=/usr/lib/syslinux/mbr/mbr.bin of=${device} // ubuntu: /usr/lib/syslinux/mbr/mbr.bin // alpine: 
/usr/share/syslinux/mbr.bin if device == "/dev/" { log.Debugf("installSyslinuxRaid(%s)", device) //RAID - assume sda&sdb //TODO: fix this - not sure how to detect what disks should have mbr - perhaps we need a param // perhaps just assume and use the devices that make up the raid - mdadm device = "/dev/sda" if err := setBootable(device, diskType); err != nil { log.Errorf("setBootable(%s, %s): %s", device, diskType, err) //return err } cmd := exec.Command("dd", "bs=440", "count=1", "if=/usr/share/syslinux/"+mbrFile, "of="+device) if err := cmd.Run(); err != nil { log.Errorf("%s", err) return err } device = "/dev/sdb" if err := setBootable(device, diskType); err != nil { log.Errorf("setBootable(%s, %s): %s", device, diskType, err) //return err } cmd = exec.Command("dd", "bs=440", "count=1", "if=/usr/share/syslinux/"+mbrFile, "of="+device) if err := cmd.Run(); err != nil { log.Errorf("%s", err) return err } } else { if err := setBootable(device, diskType); err != nil { log.Errorf("setBootable(%s, %s): %s", device, diskType, err) //return err } log.Debugf("installSyslinux(%s)", device) cmd := exec.Command("dd", "bs=440", "count=1", "if=/usr/share/syslinux/"+mbrFile, "of="+device) log.Debugf("Run(%v)", cmd) if err := cmd.Run(); err != nil { log.Errorf("dd: %s", err) return err } } sysLinuxDir := filepath.Join(baseName, config.BootDir, "syslinux") if err := os.MkdirAll(sysLinuxDir, 0755); err != nil { log.Errorf("MkdirAll(%s)): %s", sysLinuxDir, err) //return err } //cp /usr/lib/syslinux/modules/bios/* ${baseName}/${bootDir}syslinux files, _ := ioutil.ReadDir("/usr/share/syslinux/") for _, file := range files { if file.IsDir() { continue } if err := dfs.CopyFile(filepath.Join("/usr/share/syslinux/", file.Name()), sysLinuxDir, file.Name()); err != nil { log.Errorf("copy syslinux: %s", err) return err } } //extlinux --install ${baseName}/${bootDir}syslinux cmd := exec.Command("extlinux", "--install", sysLinuxDir) if device == "/dev/" { //extlinux --install --raid 
${baseName}/${bootDir}syslinux cmd = exec.Command("extlinux", "--install", "--raid", sysLinuxDir) } //cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr log.Debugf("Run(%v)", cmd) if err := cmd.Run(); err != nil { log.Errorf("extlinux: %s", err) return err } return nil } func different(existing, new string) bool { // assume existing file exists if _, err := os.Stat(new); os.IsNotExist(err) { return true } data, err := ioutil.ReadFile(existing) if err != nil { return true } newData, err := ioutil.ReadFile(new) if err != nil { return true } md5sum := md5.Sum(data) newmd5sum := md5.Sum(newData) if md5sum != newmd5sum { return true } return false } func installRancher(baseName, VERSION, DIST, kappend string) (string, error) { log.Debugf("installRancher") // detect if there already is a linux-current.cfg, if so, move it to linux-previous.cfg, currentCfg := filepath.Join(baseName, config.BootDir, "linux-current.cfg") if _, err := os.Stat(currentCfg); !os.IsNotExist(err) { existingCfg := filepath.Join(DIST, "linux-current.cfg") // only remove previous if there is a change to the current if different(currentCfg, existingCfg) { previousCfg := filepath.Join(baseName, config.BootDir, "linux-previous.cfg") if _, err := os.Stat(previousCfg); !os.IsNotExist(err) { if err := os.Remove(previousCfg); err != nil { return currentCfg, err } } os.Rename(currentCfg, previousCfg) // TODO: now that we're parsing syslinux.cfg files, maybe we can delete old kernels and initrds } } // The image/ISO have all the files in it - the syslinux cfg's and the kernel&initrd, so we can copy them all from there files, _ := ioutil.ReadDir(DIST) for _, file := range files { if file.IsDir() { continue } // TODO: should overwrite anything other than the global.cfg overwrite := true if file.Name() == "global.cfg" { overwrite = false } if err := dfs.CopyFileOverwrite(filepath.Join(DIST, file.Name()), filepath.Join(baseName, config.BootDir), file.Name(), overwrite); err != nil { log.Errorf("copy %s: %s", 
file.Name(), err) //return err } } // the general INCLUDE syslinuxcfg isolinuxFile := filepath.Join(DIST, "isolinux", "isolinux.cfg") syslinuxDir := filepath.Join(baseName, config.BootDir, "syslinux") if err := dfs.CopyFileOverwrite(isolinuxFile, syslinuxDir, "syslinux.cfg", true); err != nil { log.Errorf("copy global %s: %s", "syslinux.cfg", err) //return err } else { log.Debugf("installRancher copied global syslinux.cfg OK") } // The global.cfg INCLUDE - useful for over-riding the APPEND line globalFile := filepath.Join(baseName, config.BootDir, "global.cfg") if _, err := os.Stat(globalFile); !os.IsNotExist(err) { err := ioutil.WriteFile(globalFile, []byte("APPEND "+kappend), 0644) if err != nil { log.Errorf("write (%s) %s", "global.cfg", err) return currentCfg, err } } return currentCfg, nil } ================================================ FILE: cmd/control/os.go ================================================ package control import ( "fmt" "io/ioutil" "net/url" "os" "runtime" "strings" "github.com/burmilla/os/cmd/power" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/compose" "github.com/burmilla/os/pkg/docker" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/util" "github.com/burmilla/os/pkg/util/network" yaml "github.com/cloudfoundry-incubator/candiedyaml" "github.com/codegangsta/cli" dockerClient "github.com/docker/engine-api/client" composeConfig "github.com/docker/libcompose/config" "github.com/docker/libcompose/project/options" "golang.org/x/net/context" ) type Images struct { Current string `yaml:"current,omitempty"` Available []string `yaml:"available,omitempty"` } func osSubcommands() []cli.Command { return []cli.Command{ { Name: "upgrade", Usage: "upgrade to latest version", Action: osUpgrade, Flags: []cli.Flag{ cli.BoolFlag{ Name: "stage, s", Usage: "Only stage the new upgrade, don't apply it", }, cli.StringFlag{ Name: "image, i", Usage: "upgrade to a certain image", }, cli.BoolFlag{ Name: "force, f", Usage: "do 
not prompt for input", }, cli.BoolFlag{ Name: "no-reboot", Usage: "do not reboot after upgrade", }, cli.BoolFlag{ Name: "kexec, k", Usage: "reboot using kexec", }, cli.StringFlag{ Name: "append", Usage: "append additional kernel parameters", }, cli.BoolFlag{ Name: "upgrade-console", Usage: "upgrade console even if persistent", }, cli.BoolFlag{ Name: "debug", Usage: "Run installer with debug output", }, }, }, { Name: "list", Usage: "list the current available versions", Flags: []cli.Flag{ cli.BoolFlag{ Name: "update, u", Usage: "update engine cache", }, }, Action: osMetaDataGet, }, { Name: "version", Usage: "show the currently installed version", Action: osVersion, }, } } func getImages(update bool) (*Images, error) { upgradeURL, err := getUpgradeURL() if err != nil { return nil, err } var body []byte if strings.HasPrefix(upgradeURL, "/") { body, err = ioutil.ReadFile(upgradeURL) if err != nil { return nil, err } } else { u, err := url.Parse(upgradeURL) if err != nil { return nil, err } q := u.Query() q.Set("current", config.Version) if hypervisor := util.GetHypervisor(); hypervisor != "" { q.Set("hypervisor", hypervisor) } u.RawQuery = q.Encode() upgradeURL = u.String() if update { _, err := network.UpdateCache(upgradeURL) if err != nil { log.Errorf("Failed to update os caches: %v", err) } } body, err = network.LoadFromNetwork(upgradeURL) if err != nil { return nil, err } } images, err := parseBody(body) if err != nil { return nil, err } cfg := config.LoadConfig() images.Current = formatImage(images.Current, cfg) for i := len(images.Available) - 1; i >= 0; i-- { images.Available[i] = formatImage(images.Available[i], cfg) } return images, nil } func osMetaDataGet(c *cli.Context) error { images, err := getImages(c.Bool("update")) if err != nil { log.Fatal(err) } client, err := docker.NewSystemClient() if err != nil { log.Fatal(err) } cfg := config.LoadConfig() runningName := cfg.Rancher.Upgrade.Image + ":" + config.Version runningName = formatImage(runningName, cfg) 
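The upgrade-URL handling in getImages above appends `current` and `hypervisor` query parameters before fetching the version list. A minimal standalone sketch of that query construction (the helper name, URL, and version strings here are hypothetical, made up for illustration):

```go
// Hypothetical standalone sketch of getImages' query construction;
// helper name and example values are not part of the original code.
package main

import (
	"fmt"
	"net/url"
)

// withUpgradeParams adds the "current" version and, when detection
// succeeded, the "hypervisor" as query parameters on the upgrade URL.
func withUpgradeParams(upgradeURL, current, hypervisor string) (string, error) {
	u, err := url.Parse(upgradeURL)
	if err != nil {
		return "", err
	}
	q := u.Query()
	q.Set("current", current)
	if hypervisor != "" { // only send the parameter when a hypervisor was detected
		q.Set("hypervisor", hypervisor)
	}
	u.RawQuery = q.Encode() // Encode sorts the keys alphabetically
	return u.String(), nil
}

func main() {
	s, _ := withUpgradeParams("https://releases.example.com/versions.yml", "v1.9.1", "kvm")
	fmt.Println(s) // https://releases.example.com/versions.yml?current=v1.9.1&hypervisor=kvm
}
```

Using `url.Values` rather than string concatenation keeps the parameters correctly escaped regardless of what the version or hypervisor strings contain.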
foundRunning := false for i := len(images.Available) - 1; i >= 0; i-- { image := images.Available[i] _, _, err := client.ImageInspectWithRaw(context.Background(), image, false) local := "local" if dockerClient.IsErrImageNotFound(err) { local = "remote" } available := "available" if image == images.Current { available = "latest" } var running string if image == runningName { foundRunning = true running = "running" } fmt.Println(image, local, available, running) } if !foundRunning { fmt.Println(config.Version, "running") } return nil } func getLatestImage() (string, error) { images, err := getImages(false) if err != nil { return "", err } return images.Current, nil } func osUpgrade(c *cli.Context) error { if runtime.GOARCH != "amd64" { log.Fatalf("ros install / upgrade only supported on 'amd64', not '%s'", runtime.GOARCH) } if isExist := checkGlobalCfg(); !isExist { log.Fatalf("ros upgrade is not supported on this system") } image := c.String("image") if image == "" { var err error image, err = getLatestImage() if err != nil { log.Fatal(err) } if image == "" { log.Fatal("Failed to find latest image") } } if c.Args().Present() { log.Fatalf("invalid arguments %v", c.Args()) } if err := startUpgradeContainer( image, c.Bool("stage"), c.Bool("force"), !c.Bool("no-reboot"), c.Bool("kexec"), c.Bool("upgrade-console"), c.Bool("debug"), c.String("append"), ); err != nil { log.Fatal(err) } return nil } func osVersion(c *cli.Context) error { fmt.Println(config.Version) return nil } func startUpgradeContainer(image string, stage, force, reboot, kexec, upgradeConsole, debug bool, kernelArgs string) error { command := []string{ "-t", "rancher-upgrade", "-r", config.Version, } if kexec { command = append(command, "--kexec") } if debug { command = append(command, "--debug") } kernelArgs = strings.TrimSpace(kernelArgs) if kernelArgs != "" { command = append(command, "-a", kernelArgs) } if upgradeConsole { if err := config.Set("rancher.force_console_rebuild", true); err != nil { log.Fatal(err) } 
} fmt.Printf("Upgrading to %s\n", image) confirmation := "Continue" imageSplit := strings.Split(image, ":") if len(imageSplit) > 1 && imageSplit[1] == config.Version+config.Suffix { confirmation = fmt.Sprintf("Already at version %s. Continue anyway", imageSplit[1]) } if !force && !yes(confirmation) { os.Exit(1) } container, err := compose.CreateService(nil, "os-upgrade", &composeConfig.ServiceConfigV1{ LogDriver: "json-file", Privileged: true, Net: "host", Pid: "host", Image: image, Labels: map[string]string{ config.ScopeLabel: config.System, }, Command: command, }) if err != nil { return err } client, err := docker.NewSystemClient() if err != nil { return err } // Only pull image if not found locally if _, _, err := client.ImageInspectWithRaw(context.Background(), image, false); err != nil { if err := container.Pull(context.Background()); err != nil { return err } } if !stage { // If there is already an upgrade container, delete it // Up() should do this, but currently does not due to a bug if err := container.Delete(context.Background(), options.Delete{}); err != nil { return err } if err := container.Up(context.Background(), options.Up{}); err != nil { return err } if err := container.Log(context.Background(), true); err != nil { return err } if err := container.Delete(context.Background(), options.Delete{}); err != nil { return err } if reboot && (force || yes("Continue with reboot")) { log.Info("Rebooting") power.Reboot() } } return nil } func parseBody(body []byte) (*Images, error) { update := &Images{} err := yaml.Unmarshal(body, update) if err != nil { return nil, err } return update, nil } func getUpgradeURL() (string, error) { cfg := config.LoadConfig() return cfg.Rancher.Upgrade.URL, nil } ================================================ FILE: cmd/control/preload.go ================================================ package control import ( "compress/gzip" "context" "fmt" "io" "io/ioutil" "os" "path" "regexp" "strings" "github.com/burmilla/os/config" 
"github.com/burmilla/os/pkg/docker" "github.com/burmilla/os/pkg/log" "github.com/codegangsta/cli" dockerClient "github.com/docker/engine-api/client" "github.com/docker/engine-api/types" ) const ( userImagesPreloadDirectory = "/var/lib/rancher/preload/docker" ) func preloadImagesAction(c *cli.Context) error { err := PreloadImages(docker.NewDefaultClient, userImagesPreloadDirectory) if err != nil { log.Errorf("Failed to preload user images: %v", err) } return err } func shouldLoad(file string) bool { if strings.HasSuffix(file, ".done") { return false } if _, err := os.Stat(fmt.Sprintf("%s.done", file)); err == nil { return false } return true } func PreloadImages(clientFactory func() (dockerClient.APIClient, error), imagesDir string) error { var client dockerClient.APIClient clientInitialized := false if _, err := os.Stat(imagesDir); os.IsNotExist(err) { if err = os.MkdirAll(imagesDir, 0755); err != nil { return err } } else if err != nil { return err } // try to load predefined user images if imagesDir == userImagesPreloadDirectory { oldUserImgName := path.Join(config.ImagesPath, config.UserImages) userImgfile, err := os.Stat(oldUserImgName) if err == nil { newUserImgName := path.Join(userImagesPreloadDirectory, userImgfile.Name()) if _, err = os.Stat(newUserImgName); os.IsNotExist(err) { if err := os.Symlink(oldUserImgName, newUserImgName); err != nil { log.Error(err) } } } } files, err := ioutil.ReadDir(imagesDir) if err != nil { return err } for _, file := range files { filename := path.Join(imagesDir, file.Name()) if !shouldLoad(filename) { log.Infof("Skipping preload of file: %s", filename) continue } image, err := os.Open(filename) if err != nil { return err } defer image.Close() var imageReader io.Reader imageReader = image match, err := regexp.MatchString("\\.t?gz$", file.Name()) if err != nil { return err } if match { imageReader, err = gzip.NewReader(image) if err != nil { return err } } if !clientInitialized { client, err = clientFactory() if err != nil 
{ return err } clientInitialized = true } var imageLoadResponse types.ImageLoadResponse if imageLoadResponse, err = client.ImageLoad(context.Background(), imageReader, false); err != nil { return err } cfg := config.LoadConfig() if cfg.Rancher.PreloadWait { if _, err := ioutil.ReadAll(imageLoadResponse.Body); err != nil { return err } } log.Infof("Finished loading image %s", filename) log.Infof("Creating done stamp file for image %s", filename) doneStamp, err := os.Create(fmt.Sprintf("%s.done", filename)) if err != nil { return err } defer doneStamp.Close() log.Infof("Finished creating the done stamp file for image %s", filename) } return nil } ================================================ FILE: cmd/control/recovery_init.go ================================================ package control import ( "os" "os/exec" "syscall" "github.com/burmilla/os/pkg/log" "github.com/codegangsta/cli" ) func recoveryInitAction(c *cli.Context) error { if err := writeRespawn("root", false, true); err != nil { log.Error(err) } respawnBinPath, err := exec.LookPath("respawn") if err != nil { return err } return syscall.Exec(respawnBinPath, []string{"respawn", "-f", "/etc/respawn.conf"}, os.Environ()) } ================================================ FILE: cmd/control/service/app/app.go ================================================ package app import ( "fmt" "os" "os/signal" "strings" "syscall" "github.com/burmilla/os/pkg/log" "github.com/codegangsta/cli" "github.com/docker/libcompose/project" "github.com/docker/libcompose/project/options" "golang.org/x/net/context" ) func ProjectPs(p project.APIProject, c *cli.Context) error { qFlag := c.Bool("q") allInfo, err := p.Ps(context.Background(), qFlag, c.Args()...) if err != nil { return cli.NewExitError(err.Error(), 1) } os.Stdout.WriteString(allInfo.String(!qFlag)) return nil } func ProjectStop(p project.APIProject, c *cli.Context) error { err := p.Stop(context.Background(), c.Int("timeout"), c.Args()...) 
if err != nil { return cli.NewExitError(err.Error(), 1) } return nil } func ProjectDown(p project.APIProject, c *cli.Context) error { options := options.Down{ RemoveVolume: c.Bool("volumes"), RemoveImages: options.ImageType(c.String("rmi")), RemoveOrphans: c.Bool("remove-orphans"), } err := p.Down(context.Background(), options, c.Args()...) if err != nil { return cli.NewExitError(err.Error(), 1) } return nil } func ProjectBuild(p project.APIProject, c *cli.Context) error { config := options.Build{ NoCache: c.Bool("no-cache"), ForceRemove: c.Bool("force-rm"), Pull: c.Bool("pull"), } err := p.Build(context.Background(), config, c.Args()...) if err != nil { return cli.NewExitError(err.Error(), 1) } return nil } func ProjectCreate(p project.APIProject, c *cli.Context) error { options := options.Create{ NoRecreate: c.Bool("no-recreate"), ForceRecreate: c.Bool("force-recreate"), NoBuild: c.Bool("no-build"), } err := p.Create(context.Background(), options, c.Args()...) if err != nil { return cli.NewExitError(err.Error(), 1) } return nil } func ProjectUp(p project.APIProject, c *cli.Context) error { options := options.Up{ Create: options.Create{ NoRecreate: c.Bool("no-recreate"), ForceRecreate: c.Bool("force-recreate"), NoBuild: c.Bool("no-build"), }, } ctx, cancelFun := context.WithCancel(context.Background()) err := p.Up(ctx, options, c.Args()...) if err != nil { return cli.NewExitError(err.Error(), 1) } if c.Bool("foreground") { signalChan := make(chan os.Signal, 1) cleanupDone := make(chan bool) signal.Notify(signalChan, syscall.SIGINT, syscall.SIGTERM) errChan := make(chan error) go func() { errChan <- p.Log(ctx, true, c.Args()...) 
}() go func() { select { case <-signalChan: fmt.Printf("\nGracefully stopping...\n") cancelFun() ProjectStop(p, c) cleanupDone <- true case err := <-errChan: if err != nil { log.Fatal(err) } cleanupDone <- true } }() <-cleanupDone return nil } return nil } func ProjectStart(p project.APIProject, c *cli.Context) error { err := p.Start(context.Background(), c.Args()...) if err != nil { return cli.NewExitError(err.Error(), 1) } return nil } func ProjectRestart(p project.APIProject, c *cli.Context) error { err := p.Restart(context.Background(), c.Int("timeout"), c.Args()...) if err != nil { return cli.NewExitError(err.Error(), 1) } return nil } func ProjectLog(p project.APIProject, c *cli.Context) error { err := p.Log(context.Background(), c.Bool("follow"), c.Args()...) if err != nil { return cli.NewExitError(err.Error(), 1) } return nil } func ProjectPull(p project.APIProject, c *cli.Context) error { err := p.Pull(context.Background(), c.Args()...) if err != nil && !c.Bool("ignore-pull-failures") { return cli.NewExitError(err.Error(), 1) } return nil } func ProjectDelete(p project.APIProject, c *cli.Context) error { options := options.Delete{ RemoveVolume: c.Bool("v"), } if !c.Bool("force") { options.BeforeDeleteCallback = func(stoppedContainers []string) bool { fmt.Printf("Going to remove %v\nAre you sure? [yN]\n", strings.Join(stoppedContainers, ", ")) var answer string _, err := fmt.Scanln(&answer) if err != nil { log.Error(err) return false } if answer != "y" && answer != "Y" { return false } return true } } err := p.Delete(context.Background(), options, c.Args()...) if err != nil { return cli.NewExitError(err.Error(), 1) } return nil } func ProjectKill(p project.APIProject, c *cli.Context) error { err := p.Kill(context.Background(), c.String("signal"), c.Args()...) 
if err != nil { return cli.NewExitError(err.Error(), 1) } return nil } ================================================ FILE: cmd/control/service/command/command.go ================================================ package command import ( "errors" "github.com/burmilla/os/cmd/control/service/app" "github.com/codegangsta/cli" composeApp "github.com/docker/libcompose/cli/app" ) func verifyOneOrMoreServices(c *cli.Context) error { if len(c.Args()) == 0 { return errors.New("Must specify one or more services") } return nil } func CreateCommand(factory composeApp.ProjectFactory) cli.Command { return cli.Command{ Name: "create", Usage: "Create services", Before: verifyOneOrMoreServices, Action: composeApp.WithProject(factory, app.ProjectCreate), Flags: []cli.Flag{ cli.BoolFlag{ Name: "no-recreate", Usage: "If containers already exist, don't recreate them. Incompatible with --force-recreate.", }, cli.BoolFlag{ Name: "force-recreate", Usage: "Recreate containers even if their configuration and image haven't changed. 
Incompatible with --no-recreate.", }, cli.BoolFlag{ Name: "no-build", Usage: "Don't build an image, even if it's missing.", }, }, } } func BuildCommand(factory composeApp.ProjectFactory) cli.Command { return cli.Command{ Name: "build", Usage: "Build or rebuild services", Before: verifyOneOrMoreServices, Action: composeApp.WithProject(factory, app.ProjectBuild), Flags: []cli.Flag{ cli.BoolFlag{ Name: "no-cache", Usage: "Do not use cache when building the image", }, cli.BoolFlag{ Name: "force-rm", Usage: "Always remove intermediate containers", }, cli.BoolFlag{ Name: "pull", Usage: "Always attempt to pull a newer version of the image", }, }, } } func PsCommand(factory composeApp.ProjectFactory) cli.Command { return cli.Command{ Name: "ps", Usage: "List containers", Action: composeApp.WithProject(factory, app.ProjectPs), Flags: []cli.Flag{ cli.BoolFlag{ Name: "q", Usage: "Only display IDs", }, }, } } func UpCommand(factory composeApp.ProjectFactory) cli.Command { return cli.Command{ Name: "up", Usage: "Create and start containers", Before: verifyOneOrMoreServices, Action: composeApp.WithProject(factory, app.ProjectUp), Flags: []cli.Flag{ cli.BoolFlag{ Name: "foreground", Usage: "Run in foreground and log", }, cli.BoolFlag{ Name: "no-build", Usage: "Don't build an image, even if it's missing.", }, cli.BoolFlag{ Name: "no-recreate", Usage: "If containers already exist, don't recreate them. Incompatible with --force-recreate.", }, cli.BoolFlag{ Name: "force-recreate", Usage: "Recreate containers even if their configuration and image haven't changed. 
Incompatible with --no-recreate.", }, }, } } func StartCommand(factory composeApp.ProjectFactory) cli.Command { return cli.Command{ Name: "start", Usage: "Start services", Before: verifyOneOrMoreServices, Action: composeApp.WithProject(factory, app.ProjectStart), Flags: []cli.Flag{ cli.BoolTFlag{ Name: "foreground", Usage: "Run in foreground and log", }, }, } } func PullCommand(factory composeApp.ProjectFactory) cli.Command { return cli.Command{ Name: "pull", Usage: "Pulls service images", Before: verifyOneOrMoreServices, Action: composeApp.WithProject(factory, app.ProjectPull), Flags: []cli.Flag{ cli.BoolFlag{ Name: "ignore-pull-failures", Usage: "Pull what it can and ignores images with pull failures.", }, }, } } func LogsCommand(factory composeApp.ProjectFactory) cli.Command { return cli.Command{ Name: "logs", Usage: "View output from containers", Before: verifyOneOrMoreServices, Action: composeApp.WithProject(factory, app.ProjectLog), Flags: []cli.Flag{ cli.IntFlag{ Name: "lines", Usage: "number of lines to tail", Value: 100, }, cli.BoolFlag{ Name: "follow, f", Usage: "Follow log output.", }, }, } } func RestartCommand(factory composeApp.ProjectFactory) cli.Command { return cli.Command{ Name: "restart", Usage: "Restart services", Before: verifyOneOrMoreServices, Action: composeApp.WithProject(factory, app.ProjectRestart), Flags: []cli.Flag{ cli.IntFlag{ Name: "timeout,t", Usage: "Specify a shutdown timeout in seconds.", Value: 10, }, }, } } func StopCommand(factory composeApp.ProjectFactory) cli.Command { return cli.Command{ Name: "stop", Usage: "Stop services", Before: verifyOneOrMoreServices, Action: composeApp.WithProject(factory, app.ProjectStop), Flags: []cli.Flag{ cli.IntFlag{ Name: "timeout,t", Usage: "Specify a shutdown timeout in seconds.", Value: 10, }, }, } } func DownCommand(factory composeApp.ProjectFactory) cli.Command { return cli.Command{ Name: "down", Usage: "Stop and remove containers, networks, images, and volumes", Before: 
verifyOneOrMoreServices, Action: composeApp.WithProject(factory, app.ProjectDown), Flags: []cli.Flag{ cli.BoolFlag{ Name: "volumes,v", Usage: "Remove data volumes", }, cli.StringFlag{ Name: "rmi", Usage: "Remove images, type may be one of: 'all' to remove all images, or 'local' to remove only images that don't have a custom name set by the `image` field", }, cli.BoolFlag{ Name: "remove-orphans", Usage: "Remove containers for services not defined in the Compose file", }, }, } } func RmCommand(factory composeApp.ProjectFactory) cli.Command { return cli.Command{ Name: "rm", Usage: "Delete services", Before: verifyOneOrMoreServices, Action: composeApp.WithProject(factory, app.ProjectDelete), Flags: []cli.Flag{ cli.BoolFlag{ Name: "force,f", Usage: "Allow deletion of all services", }, cli.BoolFlag{ Name: "v", Usage: "Remove volumes associated with containers", }, }, } } func KillCommand(factory composeApp.ProjectFactory) cli.Command { return cli.Command{ Name: "kill", Usage: "Kill containers", Before: verifyOneOrMoreServices, Action: composeApp.WithProject(factory, app.ProjectKill), Flags: []cli.Flag{ cli.StringFlag{ Name: "signal,s", Usage: "SIGNAL to send to the container", Value: "SIGKILL", }, }, } } ================================================ FILE: cmd/control/service/service.go ================================================ package service import ( "fmt" "strings" "github.com/burmilla/os/cmd/control/service/command" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/compose" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/util" "github.com/burmilla/os/pkg/util/network" "github.com/codegangsta/cli" dockerApp "github.com/docker/libcompose/cli/docker/app" "github.com/docker/libcompose/project" ) type projectFactory struct { } func (p *projectFactory) Create(c *cli.Context) (project.APIProject, error) { cfg := config.LoadConfig() return compose.GetProject(cfg, true, false) } func beforeApp(c *cli.Context) error { if c.GlobalBool("verbose")
{ log.SetLevel(log.DebugLevel) } return nil } func Commands() cli.Command { factory := &projectFactory{} app := cli.Command{} app.Name = "service" app.ShortName = "s" app.Before = beforeApp app.Flags = append(dockerApp.DockerClientFlags(), cli.BoolFlag{ Name: "verbose,debug", }) app.Subcommands = append(serviceSubCommands(), command.BuildCommand(factory), command.CreateCommand(factory), command.UpCommand(factory), command.StartCommand(factory), command.LogsCommand(factory), command.RestartCommand(factory), command.StopCommand(factory), command.RmCommand(factory), command.PullCommand(factory), command.KillCommand(factory), command.PsCommand(factory), ) return app } func serviceSubCommands() []cli.Command { return []cli.Command{ { Name: "enable", Usage: "turn on a service", Action: enable, }, { Name: "disable", Usage: "turn off a service", Action: disable, }, { Name: "list", Usage: "list services and state", Flags: []cli.Flag{ cli.BoolFlag{ Name: "all, a", Usage: "list all services and state", }, cli.BoolFlag{ Name: "update, u", Usage: "update service cache", }, }, Action: list, }, { Name: "delete", Usage: "delete a service", Action: del, }, } } func updateIncludedServices(cfg *config.CloudConfig) error { return config.Set("rancher.services_include", cfg.Rancher.ServicesInclude) } func disable(c *cli.Context) error { changed := false cfg := config.LoadConfig() for _, service := range c.Args() { validateService(service, cfg) if _, ok := cfg.Rancher.ServicesInclude[service]; !ok { continue } cfg.Rancher.ServicesInclude[service] = false changed = true } if changed { if err := updateIncludedServices(cfg); err != nil { log.Fatal(err) } } return nil } func del(c *cli.Context) error { changed := false cfg := config.LoadConfig() for _, service := range c.Args() { validateService(service, cfg) if _, ok := cfg.Rancher.ServicesInclude[service]; !ok { continue } delete(cfg.Rancher.ServicesInclude, service) changed = true } if changed { if err := updateIncludedServices(cfg);
err != nil { log.Fatal(err) } } return nil } func enable(c *cli.Context) error { cfg := config.LoadConfig() var enabledServices []string for _, service := range c.Args() { validateService(service, cfg) if val, ok := cfg.Rancher.ServicesInclude[service]; !ok || !val { if isLocal(service) && !strings.HasPrefix(service, "/var/lib/rancher/conf") { log.Fatalf("ERROR: Service should be in path /var/lib/rancher/conf") } cfg.Rancher.ServicesInclude[service] = true enabledServices = append(enabledServices, service) } } if len(enabledServices) > 0 { if err := compose.StageServices(cfg, enabledServices...); err != nil { log.Fatal(err) } if err := updateIncludedServices(cfg); err != nil { log.Fatal(err) } } return nil } func list(c *cli.Context) error { cfg := config.LoadConfig() clone := make(map[string]bool) for service, enabled := range cfg.Rancher.ServicesInclude { clone[service] = enabled } services := availableService(cfg, c.Bool("update")) if c.Bool("all") { for service := range cfg.Rancher.Services { fmt.Printf("enabled %s\n", service) } } for _, service := range services { if enabled, ok := clone[service]; ok { delete(clone, service) if enabled { fmt.Printf("enabled %s\n", service) } else { fmt.Printf("disabled %s\n", service) } } else { fmt.Printf("disabled %s\n", service) } } for service, enabled := range clone { if enabled { fmt.Printf("enabled %s\n", service) } else { fmt.Printf("disabled %s\n", service) } } return nil } func isLocal(service string) bool { return strings.HasPrefix(service, "/") } func IsLocalOrURL(service string) bool { return isLocal(service) || strings.HasPrefix(service, "http:/") || strings.HasPrefix(service, "https:/") } // ValidService checks to see if the service definition exists func ValidService(service string, cfg *config.CloudConfig) bool { services := availableService(cfg, false) if !IsLocalOrURL(service) && !util.Contains(services, service) { return false } return true } func validateService(service string, cfg *config.CloudConfig) { 
if !ValidService(service, cfg) { log.Fatalf("%s is not a valid service", service) } } func availableService(cfg *config.CloudConfig, update bool) []string { if update { err := network.UpdateCaches(cfg.Rancher.Repositories.ToArray(), "services") if err != nil { log.Debugf("Failed to update service caches: %v", err) } } services, err := network.GetServices(cfg.Rancher.Repositories.ToArray()) if err != nil { log.Fatalf("Failed to get services: %v", err) } return services } ================================================ FILE: cmd/control/switch_console.go ================================================ package control import ( "errors" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/compose" "github.com/burmilla/os/pkg/log" "github.com/codegangsta/cli" "github.com/docker/libcompose/project/options" "golang.org/x/net/context" ) func switchConsoleAction(c *cli.Context) error { if len(c.Args()) != 1 { return errors.New("Must specify exactly one existing container") } newConsole := c.Args()[0] cfg := config.LoadConfig() project, err := compose.GetProject(cfg, true, false) if err != nil { return err } // stop docker and console to avoid zombie process if err = project.Stop(context.Background(), 10, "docker"); err != nil { log.Errorf("Failed to stop Docker: %v", err) } if err = project.Stop(context.Background(), 10, "console"); err != nil { log.Errorf("Failed to stop console: %v", err) } if newConsole != "default" { if err = compose.LoadSpecialService(project, cfg, "console", newConsole); err != nil { return err } } if err = config.Set("rancher.console", newConsole); err != nil { log.Errorf("Failed to update 'rancher.console': %v", err) } if err = project.Up(context.Background(), options.Up{ Log: true, }, "console"); err != nil { return err } if err = project.Start(context.Background(), "docker"); err != nil { log.Errorf("Failed to start Docker: %v", err) } return nil } ================================================ FILE: cmd/control/tlsconf.go 
================================================ package control import ( "io/ioutil" "os" "path/filepath" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/util" "github.com/codegangsta/cli" machineUtil "github.com/docker/machine/utils" ) const ( NAME string = "rancher" BITS int = 2048 ServerTLSPath string = "/etc/docker/tls" ClientTLSPath string = "/home/rancher/.docker" Cert string = "cert.pem" Key string = "key.pem" ServerCert string = "server-cert.pem" ServerKey string = "server-key.pem" CaCert string = "ca.pem" CaKey string = "ca-key.pem" ) func tlsConfCommands() []cli.Command { return []cli.Command{ { Name: "generate", ShortName: "gen", Usage: "generates new set of TLS configuration certs", Action: tlsConfCreate, Flags: []cli.Flag{ cli.StringSliceFlag{ Name: "hostname, H", Usage: "the hostname for which you want to generate the certificate", Value: &cli.StringSlice{"localhost"}, }, cli.BoolFlag{ Name: "server, s", Usage: "generate the server keys instead of client keys", }, cli.StringFlag{ Name: "dir, d", Usage: "the directory to save/read the certs to/from", Value: "", }, }, }, } } func writeCerts(generateServer bool, hostname []string, certPath, keyPath, caCertPath, caKeyPath string) error { if !generateServer { return machineUtil.GenerateCert([]string{""}, certPath, keyPath, caCertPath, caKeyPath, NAME, BITS) } if err := machineUtil.GenerateCert(hostname, certPath, keyPath, caCertPath, caKeyPath, NAME, BITS); err != nil { return err } cert, err := ioutil.ReadFile(certPath) if err != nil { return err } key, err := ioutil.ReadFile(keyPath) if err != nil { return err } // certPath, keyPath are already written to by machineUtil.GenerateCert() if err := config.Set("rancher.docker.server_cert", string(cert)); err != nil { return err } return config.Set("rancher.docker.server_key", string(key)) } func writeCaCerts(cfg *config.CloudConfig, caCertPath, caKeyPath string) error { if cfg.Rancher.Docker.CACert == "" { if err 
:= machineUtil.GenerateCACertificate(caCertPath, caKeyPath, NAME, BITS); err != nil { return err } caCert, err := ioutil.ReadFile(caCertPath) if err != nil { return err } caKey, err := ioutil.ReadFile(caKeyPath) if err != nil { return err } // caCertPath, caKeyPath are already written to by machineUtil.GenerateCACertificate() if err := config.Set("rancher.docker.ca_cert", string(caCert)); err != nil { return err } if err := config.Set("rancher.docker.ca_key", string(caKey)); err != nil { return err } } else { cfg = config.LoadConfig() if err := util.WriteFileAtomic(caCertPath, []byte(cfg.Rancher.Docker.CACert), 0400); err != nil { return err } if err := util.WriteFileAtomic(caKeyPath, []byte(cfg.Rancher.Docker.CAKey), 0400); err != nil { return err } } return nil } func tlsConfCreate(c *cli.Context) error { err := generate(c) if err != nil { log.Fatal(err) } return nil } func generate(c *cli.Context) error { generateServer := c.Bool("server") outDir := c.String("dir") hostnames := c.StringSlice("hostname") return Generate(generateServer, outDir, hostnames) } func Generate(generateServer bool, outDir string, hostnames []string) error { if outDir == "" { if generateServer { outDir = ServerTLSPath } else { outDir = ClientTLSPath } log.Infof("Out directory (-d, --dir) not specified, using default: %s", outDir) } caCertPath := filepath.Join(outDir, CaCert) caKeyPath := filepath.Join(outDir, CaKey) certPath := filepath.Join(outDir, Cert) keyPath := filepath.Join(outDir, Key) if generateServer { certPath = filepath.Join(outDir, ServerCert) keyPath = filepath.Join(outDir, ServerKey) } if _, err := os.Stat(outDir); os.IsNotExist(err) { if err := os.MkdirAll(outDir, 0700); err != nil { return err } } cfg := config.LoadConfig() if err := writeCaCerts(cfg, caCertPath, caKeyPath); err != nil { return err } if err := writeCerts(generateServer, hostnames, certPath, keyPath, caCertPath, caKeyPath); err != nil { return err } if !generateServer { if err := filepath.Walk(outDir, 
func(path string, info os.FileInfo, err error) error { return os.Chown(path, 1100, 1100) // rancher:rancher }); err != nil { return err } } return nil } ================================================ FILE: cmd/control/udevsettle.go ================================================ package control import ( "io/ioutil" "os" "os/exec" "path/filepath" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/log" "github.com/codegangsta/cli" ) func udevSettleAction(c *cli.Context) { if err := extraRules(); err != nil { log.Error(err) } if err := UdevSettle(); err != nil { log.Fatal(err) } } func extraRules() error { cfg := config.LoadConfig() if len(cfg.Rancher.Network.ModemNetworks) > 0 { rules, err := ioutil.ReadDir(config.UdevRulesExtrasDir) if err != nil { return err } for _, r := range rules { if r.IsDir() || filepath.Ext(r.Name()) != ".rules" { continue } err := os.Symlink(filepath.Join(config.UdevRulesExtrasDir, r.Name()), filepath.Join(config.UdevRulesDir, r.Name())) if err != nil { return err } } } else { rules, err := ioutil.ReadDir(config.UdevRulesDir) if err != nil { return err } for _, r := range rules { if r.IsDir() || (filepath.Ext(r.Name()) != ".rules") || (r.Mode()&os.ModeSymlink != 0) { continue } err := os.Remove(filepath.Join(config.UdevRulesDir, r.Name())) if err != nil { return err } } } return nil } func UdevSettle() error { cmd := exec.Command("udevd", "--daemon") defer exec.Command("killall", "udevd").Run() cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr if err := cmd.Run(); err != nil { return err } cmd = exec.Command("udevadm", "trigger", "--action=add") cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr if err := cmd.Run(); err != nil { return err } cmd = exec.Command("udevadm", "settle") cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr return cmd.Run() } ================================================ FILE: cmd/control/user_docker.go ================================================ package control import ( "io" "io/ioutil" "os" "path" 
"path/filepath" "syscall" "time" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/compose" rosDocker "github.com/burmilla/os/pkg/docker" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/util" "github.com/codegangsta/cli" composeClient "github.com/docker/libcompose/docker/client" "github.com/docker/libcompose/project" "golang.org/x/net/context" ) const ( defaultStorageContext = "console" dockerPidFile = "/var/run/docker.pid" sourceDirectory = "/engine" destDirectory = "/var/lib/rancher/engine" pluginsSourceDirectory = "/engine-plugins" pluginsDestDirectory = "/var/lib/rancher/engine-plugins" dockerCompletionFName = "completion" ) var ( dockerCommand = []string{ "ros", "docker-init", } ) func userDockerAction(c *cli.Context) error { if err := copyBinaries(sourceDirectory, destDirectory); err != nil { return err } if err := copyBinaries(pluginsSourceDirectory, pluginsDestDirectory); err != nil { return err } if err := syscall.Mount("/host/sys", "/sys", "", syscall.MS_BIND|syscall.MS_REC, ""); err != nil { return err } cfg := config.LoadConfig() return startDocker(cfg) } func copyBinaries(source, dest string) error { if err := os.MkdirAll(dest, 0755); err != nil { return err } files, err := ioutil.ReadDir(dest) if err != nil { return err } for _, file := range files { if err = os.RemoveAll(path.Join(dest, file.Name())); err != nil { return err } } files, err = ioutil.ReadDir(source) if err != nil { return err } for _, file := range files { sourceFile := path.Join(source, file.Name()) destFile := path.Join(dest, file.Name()) in, err := os.Open(sourceFile) if err != nil { return err } out, err := os.Create(destFile) if err != nil { return err } if _, err = io.Copy(out, in); err != nil { return err } if err = out.Sync(); err != nil { return err } if err = in.Close(); err != nil { return err } if err = out.Close(); err != nil { return err } if file.Name() == dockerCompletionFName { if err := os.Chmod(destFile, 0644); err != nil { return err } } else 
{ if err := os.Chmod(destFile, 0751); err != nil { return err } } } return nil } func writeConfigCerts(cfg *config.CloudConfig) error { outDir := ServerTLSPath if err := os.MkdirAll(outDir, 0700); err != nil { return err } caCertPath := filepath.Join(outDir, CaCert) caKeyPath := filepath.Join(outDir, CaKey) serverCertPath := filepath.Join(outDir, ServerCert) serverKeyPath := filepath.Join(outDir, ServerKey) if cfg.Rancher.Docker.CACert != "" { if err := util.WriteFileAtomic(caCertPath, []byte(cfg.Rancher.Docker.CACert), 0400); err != nil { return err } if err := util.WriteFileAtomic(caKeyPath, []byte(cfg.Rancher.Docker.CAKey), 0400); err != nil { return err } } if cfg.Rancher.Docker.ServerCert != "" { if err := util.WriteFileAtomic(serverCertPath, []byte(cfg.Rancher.Docker.ServerCert), 0400); err != nil { return err } if err := util.WriteFileAtomic(serverKeyPath, []byte(cfg.Rancher.Docker.ServerKey), 0400); err != nil { return err } } return nil } func startDocker(cfg *config.CloudConfig) error { storageContext := cfg.Rancher.Docker.StorageContext if storageContext == "" { storageContext = defaultStorageContext } log.Infof("Starting Docker in context: %s", storageContext) p, err := compose.GetProject(cfg, true, false) if err != nil { return err } pid, err := waitForPid(storageContext, p) if err != nil { return err } log.Infof("%s PID %d", storageContext, pid) client, err := rosDocker.NewSystemClient() if err != nil { return err } dockerCfg := cfg.Rancher.Docker args := dockerCfg.FullArgs() log.Debugf("User Docker args: %v", args) if dockerCfg.TLS { if err := writeConfigCerts(cfg); err != nil { return err } } info, err := client.ContainerInspect(context.Background(), storageContext) if err != nil { return err } cmd := []string{"system-docker-runc", "exec", "--", info.ID, "env"} log.Info(dockerCfg.AppendEnv()) cmd = append(cmd, dockerCfg.AppendEnv()...) cmd = append(cmd, dockerCommand...) cmd = append(cmd, args...) 
log.Infof("Running %v", cmd) return syscall.Exec("/usr/bin/system-docker-runc", cmd, os.Environ()) } func waitForPid(service string, project *project.Project) (int, error) { log.Infof("Getting PID for service: %s", service) for { if pid, err := getPid(service, project); err != nil || pid == 0 { log.Infof("Waiting for %s : %d : %v", service, pid, err) time.Sleep(1 * time.Second) } else { return pid, err } } } func getPid(service string, project *project.Project) (int, error) { s, err := project.CreateService(service) if err != nil { return 0, err } containers, err := s.Containers(context.Background()) if err != nil { return 0, err } if len(containers) == 0 { return 0, nil } client, err := composeClient.Create(composeClient.Options{ Host: config.SystemDockerHost, }) if err != nil { return 0, err } id, err := containers[0].ID() if err != nil { return 0, err } info, err := client.ContainerInspect(context.Background(), id) if err != nil || info.ID == "" { return 0, err } if info.State.Running { return info.State.Pid, nil } return 0, nil } ================================================ FILE: cmd/control/util.go ================================================ package control import ( "bufio" "fmt" "io/ioutil" "os" "strings" "time" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/log" "github.com/pkg/errors" ) func yes(question string) bool { fmt.Printf("%s [y/N]: ", question) in := bufio.NewReader(os.Stdin) line, err := in.ReadString('\n') if err != nil { log.Fatal(err) } return strings.ToLower(line[0:1]) == "y" } func formatImage(image string, cfg *config.CloudConfig) string { domainRegistry := cfg.Rancher.Environment["REGISTRY_DOMAIN"] if domainRegistry != "docker.io" && domainRegistry != "" { return fmt.Sprintf("%s/%s", domainRegistry, image) } return image } func symLinkEngineBinary() []symlink { baseSymlink := []symlink{ {"/usr/share/ros/os-release", "/usr/lib/os-release"}, {"/usr/share/ros/os-release", "/etc/os-release"}, 
{"/var/lib/rancher/engine/docker", "/usr/bin/docker"}, {"/var/lib/rancher/engine/dockerd", "/usr/bin/dockerd"}, {"/var/lib/rancher/engine/docker-init", "/usr/bin/docker-init"}, {"/var/lib/rancher/engine/docker-proxy", "/usr/bin/docker-proxy"}, // >= 18.09.0 {"/var/lib/rancher/engine/containerd", "/usr/bin/containerd"}, {"/var/lib/rancher/engine/ctr", "/usr/bin/ctr"}, {"/var/lib/rancher/engine/containerd-shim", "/usr/bin/containerd-shim"}, {"/var/lib/rancher/engine/runc", "/usr/bin/runc"}, // < 18.09.0 {"/var/lib/rancher/engine/docker-containerd", "/usr/bin/docker-containerd"}, {"/var/lib/rancher/engine/docker-containerd-ctr", "/usr/bin/docker-containerd-ctr"}, {"/var/lib/rancher/engine/docker-containerd-shim", "/usr/bin/docker-containerd-shim"}, {"/var/lib/rancher/engine/docker-runc", "/usr/bin/docker-runc"}, // Docker CLI plugins {"/var/lib/rancher/engine-plugins/docker-compose", "/usr/bin/docker-compose"}, {"/var/lib/rancher/engine-plugins/docker-compose", "/usr/libexec/docker/cli-plugins/docker-compose"}, {"/var/lib/rancher/engine-plugins/docker-buildx", "/usr/libexec/docker/cli-plugins/docker-buildx"}, } return baseSymlink } func checkZfsBackingFS(driver, dir string) error { if driver != "zfs" { return nil } for i := 0; i < 4; i++ { mountInfo, err := ioutil.ReadFile("/proc/self/mountinfo") if err != nil { continue } for _, mount := range strings.Split(string(mountInfo), "\n") { if strings.Contains(mount, dir) && strings.Contains(mount, driver) { return nil } } time.Sleep(1 * time.Second) } return errors.Errorf("BackingFS: %s does not match storage-driver: %s", dir, driver) } func checkGlobalCfg() bool { _, err := os.Stat("/proc/1/root/boot/global.cfg") if err == nil || os.IsExist(err) { return true } return false } ================================================ FILE: cmd/init/init.go ================================================ //go:build linux // +build linux package init import ( "fmt" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/dfs"
"github.com/burmilla/os/pkg/init/b2d" "github.com/burmilla/os/pkg/init/cloudinit" "github.com/burmilla/os/pkg/init/configfiles" "github.com/burmilla/os/pkg/init/debug" "github.com/burmilla/os/pkg/init/docker" "github.com/burmilla/os/pkg/init/env" "github.com/burmilla/os/pkg/init/fsmount" "github.com/burmilla/os/pkg/init/hypervisor" "github.com/burmilla/os/pkg/init/modules" "github.com/burmilla/os/pkg/init/one" "github.com/burmilla/os/pkg/init/prepare" "github.com/burmilla/os/pkg/init/recovery" "github.com/burmilla/os/pkg/init/sharedroot" "github.com/burmilla/os/pkg/init/switchroot" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/sysinit" ) func MainInit() { log.InitLogger() // TODO: this breaks and does nothing if the cfg is invalid (or is it due to threading?) defer func() { if r := recover(); r != nil { fmt.Printf("Starting Recovery console: %v\n", r) recovery.Recovery(nil) } }() if err := RunInit(); err != nil { log.Fatal(err) } } func RunInit() error { initFuncs := config.CfgFuncs{ {Name: "set env", Func: env.Init}, {Name: "preparefs", Func: prepare.FS}, {Name: "save init cmdline", Func: prepare.SaveCmdline}, {Name: "mount OEM", Func: fsmount.MountOem}, {Name: "debug save cfg", Func: debug.PrintAndLoadConfig}, {Name: "load modules", Func: modules.LoadModules}, {Name: "recovery console", Func: recovery.LoadRecoveryConsole}, {Name: "b2d env", Func: b2d.B2D}, {Name: "mount STATE and bootstrap", Func: fsmount.MountStateAndBootstrap}, {Name: "cloud-init", Func: cloudinit.CloudInit}, {Name: "read cfg and log files", Func: configfiles.ReadConfigFiles}, {Name: "switchroot", Func: switchroot.SwitchRoot}, {Name: "mount OEM2", Func: fsmount.MountOem}, {Name: "mount BOOT", Func: fsmount.MountBoot}, {Name: "write cfg and log files", Func: configfiles.WriteConfigFiles}, {Name: "b2d Env", Func: b2d.Env}, {Name: "hypervisor tools", Func: hypervisor.Tools}, {Name: "preparefs2", Func: prepare.FS}, {Name: "load modules2", Func: modules.LoadModules}, {Name: "set proxy 
env", Func: env.Proxy}, {Name: "setupSharedRoot", Func: sharedroot.Setup}, {Name: "sysinit", Func: sysinit.RunSysInit}, } cfg, err := config.ChainCfgFuncs(nil, initFuncs) if err != nil { recovery.Recovery(err) } launchConfig, args := docker.GetLaunchConfig(cfg, &cfg.Rancher.SystemDocker) launchConfig.Fork = !cfg.Rancher.SystemDocker.Exec //launchConfig.NoLog = true log.Info("Launching System Docker") _, err = dfs.LaunchDocker(launchConfig, config.SystemDockerBin, args...) if err != nil { log.Errorf("Error Launching System Docker: %s", err) recovery.Recovery(err) return err } // Code never gets here - rancher.system_docker.exec=true return one.PidOne() } ================================================ FILE: cmd/network/network.go ================================================ package network import ( "fmt" "io/ioutil" "os" "os/signal" "path/filepath" "strconv" "syscall" "text/template" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/docker" "github.com/burmilla/os/pkg/hostname" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/netconf" "github.com/docker/libnetwork/resolvconf" "golang.org/x/net/context" ) var funcMap = template.FuncMap{ "addFunc": func(a, b int) string { return strconv.Itoa(a + b) }, } func Main() { log.InitLogger() cfg := config.LoadConfig() ApplyNetworkConfig(cfg) log.Infof("Restart syslog") client, err := docker.NewSystemClient() if err != nil { log.Error(err) } if err := client.ContainerRestart(context.Background(), "syslog", 10); err != nil { log.Error(err) } signalChan := make(chan os.Signal, 1) signal.Notify(signalChan, syscall.SIGTERM) <-signalChan log.Info("Received SIGTERM, shutting down") netconf.StopWpaSupplicant() netconf.StopDhcpcd() } func ApplyNetworkConfig(cfg *config.CloudConfig) { log.Infof("Apply Network Config") userSetDNS := len(cfg.Rancher.Network.DNS.Nameservers) > 0 || len(cfg.Rancher.Network.DNS.Search) > 0 if err := hostname.SetHostnameFromCloudConfig(cfg); err != nil { log.Errorf("Failed to set 
hostname from cloud config: %v", err) } userSetHostname := cfg.Hostname != "" if cfg.Rancher.Network.DHCPTimeout <= 0 { cfg.Rancher.Network.DHCPTimeout = cfg.Rancher.Defaults.Network.DHCPTimeout } // Always generate dhcpcd.conf to support NTP and hostname configuration coming from DHCP generateDhcpcdFiles(cfg) // In order to handle the STATIC mode in Wi-Fi network, we have to update the dhcpcd.conf file. // https://wiki.archlinux.org/index.php/dhcpcd#Static_profile if len(cfg.Rancher.Network.WifiNetworks) > 0 { generateWpaFiles(cfg) } dhcpSetDNS, err := netconf.ApplyNetworkConfigs(&cfg.Rancher.Network, userSetHostname, userSetDNS) if err != nil { log.Errorf("Failed to apply network configs(by netconf): %v", err) } if dhcpSetDNS { log.Infof("DNS set by DHCP") } if !userSetDNS && !dhcpSetDNS { // only write 8.8.8.8,8.8.4.4 as a last resort log.Infof("Writing default resolv.conf - no user setting, and no DHCP setting") if _, err := resolvconf.Build("/etc/resolv.conf", cfg.Rancher.Defaults.Network.DNS.Nameservers, cfg.Rancher.Defaults.Network.DNS.Search, nil); err != nil { log.Errorf("Failed to write resolv.conf (!userSetDNS and !dhcpSetDNS): %v", err) } } if userSetDNS { if _, err := resolvconf.Build("/etc/resolv.conf", cfg.Rancher.Network.DNS.Nameservers, cfg.Rancher.Network.DNS.Search, nil); err != nil { log.Errorf("Failed to write resolv.conf (userSetDNS): %v", err) } else { log.Infof("writing to /etc/resolv.conf: nameservers: %v, search: %v", cfg.Rancher.Network.DNS.Nameservers, cfg.Rancher.Network.DNS.Search) } } resolve, err := ioutil.ReadFile("/etc/resolv.conf") log.Debugf("Resolve.conf == [%s], %v", resolve, err) log.Infof("Apply Network Config SyncHostname") if err := hostname.SyncHostname(); err != nil { log.Errorf("Failed to sync hostname: %v", err) } } func generateDhcpcdFiles(cfg *config.CloudConfig) { networks := cfg.Rancher.Network.WifiNetworks interfaces := cfg.Rancher.Network.Interfaces configs := make(map[string]netconf.WifiNetworkConfig) for k, v := 
range interfaces { if c, ok := networks[v.WifiNetwork]; ok && c.Address != "" { configs[k] = c } } f, err := os.Create(config.DHCPCDConfigFile) if err != nil { log.Errorf("Failed to open file: %s err: %v", config.DHCPCDConfigFile, err) return } defer f.Close() templateFiles := []string{config.DHCPCDTemplateFile} templateName := filepath.Base(templateFiles[0]) p := template.Must(template.New(templateName).ParseFiles(templateFiles...)) if err = p.Execute(f, configs); err != nil { log.Errorf("Failed to write dhcpcd configuration to %s: %v", config.DHCPCDConfigFile, err) } } func generateWpaFiles(cfg *config.CloudConfig) { networks := cfg.Rancher.Network.WifiNetworks interfaces := cfg.Rancher.Network.Interfaces for k, v := range interfaces { if v.WifiNetwork != "" { configs := make(map[string]netconf.WifiNetworkConfig) filename := fmt.Sprintf(config.WPAConfigFile, k) f, err := os.Create(filename) if err != nil { log.Errorf("Failed to open file: %s err: %v", filename, err) continue } if c, ok := networks[v.WifiNetwork]; ok { configs[v.WifiNetwork] = c } templateFiles := []string{config.WPATemplateFile} templateName := filepath.Base(templateFiles[0]) p := template.Must(template.New(templateName).Funcs(funcMap).ParseFiles(templateFiles...)) if err = p.Execute(f, configs); err != nil { log.Errorf("Failed to write wpa configuration to %s: %v", filename, err) } f.Close() } } } ================================================ FILE: cmd/power/power.go ================================================ package power import ( "errors" "fmt" "os" "path/filepath" "strconv" "strings" "syscall" "time" "github.com/burmilla/os/cmd/control/install" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/docker" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/util" "github.com/docker/engine-api/types" "github.com/docker/engine-api/types/container" "github.com/docker/engine-api/types/filters" "golang.org/x/net/context" ) // You can't shut down the system from a process in the console because we
want to stop the console container. // If you do that you kill yourself. So we spawn a separate container to do power operations. // This comes up because on shutdown we want ssh to die gracefully, terminating ssh connections rather than just leaving tcp sessions hanging. // // Be careful with the container name: only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed func runDocker(name string) error { if os.ExpandEnv("${IN_DOCKER}") == "true" { return nil } client, err := docker.NewSystemClient() if err != nil { return err } cmd := os.Args log.Debugf("runDocker cmd: %s", cmd) if name == "" { name = filepath.Base(os.Args[0]) } containerName := strings.TrimPrefix(strings.Join(strings.Split(name, "/"), "-"), "-") existing, err := client.ContainerInspect(context.Background(), containerName) if err == nil && existing.ID != "" { // remove the old version of reboot err := client.ContainerRemove(context.Background(), types.ContainerRemoveOptions{ ContainerID: existing.ID, }) if err != nil { return err } } currentContainerID, err := util.GetCurrentContainerID() if err != nil { return err } currentContainer, err := client.ContainerInspect(context.Background(), currentContainerID) if err != nil { return err } powerContainer, err := client.ContainerCreate(context.Background(), &container.Config{ Image: currentContainer.Config.Image, Cmd: cmd, Env: []string{ "IN_DOCKER=true", }, }, &container.HostConfig{ PidMode: "host", NetworkMode: "none", VolumesFrom: []string{ currentContainer.ID, }, Privileged: true, }, nil, containerName) if err != nil { return err } err = client.ContainerStart(context.Background(), powerContainer.ID) if err != nil { return err } reader, err := client.ContainerLogs(context.Background(), types.ContainerLogsOptions{ ContainerID: powerContainer.ID, ShowStderr: true, ShowStdout: true, Follow: true, }) if err != nil { log.Fatal(err) } for { p := make([]byte, 4096) n, err := reader.Read(p) if err != nil { log.Error(err) if n == 0 { reader.Close() break } } if n > 0 { fmt.Print(string(p[:n])) } } if
err != nil { log.Fatal(err) } os.Exit(0) return nil } func reboot(name string, force bool, code uint) { if os.Geteuid() != 0 { log.Fatalf("%s: Need to be root", os.Args[0]) } cfg := config.LoadConfig() // Validate config if !force { _, validationErrors, err := config.LoadConfigWithError() if err != nil { log.Fatal(err) } if validationErrors != nil && !validationErrors.Valid() { for _, validationError := range validationErrors.Errors() { log.Error(validationError) } return } } // Add shutdown timeout timeoutValue := cfg.Rancher.ShutdownTimeout if timeoutValue == 0 { timeoutValue = 60 } if timeoutValue < 5 { timeoutValue = 5 } log.Infof("Setting %s timeout to %d (rancher.shutdown_timeout set to %d)", os.Args[0], timeoutValue, cfg.Rancher.ShutdownTimeout) go func() { timeout := time.After(time.Duration(timeoutValue) * time.Second) tick := time.Tick(100 * time.Millisecond) // Keep trying until we're timed out or got a result or got an error for { select { // Got a timeout! fail with a timeout error case <-timeout: log.Errorf("Container shutdown taking too long, forcing %s.", os.Args[0]) syscall.Sync() syscall.Reboot(int(code)) case <-tick: fmt.Printf(".") } } }() // reboot -f should work even when system-docker is having problems if !force { if kexecFlag || previouskexecFlag || kexecAppendFlag != "" { // pass through the cmdline args name = "" } if err := runDocker(name); err != nil { log.Fatal(err) } } if kexecFlag || previouskexecFlag || kexecAppendFlag != "" { // need to mount boot dir, or `system-docker run -v /:/host -w /host/boot` ? 
baseName := "/mnt/new_img" _, _, err := install.MountDevice(baseName, "", "", false) if err != nil { log.Errorf("ERROR: can't Kexec: %s", err) return } defer util.Unmount(baseName) Kexec(previouskexecFlag, filepath.Join(baseName, config.BootDir), kexecAppendFlag) return } if !force { err := shutDownContainers() if err != nil { log.Error(err) } } syscall.Sync() err := syscall.Reboot(int(code)) if err != nil { log.Fatal(err) } } func shutDownContainers() error { var err error shutDown := true timeout := 2 for i, arg := range os.Args { if arg == "-f" || arg == "--f" || arg == "--force" { shutDown = false } if arg == "-t" || arg == "--t" || arg == "--timeout" { if len(os.Args) > i+1 { t, err := strconv.Atoi(os.Args[i+1]) if err != nil { return err } timeout = t } else { log.Error("please specify a timeout") } } } if !shutDown { return nil } client, err := docker.NewSystemClient() if err != nil { return err } filter := filters.NewArgs() filter.Add("status", "running") opts := types.ContainerListOptions{ All: true, Filter: filter, } containers, err := client.ContainerList(context.Background(), opts) if err != nil { return err } currentContainerID, err := util.GetCurrentContainerID() if err != nil { return err } var stopErrorStrings []string consoleContainerIdx := -1 for idx, container := range containers { if container.ID == currentContainerID { continue } if container.Names[0] == "/console" { consoleContainerIdx = idx continue } log.Infof("Stopping %s : %s", container.Names[0], container.ID[:12]) stopErr := client.ContainerStop(context.Background(), container.ID, timeout) if stopErr != nil { log.Errorf("------- Error Stopping %s : %s", container.Names[0], stopErr.Error()) stopErrorStrings = append(stopErrorStrings, " ["+container.ID+"] "+stopErr.Error()) } } // lets see what containers are still running and only wait on those containers, err = client.ContainerList(context.Background(), opts) if err != nil { return err } var waitErrorStrings []string for idx, container 
:= range containers { if container.ID == currentContainerID { continue } if container.Names[0] == "/console" { consoleContainerIdx = idx continue } log.Infof("Waiting %s : %s", container.Names[0], container.ID[:12]) _, waitErr := client.ContainerWait(context.Background(), container.ID) if waitErr != nil { log.Errorf("------- Error Waiting %s : %s", container.Names[0], waitErr.Error()) waitErrorStrings = append(waitErrorStrings, " ["+container.ID+"] "+waitErr.Error()) } } // and now stop the console if consoleContainerIdx != -1 { container := containers[consoleContainerIdx] log.Infof("Console Stopping %v : %s", container.Names, container.ID[:12]) stopErr := client.ContainerStop(context.Background(), container.ID, timeout) if stopErr != nil { log.Errorf("------- Error Stopping %v : %s", container.Names, stopErr.Error()) stopErrorStrings = append(stopErrorStrings, " ["+container.ID+"] "+stopErr.Error()) } log.Infof("Console Waiting %v : %s", container.Names, container.ID[:12]) _, waitErr := client.ContainerWait(context.Background(), container.ID) if waitErr != nil { log.Errorf("------- Error Waiting %v : %s", container.Names, waitErr.Error()) waitErrorStrings = append(waitErrorStrings, " ["+container.ID+"] "+waitErr.Error()) } } if len(waitErrorStrings) != 0 || len(stopErrorStrings) != 0 { return errors.New("error while stopping \n1. STOP Errors [" + strings.Join(stopErrorStrings, ",") + "] \n2. 
WAIT Errors [" + strings.Join(waitErrorStrings, ",") + "]") } return nil } ================================================ FILE: cmd/power/shutdown.go ================================================ package power import ( "fmt" "os" "os/exec" "path/filepath" "syscall" "github.com/burmilla/os/cmd/control/install" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/log" "github.com/codegangsta/cli" ) var ( haltFlag bool poweroffFlag bool rebootFlag bool forceFlag bool kexecFlag bool previouskexecFlag bool kexecAppendFlag string ) func Shutdown() { log.InitLogger() app := cli.NewApp() app.Name = filepath.Base(os.Args[0]) app.Usage = fmt.Sprintf("%s BurmillaOS\nbuilt: %s", app.Name, config.BuildDate) app.Version = config.Version app.Author = "Project Burmilla\n\tRancher Labs, Inc." app.EnableBashCompletion = true app.Action = shutdown app.Flags = []cli.Flag{ // --no-wall // Do not send wall message before halt, power-off, // reboot. // halt, poweroff, reboot ONLY // -f, --force // Force immediate halt, power-off, reboot. Do not // contact the init system. cli.BoolFlag{ Name: "f, force", Usage: "Force immediate halt, power-off, reboot. Do not contact the init system.", Destination: &forceFlag, }, // -w, --wtmp-only // Only write wtmp shutdown entry, do not actually // halt, power-off, reboot. // -d, --no-wtmp // Do not write wtmp shutdown entry. // -n, --no-sync // Don't sync hard disks/storage media before halt, // power-off, reboot. // shutdown ONLY // -h // Equivalent to --poweroff, unless --halt is // specified. // -k // Do not halt, power-off, reboot, just write wall // message. // -c // Cancel a pending shutdown. This may be used to // cancel the effect of an invocation of shutdown // with a time argument that is not "+0" or "now". } // -H, --halt // Halt the machine.
if app.Name == "halt" { app.Flags = append(app.Flags, cli.BoolTFlag{ Name: "H, halt", Usage: "halt the machine", Destination: &haltFlag, }) } else { app.Flags = append(app.Flags, cli.BoolFlag{ Name: "H, halt", Usage: "halt the machine", Destination: &haltFlag, }) } // -P, --poweroff // Power-off the machine (the default for shutdown cmd). if app.Name == "poweroff" { app.Flags = append(app.Flags, cli.BoolTFlag{ Name: "P, poweroff", Usage: "poweroff the machine", Destination: &poweroffFlag, }) } else { // shutdown -h // Equivalent to --poweroff if app.Name == "shutdown" { app.Flags = append(app.Flags, cli.BoolFlag{ Name: "h", Usage: "poweroff the machine", Destination: &poweroffFlag, }) } app.Flags = append(app.Flags, cli.BoolFlag{ Name: "P, poweroff", Usage: "poweroff the machine", Destination: &poweroffFlag, }) } // -r, --reboot // Reboot the machine. if app.Name == "reboot" { app.Flags = append(app.Flags, cli.BoolTFlag{ Name: "r, reboot", Usage: "reboot after shutdown", Destination: &rebootFlag, }) // OR? maybe implement it as a `kexec` cli tool? app.Flags = append(app.Flags, cli.BoolFlag{ Name: "kexec", Usage: "kexec the default RancherOS cfg", Destination: &kexecFlag, }) app.Flags = append(app.Flags, cli.BoolFlag{ Name: "kexec-previous", Usage: "kexec the previous RancherOS cfg", Destination: &previouskexecFlag, }) app.Flags = append(app.Flags, cli.StringFlag{ Name: "kexec-append", Usage: "kexec using the specified kernel boot params (ignores global.cfg)", Destination: &kexecAppendFlag, }) } else { app.Flags = append(app.Flags, cli.BoolFlag{ Name: "r, reboot", Usage: "reboot after shutdown", Destination: &rebootFlag, }) } //TODO: add the time and msg flags... 
app.HideHelp = true app.Run(os.Args) } func Kexec(previous bool, bootDir, append string) error { cfg := "linux-current.cfg" if previous { cfg = "linux-previous.cfg" } cfgFile := filepath.Join(bootDir, cfg) vmlinuzFile, initrdFile, err := install.ReadSyslinuxCfg(cfgFile) if err != nil { log.Errorf("%s", err) return err } globalCfgFile := filepath.Join(bootDir, "global.cfg") if append == "" { append, err = install.ReadGlobalCfg(globalCfgFile) if err != nil { log.Errorf("%s", err) return err } } // TODO: read global.cfg if append == "" // kexec -l ${DIST}/vmlinuz --initrd=${DIST}/initrd --append="${kernelArgs} ${APPEND}" -f cmd := exec.Command( "kexec", "-l", vmlinuzFile, "--initrd", initrdFile, "--append", append, "-f") log.Debugf("Run(%#v)", cmd) cmd.Stderr = os.Stderr if _, err := cmd.Output(); err != nil { log.Errorf("Failed to kexec: %s", err) return err } log.Infof("kexec'd to new install") return nil } // Reboot is used by installation / upgrade // TODO: add kexec option func Reboot() { os.Args = []string{"reboot"} reboot("reboot", false, syscall.LINUX_REBOOT_CMD_RESTART) } func shutdown(c *cli.Context) error { // the shutdown command's default is poweroff var powerCmd uint powerCmd = syscall.LINUX_REBOOT_CMD_POWER_OFF if rebootFlag { powerCmd = syscall.LINUX_REBOOT_CMD_RESTART } else if poweroffFlag { powerCmd = syscall.LINUX_REBOOT_CMD_POWER_OFF } else if haltFlag { powerCmd = syscall.LINUX_REBOOT_CMD_HALT } timeArg := c.Args().Get(0) // We may be called via an absolute path, so check that now and make sure we // don't pass the wrong app name down. Aside from the logic in the immediate // context here, the container name is derived from how we were called and // cannot contain slashes. 
appName := filepath.Base(c.App.Name) if appName == "shutdown" && timeArg != "" { if timeArg != "now" && timeArg != "+0" { err := fmt.Errorf("Sorry, can't parse '%s' as time value (only 'now' supported)", timeArg) log.Error(err) return err } // TODO: if there are more params, LOG them } reboot(appName, forceFlag, powerCmd) return nil } ================================================ FILE: cmd/respawn/respawn.go ================================================ package respawn import ( "fmt" "io" "io/ioutil" "os" "os/exec" "os/signal" "runtime" "strings" "sync" "syscall" "time" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/log" "github.com/codegangsta/cli" ) var ( running = true processes = map[int]*os.Process{} processLock = sync.Mutex{} ) func Main() { log.InitLogger() runtime.GOMAXPROCS(1) runtime.LockOSThread() app := cli.NewApp() app.Name = os.Args[0] app.Usage = fmt.Sprintf("%s BurmillaOS\nbuilt: %s", app.Name, config.BuildDate) app.Version = config.Version app.Author = "Project Burmilla\n\tRancher Labs, Inc." 
app.Flags = []cli.Flag{ cli.StringFlag{ Name: "file, f", Usage: "Optional config file to load", }, } app.Action = run log.Infof("%s, %s", app.Usage, app.Version) fmt.Printf("%s, %s\n", app.Usage, app.Version) app.Run(os.Args) } func setupSigterm() { sigtermChan := make(chan os.Signal, 1) signal.Notify(sigtermChan, syscall.SIGTERM) go func() { for range sigtermChan { termPids() } }() } func run(c *cli.Context) error { setupSigterm() var stream io.Reader = os.Stdin var err error inputFileName := c.String("file") if inputFileName != "" { stream, err = os.Open(inputFileName) if err != nil { log.Fatal(err) } } input, err := ioutil.ReadAll(stream) if err != nil { panic(err) } lines := strings.Split(string(input), "\n") doneChannel := make(chan string, len(lines)) started := 0 for _, line := range lines { if strings.TrimSpace(line) == "" || strings.HasPrefix(strings.TrimSpace(line), "#") { continue } started++ go execute(line, doneChannel) } // only wait for the commands actually started; counting all lines would // block forever on blank and comment lines that never send on the channel for i := 0; i < started; i++ { line := <-doneChannel log.Infof("FINISHED: %s", line) fmt.Printf("FINISHED: %s\n", line) } return nil } func addProcess(process *os.Process) { processLock.Lock() defer processLock.Unlock() processes[process.Pid] = process } func removeProcess(process *os.Process) { processLock.Lock() defer processLock.Unlock() delete(processes, process.Pid) } func termPids() { running = false processLock.Lock() defer processLock.Unlock() for _, process := range processes { log.Infof("sending SIGTERM to %d", process.Pid) process.Signal(syscall.SIGTERM) } } func execute(line string, doneChannel chan string) { defer func() { doneChannel <- line }() start := time.Now() count := 0 args := strings.Split(line, " ") for { cmd := exec.Command(args[0], args[1:]...)
cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr cmd.SysProcAttr = &syscall.SysProcAttr{ Setsid: true, } if err := cmd.Start(); err == nil { addProcess(cmd.Process) if err = cmd.Wait(); err != nil { log.Errorf("Wait cmd to exit: %s, err: %v", line, err) } removeProcess(cmd.Process) } else { log.Errorf("Start cmd: %s, err: %v", line, err) } if !running { log.Infof("%s : not restarting, exiting", line) break } count++ if count > 10 { if time.Now().Sub(start) <= (1 * time.Second) { log.Errorf("%s : restarted too fast, not executing", line) break } count = 0 start = time.Now() } } } ================================================ FILE: cmd/sysinit/sysinit.go ================================================ package sysinit import ( "io/ioutil" "os" "github.com/burmilla/os/pkg/log" "github.com/burmilla/os/pkg/sysinit" ) func Main() { log.InitLogger() resolve, err := ioutil.ReadFile("/etc/resolv.conf") log.Infof("Resolv.conf == [%s], %v", resolve, err) log.Infof("Exec %v", os.Args) if err := sysinit.SysInit(); err != nil { log.Fatal(err) } } ================================================ FILE: cmd/wait/wait.go ================================================ package wait import ( "os" "github.com/burmilla/os/config" "github.com/burmilla/os/pkg/docker" "github.com/burmilla/os/pkg/log" ) func Main() { log.InitLogger() _, err := docker.NewClient(config.DockerHost) if err != nil { log.Errorf("Failed to connect to Docker") os.Exit(1) } log.Infof("Docker is ready") } ================================================ FILE: config/cloudinit/.gitignore ================================================ *.swp bin/ coverage/ gopath/ ================================================ FILE: config/cloudinit/.travis.yml ================================================ language: go matrix: include: - go: 1.5 env: GO15VENDOREXPERIMENT=1 - go: 1.6 script: - ./test ================================================ FILE: config/cloudinit/CONTRIBUTING.md 
================================================

# How to Contribute

CoreOS projects are [Apache 2.0 licensed](LICENSE) and accept contributions via GitHub pull requests. This document outlines some of the conventions on development workflow, commit message formatting, contact points and other resources to make it easier to get your contribution accepted.

# Certificate of Origin

By contributing to this project you agree to the Developer Certificate of Origin (DCO). This document was created by the Linux Kernel community and is a simple statement that you, as a contributor, have the legal right to make the contribution. See the [DCO](DCO) file for details.

# Email and Chat

The project currently uses the general CoreOS email list and IRC channel:

- Email: [coreos-dev](https://groups.google.com/forum/#!forum/coreos-dev)
- IRC: #[coreos](irc://irc.freenode.org:6667/#coreos) IRC channel on freenode.org

## Getting Started

- Fork the repository on GitHub
- Read the [README](README.md) for build and test instructions
- Play with the project, submit bugs, submit patches!

## Contribution Flow

This is a rough outline of what a contributor's workflow looks like:

- Create a topic branch from where you want to base your work (usually master).
- Make commits of logical units.
- Make sure your commit messages are in the proper format (see below).
- Push your changes to a topic branch in your fork of the repository.
- Make sure the tests pass, and add any new tests as appropriate.
- Submit a pull request to the original repository.

Thanks for your contributions!

### Format of the Commit Message

We follow a rough convention for commit messages that is designed to answer two questions: what changed and why. The subject line should feature the what and the body of the commit should describe the why.

```
environment: write new keys in consistent order

Go 1.3 randomizes the ordering of keys when iterating over a map. Sort the keys to make this ordering consistent.

Fixes #38
```

The format can be described more formally as follows:

```
: