Full Code of google-coral/tflite for AI

Repository: google-coral/tflite
Branch: master
Commit: eced31ac01e9
Files: 17
Total size: 51.0 KB

Directory structure:
gitextract_s01zy75z/

├── LICENSE
├── README.md
├── cpp/
│   └── examples/
│       ├── classification/
│       │   ├── Makefile
│       │   └── classify.cc
│       └── lstpu/
│           ├── BUILD
│           ├── Makefile
│           ├── README.md
│           ├── WORKSPACE
│           └── lstpu.cc
└── python/
    └── examples/
        ├── classification/
        │   ├── README.md
        │   ├── classify.py
        │   ├── classify_image.py
        │   └── install_requirements.sh
        └── detection/
            ├── README.md
            ├── detect.py
            ├── detect_image.py
            └── install_requirements.sh

================================================
FILE CONTENTS
================================================

================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright 2018 Google LLC

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: README.md
================================================
# Coral examples using TensorFlow Lite API

This repo contains example code for running inference on [Coral
devices](https://coral.withgoogle.com/products) using the [TensorFlow Lite
API](https://www.tensorflow.org/lite). Each example executes a different type of
model, such as an image classification or object detection model.

For instructions to set up and run the code, see the README inside each example.


================================================
FILE: cpp/examples/classification/Makefile
================================================
# This is a Makefile to cross-compile classify.cc example.
# 1. Download latest Edge TPU runtime archive from https://coral.ai/software/
#    and extract next to the Makefile:
#    $ wget https://dl.google.com/coral/edgetpu_api/edgetpu_runtime_20200710.zip
#    $ unzip edgetpu_runtime_20200710.zip
# 2. Download TensorFlow to the Linux machine:
#    $ git clone https://github.com/tensorflow/tensorflow.git
# 3. Download external dependencies for TensorFlow Lite:
#    $ tensorflow/tensorflow/lite/tools/make/download_dependencies.sh
# 4. Cross-compile TensorFlow Lite for aarch64:
#    $ tensorflow/tensorflow/lite/tools/make/build_aarch64_lib.sh
# 5. Cross-compile classify.cc example for aarch64:
#    $ TENSORFLOW_DIR=<location> make
# 6. Copy the following files to Coral Dev board:
#      * Generated 'classify' binary
#      * Model file 'mobilenet_v1_1.0_224_quant_edgetpu.tflite' from 'test_data'
#      * Label file 'imagenet_labels.txt' from 'test_data'
#      * Image file 'resized_cat.bmp' from 'test_data'
#    and finally run 'classify' binary on the board:
#    $ classify mobilenet_v1_1.0_224_quant_edgetpu.tflite \
#               imagenet_labels.txt \
#               resized_cat.bmp \
#               0.0001
#    INFO: Initialized TensorFlow Lite runtime.
#    INFO: Replacing 1 node(s) with delegate (EdgeTpuDelegateForCustomOp) node, yielding 1 partitions.
#    0.81641 286  Egyptian cat
#    0.10938 283  tiger cat
#    0.03516 282  tabby, tabby cat
#    0.01172 812  space heater
#    0.00781 754  radiator
#    0.00391 540  doormat, welcome mat
#    0.00391 285  Siamese cat, Siamese
MAKEFILE_DIR := $(realpath $(dir $(lastword $(MAKEFILE_LIST))))
TENSORFLOW_DIR ?=

classify: classify.cc
	aarch64-linux-gnu-g++ -std=c++11 -o classify classify.cc \
	-I$(MAKEFILE_DIR)/edgetpu_runtime/libedgetpu/ \
	-I$(TENSORFLOW_DIR) \
	-I$(TENSORFLOW_DIR)/tensorflow/lite/tools/make/downloads/flatbuffers/include \
	-L$(TENSORFLOW_DIR)/tensorflow/lite/tools/make/gen/linux_aarch64/lib \
	-L$(MAKEFILE_DIR)/edgetpu_runtime/libedgetpu/direct/aarch64/ \
	-ltensorflow-lite -l:libedgetpu.so.1.0 -lpthread -lm -ldl

clean:
	rm -f classify


================================================
FILE: cpp/examples/classification/classify.cc
================================================
#include <algorithm>
#include <cassert>
#include <fstream>
#include <iomanip>
#include <iostream>
#include <numeric>
#include <string>
#include <utility>
#include <vector>

#include "edgetpu_c.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

namespace {
constexpr size_t kBmpFileHeaderSize = 14;
constexpr size_t kBmpInfoHeaderSize = 40;
constexpr size_t kBmpHeaderSize = kBmpFileHeaderSize + kBmpInfoHeaderSize;

int32_t ToInt32(const char p[4]) {
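  // Assembles a little-endian 32-bit value from four bytes (BMP headers
  // store integers little-endian).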
  return (p[3] << 24) | (p[2] << 16) | (p[1] << 8) | p[0];
}

std::vector<uint8_t> ReadBmpImage(const char* filename,
                                  int* out_width = nullptr,
                                  int* out_height = nullptr,
                                  int* out_channels = nullptr) {
  assert(filename);

  std::ifstream file(filename, std::ios::binary);
  if (!file) return {};  // Open failed.

  char header[kBmpHeaderSize];
  if (!file.read(header, sizeof(header))) return {};  // Read failed.

  const char* file_header = header;
  const char* info_header = header + kBmpFileHeaderSize;

  if (file_header[0] != 'B' || file_header[1] != 'M')
    return {};  // Invalid file type.

  const int channels = info_header[14] / 8;
  if (channels != 1 && channels != 3) return {};  // Unsupported bits per pixel.

  if (ToInt32(&info_header[16]) != 0) return {};  // Unsupported compression.

  const uint32_t offset = ToInt32(&file_header[10]);
  if (offset > kBmpHeaderSize &&
      !file.seekg(offset - kBmpHeaderSize, std::ios::cur))
    return {};  // Seek failed.

  int width = ToInt32(&info_header[4]);
  if (width < 0) return {};  // Invalid width.

  int height = ToInt32(&info_header[8]);
  const bool top_down = height < 0;
  if (top_down) height = -height;

  const int line_bytes = width * channels;
  const int line_padding_bytes =
      4 * ((8 * channels * width + 31) / 32) - line_bytes;
  std::vector<uint8_t> image(line_bytes * height);
  for (int i = 0; i < height; ++i) {
    uint8_t* line = &image[(top_down ? i : (height - 1 - i)) * line_bytes];
    if (!file.read(reinterpret_cast<char*>(line), line_bytes))
      return {};  // Read failed.
    if (!file.seekg(line_padding_bytes, std::ios::cur))
      return {};  // Seek failed.
    if (channels == 3) {
      for (int j = 0; j < width; ++j) std::swap(line[3 * j], line[3 * j + 2]);
    }
  }

  if (out_width) *out_width = width;
  if (out_height) *out_height = height;
  if (out_channels) *out_channels = channels;
  return image;
}

std::vector<std::string> ReadLabels(const std::string& filename) {
  std::ifstream file(filename);
  if (!file) return {};  // Open failed.

  std::vector<std::string> lines;
  for (std::string line; std::getline(file, line);) lines.emplace_back(line);
  return lines;
}

std::string GetLabel(const std::vector<std::string>& labels, int label) {
  if (label >= 0 && label < labels.size()) return labels[label];
  return std::to_string(label);
}

std::vector<float> Dequantize(const TfLiteTensor& tensor) {
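  // Affine dequantization: real_value = scale * (quantized_value - zero_point).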
  const auto* data = reinterpret_cast<const uint8_t*>(tensor.data.data);
  std::vector<float> result(tensor.bytes);
  for (int i = 0; i < tensor.bytes; ++i)
    result[i] = tensor.params.scale * (data[i] - tensor.params.zero_point);
  return result;
}

std::vector<std::pair<int, float>> Sort(const std::vector<float>& scores,
                                        float threshold) {
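  // Sort indirectly through pointers: partition out entries below the
  // threshold, order the rest by descending score, then recover each index
  // as the pointer's offset from scores.data().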
  std::vector<const float*> ptrs(scores.size());
  std::iota(ptrs.begin(), ptrs.end(), scores.data());
  auto end = std::partition(ptrs.begin(), ptrs.end(),
                            [=](const float* v) { return *v >= threshold; });
  std::sort(ptrs.begin(), end,
            [](const float* a, const float* b) { return *a > *b; });

  std::vector<std::pair<int, float>> result;
  result.reserve(end - ptrs.begin());
  for (auto it = ptrs.begin(); it != end; ++it)
    result.emplace_back(*it - scores.data(), **it);
  return result;
}
}  // namespace

int main(int argc, char* argv[]) {
  if (argc != 5) {
    std::cerr << argv[0]
              << " <model_file> <label_file> <image_file> <threshold>"
              << std::endl;
    return 1;
  }

  const std::string model_file = argv[1];
  const std::string label_file = argv[2];
  const std::string image_file = argv[3];
  const float threshold = std::stof(argv[4]);

  // Find TPU device.
  size_t num_devices;
  std::unique_ptr<edgetpu_device, decltype(&edgetpu_free_devices)> devices(
      edgetpu_list_devices(&num_devices), &edgetpu_free_devices);

  if (num_devices == 0) {
    std::cerr << "No connected TPU found" << std::endl;
    return 1;
  }
  const auto& device = devices.get()[0];

  // Load labels.
  auto labels = ReadLabels(label_file);
  if (labels.empty()) {
    std::cerr << "Cannot read labels from " << label_file << std::endl;
    return 1;
  }

  // Load image.
  int image_bpp, image_width, image_height;
  auto image =
      ReadBmpImage(image_file.c_str(), &image_width, &image_height, &image_bpp);
  if (image.empty()) {
    std::cerr << "Cannot read image from " << image_file << std::endl;
    return 1;
  }

  // Load model.
  auto model = tflite::FlatBufferModel::BuildFromFile(model_file.c_str());
  if (!model) {
    std::cerr << "Cannot read model from " << model_file << std::endl;
    return 1;
  }

  // Create interpreter.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  if (tflite::InterpreterBuilder(*model, resolver)(&interpreter) != kTfLiteOk) {
    std::cerr << "Cannot create interpreter" << std::endl;
    return 1;
  }

  auto* delegate =
      edgetpu_create_delegate(device.type, device.path, nullptr, 0);
  interpreter->ModifyGraphWithDelegate({delegate, edgetpu_free_delegate});

  // Allocate tensors.
  if (interpreter->AllocateTensors() != kTfLiteOk) {
    std::cerr << "Cannot allocate interpreter tensors" << std::endl;
    return 1;
  }

  // Set interpreter input.
  const auto* input_tensor = interpreter->input_tensor(0);
  if (input_tensor->type != kTfLiteUInt8 ||           //
      input_tensor->dims->data[0] != 1 ||             //
      input_tensor->dims->data[1] != image_height ||  //
      input_tensor->dims->data[2] != image_width ||   //
      input_tensor->dims->data[3] != image_bpp) {
    std::cerr << "Input tensor shape does not match input image" << std::endl;
    return 1;
  }

  std::copy(image.begin(), image.end(),
            interpreter->typed_input_tensor<uint8_t>(0));

  // Run inference.
  if (interpreter->Invoke() != kTfLiteOk) {
    std::cerr << "Cannot invoke interpreter" << std::endl;
    return 1;
  }

  // Get interpreter output.
  auto results = Sort(Dequantize(*interpreter->output_tensor(0)), threshold);
  for (auto& result : results)
    std::cout << std::setw(7) << std::fixed << std::setprecision(5)
              << result.second << GetLabel(labels, result.first) << std::endl;

  return 0;
}


================================================
FILE: cpp/examples/lstpu/BUILD
================================================
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
cc_binary(
    name = "lstpu",
    srcs = ["lstpu.cc"],
    deps = [
        "@edgetpu//libedgetpu:header",
    ],
    copts = ["-Iexternal/edgetpu/libedgetpu"],
)


================================================
FILE: cpp/examples/lstpu/Makefile
================================================
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
SHELL := /bin/bash
MAKEFILE_DIR := $(realpath $(dir $(lastword $(MAKEFILE_LIST))))
# Allowed CPU values: k8, armv7a, aarch64
CPU ?= k8
# Allowed COMPILATION_MODE values: opt, dbg
COMPILATION_MODE ?= opt

BAZEL_OUT_DIR :=  $(MAKEFILE_DIR)/bazel-out/$(CPU)-$(COMPILATION_MODE)/bin
BAZEL_BUILD_FLAGS := --crosstool_top=@crosstool//:toolchains \
                     --compilation_mode=$(COMPILATION_MODE) \
                     --compiler=gcc \
                     --cpu=$(CPU) \
                     --linkopt=-L$(shell bazel info output_base)/external/edgetpu/libedgetpu/direct/$(CPU) \
                     --linkopt=-l:libedgetpu.so.1

lstpu:
	bazel build $(BAZEL_BUILD_FLAGS) //:lstpu

clean:
	bazel clean


================================================
FILE: cpp/examples/lstpu/README.md
================================================
# Simple C++ code example

This example shows how to build a simple C++ program that uses the Edge TPU
runtime library to list the available Edge TPU devices. (It does not perform
an inference.)

## Requirements

You need to install [Bazel](https://bazel.build/) in order to build the binary.
Follow the [Bazel install
instructions](https://docs.bazel.build/versions/master/install.html).

The example is configured to use the cross-compilation toolchain definition from
[crosstool](https://github.com/google-coral/crosstool), but you don't need
to download that repo.

## Compile the example

For native compilation, you need to install at least the `build-essential`
package:

```
sudo apt-get install -y build-essential
```

Then run the `make` command.

For cross-compilation, you need to install the `crossbuild-essential` packages
for the corresponding architectures:

```
sudo apt-get install -y crossbuild-essential-armhf \
                        crossbuild-essential-arm64
```

Then run `make CPU=armv7a` or `make CPU=aarch64`.

Find the output binary inside the `bazel-out` directory.
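
For example, after a default native build (`CPU=k8`, `COMPILATION_MODE=opt`),
the binary should land under `bazel-out/k8-opt/bin` (the exact path follows
the `CPU` and `COMPILATION_MODE` values in the Makefile):

```
./bazel-out/k8-opt/bin/lstpu
```

The program prints one line per detected Edge TPU: an index, the interface
(`PCI` or `USB`), and the device path (see `lstpu.cc`).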


================================================
FILE: cpp/examples/lstpu/WORKSPACE
================================================
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
workspace(name = "lstpu")

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "io_bazel_rules_closure",
    sha256 = "5b00383d08dd71f28503736db0500b6fb4dda47489ff5fc6bed42557c07c6ba9",
    strip_prefix = "rules_closure-308b05b2419edb5c8ee0471b67a40403df940149",
    urls = [
        "https://storage.googleapis.com/mirror.tensorflow.org/github.com/bazelbuild/rules_closure/archive/308b05b2419edb5c8ee0471b67a40403df940149.tar.gz",
        "https://github.com/bazelbuild/rules_closure/archive/308b05b2419edb5c8ee0471b67a40403df940149.tar.gz",  # 2019-06-13
    ],
)

http_archive(
    name = "bazel_skylib",
    sha256 = "1dde365491125a3db70731e25658dfdd3bc5dbdfd11b840b3e987ecf043c7ca0",
    urls = ["https://github.com/bazelbuild/bazel-skylib/releases/download/0.9.0/bazel_skylib-0.9.0.tar.gz"],
)

TENSORFLOW_COMMIT = "d855adfc5a0195788bf5f92c3c7352e638aa1109";
TENSORFLOW_SHA256 = "b8a691dbea2bb028fa8f7ce407b70ad236dae0a8705c8010dc7bad8af7e93bac"
http_archive(
    name = "org_tensorflow",
    sha256 = TENSORFLOW_SHA256,
    strip_prefix = "tensorflow-" + TENSORFLOW_COMMIT,
    urls = [
        "https://github.com/tensorflow/tensorflow/archive/" + TENSORFLOW_COMMIT + ".tar.gz",
    ],
)

load("@org_tensorflow//tensorflow:workspace.bzl", "tf_workspace")
tf_workspace(tf_repo_name = "org_tensorflow")

http_archive(
    name = "edgetpu",
    sha256 = "dc5eb443fa1b4132f6828fc0796169e0595643d415b585351839d3c4f796e6a8",
    strip_prefix = "edgetpu-14237f65ba07b7b1d8287e9f60dd20c88562871a",
    urls = [
        "https://github.com/google-coral/edgetpu/archive/14237f65ba07b7b1d8287e9f60dd20c88562871a.tar.gz",
    ]
)

http_archive(
    name = "coral_crosstool",
    sha256 = "cb31b1417ccdcf7dd9fca5ec63e1571672372c30427730255997a547569d2feb",
    strip_prefix = "crosstool-9e00d5be43bf001f883b5700f5d04882fea00229",
    urls = [
        "https://github.com/google-coral/crosstool/archive/9e00d5be43bf001f883b5700f5d04882fea00229.tar.gz",
    ],
)
load("@coral_crosstool//:configure.bzl", "cc_crosstool")
cc_crosstool(name = "crosstool")


================================================
FILE: cpp/examples/lstpu/lstpu.cc
================================================
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//      http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <iostream>
#include <memory>

#include "edgetpu_c.h"

std::string ToString(edgetpu_device_type type) {
  switch (type) {
    case EDGETPU_APEX_PCI:
      return "PCI";
    case EDGETPU_APEX_USB:
      return "USB";
  }
  return "Unknown";
}

int main(int argc, char* argv[]) {
  size_t num_devices;
  std::unique_ptr<edgetpu_device, decltype(&edgetpu_free_devices)> devices(
      edgetpu_list_devices(&num_devices), &edgetpu_free_devices);

  for (size_t i = 0; i < num_devices; ++i) {
    const auto& device = devices.get()[i];
    std::cout << i << " " << ToString(device.type) << " " << device.path
              << std::endl;
  }

  return 0;
}


================================================
FILE: python/examples/classification/README.md
================================================
# Image classification example on Coral with TensorFlow Lite

This example uses [TensorFlow Lite](https://tensorflow.org/lite) with Python
to run an image classification model with acceleration on the Edge TPU, using a
Coral device such as the
[USB Accelerator](https://coral.withgoogle.com/products/accelerator) or
[Dev Board](https://coral.withgoogle.com/products/dev-board).

The Python script takes arguments for the model, labels file, and image
you want to process. It then prints the model's prediction for what the
image is to the terminal screen.

## Set up your device

1.  First, be sure you have completed the [setup instructions for your Coral
    device](https://coral.withgoogle.com/docs/accelerator/get-started/).

    Importantly, you should have the latest TensorFlow Lite runtime installed,
    as per the [Python quickstart](
    https://www.tensorflow.org/lite/guide/python).

2.  Clone this Git repo onto your computer:

    ```
    mkdir google-coral && cd google-coral

    git clone https://github.com/google-coral/tflite --depth 1
    ```

3.  Install this example's dependencies:

    ```
    cd tflite/python/examples/classification

    ./install_requirements.sh
    ```

## Run the code

Use this command to run image classification with the model and photo
downloaded by the above script (photo shown in figure 1):

```
python3 classify_image.py \
  --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
  --labels models/inat_bird_labels.txt \
  --input images/parrot.jpg
```

<img width="200"
     src="https://github.com/google-coral/edgetpu/raw/master/test_data/parrot.jpg" />
<br><b>Figure 1.</b> parrot.jpg

You should see results like this:

```.language-bash
Initializing TF Lite interpreter...
INFO: Initialized TensorFlow Lite runtime.
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.
11.8ms
3.0ms
2.8ms
2.9ms
2.9ms
-------RESULTS--------
Ara macao (Scarlet Macaw): 0.76562
```

To demonstrate varying inference speeds, the example repeats the same inference
five times. Your inference speeds might be different based on your host platform
and whether you're using the USB Accelerator with a USB 2.0 or 3.0 connection.

To compare the performance when not using the Edge TPU, try
running it again with the model that's *not* compiled for the Edge TPU:

```
python3 classify_image.py \
  --model models/mobilenet_v2_1.0_224_inat_bird_quant.tflite \
  --labels models/inat_bird_labels.txt \
  --input images/parrot.jpg
```
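
The same flow is available programmatically through the helpers in
`classify.py`. Here is a minimal sketch (not part of this repo), assuming the
model and image downloaded by `install_requirements.sh`, and that the Edge TPU
runtime and `tflite_runtime` are installed on Linux:

```
from PIL import Image
import tflite_runtime.interpreter as tflite

import classify

# Build an interpreter that delegates supported ops to the Edge TPU.
interpreter = tflite.Interpreter(
    model_path='models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite',
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

# Resize the image to the model's input size and copy it into the tensor.
image = Image.open('images/parrot.jpg').convert('RGB')
classify.set_input(interpreter,
                   image.resize(classify.input_size(interpreter)))

interpreter.invoke()
for klass in classify.get_output(interpreter, top_k=3):
  print(klass.id, klass.score)
```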



================================================
FILE: python/examples/classification/classify.py
================================================
# Lint as: python3
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Functions to work with classification models."""

import collections
import operator
import numpy as np

Class = collections.namedtuple('Class', ['id', 'score'])


def input_details(interpreter, key):
  """Returns input details by specified key."""
  return interpreter.get_input_details()[0][key]


def input_size(interpreter):
  """Returns input image size as (width, height) tuple."""
  _, height, width, _ = input_details(interpreter, 'shape')
  return width, height


def input_tensor(interpreter):
  """Returns input tensor view as numpy array of shape (height, width, 3)."""
  tensor_index = input_details(interpreter, 'index')
  return interpreter.tensor(tensor_index)()[0]


def output_tensor(interpreter, dequantize=True):
  """Returns output tensor of classification model.

  Integer output tensor is dequantized by default.

  Args:
    interpreter: tflite.Interpreter;
    dequantize: bool; whether to dequantize integer output tensor.

  Returns:
    Output tensor as numpy array.
  """
  output_details = interpreter.get_output_details()[0]
  output_data = np.squeeze(interpreter.tensor(output_details['index'])())
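  # Integer outputs are mapped back to real values by the affine rule
  # real = scale * (q - zero_point); e.g. scale=1/256 and zero_point=0
  # turn a raw score of 200 into 0.78125 (illustrative numbers).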

  if dequantize and np.issubdtype(output_details['dtype'], np.integer):
    scale, zero_point = output_details['quantization']
    return scale * (output_data - zero_point)

  return output_data


def set_input(interpreter, data):
  """Copies data to input tensor."""
  input_tensor(interpreter)[:, :] = data


def get_output(interpreter, top_k=1, score_threshold=0.0):
  """Returns no more than top_k classes with score >= score_threshold."""
  scores = output_tensor(interpreter)
  classes = [
      Class(i, scores[i])
      for i in np.argpartition(scores, -top_k)[-top_k:]
      if scores[i] >= score_threshold
  ]
  return sorted(classes, key=operator.itemgetter(1), reverse=True)


================================================
FILE: python/examples/classification/classify_image.py
================================================
# Lint as: python3
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""Example using TF Lite to classify a given image using an Edge TPU.

   To run this code, you must have an Edge TPU attached to the host and
   install the Edge TPU runtime (`libedgetpu.so`) and `tflite_runtime`. For
   device setup instructions, see g.co/coral/setup.

   Example usage (use `install_requirements.sh` to get these files):
   ```
   python3 classify_image.py \
     --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite  \
     --labels models/inat_bird_labels.txt \
     --input images/parrot.jpg
   ```
"""

import argparse
import time

from PIL import Image

import classify
import tflite_runtime.interpreter as tflite
import platform

EDGETPU_SHARED_LIB = {
  'Linux': 'libedgetpu.so.1',
  'Darwin': 'libedgetpu.1.dylib',
  'Windows': 'edgetpu.dll'
}[platform.system()]


def load_labels(path, encoding='utf-8'):
  """Loads labels from file (with or without index numbers).

  Args:
    path: path to label file.
    encoding: label file encoding.
  Returns:
    Dictionary mapping indices to labels.
  """
  with open(path, 'r', encoding=encoding) as f:
    lines = f.readlines()
    if not lines:
      return {}

    if lines[0].split(' ', maxsplit=1)[0].isdigit():
      pairs = [line.split(' ', maxsplit=1) for line in lines]
      return {int(index): label.strip() for index, label in pairs}
    else:
      return {index: line.strip() for index, line in enumerate(lines)}


def make_interpreter(model_file):
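  # The model argument may carry an optional Edge TPU device spec after '@'
  # (e.g. 'model.tflite@usb:0'), which is forwarded to the delegate as its
  # 'device' option; without one, the delegate picks a device by itself.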
  model_file, *device = model_file.split('@')
  return tflite.Interpreter(
      model_path=model_file,
      experimental_delegates=[
          tflite.load_delegate(EDGETPU_SHARED_LIB,
                               {'device': device[0]} if device else {})
      ])


def main():
  parser = argparse.ArgumentParser(
      formatter_class=argparse.ArgumentDefaultsHelpFormatter)
  parser.add_argument(
      '-m', '--model', required=True, help='File path of .tflite file.')
  parser.add_argument(
      '-i', '--input', required=True, help='Image to be classified.')
  parser.add_argument(
      '-l', '--labels', help='File path of labels file.')
  parser.add_argument(
      '-k', '--top_k', type=int, default=1,
      help='Max number of classification results')
  parser.add_argument(
      '-t', '--threshold', type=float, default=0.0,
      help='Classification score threshold')
  parser.add_argument(
      '-c', '--count', type=int, default=5,
      help='Number of times to run inference')
  args = parser.parse_args()

  labels = load_labels(args.labels) if args.labels else {}

  interpreter = make_interpreter(args.model)
  interpreter.allocate_tensors()

  size = classify.input_size(interpreter)
  image = Image.open(args.input).convert('RGB').resize(size, Image.ANTIALIAS)
  classify.set_input(interpreter, image)

  print('----INFERENCE TIME----')
  print('Note: The first inference on Edge TPU is slow because it includes',
        'loading the model into Edge TPU memory.')
  for _ in range(args.count):
    start = time.perf_counter()
    interpreter.invoke()
    inference_time = time.perf_counter() - start
    classes = classify.get_output(interpreter, args.top_k, args.threshold)
    print('%.1fms' % (inference_time * 1000))

  print('-------RESULTS--------')
  for klass in classes:
    print('%s: %.5f' % (labels.get(klass.id, klass.id), klass.score))


if __name__ == '__main__':
  main()


================================================
FILE: python/examples/classification/install_requirements.sh
================================================
#!/bin/bash
#
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly TEST_DATA_URL=https://github.com/google-coral/edgetpu/raw/master/test_data

# Install required Python packages,
# but not on Mendel (Dev Board)—it has these already and shouldn't use pip
if [[ ! -f /etc/mendel_version ]]; then
  if ! python3 -m pip --version > /dev/null; then
    echo "Install pip first by following https://pip.pypa.io/en/stable/installing/ guide."
    exit 1
  fi
  python3 -m pip install numpy Pillow
fi

# Get TF Lite model and labels
MODEL_DIR="${SCRIPT_DIR}/models"
mkdir -p "${MODEL_DIR}"

(cd "${MODEL_DIR}" && \
curl -OL "${TEST_DATA_URL}/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite" \
     -OL "${TEST_DATA_URL}/mobilenet_v2_1.0_224_inat_bird_quant.tflite" \
     -OL "${TEST_DATA_URL}/inat_bird_labels.txt")

# Get example image
IMAGE_DIR="${SCRIPT_DIR}/images"
mkdir -p "${IMAGE_DIR}"

(cd "${IMAGE_DIR}" && \
curl -OL "${TEST_DATA_URL}/parrot.jpg")


================================================
FILE: python/examples/detection/README.md
================================================
# Object detection example on Coral with TensorFlow Lite

This example uses [TensorFlow Lite](https://tensorflow.org/lite) with Python
to run an object detection model with acceleration on the Edge TPU, using a
Coral device such as the
[USB Accelerator](https://coral.withgoogle.com/products/accelerator) or
[Dev Board](https://coral.withgoogle.com/products/dev-board).

The Python script takes arguments for the model, labels file, and image
you want to process. It then prints each detected object and the location
coordinates, and saves/displays the original image with bounding boxes and
labels drawn on top.

## Set up your device

1.  First, be sure you have completed the [setup instructions for your Coral
    device](https://coral.withgoogle.com/docs/accelerator/get-started/).

    Importantly, you should have the latest TensorFlow Lite runtime installed,
    as per the [Python quickstart](
    https://www.tensorflow.org/lite/guide/python).

2.  Clone this Git repo onto your computer:

    ```
    mkdir google-coral && cd google-coral

    git clone https://github.com/google-coral/tflite --depth 1
    ```

3.  Install this example's dependencies:

    ```
    cd tflite/python/examples/detection

    ./install_requirements.sh
    ```

## Run the code

Use this command to run object detection with the model and photo
downloaded by the above script (photo shown in figure 1):

```
python3 detect_image.py \
  --model models/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite \
  --labels models/coco_labels.txt \
  --input images/grace_hopper.bmp \
  --output images/grace_hopper_processed.bmp
```

<img width="200"
     src="https://github.com/google-coral/edgetpu/raw/master/test_data/grace_hopper.bmp" />
<br><b>Figure 1.</b> grace_hopper.bmp

You should see results like this:

```
INFO: Initialized TensorFlow Lite runtime.
----INFERENCE TIME----
Note: The first inference is slow because it includes loading the model into Edge TPU memory.
33.92 ms
19.71 ms
19.91 ms
19.91 ms
19.90 ms
-------RESULTS--------
tie
  id:     31
  score:  0.83984375
  bbox:   BBox(xmin=228, ymin=421, xmax=293, ymax=545)
person
  id:     0
  score:  0.83984375
  bbox:   BBox(xmin=2, ymin=5, xmax=513, ymax=596)
```

To demonstrate varying inference speeds, the example repeats the same inference
five times. Your inference speeds might be different based on your host platform
and whether you're using the USB Accelerator with a USB 2.0 or 3.0 connection.

To compare the performance when not using the Edge TPU, try
running it again with the model that's *not* compiled for the Edge TPU:

```
python3 detect_image.py \
  --model models/ssd_mobilenet_v2_coco_quant_postprocess.tflite \
  --labels models/coco_labels.txt \
  --input images/grace_hopper.bmp
```
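
The detection helpers in `detect.py` can be driven the same way. Here is a
minimal sketch (not part of this repo), assuming the model and image
downloaded by `install_requirements.sh`, and that the Edge TPU runtime and
`tflite_runtime` are installed on Linux:

```
from PIL import Image
import tflite_runtime.interpreter as tflite

import detect

interpreter = tflite.Interpreter(
    model_path='models/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite',
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

# Letterbox the image into the input tensor and keep the resize ratio.
image = Image.open('images/grace_hopper.bmp')
scale = detect.set_input(interpreter, image.size,
                         lambda size: image.resize(size, Image.ANTIALIAS))

interpreter.invoke()
for obj in detect.get_output(interpreter, score_threshold=0.4,
                             image_scale=scale):
  print(obj.id, obj.score, obj.bbox)
```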


================================================
FILE: python/examples/detection/detect.py
================================================
# Lint as: python3
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Functions to work with detection models."""

import collections
import numpy as np

Object = collections.namedtuple('Object', ['id', 'score', 'bbox'])


class BBox(collections.namedtuple('BBox', ['xmin', 'ymin', 'xmax', 'ymax'])):
  """Bounding box.

  Represents a rectangle which sides are either vertical or horizontal, parallel
  to the x or y axis.
  """
  __slots__ = ()

  @property
  def width(self):
    """Returns bounding box width."""
    return self.xmax - self.xmin

  @property
  def height(self):
    """Returns bounding box height."""
    return self.ymax - self.ymin

  @property
  def area(self):
    """Returns bound box area."""
    return self.width * self.height

  @property
  def valid(self):
    """Returns whether bounding box is valid or not.

    Valid bounding box has xmin <= xmax and ymin <= ymax which is equivalent to
    width >= 0 and height >= 0.
    """
    return self.width >= 0 and self.height >= 0

  def scale(self, sx, sy):
    """Returns scaled bounding box."""
    return BBox(xmin=sx * self.xmin,
                ymin=sy * self.ymin,
                xmax=sx * self.xmax,
                ymax=sy * self.ymax)

  def translate(self, dx, dy):
    """Returns translated bounding box."""
    return BBox(xmin=dx + self.xmin,
                ymin=dy + self.ymin,
                xmax=dx + self.xmax,
                ymax=dy + self.ymax)

  def map(self, f):
    """Returns bounding box modified by applying f for each coordinate."""
    return BBox(xmin=f(self.xmin),
                ymin=f(self.ymin),
                xmax=f(self.xmax),
                ymax=f(self.ymax))

  @staticmethod
  def intersect(a, b):
    """Returns the intersection of two bounding boxes (may be invalid)."""
    return BBox(xmin=max(a.xmin, b.xmin),
                ymin=max(a.ymin, b.ymin),
                xmax=min(a.xmax, b.xmax),
                ymax=min(a.ymax, b.ymax))

  @staticmethod
  def union(a, b):
    """Returns the union of two bounding boxes (always valid)."""
    return BBox(xmin=min(a.xmin, b.xmin),
                ymin=min(a.ymin, b.ymin),
                xmax=max(a.xmax, b.xmax),
                ymax=max(a.ymax, b.ymax))

  @staticmethod
  def iou(a, b):
    """Returns intersection-over-union value."""
    intersection = BBox.intersect(a, b)
    if not intersection.valid:
      return 0.0
    area = intersection.area
    return area / (a.area + b.area - area)


def input_size(interpreter):
  """Returns input image size as (width, height) tuple."""
  _, height, width, _ = interpreter.get_input_details()[0]['shape']
  return width, height


def input_tensor(interpreter):
  """Returns input tensor view as numpy array of shape (height, width, 3)."""
  tensor_index = interpreter.get_input_details()[0]['index']
  return interpreter.tensor(tensor_index)()[0]


def set_input(interpreter, size, resize):
  """Copies a resized and properly zero-padded image to the input tensor.

  Args:
    interpreter: Interpreter object.
    size: original image size as (width, height) tuple.
    resize: a function that takes a (width, height) tuple, and returns an RGB
      image resized to those dimensions.
  Returns:
    Actual resize ratio, which should be passed to `get_output` function.
  """
  width, height = input_size(interpreter)
  w, h = size
  scale = min(width / w, height / h)
  w, h = int(w * scale), int(h * scale)
  tensor = input_tensor(interpreter)
  tensor.fill(0)  # padding
  _, _, channel = tensor.shape
  tensor[:h, :w] = np.reshape(resize((w, h)), (h, w, channel))
  return scale, scale


def output_tensor(interpreter, i):
  """Returns output tensor view."""
  tensor = interpreter.tensor(interpreter.get_output_details()[i]['index'])()
  return np.squeeze(tensor)


def get_output(interpreter, score_threshold, image_scale=(1.0, 1.0)):
  """Returns list of detected objects."""
  boxes = output_tensor(interpreter, 0)
  class_ids = output_tensor(interpreter, 1)
  scores = output_tensor(interpreter, 2)
  count = int(output_tensor(interpreter, 3))

  width, height = input_size(interpreter)
  image_scale_x, image_scale_y = image_scale
  sx, sy = width / image_scale_x, height / image_scale_y
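  # Box coordinates come out normalized to the model's input size; scaling by
  # the input size divided by the resize ratio from `set_input` maps them
  # back to original-image pixels.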

  def make(i):
    ymin, xmin, ymax, xmax = boxes[i]
    return Object(
        id=int(class_ids[i]),
        score=float(scores[i]),
        bbox=BBox(xmin=xmin,
                  ymin=ymin,
                  xmax=xmax,
                  ymax=ymax).scale(sx, sy).map(int))

  return [make(i) for i in range(count) if scores[i] >= score_threshold]


================================================
FILE: python/examples/detection/detect_image.py
================================================
# Lint as: python3
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Example using TF Lite to detect objects in a given image."""

import argparse
import time

from PIL import Image
from PIL import ImageDraw

import detect
import tflite_runtime.interpreter as tflite
import platform

EDGETPU_SHARED_LIB = {
  'Linux': 'libedgetpu.so.1',
  'Darwin': 'libedgetpu.1.dylib',
  'Windows': 'edgetpu.dll'
}[platform.system()]


def load_labels(path, encoding='utf-8'):
  """Loads labels from file (with or without index numbers).

  Args:
    path: path to label file.
    encoding: label file encoding.
  Returns:
    Dictionary mapping indices to labels.
  """
  with open(path, 'r', encoding=encoding) as f:
    lines = f.readlines()
    if not lines:
      return {}

    if lines[0].split(' ', maxsplit=1)[0].isdigit():
      pairs = [line.split(' ', maxsplit=1) for line in lines]
      return {int(index): label.strip() for index, label in pairs}
    else:
      return {index: line.strip() for index, line in enumerate(lines)}


def make_interpreter(model_file):
  model_file, *device = model_file.split('@')
  return tflite.Interpreter(
      model_path=model_file,
      experimental_delegates=[
          tflite.load_delegate(EDGETPU_SHARED_LIB,
                               {'device': device[0]} if device else {})
      ])


def draw_objects(draw, objs, labels):
  """Draws the bounding box and label for each object."""
  for obj in objs:
    bbox = obj.bbox
    draw.rectangle([(bbox.xmin, bbox.ymin), (bbox.xmax, bbox.ymax)],
                   outline='red')
    draw.text((bbox.xmin + 10, bbox.ymin + 10),
              '%s\n%.2f' % (labels.get(obj.id, obj.id), obj.score),
              fill='red')


def main():
  parser = argparse.ArgumentParser(
      formatter_class=argparse.ArgumentDefaultsHelpFormatter)
  parser.add_argument('-m', '--model', required=True,
                      help='File path of .tflite file.')
  parser.add_argument('-i', '--input', required=True,
                      help='File path of image to process.')
  parser.add_argument('-l', '--labels',
                      help='File path of labels file.')
  parser.add_argument('-t', '--threshold', type=float, default=0.4,
                      help='Score threshold for detected objects.')
  parser.add_argument('-o', '--output',
                      help='File path for the result image with annotations')
  parser.add_argument('-c', '--count', type=int, default=5,
                      help='Number of times to run inference')
  args = parser.parse_args()

  labels = load_labels(args.labels) if args.labels else {}
  interpreter = make_interpreter(args.model)
  interpreter.allocate_tensors()

  image = Image.open(args.input)
  scale = detect.set_input(interpreter, image.size,
                           lambda size: image.resize(size, Image.ANTIALIAS))

  print('----INFERENCE TIME----')
  print('Note: The first inference is slow because it includes',
        'loading the model into Edge TPU memory.')
  for _ in range(args.count):
    start = time.perf_counter()
    interpreter.invoke()
    inference_time = time.perf_counter() - start
    objs = detect.get_output(interpreter, args.threshold, scale)
    print('%.2f ms' % (inference_time * 1000))

  print('-------RESULTS--------')
  if not objs:
    print('No objects detected')

  for obj in objs:
    print(labels.get(obj.id, obj.id))
    print('  id:    ', obj.id)
    print('  score: ', obj.score)
    print('  bbox:  ', obj.bbox)

  if args.output:
    image = image.convert('RGB')
    draw_objects(ImageDraw.Draw(image), objs, labels)
    image.save(args.output)
    image.show()


if __name__ == '__main__':
  main()


================================================
FILE: python/examples/detection/install_requirements.sh
================================================
#!/bin/bash
#
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly TEST_DATA_URL=https://github.com/google-coral/edgetpu/raw/master/test_data

# Install required Python packages,
# but not on Mendel (Dev Board)—it has these already and shouldn't use pip
if [[ ! -f /etc/mendel_version ]]; then
  if ! python3 -m pip --version > /dev/null; then
    echo "Install pip first by following https://pip.pypa.io/en/stable/installing/ guide."
    exit 1
  fi
  python3 -m pip install numpy Pillow
fi

# If running Raspberry Pi, also install 'imagemagick' to display images
MODEL=$(tr -d '\0' < /proc/device-tree/model)
if [[ "${MODEL}" == "Raspberry Pi"* ]]; then
  sudo apt-get install imagemagick
fi

# Get TF Lite model and labels
MODEL_DIR="${SCRIPT_DIR}/models"
mkdir -p "${MODEL_DIR}"
(cd "${MODEL_DIR}"
curl -OL "${TEST_DATA_URL}/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite" \
     -OL "${TEST_DATA_URL}/ssd_mobilenet_v2_coco_quant_postprocess.tflite" \
     -OL "${TEST_DATA_URL}/coco_labels.txt")

# Get example image
IMAGE_DIR="${SCRIPT_DIR}/images"
mkdir -p "${IMAGE_DIR}"
(cd "${IMAGE_DIR}"
curl -OL "${TEST_DATA_URL}/grace_hopper.bmp")
SYMBOL INDEX (38 symbols across 6 files)

FILE: cpp/examples/classification/classify.cc
  function ToInt32 (line 21) | int32_t ToInt32(const char p[4]) {
  function ReadBmpImage (line 25) | std::vector<uint8_t> ReadBmpImage(const char* filename,
  function ReadLabels (line 81) | std::vector<std::string> ReadLabels(const std::string& filename) {
  function GetLabel (line 90) | std::string GetLabel(const std::vector<std::string>& labels, int label) {
  function Dequantize (line 95) | std::vector<float> Dequantize(const TfLiteTensor& tensor) {
  function Sort (line 103) | std::vector<std::pair<int, float>> Sort(const std::vector<float>& scores,
  function main (line 120) | int main(int argc, char* argv[]) {

FILE: cpp/examples/lstpu/lstpu.cc
  function ToString (line 19) | std::string ToString(edgetpu_device_type type) {
  function main (line 29) | int main(int argc, char* argv[]) {

FILE: python/examples/classification/classify.py
  function input_details (line 24) | def input_details(interpreter, key):
  function input_size (line 29) | def input_size(interpreter):
  function input_tensor (line 35) | def input_tensor(interpreter):
  function output_tensor (line 41) | def output_tensor(interpreter, dequantize=True):
  function set_input (line 63) | def set_input(interpreter, data):
  function get_output (line 68) | def get_output(interpreter, top_k=1, score_threshold=0.0):

FILE: python/examples/classification/classify_image.py
  function load_labels (line 46) | def load_labels(path, encoding='utf-8'):
  function make_interpreter (line 67) | def make_interpreter(model_file):
  function main (line 77) | def main():

FILE: python/examples/detection/detect.py
  class BBox (line 23) | class BBox(collections.namedtuple('BBox', ['xmin', 'ymin', 'xmax', 'ymax...
    method width (line 32) | def width(self):
    method height (line 37) | def height(self):
    method area (line 42) | def area(self):
    method valid (line 47) | def valid(self):
    method scale (line 55) | def scale(self, sx, sy):
    method translate (line 62) | def translate(self, dx, dy):
    method map (line 69) | def map(self, f):
    method intersect (line 77) | def intersect(a, b):
    method union (line 85) | def union(a, b):
    method iou (line 93) | def iou(a, b):
  function input_size (line 102) | def input_size(interpreter):
  function input_tensor (line 108) | def input_tensor(interpreter):
  function set_input (line 114) | def set_input(interpreter, size, resize):
  function output_tensor (line 136) | def output_tensor(interpreter, i):
  function get_output (line 142) | def get_output(interpreter, score_threshold, image_scale=(1.0, 1.0)):

FILE: python/examples/detection/detect_image.py
  function load_labels (line 34) | def load_labels(path, encoding='utf-8'):
  function make_interpreter (line 55) | def make_interpreter(model_file):
  function draw_objects (line 65) | def draw_objects(draw, objs, labels):
  function main (line 76) | def main():

About this extraction

This page contains the full source code of the google-coral/tflite GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 17 files (51.0 KB, approximately 13.4k tokens) and a symbol index with 38 extracted functions, classes, methods, constants, and types.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
