Repository: 1adrianb/binary-human-pose-estimation
Branch: master
Commit: b21a6c467e21
Files: 8
Total size: 16.7 KB

Directory structure:
gitextract_llmckq0f/

├── .gitignore
├── .gitmodules
├── LICENCE
├── README.md
├── download-content.lua
├── main.lua
├── opts.lua
└── utils.lua

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# Compiled Lua sources
luac.out

# Torch serialised objects
*.t7

# Images
*.jpg

# luarocks build files
*.src.rock
*.zip
*.tar.gz

# Object files
*.o
*.os
*.ko
*.obj
*.elf

# Precompiled Headers
*.gch
*.pch

# Libraries
*.lib
*.a
*.la
*.lo
*.def
*.exp

# Shared objects (inc. Windows DLLs)
*.dll
*.so
*.so.*
*.dylib

# Executables
*.exe
*.out
*.app
*.i*86
*.x86_64
*.hex



================================================
FILE: .gitmodules
================================================
[submodule "optimize-net"]
	path = optimize-net
	url = https://github.com/1adrianb/optimize-net
[submodule "bnn.torch"]
	path = bnn.torch
	url = https://github.com/1adrianb/bnn.torch


================================================
FILE: LICENCE
================================================
Copyright (c) 2017, University of Nottingham
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1.Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

2.Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.

3.Neither the name of the paper nor the names of its
  contributors may be used to endorse or promote products derived from
  this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


================================================
FILE: README.md
================================================
# Binarized Convolutional Landmark Localizers for Human Pose Estimation and Face Alignment with Limited Resources 

This code implements a demo of the Binarized Convolutional Landmark Localizers for Human Pose Estimation and Face Alignment with Limited Resources paper by Adrian Bulat and Georgios Tzimiropoulos.

**[2021 Update]: PyTorch repo with training code for BNN available here: [https://github.com/1adrianb/binary-networks-pytorch](https://github.com/1adrianb/binary-networks-pytorch)**

**For the Face Alignment demo please check: [https://github.com/1adrianb/binary-face-alignment](https://github.com/1adrianb/binary-face-alignment)**

## Requirements
- Install the latest [Torch7](http://torch.ch/docs/getting-started.html) version (for Windows, please follow the instructions available [here](https://github.com/torch/distro/blob/master/win-files/README.md))

### Packages
- [cutorch](https://github.com/torch/cutorch)
- [nn](https://github.com/torch/nn)
- [cudnn](https://github.com/soumith/cudnn.torch) (cuDNN v5 preferred)
- [xlua](https://github.com/torch/xlua)
- [image](https://github.com/torch/image)
- [gnuplot](https://github.com/torch/gnuplot)
- [cURL](https://github.com/Lua-cURL/Lua-cURLv3)
- [paths](https://github.com/torch/paths)
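Most of these ship with the standard Torch distro. Any that are missing can usually be installed through luarocks; a sketch, assuming Torch's own `luarocks` binary is on your `PATH` and the rock names match the standard Torch rocks repository:

```shell
# Install the packages the demo needs that may not come with the base distro.
# Rock names here are assumptions based on the standard Torch/luarocks repos.
luarocks install cudnn
luarocks install xlua
luarocks install gnuplot
luarocks install Lua-cURL
```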

## Setup
Clone the GitHub repository together with its submodules:
```bash
git clone https://github.com/1adrianb/binary-human-pose-estimation --recursive
cd binary-human-pose-estimation
```

Build and install the BinaryConvolution package
```bash
cd bnn.torch/; luarocks make; cd ..;
```

Install the modified optnet package
```bash
cd optimize-net/; luarocks make rocks/optnet-scm-1.rockspec; cd ..;
```

Run the following command to prepare the files required by the demo. It downloads 10 sample images from the MPII dataset, along with the dataset annotations converted to .t7 format:
```bash
th download-content.lua
```
Download the model available below and place it in a `models` folder (`main.lua` loads it from `models/human_pose_binary.t7`).
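For example, using the model link from the Pretrained models table further down (assuming `curl` is available):

```shell
# Create the folder main.lua expects and fetch the pretrained binary model
mkdir -p models
curl -L -o models/human_pose_binary.t7 \
    https://www.adrianbulat.com/downloads/BinaryHumanPose/human_pose_binary.t7
```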

## Usage

To run the demo, simply type:
```bash
th main.lua
```
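To compute the PCKh metrics over the MPII validation annotations instead, pass the `-mode` flag defined in `opts.lua`:

```shell
th main.lua -mode eval
```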

## Pretrained models

| Model | Size | MPII error |
| ------------- | ----------- | ----------- |
| [MPII](https://www.adrianbulat.com/downloads/BinaryHumanPose/human_pose_binary.t7) | 1.3 MB | 76.0 |

Note: More pretrained models will be added soon

## Notes

For more details or questions, please visit the [project page](https://www.adrianbulat.com/binary-cnn-landmarks) or send an email to adrian.bulat@nottingham.ac.uk






================================================
FILE: download-content.lua
================================================
local cURL = require 'cURL'
local paths = require 'paths'

-- Create the directories if needed
if not paths.dirp('dataset') then paths.mkdir('dataset') end
if not paths.dirp('dataset/mpii/images') then paths.mkdir('dataset/mpii/images') end

-- Url, location
local fileList = {
	{'https://www.adrianbulat.com/downloads/ECCV16/mpii_dataset.t7', 'dataset/mpii_dataset.t7'},
	{'https://www.adrianbulat.com/downloads/BinaryHumanPose/images/005808361.jpg', 'dataset/mpii/images/005808361.jpg'},
	{'https://www.adrianbulat.com/downloads/BinaryHumanPose/images/072245212.jpg', 'dataset/mpii/images/072245212.jpg'},
	{'https://www.adrianbulat.com/downloads/BinaryHumanPose/images/060754485.jpg', 'dataset/mpii/images/060754485.jpg'},
	{'https://www.adrianbulat.com/downloads/BinaryHumanPose/images/053710654.jpg', 'dataset/mpii/images/053710654.jpg'},
	{'https://www.adrianbulat.com/downloads/BinaryHumanPose/images/051074730.jpg', 'dataset/mpii/images/051074730.jpg'},
	{'https://www.adrianbulat.com/downloads/BinaryHumanPose/images/033761517.jpg', 'dataset/mpii/images/033761517.jpg'},
	{'https://www.adrianbulat.com/downloads/BinaryHumanPose/images/031800347.jpg', 'dataset/mpii/images/031800347.jpg'},
	{'https://www.adrianbulat.com/downloads/BinaryHumanPose/images/023724909.jpg', 'dataset/mpii/images/023724909.jpg'},
	{'https://www.adrianbulat.com/downloads/BinaryHumanPose/images/072818876.jpg', 'dataset/mpii/images/072818876.jpg'},
	{'https://www.adrianbulat.com/downloads/BinaryHumanPose/images/061062004.jpg', 'dataset/mpii/images/061062004.jpg'},
}

local m = cURL.multi()

for i = 1, #fileList do
	-- Open files
	fileList[i][2] = io.open(fileList[i][2], "w+b")

	-- Add the url handles
	fileList[i][1] = cURL.easy{url = fileList[i][1], writefunction = fileList[i][2]}
	m:add_handle(fileList[i][1])
end

print("Downloading files, please wait...")
-- Based on https://github.com/Lua-cURL/Lua-cURLv3/blob/master/examples/cURLv3/multi2.lua
local remain = #fileList
while remain > 0 do
	local last = m:perform()
	if last < remain then
		while true do
			local e, ok, err = m:info_read(true)
			if e == 0 then break end -- no more finished tasks
			if ok then
				print(e:getinfo_effective_url(), '-', '\027[00;92mOK\027[00m')
			else
				print(e:getinfo_effective_url(), '-', '\027[00;91mFail\027[00m')
			end
			e:close()
		end
	end 
	remain = last

	m:wait() 
end

-- Close the output files once all transfers have finished
for _, entry in ipairs(fileList) do
	entry[2]:close()
end


================================================
FILE: main.lua
================================================
require 'torch'
require 'nn'
require 'cudnn'
require 'paths'

require 'bnn'
local optnet = require 'optnet'

require 'gnuplot'
require 'image'
require 'xlua'
local utils = require 'utils'
local opts = require('opts')(arg)

torch.setheaptracking(true)
torch.setdefaulttensortype('torch.FloatTensor')
torch.setnumthreads(1)

local model = torch.load('models/human_pose_binary.t7')
model:evaluate()

local fileLists = utils.getFileList(opts)
local predictions = {}
local output = torch.CudaTensor(1,16,64,64)

local optimize_opts = {inplace=true, reuseBuffers=true, mode='inference'}
optnet.optimizeMemory(model, torch.zeros(1,3,256,256):cuda(), optimize_opts)

if opts.mode == 'eval' then xlua.progress(0,#fileLists) end
for i = 1, #fileLists do
	fileLists[i].image = 'dataset/mpii/images/'..fileLists[i].image
	
	local img = image.load(fileLists[i].image)
	local originalSize = img:size()

	img = utils.crop(img, fileLists[i].center, fileLists[i].scale, 256)
	img = img:cuda():view(1,3,256,256)
	
	output:copy(model:forward(img))
	output:add(utils.flip(utils.shuffleLR(model:forward(utils.flip(img)))))

	local preds_hm, preds_img = utils.getPreds(output, fileLists[i].center, fileLists[i].scale)
	
	if opts.mode == 'demo' then
		utils.plot(fileLists[i].image,preds_img:view(16,2),torch.Tensor{originalSize[3],originalSize[2]})
		io.read() -- Wait for user input
	end
	
	if opts.mode == 'eval' then
		predictions[i] = preds_img:clone()
		xlua.progress(i, #fileLists)
	end
end

if opts.mode == 'demo' then gnuplot.closeall() end

if opts.mode == 'eval' then
	predictions = torch.cat(predictions,1)
	local dists = utils.calcDistance(predictions,fileLists)
	utils.calculateMetrics(dists)
end

================================================
FILE: opts.lua
================================================
local function parse(arg)
	local cmd = torch.CmdLine()
	cmd:text()
	cmd:text('Binary Human Pose demo script')
	cmd:text('Please visit https://www.adrianbulat.com for additional details')
	cmd:text()
	cmd:text('Options:')
	
	cmd:option('-mode',			'demo', 'Options: demo | eval')
	
	cmd:text()
	
	local opt = cmd:parse(arg or {})
	
	return opt 
end

return parse

================================================
FILE: utils.lua
================================================
-- image and gnuplot are used below (cropping/flipping and plotting); required
-- here so the module also works when loaded outside main.lua
require 'image'
require 'gnuplot'

local utils = {}

-- Transform the coordinates from the original image space to the cropped one
function utils.transform(pt, center, scale, res, invert)
    -- Define the transformation matrix
    local pt_new = torch.ones(3)
    pt_new[1], pt_new[2] = pt[1], pt[2]
    local h = 200*scale
    local t = torch.eye(3)
    t[1][1], t[2][2] = res/h, res/h
    t[1][3], t[2][3] = res*(-center[1]/h+0.5), res*(-center[2]/h+0.5)
    if invert then
        t = torch.inverse(t)
    end
    local new_point = (t*pt_new):sub(1,2):int()
    return new_point
end

-- Crop based on the image center & scale
function utils.crop(img, center, scale, res)
    local l1 = utils.transform({1,1}, center, scale, res, true)
    local l2 = utils.transform({res,res}, center, scale, res, true)

    local pad = math.floor(torch.norm((l1 - l2):float())/2 - (l2[1]-l1[1])/2)
    
    if img:nDimension() < 3 then
      img = torch.repeatTensor(img,3,1,1)
    end

    local newDim = torch.IntTensor({img:size(1), l2[2] - l1[2], l2[1] - l1[1]})
    local newImg = torch.zeros(newDim[1],newDim[2],newDim[3])
    local height, width = img:size(2), img:size(3)

    local newX = torch.Tensor({math.max(1, -l1[1]+1), math.min(l2[1], width) - l1[1]})
    local newY = torch.Tensor({math.max(1, -l1[2]+1), math.min(l2[2], height) - l1[2]})
    local oldX = torch.Tensor({math.max(1, l1[1]+1), math.min(l2[1], width)})
    local oldY = torch.Tensor({math.max(1, l1[2]+1), math.min(l2[2], height)})

    newImg:sub(1,newDim[1],newY[1],newY[2],newX[1],newX[2]):copy(img:sub(1,newDim[1],oldY[1],oldY[2],oldX[1],oldX[2]))

    newImg = image.scale(newImg,res,res)
    return newImg
end

function utils.getPreds(heatmaps, center, scale)
    if heatmaps:nDimension() == 3 then heatmaps = heatmaps:view(1, unpack(heatmaps:size():totable())) end

    -- Get locations of maximum activations
    local max, idx = torch.max(heatmaps:view(heatmaps:size(1), heatmaps:size(2), heatmaps:size(3) * heatmaps:size(4)), 3)
    local preds = torch.repeatTensor(idx, 1, 1, 2):float()
    preds[{{}, {}, 1}]:apply(function(x) return (x - 1) % heatmaps:size(4) + 1 end)
    preds[{{}, {}, 2}]:add(-1):div(heatmaps:size(3)):floor():add(1)

    for i = 1,preds:size(1) do        
        for j = 1,preds:size(2) do
            local hm = heatmaps[{i,j,{}}]
            local pX, pY = preds[{i,j,1}], preds[{i,j,2}]
            if pX > 1 and pX < 64 and pY > 1 and pY < 64 then
                local diff = torch.FloatTensor({hm[pY][pX+1]-hm[pY][pX-1], hm[pY+1][pX]-hm[pY-1][pX]})
                preds[i][j]:add(diff:sign():mul(.25))
            end
        end
    end
    preds:add(-0.5)

    -- Get the coordinates in the original space
    local preds_orig = torch.zeros(preds:size())
    for i = 1, heatmaps:size(1) do
        for j = 1, heatmaps:size(2) do
            preds_orig[i][j] = utils.transform(preds[i][j],center,scale,heatmaps:size(3),true)
        end
    end
    return preds, preds_orig
end

function utils.shuffleLR(x)
    local dim
    if x:nDimension() == 4 then
        dim = 2
    else
        assert(x:nDimension() == 3)
        dim = 1
    end

    local matched_parts = {
        {1,6},   {2,5},   {3,4},
        {11,16}, {12,15}, {13,14}
    }

    for i = 1,#matched_parts do
        local idx1, idx2 = unpack(matched_parts[i])
        local tmp = x:narrow(dim, idx1, 1):clone()
        x:narrow(dim, idx1, 1):copy(x:narrow(dim, idx2, 1))
        x:narrow(dim, idx2, 1):copy(tmp)
    end

    return x
end

function utils.flip(x)
    local y = torch.FloatTensor(x:size())
    for i = 1, x:size(1) do
        image.hflip(y[i], x[i]:float())
    end
    return y:typeAs(x)
end

function utils.calcDistance(predictions, groundTruth)
	local n = predictions:size(1)
	local gnds = torch.Tensor(n, 16, 2)
	for i = 1, n do
		gnds[{{i},{},{}}] = groundTruth[i].points
	end

	-- dists is (joints x samples): L2 distance normalised by the head size
	local dists = torch.Tensor(predictions:size(2), predictions:size(1))
	for i = 1, predictions:size(1) do
		for j = 1, predictions:size(2) do
			if gnds[i][j][1] > 1 and gnds[i][j][2] > 1 then
				dists[j][i] = torch.dist(gnds[i][j], predictions[i][j])/groundTruth[i].headSize
			else
				dists[j][i] = -1 -- missing annotation
			end
		end
	end

	return dists
end

function utils.getFileList(opts)
	local fileLists = {}
	local tempFileList = torch.load('dataset/mpii_dataset.t7')
	if opts.mode == 'demo' then
		local idxs = {1,5,16,17,18,24,28,63,66,104}
		for i = 1, #idxs do
			fileLists[i] = tempFileList[idxs[i]]
		end
	else
		for i = 1, #tempFileList do
			if tempFileList[i]['type'] == 0 then
				fileLists[#fileLists+1] = tempFileList[i]
			end
		end
	end
	return fileLists
end

-- Requires gnuplot
function utils.plot(surface, points, size)
	points = points:view(16,2)
   
	local matched_parts = {
		{1,2}, {2,3}, {3,7},
		{4,5}, {5,6}, {4,7},
		{9,10},{7,8},
		{11,12}, {12,13}, {13,8},
		{8,14}, {14,15}, {15,16}
	}
	
	local parts_colours = {
		"blue", "blue", "blue",
		"red", "red", "red",
		"#9400D3", "#9400D3",
		"blue", "blue", "blue",
		"red", "red", "red"
	}
	
	gnuplot.figure(1)
	gnuplot.raw("set size ratio -1")
	gnuplot.raw("set xrange [0:"..size[1].."]")
	gnuplot.raw("set yrange [0:"..size[2].."]")
	gnuplot.raw("unset key; unset tics; unset border;")
	gnuplot.raw("set multiplot layout 1,1 margins 0.05,0.95,.1,.99 spacing 0,0")
	gnuplot.raw("plot '"..surface.."' binary filetype=jpg with rgbimage")

	-- Flip the y axis so image and point coordinates agree
	gnuplot.raw("set yrange ["..size[2]..":0]")

	local commands = {}
	for i = 1, #matched_parts do
		commands[i] = {torch.Tensor{points[matched_parts[i][1]][1], points[matched_parts[i][2]][1]},
			torch.Tensor{points[matched_parts[i][1]][2], points[matched_parts[i][2]][2]},
			'with lines lw 5 linecolor rgb "'..parts_colours[i]..'"'}
	end
	gnuplot.plot(unpack(commands))
	gnuplot.raw("unset multiplot")
end

local function displayPCKh(dists, idxs, title, disp_key)
	local xs = torch.linspace(0,0.5,30)
	local ys = torch.zeros(xs:size(1))
	local total = {dists[{idxs[1],{}}]:gt(-1):sum(),
					dists[{idxs[2],{}}]:gt(-1):sum()}
	for i = 1, xs:size(1) do
		ys[i] = 0.5*((dists[{idxs[1],{}}]:lt(xs[i]):sum()-(dists:size(2)-total[1]))/total[1]+(dists[{idxs[2],{}}]:lt(xs[i]):sum()-(dists:size(2)-total[2]))/total[2])
	end

	local command = {xs,ys,'-'}
	gnuplot.raw('set title "'..title..'"')
	if not disp_key then 
		gnuplot.raw('unset key')
	else
		gnuplot.raw('set key font ",6" right bottom')
	end
	gnuplot.raw('set xrange [0:0.5]')
	gnuplot.raw('set yrange [0:1]')
	gnuplot.plot(unpack(command))
end

function utils.calculateMetrics(dists)
	gnuplot.raw('set bmargin 1')
	gnuplot.raw('set lmargin 3.2')
	gnuplot.raw('set rmargin 2')
	gnuplot.raw('set multiplot layout 2,3 title "MPII Validation (PCKh)"')
	gnuplot.raw('set xtics font ",6"')
	gnuplot.raw('set ytics font ",6"')
	displayPCKh(dists, {9,10}, 'Head')
	displayPCKh(dists, {2,5}, 'Knee')
	displayPCKh(dists, {1,6}, 'Ankle')
	gnuplot.raw('set tmargin 2.5')
	gnuplot.raw('set bmargin 1.5')
	displayPCKh(dists, {13,14}, 'Shoulder')
	displayPCKh(dists, {12,15}, 'Elbow')
	displayPCKh(dists, {11,16}, 'Wrist', true)	
	gnuplot.raw('unset multiplot')
	
    local threshold = 0.5
    dists:apply(function(x)
        if x>=0 and x<= threshold then 
            return 1
        elseif x>threshold then 
            return 0
        end
    end)

    local count = torch.zeros(16)
    local sums = torch.zeros(16)
    for i=1,16 do
        dists[i]:apply(function(x)
            if x ~= -1 then
                count[i] = count[i] + 1
                sums[i] = sums[i] + x
            end
        end)
    end

    local partNames = {'Head', 'Knee', 'Ankle', 'Shoulder', 'Elbow', 'Wrist', 'Hip'}
    local partsC =  torch.Tensor({{9,10},{2,5},{1,6},{13,14},{12,15},{11,16},{3,4}})
    print('PCKh results:')
    for i=1,#partNames do
        -- average PCKh over the left/right instances of each part
        print(partNames[i]..': ', (sums[partsC[i][1]]/count[partsC[i][1]]+sums[partsC[i][2]]/count[partsC[i][2]])*100/2)
    end
end

return utils