[
  {
    "path": ".gitignore",
    "content": "\nmanual.pdf\n"
  },
  {
    "path": "README.md",
    "content": "# LibADMM Toolbox\n\n## 1. Introduction\n\nThis toolbox solves a variety of sparse, low-rank matrix, and low-rank tensor optimization problems using M-ADMM, developed in our paper <a class=\"footnote-reference\" href=\"#id2\" id=\"id1\">[1]</a>.\n\n## 2. List of Problems\n\nThe table below lists the problems solved by this toolbox. See the manual at <a href=\"https://canyilu.github.io/publications/2016-software-LibADMM.pdf\" class=\"textlink\" target=\"_blank\">https://canyilu.github.io/publications/2016-software-LibADMM.pdf</a> for more details.\n\n<p align=\"center\"> \n<img src=\"https://github.com/canyilu/LibADMM/blob/master/tab_problemlist.JPG\">\n</p>\n\n## 3. Citation\n\n<p>If you use this toolbox in your papers, please cite the following references:</p>\n\n<div class=\"highlight-none\"><div class=\"highlight\"><pre>\nC. Lu, J. Feng, S. Yan, Z. Lin. A Unified Alternating Direction Method of Multipliers by Majorization \nMinimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, pp. 527-541, 2018\nC. Lu. A Library of ADMM for Sparse and Low-rank Optimization. 
National University of Singapore, June 2016.\nhttps://github.com/canyilu/LibADMM.\n</pre></div></div>\n\n<p>The corresponding BibTeX citations are given below:</p>\n<div class=\"highlight-none\"><div class=\"highlight\"><pre>\n@manual{lu2016libadmm,\nauthor       = {Lu, Canyi},\ntitle        = {A Library of {ADMM} for Sparse and Low-rank Optimization},\norganization = {National University of Singapore},\nmonth        = {June},\nyear         = {2016},\nnote         = {\\url{https://github.com/canyilu/LibADMM}}\n}\n@article{lu2018unified,\nauthor       = {Lu, Canyi and Feng, Jiashi and Yan, Shuicheng and Lin, Zhouchen},\ntitle        = {A Unified Alternating Direction Method of Multipliers by Majorization Minimization},\njournal      = {IEEE Transactions on Pattern Analysis and Machine Intelligence},\npublisher    = {IEEE},\nyear         = {2018},\nvolume       = {40},\nnumber       = {3},\npages        = {527--541},\n}</pre></div></div>\n\n## 4. Version History\n- Version 1.0 was released in June 2016.\n- Version 1.1 was released in June 2018. Key changes:\n  + Added a new model for low-rank tensor recovery from Gaussian measurements based on the tensor nuclear norm, implemented in lrtr_Gaussian_tnn.m\n  + Updated several functions to improve efficiency, including prox_tnn.m, tprod.m, tran.m, tubalrank.m, and nmodeproduct.m\n  + Updated the three example scripts: example_sparse_models.m, example_low_rank_matrix_models.m, and example_low_rank_tensor_models.m\n  + Removed the tests on image data and some unnecessary functions\n\n## References\n<table class=\"docutils footnote\" frame=\"void\" id=\"id2\" rules=\"none\">\n<colgroup><col class=\"label\" /><col /></colgroup>\n<tbody valign=\"top\">\n<tr><td class=\"label\"><a class=\"fn-backref\" href=\"#id1\">[1]</a></td><td>C. Lu, J. Feng, S. Yan, Z. Lin. A Unified Alternating Direction Method of Multipliers by Majorization Minimization. 
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, pp. 527-541, 2018</td></tr>\n<tr><td class=\"label\"><a class=\"fn-backref\" href=\"#id2\">[2]</a></td><td>C. Lu. A Library of ADMM for Sparse and Low-rank Optimization. National University of Singapore, June 2016. https://github.com/canyilu/LibADMM.</td></tr>\n</tbody>\n</table>\n\n"
  },
  {
    "path": "algorithms/comp_loss.m",
    "content": "function out = comp_loss(E,loss)\r\n\r\n% Compute the value of the loss function of E\r\n% 'l1'  : ||E||_1\r\n% 'l21' : ||E||_{2,1}, the sum of the l2 norms of the columns of E\r\n% 'l2'  : 0.5*||E||_F^2\r\n\r\nswitch loss\r\n    case 'l1'\r\n        out = norm(E(:),1);\r\n    case 'l21'\r\n        out = 0;\r\n        for i = 1 : size(E,2)\r\n            out = out + norm(E(:,i));\r\n        end\r\n    case 'l2'\r\n        out = 0.5*norm(E,'fro')^2;\r\n    otherwise\r\n        error('not supported loss function');\r\nend\r\n\r\n"
  },
  {
    "path": "algorithms/elasticnet.m",
    "content": "function [X,obj,err,iter] = elasticnet(A,B,lambda,opts)\r\n\r\n% Solve the elastic net minimization problem by ADMM\r\n%\r\n% min_X ||X||_1+lambda*||X||_F^2, s.t. AX=B\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       A       -    d*na matrix\r\n%       B       -    d*nb matrix\r\n%       lambda  -    >=0, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       X       -    na*nb matrix\r\n%       obj     -    objective function value\r\n%       err     -    residual ||AX-B||_F\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 18/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\n\r\n[d,na] = size(A);\r\n[~,nb] = size(B);\r\n\r\nX = zeros(na,nb);\r\nZ = X;\r\nY1 = zeros(d,nb);\r\nY2 = X;\r\n\r\nAtB = A'*B;\r\nI = eye(na);\r\ninvAtAI = (A'*A+I)\\I;\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Xk = X;\r\n    Zk = Z;\r\n    % update X\r\n    X = prox_elasticnet(Z-Y2/mu,1/mu,lambda/mu);\r\n    % update Z\r\n    Z 
= invAtAI*(-(A'*Y1-Y2)/mu+AtB+X);    \r\n    dY1 = A*Z-B;\r\n    dY2 = X-Z;\r\n    chgX = max(max(abs(Xk-X)));\r\n    chgZ = max(max(abs(Zk-Z)));\r\n    chg = max([chgX chgZ max(abs(dY1(:))) max(abs(dY2(:)))]);\r\n    if DEBUG        \r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = norm(X(:),1)+lambda*norm(X,'fro')^2;\r\n            err = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y1 = Y1 + mu*dY1;\r\n    Y2 = Y2 + mu*dY2;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = norm(X(:),1)+lambda*norm(X,'fro')^2;\r\nerr = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n\r\n"
  },
  {
    "path": "algorithms/elasticnetR.m",
    "content": "function [X,E,obj,err,iter] = elasticnetR(A,B,lambda1,lambda2,opts)\r\n\r\n% Solve the elastic net regularized minimization problem by ADMM\r\n%\r\n% min_{X,E} loss(E)+lambda1*||X||_1+lambda2*||X||_F^2, s.t. AX+E=B\r\n% loss(E) = ||E||_1 or 0.5*||E||_F^2\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       A       -    d*na matrix\r\n%       B       -    d*nb matrix\r\n%       lambda1 -    >=0, parameter\r\n%       lambda2 -    >=0, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.loss       -   'l1' (default): loss(E) = ||E||_1 \r\n%                               'l2': loss(E) = 0.5*||E||_F^2\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       X       -    na*nb matrix\r\n%       E       -    d*nb matrix\r\n%       obj     -    objective function value\r\n%       err     -    residual\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 18/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\nloss = 'l1'; % default\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'loss');        loss = opts.loss;            end\r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       
DEBUG = opts.DEBUG;          end\r\n\r\n\r\n[d,na] = size(A);\r\n[~,nb] = size(B);\r\n\r\nX = zeros(na,nb);\r\nE = zeros(d,nb);\r\nZ = X;\r\nY1 = E;\r\nY2 = X;\r\n\r\nAtB = A'*B;\r\nI = eye(na);\r\ninvAtAI = (A'*A+I)\\I;\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Xk = X;\r\n    Ek = E;\r\n    Zk = Z;\r\n    % first super block {X,E}\r\n    X = prox_elasticnet(Z-Y2/mu,lambda1/mu,lambda2/mu);\r\n    if strcmp(loss,'l1')\r\n        E = prox_l1(B-A*Z-Y1/mu,1/mu);\r\n    elseif strcmp(loss,'l2')\r\n        E = mu*(B-A*Z-Y1/mu)/(1+mu);\r\n    else\r\n        error('not supported loss function');\r\n    end\r\n    % second  super block {Z}\r\n    Z = invAtAI*(-A'*(Y1/mu+E)+AtB+Y2/mu+X);    \r\n    dY1 = A*Z+E-B;\r\n    dY2 = X-Z;\r\n    chgX = max(max(abs(Xk-X)));\r\n    chgE = max(max(abs(Ek-E)));\r\n    chgZ = max(max(abs(Zk-Z)));\r\n    chg = max([chgX chgE chgZ max(abs(dY1(:))) max(abs(dY2(:)))]);\r\n    if DEBUG        \r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = comp_loss(E,loss)+lambda1*norm(X(:),1)+lambda2*norm(X,'fro')^2;\r\n            err = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y1 = Y1 + mu*dY1;\r\n    Y2 = Y2 + mu*dY2;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = comp_loss(E,loss)+lambda1*norm(X(:),1)+lambda2*norm(X,'fro')^2;\r\nerr = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n\r\n"
  },
  {
    "path": "algorithms/fusedl1.m",
    "content": "function [x,obj,err,iter] = fusedl1(A,b,lambda,opts)\r\n\r\n% Solve the fused Lasso (Fused L1) minimization problem by ADMM\r\n%\r\n% min_x ||x||_1 + lambda*\\sum_{i=2}^p |x_i-x_{i-1}|,\r\n%   s.t. Ax=b\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       A       -    d*n matrix\r\n%       b       -    d*1 vector\r\n%       lambda  -    >=0, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       x       -    n*1 vector\r\n%       obj     -    objective function value\r\n%       err     -    residual\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 20/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\n\r\n[d,n] = size(A);\r\nx = zeros(n,1);\r\n\r\nz = x;\r\nY1 = zeros(d,1);\r\nY2 = x;\r\n\r\nAtb = A'*b;\r\nI = eye(n);\r\ninvAtAI = (A'*A+I)\\I;\r\n\r\n% parameters for \"flsa\" (from SLEP package)\r\ntol2 = 1e-10;      % the duality gap for termination\r\nmax_step = 50;     % the maximal number of iterations\r\nx0 = 
zeros(n-1,1); % the starting point\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    xk = x;\r\n    zk = z;\r\n    % update x. \r\n    % flsa solves min_x 1/2||x-v||_2^2+lambda1*||x||_1+lambda2*\\sum_{i=2}^p |x_i-x_{i-1}|\r\n    x = flsa(z-Y2/mu,x0,1/mu,lambda/mu,n,max_step,tol2,1,6);\r\n    % update z\r\n    z = invAtAI*(-A'*Y1/mu+Atb+Y2/mu+x);    \r\n    dY1 = A*z-b;\r\n    dY2 = x-z;\r\n    chgx = max(abs(xk-x));\r\n    chgz = max(abs(zk-z));\r\n    chg = max([chgx chgz max(abs(dY1(:))) max(abs(dY2(:)))]);\r\n    if DEBUG        \r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = comp_fusedl1(x,1,lambda);\r\n            err = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y1 = Y1 + mu*dY1;\r\n    Y2 = Y2 + mu*dY2;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = comp_fusedl1(x,1,lambda);\r\nerr = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n\r\nfunction f = comp_fusedl1(x,lambda1,lambda2)\r\n% compute f = lambda1*||x||_1 + lambda2*\\sum_{i=2}^p |x_i-x_{i-1}|.\r\n% x - p*1 vector\r\nf = 0;\r\np = length(x);\r\nfor i = 2 : p\r\n   f = f+abs(x(i)-x(i-1)); \r\nend\r\nf = lambda1*norm(x,1)+lambda2*f;\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n"
  },
  {
    "path": "algorithms/fusedl1R.m",
    "content": "function [x,e,obj,err,iter] = fusedl1R(A,b,lambda1,lambda2,opts)\r\n\r\n% Solve the fused Lasso regularized minimization problem by ADMM\r\n%\r\n% min_{x,e} loss(e) + lambda1*||x||_1 + lambda2*\\sum_{i=2}^p |x_i-x_{i-1}|,\r\n% loss(e) = ||e||_1 or 0.5*||e||_2^2\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       A       -    d*n matrix\r\n%       b       -    d*1 vector\r\n%       lambda1 -    >=0, parameter\r\n%       lambda2 -    >=0, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.loss       -   'l1' (default): loss(e) = ||e||_1 \r\n%                               'l2': loss(E) = 0.5*||e||_2^2\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       x       -    n*1 vector\r\n%       e       -    d*1 vector\r\n%       obj     -    objective function value\r\n%       err     -    residual\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 20/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\nloss = 'l1'; % default\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'loss');        loss = opts.loss;            end\r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       
DEBUG = opts.DEBUG;          end\r\n\r\n\r\n[d,n] = size(A);\r\nx = zeros(n,1);\r\ne = zeros(d,1);\r\nz = x;\r\nY1 = e;\r\nY2 = x;\r\n\r\nAtb = A'*b;\r\nI = eye(n);\r\ninvAtAI = (A'*A+I)\\I;\r\n\r\n\r\n% parameters for \"flsa\" (from SLEP package)\r\ntol2 = 1e-10;      % the duality gap for termination\r\nmax_step = 50;     % the maximal number of iterations\r\nx0 = zeros(n-1,1); % the starting point\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    xk = x;\r\n    ek = e;\r\n    zk = z;\r\n    % first super block {x,e}\r\n    % flsa solves min_x 1/2||x-v||_2^2+lambda1*||x||_1+lambda2*\\sum_{i=2}^p |x_i-x_{i-1}|,\r\n    x = flsa(z-Y2/mu,x0,lambda1/mu,lambda2/mu,n,max_step,tol2,1,6);\r\n    if strcmp(loss,'l1')\r\n        e = prox_l1(b-A*z-Y1/mu,1/mu);\r\n    elseif strcmp(loss,'l2')\r\n        e = mu*(b-A*z-Y1/mu)/(1+mu);\r\n    else\r\n        error('not supported loss function');\r\n    end\r\n    % second  super block {Z}\r\n    z = invAtAI*(-A'*(Y1/mu+e)+Atb+Y2/mu+x);    \r\n    dY1 = A*z+e-b;\r\n    dY2 = x-z;\r\n    chgx = max(abs(xk-x));\r\n    chge = max(abs(ek-e));\r\n    chgz = max(abs(zk-z));\r\n    chg = max([chgx chge chgz max(abs(dY1(:))) max(abs(dY2(:)))]);\r\n    if DEBUG        \r\n        if iter == 1 || mod(iter, 10) == 0            \r\n            obj = comp_loss(e,loss)+comp_fusedl1(x,lambda1,lambda2);\r\n            err = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y1 = Y1 + mu*dY1;\r\n    Y2 = Y2 + mu*dY2;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = comp_loss(e,loss)+comp_fusedl1(x,lambda1,lambda2);\r\nerr = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n\r\n\r\nfunction f = comp_fusedl1(x,lambda1,lambda2)\r\n% compute f = lambda1*||x||_1 + lambda2*\\sum_{i=2}^p |x_i-x_{i-1}|.\r\n% x - p*1 vector\r\nf = 0;\r\np 
= length(x);\r\nfor i = 2 : p\r\n   f = f+abs(x(i)-x(i-1)); \r\nend\r\nf = lambda1*norm(x,1)+lambda2*f;\r\n\r\n\r\n"
  },
  {
    "path": "algorithms/groupl1.m",
    "content": "function [X,obj,err,iter] = groupl1(A,B,G,opts)\r\n\r\n% Solve the group l1-minimization problem by ADMM\r\n%\r\n% min_X \\sum_{i=1}^n\\sum_{g in G} ||(x_i)_g||_2, s.t. AX=B\r\n%\r\n% x_i is the i-th column of X\r\n% ---------------------------------------------\r\n% Input:\r\n%       A       -    d*na matrix\r\n%       B       -    d*nb matrix\r\n%       G       -    a cell indicates a partition of 1:na\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       X       -    na*nb matrix\r\n%       obj     -    objective function value\r\n%       err     -    residual ||AX-B||_F\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 18/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\n\r\n[d,na] = size(A);\r\n[~,nb] = size(B);\r\n\r\nX = zeros(na,nb);\r\nZ = X;\r\nY1 = zeros(d,nb);\r\nY2 = X;\r\n\r\nAtB = A'*B;\r\nI = eye(na);\r\ninvAtAI = (A'*A+I)\\I;\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Xk = X;\r\n    Zk = Z;\r\n    % update X\r\n    for i = 1 : 
nb\r\n        X(:,i) = prox_gl1(Z(:,i)-Y2(:,i)/mu,G,1/mu);\r\n    end\r\n    % update Z\r\n    Z = invAtAI*(-(A'*Y1-Y2)/mu+AtB+X);    \r\n    dY1 = A*Z-B;\r\n    dY2 = X-Z;\r\n    chgX = max(max(abs(Xk-X)));\r\n    chgZ = max(max(abs(Zk-Z)));\r\n    chg = max([chgX chgZ max(abs(dY1(:))) max(abs(dY2(:)))]);\r\n    if DEBUG        \r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = compute_obj(X,G);\r\n            err = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y1 = Y1 + mu*dY1;\r\n    Y2 = Y2 + mu*dY2;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = compute_obj(X,G);\r\nerr = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n\r\n\r\nfunction obj = compute_obj(X,G)\r\nobj = 0;\r\nfor i = 1 : size(X,2)\r\n    x = X(:,i);\r\n    for j = 1 : length(G)\r\n        obj = obj + norm(x(G{j}));\r\n    end\r\nend"
  },
  {
    "path": "algorithms/groupl1R.m",
    "content": "function [X,E,obj,err,iter] = groupl1R(A,B,G,lambda,opts)\r\n\r\n% Solve the group l1 norm regularized minimization problem by M-ADMM\r\n%\r\n% min_{X,E} loss(E)+lambda*\\sum_{i=1}^n\\sum_{g in G} ||(x_i)_g||_2, s.t. AX+E=B\r\n% x_i is the i-th column of X\r\n% loss(E) = ||E||_1 or 0.5*||E||_F^2\r\n% ---------------------------------------------\r\n% Input:\r\n%       A       -    d*na matrix\r\n%       B       -    d*nb matrix\r\n%       G       -    a cell array indicating a partition of 1:na\r\n%       lambda  -    >=0, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.loss       -   'l1' (default): loss(E) = ||E||_1 \r\n%                               'l2': loss(E) = 0.5*||E||_F^2\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       X       -    na*nb matrix\r\n%       E       -    d*nb matrix\r\n%       obj     -    objective function value\r\n%       err     -    residual ||AX+E-B||_F\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 18/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\nloss = 'l1';\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'loss');        loss = opts.loss;            end\r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif 
isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\n\r\n[d,na] = size(A);\r\n[~,nb] = size(B);\r\n\r\nX = zeros(na,nb);\r\nE = zeros(d,nb);\r\nZ = X;\r\nY1 = E;\r\nY2 = X;\r\n\r\nAtB = A'*B;\r\nI = eye(na);\r\ninvAtAI = (A'*A+I)\\I;\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Xk = X;\r\n    Ek = E;\r\n    Zk = Z;\r\n    % first super block {X,E}\r\n    for i = 1 : nb\r\n        X(:,i) = prox_gl1(Z(:,i)-Y2(:,i)/mu,G,lambda/mu);\r\n    end\r\n    if strcmp(loss,'l1')\r\n        E = prox_l1(B-A*Z-Y1/mu,1/mu);\r\n    elseif strcmp(loss,'l2')\r\n        E = mu*(B-A*Z-Y1/mu)/(1+mu);\r\n    else\r\n        error('not supported loss function');\r\n    end\r\n    % second super block {Z}\r\n    Z = invAtAI*(-A'*(Y1/mu+E)+AtB+Y2/mu+X);    \r\n    dY1 = A*Z+E-B;\r\n    dY2 = X-Z;\r\n    chgX = max(max(abs(Xk-X)));\r\n    chgE = max(max(abs(Ek-E)));\r\n    chgZ = max(max(abs(Zk-Z)));\r\n    chg = max([chgX chgE chgZ max(abs(dY1(:))) max(abs(dY2(:)))]);\r\n    if DEBUG        \r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = comp_loss(E,loss)+lambda*compute_groupl1(X,G);            \r\n            err = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y1 = Y1 + mu*dY1;\r\n    Y2 = Y2 + mu*dY2;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = comp_loss(E,loss)+lambda*compute_groupl1(X,G);\r\nerr = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n\r\nfunction obj = compute_groupl1(X,G)\r\nobj = 0;\r\nfor i = 1 : size(X,2)\r\n    x = X(:,i);\r\n    for j = 1 : length(G)\r\n        obj = obj + norm(x(G{j}));\r\n    end\r\nend"
  },
  {
    "path": "algorithms/igc.m",
    "content": "function [L,S,obj,err,iter] = igc(A,C,lambda,opts)\r\n\r\n% Reference: Chen, Yudong, Sujay Sanghavi, and Huan Xu. Improved graph clustering.\r\n% IEEE Transactions on Information Theory 60.10 (2014): 6440-6455.\r\n%\r\n% min_{L,S} ||L||_*+lambda*||C \\cdot S||_1, s.t. A=L+S, 0<=L<=1.\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       A       -    d*n matrix\r\n%       C       -    d*n matrix\r\n%       lambda  -    >0, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       L       -    d*n matrix\r\n%       S       -    d*n matrix\r\n%       obj     -    objective function value\r\n%       err     -    residual\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 19/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\nC = abs(C);\r\n[d,n] = size(A);\r\n\r\nL = zeros(d,n);\r\nS = L;\r\nZ = L;\r\nY1 = L;\r\nY2 = L;\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Lk = L;\r\n    Sk = S;\r\n    Zk = Z;\r\n    % first super 
block {L,S}\r\n    [L,nuclearnormL] = prox_nuclear(Z-Y2/mu,1/mu);\r\n    S = prox_l1(-Z+A-Y1/mu,C*(lambda/mu));\r\n    \r\n    % second super block {Z}\r\n    Z = project_box((-S+A+L+(Y2-Y1)/mu)/2,0,1);\r\n  \r\n    dY1 = Z+S-A;\r\n    dY2 = L-Z;\r\n    chgL = max(max(abs(Lk-L)));\r\n    chgS = max(max(abs(Sk-S)));\r\n    chgZ = max(max(abs(Zk-Z)));\r\n    chg = max([chgL chgS chgZ max(abs(dY1(:))) max(abs(dY2(:)))]);\r\n    if DEBUG\r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = nuclearnormL+lambda*sum(sum(C.*abs(S)));\r\n            err = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y1 = Y1 + mu*dY1;\r\n    Y2 = Y2 + mu*dY2;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = nuclearnormL+lambda*sum(sum(C.*abs(S)));\r\nerr = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n\r\n"
  },
  {
    "path": "algorithms/ksupport.m",
    "content": "function [X,err,iter] = ksupport(A,B,k,opts)\r\n\r\n% Solve the k support norm minimization problem by ADMM\r\n%\r\n% min_X 0.5*||vec(X)||_ksp^2, s.t. AX=B\r\n% ---------------------------------------------\r\n% Input:\r\n%       A       -    d*na matrix\r\n%       B       -    d*nb matrix\r\n%       k       -    >0, integer, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       X       -    na*nb matrix\r\n%       err     -    residual\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 27/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\n[d,na] = size(A);\r\n[~,nb] = size(B);\r\n\r\nX = zeros(na,nb);\r\nZ = X;\r\nY1 = zeros(d,nb);\r\nY2 = X;\r\n\r\nAtB = A'*B;\r\nI = eye(na);\r\ninvAtAI = (A'*A+I)\\I;\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Xk = X;\r\n    Zk = Z;\r\n    % update X\r\n    temp = Z-Y2/mu;\r\n    temp = prox_ksupport(temp(:),k,1/mu);\r\n    X = reshape(temp,na,nb);\r\n    % update Z\r\n    Z = 
invAtAI*(-A'*Y1/mu+AtB+Y2/mu+X);    \r\n    dY1 = A*Z-B;\r\n    dY2 = X-Z;\r\n    chgX = max(max(abs(Xk-X)));\r\n    chgZ = max(max(abs(Zk-Z)));\r\n    chg = max([chgX chgZ max(abs(dY1(:))) max(abs(dY2(:)))]);\r\n    if DEBUG        \r\n        if iter == 1 || mod(iter, 10) == 0\r\n            err = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y1 = Y1 + mu*dY1;\r\n    Y2 = Y2 + mu*dY2;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nerr = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n\r\n"
  },
  {
    "path": "algorithms/ksupportR.m",
    "content": "function [X,E,err,iter] = ksupportR(A,B,lambda,k,opts)\r\n\r\n% Solve the k-support norm regularized minimization problem by M-ADMM\r\n%\r\n% min_{X,E} loss(E)+lambda*0.5*||vec(X)||_ksp^2, s.t. AX+E=B\r\n% loss(E) = ||E||_1 or 0.5*||E||_F^2\r\n% ---------------------------------------------\r\n% Input:\r\n%       A       -    d*na matrix\r\n%       B       -    d*nb matrix\r\n%       lambda  -    >=0, parameter\r\n%       k       -    >0, integer, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.loss       -   'l1' (default): loss(E) = ||E||_1 \r\n%                               'l2': loss(E) = 0.5*||E||_F^2\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       X       -    na*nb matrix\r\n%       E       -    d*nb matrix\r\n%       err     -    residual \r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 27/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\nloss = 'l1';\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'loss');        loss = opts.loss;            end\r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          
end\r\n\r\n\r\n[d,na] = size(A);\r\n[~,nb] = size(B);\r\n\r\nX = zeros(na,nb);\r\nE = zeros(d,nb);\r\nZ = X;\r\nY1 = E;\r\nY2 = X;\r\n\r\nAtB = A'*B;\r\nI = eye(na);\r\ninvAtAI = (A'*A+I)\\I;\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Xk = X;\r\n    Ek = E;\r\n    Zk = Z;\r\n    % first super block {X,E}\r\n    temp = Z-Y2/mu;\r\n    temp = prox_ksupport(temp(:),k,lambda/mu);\r\n    X = reshape(temp,na,nb);\r\n    if strcmp(loss,'l1')\r\n        E = prox_l1(B-A*Z-Y1/mu,1/mu);\r\n    elseif strcmp(loss,'l2')\r\n        E = mu*(B-A*Z-Y1/mu)/(1+mu);\r\n    else\r\n        error('not supported loss function');\r\n    end\r\n    % second  super block {Z}\r\n    Z = invAtAI*(-A'*(Y1/mu+E)+AtB+Y2/mu+X);    \r\n    dY1 = A*Z+E-B;\r\n    dY2 = X-Z;\r\n    chgX = max(max(abs(Xk-X)));\r\n    chgE = max(max(abs(Ek-E)));\r\n    chgZ = max(max(abs(Zk-Z)));\r\n    chg = max([chgX chgE chgZ max(abs(dY1(:))) max(abs(dY2(:)))]);\r\n    if DEBUG        \r\n        if iter == 1 || mod(iter, 10) == 0\r\n            err = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y1 = Y1 + mu*dY1;\r\n    Y2 = Y2 + mu*dY2;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nerr = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n\r\n"
  },
  {
    "path": "algorithms/l1.m",
    "content": "function [X,obj,err,iter] = l1(A,B,opts)\r\n\r\n% Solve the l1-minimization problem by ADMM\r\n%\r\n% min_X ||X||_1, s.t. AX=B\r\n% ---------------------------------------------\r\n% Input:\r\n%       A       -    d*na matrix\r\n%       B       -    d*nb matrix\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       X       -    na*nb matrix\r\n%       obj     -    objective function value\r\n%       err     -    residual ||AX-B||_F\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 18/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\n[d,na] = size(A);\r\n[~,nb] = size(B);\r\n\r\nX = zeros(na,nb);\r\nZ = X;\r\nY1 = zeros(d,nb);\r\nY2 = X;\r\n\r\nAtB = A'*B;\r\nI = eye(na);\r\ninvAtAI = (A'*A+I)\\I;\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Xk = X;\r\n    Zk = Z;\r\n    % update X\r\n    X = prox_l1(Z-Y2/mu,1/mu);\r\n    % update Z\r\n    Z = invAtAI*(-A'*Y1/mu+AtB+Y2/mu+X);    \r\n    dY1 = A*Z-B;\r\n    dY2 = X-Z;\r\n    chgX = 
max(max(abs(Xk-X)));\r\n    chgZ = max(max(abs(Zk-Z)));\r\n    chg = max([chgX chgZ max(abs(dY1(:))) max(abs(dY2(:)))]);\r\n    if DEBUG        \r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = norm(X(:),1);            \r\n            err = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y1 = Y1 + mu*dY1;\r\n    Y2 = Y2 + mu*dY2;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = norm(X(:),1);\r\nerr = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n\r\n"
  },
  {
    "path": "algorithms/l1R.m",
    "content": "function [X,E,obj,err,iter] = l1R(A,B,lambda,opts)\r\n\r\n% Solve the l1 norm regularized minimization problem by M-ADMM\r\n%\r\n% min_{X,E} loss(E)+lambda*||X||_1, s.t. AX+E=B\r\n% loss(E) = ||E||_1 or 0.5*||E||_F^2\r\n% ---------------------------------------------\r\n% Input:\r\n%       A       -    d*na matrix\r\n%       B       -    d*nb matrix\r\n%       lambda  -    >=0, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.loss       -   'l1' (default): loss(E) = ||E||_1 \r\n%                               'l2': loss(E) = 0.5*||E||_F^2\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       X       -    na*nb matrix\r\n%       E       -    d*nb matrix\r\n%       obj     -    objective function value\r\n%       err     -    residual \r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 18/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\nloss = 'l1';\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'loss');        loss = opts.loss;            end\r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\n\r\n[d,na] = size(A);\r\n[~,nb] = size(B);\r\n\r\nX 
= zeros(na,nb);\r\nE = zeros(d,nb);\r\nZ = X;\r\nY1 = E;\r\nY2 = X;\r\n\r\nAtB = A'*B;\r\nI = eye(na);\r\ninvAtAI = (A'*A+I)\\I;\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Xk = X;\r\n    Ek = E;\r\n    Zk = Z;\r\n    % first super block {X,E}\r\n    X = prox_l1(Z-Y2/mu,lambda/mu);\r\n    if strcmp(loss,'l1')\r\n        E = prox_l1(B-A*Z-Y1/mu,1/mu);\r\n    elseif strcmp(loss,'l2')\r\n        E = mu*(B-A*Z-Y1/mu)/(1+mu);\r\n    else\r\n        error('not supported loss function');\r\n    end\r\n    % second  super block {Z}\r\n    Z = invAtAI*(-A'*(Y1/mu+E)+AtB+Y2/mu+X);    \r\n    dY1 = A*Z+E-B;\r\n    dY2 = X-Z;\r\n    chgX = max(max(abs(Xk-X)));\r\n    chgE = max(max(abs(Ek-E)));\r\n    chgZ = max(max(abs(Zk-Z)));\r\n    chg = max([chgX chgE chgZ max(abs(dY1(:))) max(abs(dY2(:)))]);\r\n    if DEBUG        \r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = comp_loss(E,loss)+lambda*norm(X(:),1);\r\n            err = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y1 = Y1 + mu*dY1;\r\n    Y2 = Y2 + mu*dY2;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = comp_loss(E,loss)+lambda*norm(X(:),1);\r\nerr = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n\r\n"
  },
  {
    "path": "algorithms/latlrr.m",
    "content": "function [Z,L,obj,err,iter] = latlrr(X,lambda,opts)\r\n\r\n% Solve the Latent Low-Rank Representation by M-ADMM\r\n%\r\n% min_{Z,L,E} ||Z||_*+||L||_*+lambda*loss(E),\r\n% s.t., XZ+LX-X=E.\r\n% loss(E) = ||E||_1 or 0.5*||E||_F^2 or ||E||_{2,1}\r\n% ---------------------------------------------\r\n% Input:\r\n%       X       -    d*n matrix\r\n%       lambda  -    >0, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.loss       -   'l1' (default): loss(E) = ||E||_1 \r\n%                               'l2': loss(E) = 0.5*||E||_F^2\r\n%                               'l21': loss(E) = ||E||_{2,1}\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       Z       -    n*n matrix\r\n%       L       -    d*d matrix\r\n%       obj     -    objective function value\r\n%       err     -    residual\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 19/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\nloss = 'l1';\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'loss');        loss = opts.loss;            end\r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = 
opts.DEBUG;          end\r\n\r\neta1 = 1.02*2*norm(X,2)^2; % for Z\r\neta2 = eta1; % for L\r\neta3 = 1.02*2; % for E\r\n\r\n[d,n] = size(X);\r\nE = zeros(d,n);\r\nZ = zeros(n,n);\r\nL = zeros(d,d);\r\nY = E;\r\n\r\nXtX = X'*X;\r\nXXt = X*X';\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Lk = L;\r\n    Ek = E;\r\n    Zk = Z;\r\n    % first super block {Z}\r\n    [Z,nuclearnormZ] = prox_nuclear(Zk-(X'*(Y/mu+L*X-X-E)+XtX*Z)/eta1,1/(mu*eta1));\r\n    % second super block {L,E}\r\n    temp = Lk-((Y/mu+X*Z-Ek)*X'+Lk*XXt-XXt)/eta2;\r\n    [L,nuclearnormL] = prox_nuclear(temp,1/(mu*eta2));        \r\n    if strcmp(loss,'l1')\r\n        E = prox_l1(Ek+(Y/mu+X*Z+Lk*X-X-Ek)/eta3,lambda/(mu*eta3));\r\n    elseif strcmp(loss,'l21')\r\n        E = prox_l21(Ek+(Y/mu+X*Z+Lk*X-X-Ek)/eta3,lambda/(mu*eta3));\r\n    elseif strcmp(loss,'l2')\r\n        E = (Y+mu*(X*Z+Lk*X-X+(eta3-1)*Ek))/(lambda+mu*eta3);\r\n    else\r\n        error('not supported loss function');\r\n    end\r\n    \r\n    dY = X*Z+L*X-X-E;\r\n    chgL = max(max(abs(Lk-L)));\r\n    chgE = max(max(abs(Ek-E)));\r\n    chgZ = max(max(abs(Zk-Z)));\r\n    chg = max([chgL chgE chgZ max(abs(dY(:)))]);\r\n    if DEBUG        \r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = nuclearnormZ+nuclearnormL+lambda*comp_loss(E,loss);\r\n            err = norm(dY,'fro')^2;\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y = Y + mu*dY;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = nuclearnormZ+nuclearnormL+lambda*comp_loss(E,loss);\r\nerr = norm(dY,'fro')^2;\r\n\r\nfunction out = comp_loss(E,loss)\r\n\r\nswitch loss\r\n    case 'l1'\r\n        out = norm(E(:),1);\r\n    case 'l21'\r\n        out = 0;\r\n        for i = 1 : size(E,2)\r\n            out = out + norm(E(:,i));\r\n        end\r\n    case 'l2'\r\n        out = 
0.5*norm(E,'fro')^2;\r\nend\r\n\r\n "
  },
  {
    "path": "algorithms/lrmc.m",
    "content": "function [X,obj,err,iter] = lrmc(MM,omega,opts)\r\n\r\n% Solve the Low-Rank Matrix Completion (LRMC) problem by ADMM\r\n%\r\n% min_X ||X||_*, s.t. P_Omega(X) = P_Omega(M)\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       MM      -    d*n matrix\r\n%       omega   -    index of the observed entries\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       X       -    d*n matrix\r\n%       obj     -    objective function value\r\n%       err     -    residual\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 22/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\n[d,n] = size(MM);\r\nM = zeros(d,n);\r\nM(omega) = MM(omega);\r\nX = zeros(d,n);\r\nE = X;\r\nY = X;\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Xk = X;\r\n    Ek = E;\r\n    % update X\r\n    [X,nuclearnormX] = prox_nuclear(-(E-M+Y/mu),1/mu);\r\n    % update E\r\n    E = -(X-M+Y/mu);\r\n    E(omega) = 0;\r\n    \r\n    dY = 
X+E-M;  \r\n    chgX = max(max(abs(Xk-X)));\r\n    chgE = max(max(abs(Ek-E)));\r\n    chg = max([chgX chgE max(abs(dY(:)))]);\r\n    if DEBUG\r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = nuclearnormX;\r\n            err = norm(dY,'fro');\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y = Y + mu*dY;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = nuclearnormX;\r\nerr = norm(dY,'fro');\r\n"
  },
  {
    "path": "algorithms/lrmcR.m",
    "content": "function [X,E,obj,err,iter] = lrmcR(M,omega,lambda,opts)\r\n\r\n% Solve the Noisy Low-Rank Matrix Completion (LRMC) problem by ADMM\r\n%\r\n% min_{X,E} ||X||_*+lambda*loss(E), s.t. P_Omega(X) + E = M.\r\n% loss(E) = ||E||_1 or 0.5*||E||_F^2 or ||E||_{2,1}\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       M       -    d*n matrix\r\n%       omega   -    index of the observed entries\r\n%       lambda  -    >=0, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.loss       -   'l1' (default): loss(E) = ||E||_1 \r\n%                               'l2': loss(E) = 0.5*||E||_F^2\r\n%                               'l21': loss(E) = ||E||_{2,1}\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       X       -    d*n matrix\r\n%       E       -    d*n matrix\r\n%       obj     -    objective function value\r\n%       err     -    residual\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 23/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\nloss = 'l1';\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'loss');        loss = opts.loss;            end\r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          
end\r\n\r\n[d,n] = size(M);\r\nX = zeros(d,n);\r\nZ = X;\r\nE = X;\r\nY1 = X;\r\nY2 = X;\r\nomegac = setdiff(1:d*n,omega);\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Xk = X;\r\n    Zk = Z;\r\n    Ek = E;\r\n    % first super block {X,E}\r\n    [X,nuclearnormX] = prox_nuclear(Z-Y2/mu,1/mu);\r\n    temp = M-Y1/mu;\r\n    temp(omega) = temp(omega)-Z(omega);\r\n    if strcmp(loss,'l1')\r\n        E = prox_l1(temp,lambda/mu);\r\n    elseif strcmp(loss,'l21')\r\n        E = prox_l21(temp,lambda/mu);\r\n    elseif strcmp(loss,'l2')\r\n        E = temp*(mu/(lambda+mu));\r\n    else\r\n        error('not supported loss function');\r\n    end\r\n    \r\n    % second super block {Z}\r\n    Z(omega) = (-E(omega)+M(omega)-(Y1(omega)-Y2(omega))/mu+X(omega))/2;\r\n    Z(omegac) = X(omegac)+Y2(omegac)/mu;\r\n    \r\n    dY1 = E-M;\r\n    dY1(omega) = dY1(omega)+Z(omega);\r\n    dY2 = X-Z;   \r\n    chgX = max(max(abs(Xk-X)));\r\n    chgE = max(max(abs(Ek-E)));\r\n    chgZ = max(max(abs(Zk-Z)));\r\n    chg = max([chgX chgE chgZ max(abs(dY1(:))) max(abs(dY2(:)))]);\r\n    if DEBUG\r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = nuclearnormX+lambda*comp_loss(E,loss);\r\n            err = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y1 = Y1 + mu*dY1;\r\n    Y2 = Y2 + mu*dY2;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = nuclearnormX+lambda*comp_loss(E,loss);\r\nerr = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n"
  },
  {
    "path": "algorithms/lrr.m",
    "content": "function [X,E,obj,err,iter] = lrr(A,B,lambda,opts)\r\n\r\n% Solve the Low-Rank Representation minimization problem by M-ADMM\r\n%\r\n% min_{X,E} ||X||_*+lambda*loss(E), s.t. A=BX+E\r\n% loss(E) = ||E||_1 or 0.5*||E||_F^2 or ||E||_{2,1}\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       A       -    d*na matrix\r\n%       B       -    d*nb matrix\r\n%       lambda  -    >0, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.loss       -   'l1': loss(E) = ||E||_1 \r\n%                               'l2': loss(E) = 0.5*||E||_F^2\r\n%                               'l21' (default): loss(E) = ||E||_{2,1}\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       X       -    nb*na matrix\r\n%       E       -    d*na matrix\r\n%       obj     -    objective function value\r\n%       err     -    residual\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 18/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\nloss = 'l21';\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'loss');        loss = opts.loss;            end\r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG 
= opts.DEBUG;          end\r\n\r\n\r\n[d,na] = size(A);\r\n[~,nb] = size(B);\r\n\r\nX = zeros(nb,na);\r\nE = zeros(d,na);\r\nJ = X;\r\n\r\nY1 = E;\r\nY2 = X;\r\nBtB = B'*B;\r\nBtA = B'*A;\r\nI = eye(nb);\r\ninvBtBI = (BtB+I)\\I;\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Xk = X;\r\n    Ek = E;\r\n    Jk = J;\r\n    % first super block {J,E}\r\n    [J,nuclearnormJ] = prox_nuclear(X+Y2/mu,1/mu);\r\n    if strcmp(loss,'l1')\r\n        E = prox_l1(A-B*X+Y1/mu,lambda/mu);\r\n    elseif strcmp(loss,'l21')\r\n        E = prox_l21(A-B*X+Y1/mu,lambda/mu);\r\n    elseif strcmp(loss,'l2')\r\n        E = mu*(A-B*X+Y1/mu)/(lambda+mu);\r\n    else\r\n        error('not supported loss function');\r\n    end\r\n    % second  super block {X}\r\n    X = invBtBI*(B'*(Y1/mu-E)+BtA-Y2/mu+J);\r\n  \r\n    dY1 = A-B*X-E;\r\n    dY2 = X-J;\r\n    chgX = max(max(abs(Xk-X)));\r\n    chgE = max(max(abs(Ek-E)));\r\n    chgJ = max(max(abs(Jk-J)));\r\n    chg = max([chgX chgE chgJ max(abs(dY1(:))) max(abs(dY2(:)))]);\r\n    if DEBUG        \r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = nuclearnormJ+lambda*comp_loss(E,loss);\r\n            err = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y1 = Y1 + mu*dY1;\r\n    Y2 = Y2 + mu*dY2;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = nuclearnormJ+lambda*comp_loss(E,loss);\r\nerr = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n\r\nfunction out = comp_loss(E,loss)\r\n\r\nswitch loss\r\n    case 'l1'\r\n        out = norm(E(:),1);\r\n    case 'l21'\r\n        out = 0;\r\n        for i = 1 : size(E,2)\r\n            out = out + norm(E(:,i));\r\n        end\r\n    case 'l2'\r\n        out = 0.5*norm(E,'fro')^2;\r\nend\r\n\r\n \r\n\r\n\r\n"
  },
  {
    "path": "algorithms/lrsr.m",
    "content": "function [X,E,obj,err,iter] = lrsr(A,B,lambda1,lambda2,opts)\r\n\r\n% Solve the Low-Rank and Sparse Representation (LRSR) minimization problem by M-ADMM\r\n%\r\n% min_{X,E} ||X||_*+lambda1*||X||_1+lambda2*loss(E), s.t. A=BX+E\r\n% loss(E) = ||E||_1 or 0.5*||E||_F^2 or ||E||_{2,1}\r\n% ---------------------------------------------\r\n% Input:\r\n%       A       -    d*na matrix\r\n%       B       -    d*nb matrix\r\n%       lambda1 -    >0, parameter\r\n%       lambda2 -    >0, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.loss       -   'l1': loss(E) = ||E||_1 \r\n%                               'l2': loss(E) = 0.5*||E||_F^2\r\n%                               'l21' (default): loss(E) = ||E||_{2,1}\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       X       -    nb*na matrix\r\n%       E       -    d*na matrix\r\n%       obj     -    objective function value\r\n%       err     -    residual\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 18/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\nloss = 'l21';\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'loss');        loss = opts.loss;            end\r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');   
   max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\n\r\n[d,na] = size(A);\r\n[~,nb] = size(B);\r\n\r\nX = zeros(nb,na);\r\nE = zeros(d,na);\r\nZ = X;\r\nJ = X;\r\n\r\nY1 = E;\r\nY2 = X;\r\nY3 = X;\r\nBtB = B'*B;\r\nBtA = B'*A;\r\nI = eye(nb);\r\ninvBtBI = (BtB+2*I)\\I;\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Xk = X;\r\n    Zk = Z;\r\n    Ek = E;\r\n    Jk = J;\r\n    % first super block {Z,J,E}\r\n    [Z,nuclearnormZ] = prox_nuclear(X+Y2/mu,1/mu);\r\n    J = prox_l1(X+Y3/mu,lambda1/mu);\r\n    if strcmp(loss,'l1')\r\n        E = prox_l1(A-B*X+Y1/mu,lambda2/mu);\r\n    elseif strcmp(loss,'l21')\r\n        E = prox_l21(A-B*X+Y1/mu,lambda2/mu);\r\n    elseif strcmp(loss,'l2')\r\n        E = mu*(A-B*X+Y1/mu)/(lambda2+mu);\r\n    else\r\n        error('not supported loss function');\r\n    end\r\n    % second  super block {X}\r\n    X = invBtBI*(B'*(Y1/mu-E)+BtA-(Y2+Y3)/mu+Z+J);\r\n  \r\n    dY1 = A-B*X-E;\r\n    dY2 = X-Z;\r\n    dY3 = X-J;\r\n    chgX = max(max(abs(Xk-X)));\r\n    chgE = max(max(abs(Ek-E)));\r\n    chgZ = max(max(abs(Zk-Z)));\r\n    chgJ = max(max(abs(Jk-J)));\r\n    chg = max([chgX chgE chgZ chgJ max(abs(dY1(:))) max(abs(dY2(:))) max(abs(dY3(:)))]);\r\n    if DEBUG        \r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = nuclearnormZ+lambda1*norm(J(:),1)+lambda2*comp_loss(E,loss);\r\n            err = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2+norm(dY3,'fro')^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y1 = Y1 + mu*dY1;\r\n    Y2 = Y2 + mu*dY2;\r\n    Y3 = Y3 + mu*dY3;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = nuclearnormZ+lambda1*norm(J(:),1)+lambda2*comp_loss(E,loss);\r\nerr = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2+norm(dY3,'fro')^2);\r\n\r\n\r\nfunction out = 
comp_loss(E,normtype)\r\n\r\nswitch normtype\r\n    case 'l1'\r\n        out = norm(E(:),1);\r\n    case 'l21'\r\n        out = 0;\r\n        for i = 1 : size(E,2)\r\n            out = out + norm(E(:,i));\r\n        end\r\n    case 'l2'\r\n        out = 0.5*norm(E,'fro')^2;\r\nend\r\n"
  },
  {
    "path": "algorithms/lrtcR_snn.m",
    "content": "function [X,err,iter] = lrtcR_snn(M,omega,alpha,opts)\r\n\r\n% Solve the Noisy Low-Rank Tensor Completion (LRTC) based on Sum of Nuclear Norm (SNN) problem by M-ADMM\r\n%\r\n% min_{X,E} \\sum_i \\alpha_i*||X_{i(i)}||_* + loss(E),\r\n% s.t. P_Omega(X) + E = M.\r\n% loss(E) = ||E||_1 or 0.5*||E||_F^2\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       M       -    d1*d2*...dk tensor\r\n%       omega   -    index of the observed entries\r\n%       alpha   -    k*1 vector, parameters\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.loss       -   'l1' (default): loss(E) = ||E||_1 \r\n%                               'l2': loss(E) = 0.5*||E||_F^2\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       X       -    d1*d2*...*dk tensor\r\n%       err     -    residual\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 24/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\nloss = 'l1';\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'loss');        loss = opts.loss;            end\r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\ndim = 
size(M);\r\nk = length(dim);\r\n\r\nomegac = setdiff(1:prod(dim),omega);\r\n\r\nX = zeros(dim);\r\nY = cell(k,1);\r\nZ = Y;\r\nE = X;\r\nY2 = E;\r\nfor i = 1 : k\r\n    Y{i} = X;\r\n    Z{i} = X;\r\nend\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Xk = X;\r\n    Ek = E;\r\n    Zk = Z;\r\n    % first super block {Z_i,E}\r\n    sumtemp = zeros(dim);\r\n    for i = 1 : k\r\n        Z{i} = Fold(prox_nuclear(Unfold(X+Y{i}/mu,dim,i), alpha(i)/mu),dim,i);\r\n        sumtemp = sumtemp + Z{i} - Y{i}/mu;\r\n    end    \r\n    if strcmp(loss,'l1')\r\n        E = prox_l1(-X+M-Y2/mu,1/mu);\r\n    elseif strcmp(loss,'l2')\r\n        E = (-X+M-Y2/mu)*(mu/(1+mu));\r\n    else\r\n        error('not supported loss function');\r\n    end\r\n    % second super block {X}\r\n    X(omega) = (sumtemp(omega)-Y2(omega)/mu-E(omega)+M(omega))/(k+1);\r\n    X(omegac) = sumtemp(omegac)/k;\r\n    \r\n    chg = max([max(abs(Xk(:)-X(:))), max(abs(Ek(:)-E(:))) ]);\r\n    err = 0;\r\n    for i = 1 : k\r\n        dY = X-Z{i};\r\n        err = err+norm(dY(:))^2;\r\n        Y{i} = Y{i}+mu*dY;\r\n        chg = max([chg,max(abs(dY(:))), max(abs((Zk{i}(:)-Z{i}(:))))]);\r\n    end\r\n    dY = E-M;    \r\n    dY(omega) = dY(omega)+X(omega);\r\n    chg = max(chg,max(abs(dY(:))));\r\n    Y2 = Y2 + mu*dY;\r\n    err = sqrt(err+norm(dY(:))^2);\r\n\r\n    if DEBUG\r\n        if iter == 1 || mod(iter, 10) == 0\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    if chg < tol\r\n        break;\r\n    end \r\n    mu = min(rho*mu,max_mu);    \r\nend\r\n "
  },
  {
    "path": "algorithms/lrtcR_tnn.m",
    "content": "function [X,E,obj,err,iter] = lrtcR_tnn(M,omega,lambda,opts)\r\n\r\n% Solve the Noisy Low-Rank Tensor Completion (LRTC) problem by ADMM\r\n%\r\n% min_{X,E} ||X||_*+lambda*loss(E), s.t. P_Omega(X) + E = M.\r\n% loss(E) = ||E||_1 or 0.5*||E||_F^2\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       M       -    d1*d2*d3 tensor\r\n%       omega   -    index of the observed entries\r\n%       lambda  -    >=0, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.loss       -   'l1' (default): loss(E) = ||E||_1 \r\n%                               'l2': loss(E) = 0.5*||E||_F^2\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       X       -    d1*d2*d3 tensor\r\n%       E       -    d1*d2*d3 tensor\r\n%       obj     -    objective function value\r\n%       err     -    residual\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 27/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\nloss = 'l1';\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'loss');        loss = opts.loss;            end\r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\n\r\ndim = size(M);\r\nX = zeros(dim);\r\nZ = X;\r\nE = 
X;\r\nY1 = X;\r\nY2 = X;\r\nomegac = setdiff(1:prod(dim),omega);\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Xk = X;\r\n    Zk = Z;\r\n    Ek = E;\r\n    % first super block {X,E}\r\n    [X,tnnX] = prox_tnn(Z-Y2/mu,1/mu);\r\n    temp = M-Y1/mu;\r\n    temp(omega) = temp(omega)-Z(omega);\r\n    if strcmp(loss,'l1')\r\n        E = prox_l1(temp,lambda/mu);\r\n    elseif strcmp(loss,'l2')\r\n        E = temp*(mu/(lambda+mu));\r\n    else\r\n        error('not supported loss function');\r\n    end\r\n    \r\n    % second super block {Z}\r\n    Z(omega) = (-E(omega)+M(omega)-(Y1(omega)-Y2(omega))/mu+X(omega))/2;\r\n    Z(omegac) = X(omegac)+Y2(omegac)/mu;\r\n    \r\n    dY1 = E-M;\r\n    dY1(omega) = dY1(omega)+Z(omega);\r\n    dY2 = X-Z;   \r\n    chgX = max(abs(Xk(:)-X(:)));\r\n    chgE = max(abs(Ek(:)-E(:)));\r\n    chgZ = max(abs(Zk(:)-Z(:)));\r\n    chg = max([chgX chgE chgZ max(abs(dY1(:))) max(abs(dY2(:)))]);\r\n    if DEBUG\r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = tnnX+lambda*comp_loss(E,loss);\r\n            err = sqrt(norm(dY1(:))^2+norm(dY2(:))^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y1 = Y1 + mu*dY1;\r\n    Y2 = Y2 + mu*dY2;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = tnnX+lambda*comp_loss(E,loss);\r\nerr = sqrt(norm(dY1(:))^2+norm(dY2(:))^2);"
  },
  {
    "path": "algorithms/lrtc_snn.m",
    "content": "function [X,err,iter] = lrtc_snn(M,omega,alpha,opts)\r\n\r\n% Solve the Low-Rank Tensor Completion (LRTC) based on Sum of Nuclear Norm (SNN) problem by M-ADMM\r\n%\r\n% min_X \\sum_i \\alpha_i*||X_{i(i)}||_*, s.t. P_Omega(X) = P_Omega(M)\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       M       -    d1*d2*...*dk tensor\r\n%       omega   -    index of the observed entries\r\n%       alpha   -    k*1 vector, parameters\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       X       -    d1*d2*...*dk tensor\r\n%       err     -    residual\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 24/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\ndim = size(M);\r\nk = length(dim);\r\nomegac = setdiff(1:prod(dim),omega);\r\n\r\nX = zeros(dim);\r\nX(omega) = M(omega);\r\nY = cell(k,1);\r\nZ = Y;\r\nfor i = 1 : k\r\n    Y{i} = X;\r\n    Z{i} = X;\r\nend\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Xk = X;\r\n    Zk = Z;\r\n    % 
first super block {Z_i}\r\n    sumtemp = zeros(1,length(omegac));\r\n    for i = 1 : k\r\n        Z{i} = Fold(prox_nuclear(Unfold(X+Y{i}/mu,dim,i), alpha(i)/mu),dim,i);\r\n        sumtemp = sumtemp + Z{i}(omegac) - Y{i}(omegac)/mu;\r\n    end\r\n    % second super block {X}\r\n    X(omegac) = sumtemp/k;\r\n    \r\n    chg = max(abs(Xk(:)-X(:)));\r\n    err = 0;\r\n    for i = 1 : k\r\n        dY = X-Z{i};\r\n        err = err+norm(dY(:))^2;\r\n        Y{i} = Y{i}+mu*dY;\r\n        chg = max([chg, max(abs(dY(:))), max(abs(Zk{i}(:)-Z{i}(:)))]);\r\n    end\r\n    err = sqrt(err); \r\n\r\n    if DEBUG\r\n        if iter == 1 || mod(iter, 10) == 0\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    if chg < tol\r\n        break;\r\n    end \r\n    mu = min(rho*mu,max_mu);    \r\nend\r\n "
  },
  {
    "path": "algorithms/lrtc_tnn.m",
    "content": "function [X,obj,err,iter] = lrtc_tnn(M,omega,opts)\r\n\r\n% Solve the Low-Rank Tensor Completion (LRTC) based on Tensor Nuclear Norm (TNN) problem by M-ADMM\r\n%\r\n% min_X ||X||_*, s.t. P_Omega(X) = P_Omega(M)\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       M       -    d1*d2*d3 tensor\r\n%       omega   -    index of the observed entries\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       X       -    d1*d2*d3 tensor\r\n%       err     -    residual\r\n%       obj     -    objective function value\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 25/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\ndim = size(M);\r\nk = length(dim);\r\nomegac = setdiff(1:prod(dim),omega);\r\n\r\nX = zeros(dim);\r\nX(omega) = M(omega);\r\nE = zeros(dim);\r\nY = E;\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Xk = X;\r\n    Ek = E;\r\n    % update X\r\n    [X,tnnX] = prox_tnn(-E+M+Y/mu,1/mu); \r\n    % update E\r\n    E = 
M-X+Y/mu;\r\n    E(omega) = 0;\r\n \r\n    dY = M-X-E;    \r\n    chgX = max(abs(Xk(:)-X(:)));\r\n    chgE = max(abs(Ek(:)-E(:)));\r\n    chg = max([chgX chgE max(abs(dY(:)))]);\r\n    if DEBUG\r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = tnnX;\r\n            err = norm(dY(:));\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y = Y + mu*dY;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = tnnX;\r\nerr = norm(dY(:));\r\n\r\n "
  },
  {
    "path": "algorithms/lrtr_Gaussian_tnn.m",
    "content": "function [X,obj,err,iter] = lrtr_Gaussian_tnn(A,b,Xsize,opts)\n\n% Low tubal rank tensor recovery from Gaussian measurements by tensor\n% nuclear norm minimization\n%\n% min_X ||X||_*, s.t. A*vec(X) = b\n%\n% ---------------------------------------------\n% Input:\n%       A       -    m*n matrix\n%       b       -    m*1 vector\n%       Xsize   -    Structure value in Matlab. The fields\n%       (Xsize.n1,Xsize.n2,Xsize.n3) give the size of X.\n%           \n%       opts    -    Structure value in Matlab. The fields are\n%           opts.tol        -   termination tolerance\n%           opts.max_iter   -   maximum number of iterations\n%           opts.mu         -   stepsize for dual variable updating in ADMM\n%           opts.max_mu     -   maximum stepsize\n%           opts.rho        -   rho>=1, ratio used to increase mu\n%           opts.DEBUG      -   0 or 1\n%\n% Output:\n%       X       -    n1*n2*n3 tensor (n=n1*n2*n3)\n%       obj     -    objective function value\n%       err     -    residual\n%       iter    -    number of iterations\n%\n% version 1.0 - 09/10/2017\n%\n% Written by Canyi Lu (canyilu@gmail.com)\n%\n% References:\n% Canyi Lu, Jiashi Feng, Zhouchen Lin, Shuicheng Yan\n% Exact Low Tubal Rank Tensor Recovery from Gaussian Measurements\n% International Joint Conference on Artificial Intelligence (IJCAI). 
2018\n\n\ntol = 1e-8; \nmax_iter = 1000;\nrho = 1.1;\nmu = 1e-6;\nmax_mu = 1e10;\nDEBUG = 0;\n\nif ~exist('opts', 'var')\n    opts = [];\nend    \nif isfield(opts, 'tol');         tol = opts.tol;              end\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\nif isfield(opts, 'rho');         rho = opts.rho;              end\nif isfield(opts, 'mu');          mu = opts.mu;                end\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\n\nn1 = Xsize.n1;\nn2 = Xsize.n2;\nn3 = Xsize.n3;\nX = zeros(n1,n2,n3);\nZ = X;\nm = length(b);\nY1 = zeros(m,1);\nY2 = X;\nI = eye(n1*n2*n3);\ninvA = (A'*A+I)\\I;\niter = 0;\nfor iter = 1 : max_iter\n    Xk = X;\n    Zk = Z;\n    % update X\n    [X,Xtnn] = prox_tnn(Z-Y2/mu,1/mu);\n    % update Z\n    vecZ = invA*(A'*(-Y1/mu+b)+Y2(:)/mu+X(:));\n    Z = reshape(vecZ,n1,n2,n3);\n    \n    dY1 = A*vecZ-b;\n    dY2 = X-Z;\n    chgX = max(abs(Xk(:)-X(:)));\n    chgZ = max(abs(Zk(:)-Z(:)));\n    chg = max([chgX chgZ max(abs(dY1)) max(abs(dY2(:)))]);\n    if DEBUG\n        if iter == 1 || mod(iter, 10) == 0\n            obj = Xtnn;\n            err = sqrt(norm(dY1)^2+norm(dY2(:))^2);\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \n        end\n    end\n    \n    if chg < tol\n        break;\n    end \n    Y1 = Y1 + mu*dY1;\n    Y2 = Y2 + mu*dY2;\n    mu = min(rho*mu,max_mu);    \nend\nobj = Xtnn;\nerr = sqrt(norm(dY1)^2+norm(dY2(:))^2);\n"
  },
  {
    "path": "algorithms/mlap.m",
    "content": "function [Z,E,obj,err,iter] = mlap(X,lambda,alpha,opts)\r\n\r\n% Solve the Multi-task Low-rank Affinity Pursuit (MLAP) minimization problem by M-ADMM\r\n%\r\n% Reference: Cheng, Bin, Guangcan Liu, Jingdong Wang, Zhongyang Huang, and Shuicheng Yan.\r\n% Multi-task low-rank affinity pursuit for image segmentation. ICCV, 2011.\r\n%\r\n% min_{Z_i,E_i} \\sum_{i=1}^K (||Z_i||_*+lambda*loss(E_i))+alpha*||Z||_{2,1}, \r\n% s.t. X_i=X_i*Z_i+E_i, i=1,...,K.\r\n% loss(E) = ||E||_1 or 0.5*||E||_F^2 or ||E||_{2,1}\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       X       -    d*n*K tensor\r\n%       lambda  -    >0, parameter\r\n%       alpha   -    >0, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.loss       -   'l1': loss(E) = ||E||_1 \r\n%                               'l2': loss(E) = 0.5*||E||_F^2\r\n%                               'l21' (default): loss(E) = ||E||_{2,1}\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       Z       -    n*n*K tensor\r\n%       E       -    d*n*K tensor\r\n%       obj     -    objective function value\r\n%       err     -    residual\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 18/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\nloss = 'l21';\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'loss');        loss = opts.loss;            end\r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    
    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\n[d,n,K] = size(X);\r\nZ = zeros(n,n,K);\r\nE = zeros(d,n,K);\r\nJ = Z;\r\nS = Z;\r\nY = E;\r\nW = Z;\r\nV = Z;\r\ndY = Y;\r\nXmXS = E;\r\nXtX = zeros(n,n,K);\r\ninvXtXI = zeros(n,n,K);\r\nI = eye(n);\r\nfor i = 1 : K\r\n    XtX(:,:,i) = X(:,:,i)'*X(:,:,i);\r\n    invXtXI(:,:,i) = (XtX(:,:,i)+I)\\I;\r\nend\r\nnuclearnormJ = zeros(K,1);\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Zk = Z;\r\n    Ek = E;\r\n    Jk = J;\r\n    Sk = S;\r\n    % first super block {J,S}\r\n    for i = 1 : K\r\n        [J(:,:,i),nuclearnormJ(i)] = prox_nuclear(Z(:,:,i)+W(:,:,i)/mu,1/mu);\r\n        S(:,:,i) = invXtXI(:,:,i)*(XtX(:,:,i)-X(:,:,i)'*(E(:,:,i)-Y(:,:,i)/mu)+Z(:,:,i)+(V(:,:,i)-W(:,:,i))/mu);\r\n    end\r\n    % second super block {Z,E}\r\n    Z = prox_tensor_l21((J+S-(W+V)/mu)/2,alpha/(2*mu));\r\n    for i = 1 : K\r\n        XmXS(:,:,i) = X(:,:,i)-X(:,:,i)*S(:,:,i);\r\n    end\r\n    if strcmp(loss,'l1')\r\n        for i = 1 : K\r\n            E(:,:,i) = prox_l1(XmXS(:,:,i)+Y(:,:,i)/mu,lambda/mu);\r\n        end\r\n    elseif strcmp(loss,'l21')\r\n        for i = 1 : K\r\n            E(:,:,i) = prox_l21(XmXS(:,:,i)+Y(:,:,i)/mu,lambda/mu);\r\n        end\r\n    elseif strcmp(loss,'l2')\r\n        for i = 1 : K\r\n            E(:,:,i) = (XmXS(:,:,i)+Y(:,:,i)/mu) / (lambda/mu+1);\r\n        end        \r\n    else\r\n        error('not supported loss function');\r\n    end\r\n    \r\n    dY = XmXS-E;\r\n    dW = Z-J;\r\n    dV = Z-S;\r\n\r\n    chgZ = max(abs(Zk(:)-Z(:)));\r\n    chgE = max(abs(Ek(:)-E(:)));\r\n    chgJ = max(abs(Jk(:)-J(:)));\r\n    chgS = max(abs(Sk(:)-S(:)));\r\n    chg = max([chgZ chgE chgJ chgS max(abs(dY(:))) max(abs(dW(:))) max(abs(dV(:)))]);\r\n    
if DEBUG        \r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = sum(nuclearnormJ)+lambda*comp_loss(E,loss)+alpha*comp_loss(Z,'l21');\r\n            err = sqrt(norm(dY(:))^2+norm(dW(:))^2+norm(dV(:))^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y = Y + mu*dY;\r\n    W = W + mu*dW;\r\n    V = V + mu*dV;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = sum(nuclearnormJ)+lambda*comp_loss(E,loss)+alpha*comp_loss(Z,'l21');\r\nerr = sqrt(norm(dY(:))^2+norm(dW(:))^2+norm(dV(:))^2);\r\n \r\nfunction X = prox_tensor_l21(B,lambda)\r\n% proximal operator of tensor l21-norm, i.e., the sum of the l2 norm of all\r\n% tubes of a tensor. \r\n% \r\n% X     -   n1*n2*n3 tensor\r\n% B     -   n1*n2*n3 tensor\r\n% \r\n% min_X lambda*\\sum_{i=1}^n1\\sum_{j=1}^n2 ||X(i,j,:)||_2 + 0.5*||X-B||_F^2\r\n\r\n[n1,n2,n3] = size(B);\r\nX = zeros(n1,n2,n3);\r\nfor i = 1 : n1\r\n    for j = 1 : n2\r\n        v = B(i,j,:);\r\n        nxi = norm(v(:));\r\n        if nxi > lambda\r\n            X(i,j,:) = (1-lambda/nxi)*B(i,j,:);\r\n        end        \r\n    end\r\nend\r\n"
  },
  {
    "path": "algorithms/rmsc.m",
    "content": "function [L,S,obj,err,iter] = rmsc(X,lambda,opts)\r\n\r\n% Solve the Robust Multi-view Spectral Clustering (RMSC) problem by M-ADMM\r\n%\r\n% min_{L,S_i} ||L||_*+lambda*\\sum_i ||S_i||_1,\r\n% s.t. X_i=L+S_i, i=1,...,m, L>=0, L1=1.\r\n% ---------------------------------------------\r\n% Input:\r\n%       X       -    d*n*m tensor\r\n%       lambda  -    >0, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       L       -    d*n matrix\r\n%       S       -    d*n*m tensor\r\n%       obj     -    objective function value\r\n%       err     -    residual\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 19/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\n[d,n,m] = size(X);\r\nL = zeros(d,n);\r\nS = zeros(d,n,m);\r\nZ = L;\r\nY = S;\r\ndY = S;\r\nY2 = L;\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Lk = L;\r\n    Sk = S;\r\n    Zk = Z;\r\n    % first super block {Z,S_i}\r\n    [Z,nuclearnormZ] = prox_nuclear(L+Y2/mu,1/mu);\r\n    for i = 1 : 
m\r\n        S(:,:,i) = prox_l1(-L+X(:,:,i)-Y(:,:,i)/mu,lambda/mu);\r\n    end\r\n    % second super block {L}\r\n    temp = (sum(X-S-Y/mu,3)+Z-Y2/mu)/(m+1);\r\n    L = project_simplex(temp);\r\n\r\n    for i = 1 : m\r\n        dY(:,:,i) = L+S(:,:,i)-X(:,:,i);\r\n    end\r\n    dY2 = L-Z;\r\n    chgL = max(abs(Lk(:)-L(:)));\r\n    chgZ = max(abs(Zk(:)-Z(:)));\r\n    chgS = max(abs(Sk(:)-S(:)));\r\n    chg = max([chgL chgS chgZ max(abs(dY(:))) max(abs(dY2(:)))]);\r\n    if DEBUG\r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = nuclearnormZ+lambda*norm(S(:),1);\r\n            err = sqrt(norm(dY(:))^2+norm(dY2,'fro')^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y = Y + mu*dY;\r\n    Y2 = Y2 + mu*dY2;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = nuclearnormZ+lambda*norm(S(:),1);\r\nerr = sqrt(norm(dY(:))^2+norm(dY2,'fro')^2);\r\n\r\n"
  },
  {
    "path": "algorithms/rpca.m",
    "content": "function [L,S,obj,err,iter] = rpca(X,lambda,opts)\r\n\r\n% Solve the Robust Principal Component Analysis minimization problem by M-ADMM\r\n%\r\n% min_{L,S} ||L||_*+lambda*loss(S), s.t. X=L+S\r\n% loss(S) = ||S||_1 or ||S||_{2,1}\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       X       -    d*n matrix\r\n%       lambda  -    >0, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.loss       -   'l1' (default): loss(S) = ||S||_1 \r\n%                               'l21': loss(S) = ||S||_{2,1}\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       L       -    d*n matrix\r\n%       S       -    d*n matrix\r\n%       obj     -    objective function value\r\n%       err     -    residual \r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 19/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\nloss = 'l1';\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'loss');        loss = opts.loss;            end\r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\n\r\n[d,n] = size(X);\r\n\r\nL = zeros(d,n);\r\nS = L;\r\nY = L;\r\n\r\niter = 
0;\r\nfor iter = 1 : max_iter\r\n    Lk = L;\r\n    Sk = S;\r\n    % update L\r\n    [L,nuclearnormL] = prox_nuclear(-S+X-Y/mu,1/mu);\r\n    % update S\r\n    if strcmp(loss,'l1')\r\n        S = prox_l1(-L+X-Y/mu,lambda/mu);\r\n    elseif strcmp(loss,'l21')\r\n        S = prox_l21(-L+X-Y/mu,lambda/mu);\r\n    else\r\n        error('not supported loss function');\r\n    end\r\n  \r\n    dY = L+S-X;\r\n    chgL = max(max(abs(Lk-L)));\r\n    chgS = max(max(abs(Sk-S)));\r\n    chg = max([chgL chgS max(abs(dY(:)))]);\r\n    if DEBUG\r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = nuclearnormL+lambda*comp_loss(S,loss);\r\n            err = norm(dY,'fro');\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y = Y + mu*dY;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = nuclearnormL+lambda*comp_loss(S,loss);\r\nerr = norm(dY,'fro');\r\n\r\nfunction out = comp_loss(E,loss)\r\n\r\nswitch loss\r\n    case 'l1'\r\n        out = norm(E(:),1);\r\n    case 'l21'\r\n        out = 0;\r\n        for i = 1 : size(E,2)\r\n            out = out + norm(E(:,i));\r\n        end\r\nend\r\n"
  },
  {
    "path": "algorithms/sparsesc.m",
    "content": "function [P,obj,err,iter] = sparsesc(L,lambda,k,opts)\r\n\r\n% Solve the Sparse Spectral Clustering problem\r\n%\r\n% min_P <P,L>+lambda*||P||_1, s.t. 0\\preceq P \\preceq I, Tr(P)=k\r\n%\r\n% Reference: Canyi Lu, Shuicheng Yan, Zhouchen Lin, Convex Sparse Spectral\r\n% Clustering: Single-view to Multi-view, TIP, 2016\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       L       -    n*n normalized Laplacian matrix matrix\r\n%       k       -    integer\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       P       -    n*n matrix\r\n%       obj     -    objective function value\r\n%       err     -    residual ||AX-B||_F\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 18/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\n\r\nn = size(L,1);\r\nP = zeros(n);\r\nQ = P;\r\nY = P;\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Pk = P;\r\n    Qk = Q;\r\n    % update P\r\n    P = prox_l1(Q-(Y+L)/mu,lambda/mu);\r\n    % update 
Q\r\n    temp = P+Y/mu;\r\n    temp = (temp+temp')/2;\r\n    Q = project_fantope(temp,k);\r\n    \r\n    dY = P-Q;\r\n    chgP = max(max(abs(Pk-P)));\r\n    chgQ = max(max(abs(Qk-Q)));\r\n    chg = max([chgP chgQ max(abs(dY(:)))]);\r\n    if DEBUG        \r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = trace(P'*L)+lambda*norm(Q(:),1);\r\n            err = norm(dY,'fro');\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y = Y + mu*dY;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = trace(P'*L)+lambda*norm(Q(:),1);\r\nerr = norm(dY,'fro');"
  },
  {
    "path": "algorithms/tracelasso.m",
    "content": "function [x,obj,err,iter] = tracelasso(A,b,opts)\r\n\r\n% Solve the trace Lasso minimization problem by ADMM\r\n%\r\n% min_x ||A*Diag(x)||_*, s.t. Ax=b\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       A       -    d*n matrix\r\n%       b       -    d*1 vector\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       x       -    n*1 vector\r\n%       obj     -    objective function value\r\n%       err     -    residual\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 18/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\n[d,n] = size(A);\r\nx = zeros(n,1);\r\nZ = zeros(d,n);\r\nY1 = zeros(d,1);\r\nY2 = Z;\r\nAtb = A'*b;\r\nAtA = A'*A;\r\ninvAtA = (AtA+diag(diag(AtA)))\\eye(n);\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    xk = x;\r\n    Zk = Z;\r\n    % update x\r\n    x = invAtA*(-A'*Y1/mu+Atb+diagAtB(A,-Y2/mu+Z));\r\n    % update Z\r\n    [Z,nuclearnorm] = prox_nuclear(A*diag(x)+Y2/mu,1/mu);\r\n\r\n    dY1 = A*x-b;\r\n 
   dY2 = A*diag(x)-Z;\r\n    chgx = max(abs(xk-x));\r\n    chgZ = max(abs(Zk-Z));\r\n    chg = max([chgx chgZ max(abs(dY1(:))) max(abs(dY2(:)))]);\r\n    if DEBUG        \r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = nuclearnorm;\r\n            err = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y1 = Y1 + mu*dY1;\r\n    Y2 = Y2 + mu*dY2;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = nuclearnorm;\r\nerr = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n\r\nfunction v = diagAtB(A,B)\r\n% A, B - d*n matrices\r\n% v = diag(A'*B), n*1 vector\r\n\r\nn = size(A,2);\r\nv = zeros(n,1);\r\nfor i = 1 : n\r\n   v(i) = A(:,i)'*B(:,i); \r\nend"
  },
  {
    "path": "algorithms/tracelassoR.m",
    "content": "function [x,e,obj,err,iter] = tracelassoR(A,b,lambda,opts)\r\n\r\n% Solve the trace Lasso regularized minimization problem by M-ADMM\r\n%\r\n% min_{x,e} loss(e)+lambda*||A*Diag(x)||_*, s.t. Ax+e=b\r\n% loss(e) = ||e||_1 or 0.5*||e||_2^2\r\n% ---------------------------------------------\r\n% Input:\r\n%       A       -    d*n matrix\r\n%       b       -    d*1 vector\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.loss       -   'l1' (default): loss(e) = ||e||_1 \r\n%                               'l2': loss(e) = 0.5*||e||_2^2\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       x       -    n*1 vector\r\n%       e       -    d*1 vector\r\n%       obj     -    objective function value\r\n%       err     -    residual \r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 18/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\nloss = 'l1';\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend\r\nif isfield(opts, 'loss');        loss = opts.loss;            end\r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\n[d,n] = size(A);\r\nx = zeros(n,1);\r\nZ = zeros(d,n);\r\ne = zeros(d,1);\r\nY1 
= e;\r\nY2 = Z;\r\n\r\nAtb = A'*b;\r\nAtA = A'*A;\r\ninvAtA = (AtA+diag(diag(AtA)))\\eye(n);\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    xk = x;\r\n    ek = e;\r\n    Zk = Z;    \r\n    % first super block {Z,e}\r\n    [Z,nuclearnorm] = prox_nuclear(A*diag(x)-Y2/mu,lambda/mu);\r\n    if strcmp(loss,'l1')\r\n        e = prox_l1(b-A*x-Y1/mu,1/mu);\r\n    elseif strcmp(loss,'l2')\r\n        e = mu*(b-A*x-Y1/mu)/(1+mu);\r\n    else\r\n        error('not supported loss function');\r\n    end    \r\n    % second super block {x}\r\n    x = invAtA*(-A'*(Y1/mu+e)+Atb+diagAtB(A,Y2/mu+Z));\r\n    dY1 = A*x+e-b;\r\n    dY2 = Z-A*diag(x);\r\n    chgx = max(abs(xk-x));\r\n    chge = max(abs(ek-e));\r\n    chgZ = max(max(abs(Zk-Z)));\r\n    chg = max([chgx chge chgZ max(abs(dY1(:))) max(abs(dY2(:)))]);\r\n    if DEBUG        \r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = comp_loss(e,loss)+lambda*nuclearnorm;    \r\n            err = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y1 = Y1 + mu*dY1;\r\n    Y2 = Y2 + mu*dY2;\r\n    mu = min(rho*mu,max_mu);\r\nend\r\nobj = comp_loss(e,loss)+lambda*nuclearnorm;\r\nerr = sqrt(norm(dY1,'fro')^2+norm(dY2,'fro')^2);\r\n\r\nfunction v = diagAtB(A,B)\r\n% A, B - d*n matrices\r\n% v = diag(A'*B), n*1 vector\r\n\r\nn = size(A,2);\r\nv = zeros(n,1);\r\nfor i = 1 : n\r\n   v(i) = A(:,i)'*B(:,i); \r\nend\r\n"
  },
  {
    "path": "algorithms/trpca_snn.m",
    "content": "function [L,E,err,iter] = trpca_snn(X,alpha,opts)\r\n\r\n% Solve the Tensor Robust Principal Component Analysis (TRPCA) based on Sum of Nuclear Norm (SNN) problem by M-ADMM\r\n%\r\n% min_{L,E} \\sum_i \\alpha_i*||L_{i(i)}||_* + ||E||_1,\r\n% s.t. X = L + E.\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       X       -    d1*d2*...dk tensor\r\n%       alpha   -    k*1 vector, parameters\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       L       -    d1*d2*...*dk tensor\r\n%       E       -    d1*d2*...*dk tensor\r\n%       err     -    residual\r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 24/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\ndim = size(X);\r\nk = length(dim);\r\n\r\nE = zeros(dim);\r\nY = cell(k,1);\r\nL = Y;\r\nfor i = 1 : k\r\n    Y{i} = E;\r\n    L{i} = E;\r\nend\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Lk = L;\r\n    Ek = E;\r\n    % first super block {L_i}\r\n    sumtemp = 
zeros(dim);\r\n    for i = 1 : k\r\n        L{i} = Fold(prox_nuclear(Unfold(X-E-Y{i}/mu,dim,i), alpha(i)/mu),dim,i);\r\n        sumtemp = sumtemp + L{i} + Y{i}/mu;\r\n    end\r\n    % second super block {E}\r\n    E = prox_l1(X-sumtemp/k,1/(mu*k));\r\n    \r\n    chg = max(abs(Ek(:)-E(:)));\r\n    err = 0;\r\n    for i = 1 : k\r\n        dY = L{i}+E-X;\r\n        err = err+norm(dY(:))^2;\r\n        Y{i} = Y{i}+mu*dY;\r\n        chg = max([chg, max(abs(dY(:))), max(abs(Lk{i}(:)-L{i}(:)))]);\r\n    end\r\n    err = sqrt(err);\r\n\r\n    if DEBUG\r\n        if iter == 1 || mod(iter, 10) == 0\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    if chg < tol\r\n        break;\r\n    end \r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nL = L{1};\r\n"
  },
  {
    "path": "algorithms/trpca_tnn.m",
    "content": "function [L,S,obj,err,iter] = trpca_tnn(X,lambda,opts)\r\n\r\n% Solve the Tensor Robust Principal Component Analysis based on Tensor Nuclear Norm problem by ADMM\r\n%\r\n% min_{L,S} ||L||_*+lambda*||S||_1, s.t. X=L+S\r\n%\r\n% ---------------------------------------------\r\n% Input:\r\n%       X       -    d1*d2*d3 tensor\r\n%       lambda  -    >0, parameter\r\n%       opts    -    Structure value in Matlab. The fields are\r\n%           opts.tol        -   termination tolerance\r\n%           opts.max_iter   -   maximum number of iterations\r\n%           opts.mu         -   stepsize for dual variable updating in ADMM\r\n%           opts.max_mu     -   maximum stepsize\r\n%           opts.rho        -   rho>=1, ratio used to increase mu\r\n%           opts.DEBUG      -   0 or 1\r\n%\r\n% Output:\r\n%       L       -    d1*d2*d3 tensor\r\n%       S       -    d1*d2*d3 tensor\r\n%       obj     -    objective function value\r\n%       err     -    residual \r\n%       iter    -    number of iterations\r\n%\r\n% version 1.0 - 19/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n% References: \r\n% [1] Canyi Lu, Jiashi Feng, Yudong Chen, Wei Liu, Zhouchen Lin and Shuicheng\r\n%     Yan, Tensor Robust Principal Component Analysis with A New Tensor Nuclear\r\n%     Norm, arXiv preprint arXiv:1804.03728, 2018\r\n% [2] Canyi Lu, Jiashi Feng, Yudong Chen, Wei Liu, Zhouchen Lin and Shuicheng\r\n%     Yan, Tensor Robust Principal Component Analysis: Exact Recovery of Corrupted \r\n%     Low-Rank Tensors via Convex Optimization, arXiv preprint arXiv:1804.03728, 2018\r\n%\r\n\r\ntol = 1e-8; \r\nmax_iter = 500;\r\nrho = 1.1;\r\nmu = 1e-4;\r\nmax_mu = 1e10;\r\nDEBUG = 0;\r\n\r\nif ~exist('opts', 'var')\r\n    opts = [];\r\nend    \r\nif isfield(opts, 'tol');         tol = opts.tol;              end\r\nif isfield(opts, 'max_iter');    max_iter = opts.max_iter;    end\r\nif isfield(opts, 'rho');         rho = opts.rho;              end\r\nif 
isfield(opts, 'mu');          mu = opts.mu;                end\r\nif isfield(opts, 'max_mu');      max_mu = opts.max_mu;        end\r\nif isfield(opts, 'DEBUG');       DEBUG = opts.DEBUG;          end\r\n\r\ndim = size(X);\r\nL = zeros(dim);\r\nS = L;\r\nY = L;\r\n\r\niter = 0;\r\nfor iter = 1 : max_iter\r\n    Lk = L;\r\n    Sk = S;\r\n    % update L\r\n    [L,tnnL] = prox_tnn(-S+X-Y/mu,1/mu);\r\n    % update S\r\n    S = prox_l1(-L+X-Y/mu,lambda/mu);\r\n  \r\n    dY = L+S-X;\r\n    chgL = max(abs(Lk(:)-L(:)));\r\n    chgS = max(abs(Sk(:)-S(:)));\r\n    chg = max([ chgL chgS max(abs(dY(:))) ]);\r\n    if DEBUG\r\n        if iter == 1 || mod(iter, 10) == 0\r\n            obj = tnnL+lambda*norm(S(:),1);\r\n            err = norm(dY(:));\r\n            disp(['iter ' num2str(iter) ', mu=' num2str(mu) ...\r\n                    ', obj=' num2str(obj) ', err=' num2str(err)]); \r\n        end\r\n    end\r\n    \r\n    if chg < tol\r\n        break;\r\n    end \r\n    Y = Y + mu*dY;\r\n    mu = min(rho*mu,max_mu);    \r\nend\r\nobj = tnnL+lambda*norm(S(:),1);\r\nerr = norm(dY(:));\r\n"
  },
  {
    "path": "example_low_rank_matrix_models.m",
    "content": "%\n% References:\n%\n% C. Lu. A Library of ADMM for Sparse and Low-rank Optimization. National University of Singapore, June 2016.\n% https://github.com/canyilu/LibADMM.\n% C. Lu, J. Feng, S. Yan, Z. Lin. A Unified Alternating Direction Method of Multipliers by Majorization \n% Minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, pp. 527-541, 2018\n%\n\n\naddpath(genpath(cd))\nclear\n\n%% Examples for testing the low-rank matrix based models\n% For a detailed description of the low-rank matrix models, please refer to the Manual.\n\n\n%% generate toy data\nd = 10;\nna = 200;\nnb = 100;\n\nA = randn(d,na);\nX = randn(na,nb);\nB = A*X;\nb = B(:,1);\n\nopts.tol = 1e-6; \nopts.max_iter = 1000;\nopts.rho = 1.2;\nopts.mu = 1e-3;\nopts.max_mu = 1e10;\nopts.DEBUG = 0;\n\n\n%% RPCA\nn1 = 100;\nn2 = 200;\nr = 10;\nL = rand(n1,r)*rand(r,n2); % low-rank part\n\np = 0.1;\nm = p*n1*n2;\ntemp = rand(n1*n2,1);\n[~,I] = sort(temp);\nI = I(1:m);\nOmega = zeros(n1,n2);\nOmega(I) = 1;\nE = sign(rand(n1,n2)-0.5);\nS = Omega.*E; % sparse part, S = P_Omega(E)\n\nXn = L+S;\n\nlambda = 1/sqrt(max(n1,n2));\nopts.loss = 'l1'; \nopts.DEBUG = 1;\ntic\n[Lhat,Shat,obj,err,iter] = rpca(Xn,lambda,opts);\ntoc\nrel_err_L = norm(L-Lhat,'fro')/norm(L,'fro')\nrel_err_S = norm(S-Shat,'fro')/norm(S,'fro')\n\nerr\niter\n\n\n%% low rank matrix completion (lrmc) and regularized lrmc\n\nn1 = 100;\nn2 = 200;\nr = 5;\nX = rand(n1,r)*rand(r,n2);\n\np = 0.6;\nomega = find(rand(n1,n2)<p);\nM = zeros(n1,n2);\nM(omega) = X(omega);\n[Xhat,obj,err,iter] = lrmc(M, omega, opts);\nrel_err_X = norm(Xhat-X,'fro')/norm(X,'fro')\n \nE = randn(n1,n2)/100;\nM = X+E;\nlambda = 0.1;\n[Xhat,obj,err,iter] = lrmcR(M, omega, lambda, opts);\n\n\n%% low rank representation (lrr)\nlambda = 0.001;\nopts.loss = 'l21'; \ntic\n[X,E,obj,err,iter] = lrr(A,A,lambda,opts);\ntoc\nobj\nerr\niter\n\n%% latent LRR (latlrr)\nlambda = 0.1;\nopts.loss = 'l1'; \ntic\n[Z,L,obj,err,iter] = 
latlrr(A,lambda,opts);\ntoc\nobj\nerr\niter\n\n%% low rank and sparse representation (lrsr)\nlambda1 = 0.1;\nlambda2 = 4;\nopts.loss = 'l21'; \ntic\n[X,E,obj,err,iter] = lrsr(A,B,lambda1,lambda2,opts);\ntoc\nobj\nerr\niter\n\n%% improved graph clustering (igc)\nn = 100;\nr = 5;\nX = rand(n,r)*rand(r,n);\nC = rand(size(X));\nlambda = 1/sqrt(n);\nopts.loss = 'l1'; \nopts.DEBUG = 1;\ntic\n[L,S,obj,err,iter] = igc(X,C,lambda,opts);\ntoc\nerr\niter\n\n%% multi-task low-rank affinity pursuit (mlap)\nn1 = 100;\nn2 = 200;\nK = 10;\nX = rand(n1,n2,K);\nlambda = 0.1;\nalpha = 0.2;\nopts.loss = 'l1'; \ntic\n[Z,E,obj,err,iter] = mlap(X,lambda,alpha,opts);\ntoc\nerr\niter\n\n%% robust multi-view spectral clustering (rmsc)\nn = 100;\nr = 5;\nm = 10;\nX = rand(n,n,m);\nlambda = 1/sqrt(n);\nopts.loss = 'l1'; \nopts.DEBUG = 1;\ntic\n[L,S,obj,err,iter] = rmsc(X,lambda,opts);\ntoc\nerr\niter\n\n%% sparse spectral clustering (sparsesc)\nlambda = 0.001;\nn = 100;\nX = rand(n,n);\nW = abs(X'*X);\nI = eye(n);\nD = diag(sum(W,1));\nL = I - sqrt(inv(D))*W*sqrt(inv(D));\nk = 5;\n[P,obj,err,iter] = sparsesc(L,lambda,k,opts);\nobj\nerr\niter\n\n\n \n\n\n"
  },
  {
    "path": "example_low_rank_tensor_models.m",
    "content": "%\n% References:\n%\n% C. Lu. A Library of ADMM for Sparse and Low-rank Optimization. National University of Singapore, June 2016.\n% https://github.com/canyilu/LibADMM.\n% C. Lu, J. Feng, S. Yan, Z. Lin. A Unified Alternating Direction Method of Multipliers by Majorization \n% Minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, pp. 527-541, 2018\n%\n\n\naddpath(genpath(cd))\nclear\n\n%% Examples for testing the low-rank tensor models\n% For a detailed description of the low-rank tensor models, please refer to the Manual.\n\n\nopts.mu = 1e-6;\nopts.rho = 1.1;\nopts.max_iter = 500;\nopts.DEBUG = 1;\n\n\n%% Tensor RPCA based on sum of nuclear norm minimization (trpca_snn)\nn1 = 50;\nn2 = n1;\nn3 = n1;\nr = 5\nL = rand(r,r,r);\nU1 = rand(n1,r);\nU2 = rand(n2,r);\nU3 = rand(n3,r);\nL = nmodeproduct(L,U1,1);\nL = nmodeproduct(L,U2,2);\nL = nmodeproduct(L,U3,3); % low rank part\n\np = 0.05;\nm = p*n1*n2*n3;\ntemp = rand(n1*n2*n3,1);\n[~,I] = sort(temp);\nI = I(1:m);\nOmega = zeros(n1,n2,n3);\nOmega(I) = 1;\nE = sign(rand(n1,n2,n3)-0.5);\nS = Omega.*E; % sparse part, S = P_Omega(E)\n\nXn = L+S;\n\nlambda = sqrt([max(n1,n2*n3), max(n2,n1*n3), max(n3,n1*n2)]);\nlambda = [1 1 1]\n[Lhat,Shat,err,iter] = trpca_snn(Xn,lambda,opts);\n\nerr\niter\n\n\n%% low-rank tensor completion based on sum of nuclear norm minimization (lrtc_snn) \nn1 = 50;\nn2 = n1;\nn3 = n1;\nr = 5;\nX = rand(r,r,r);\nU1 = rand(n1,r);\nU2 = rand(n2,r);\nU3 = rand(n3,r);\nX = nmodeproduct(X,U1,1);\nX = nmodeproduct(X,U2,2);\nX = nmodeproduct(X,U3,3);\np = 0.5;\nomega = find(rand(n1*n2*n3,1)<p);\nM = zeros(n1,n2,n3);\nM(omega) = X(omega);\n\nlambda = [1 1 1];\n[Xhat,err,iter] = lrtc_snn(M,omega,lambda,opts);\nerr\niter\nRSE = norm(X(:)-Xhat(:))/norm(X(:))\n\n%% regularized low-rank tensor completion based on sum of nuclear norm minimization (lrtcR_snn)\nn1 = 50;\nn2 = n1;\nn3 = n1;\nr = 5;\nX = rand(r,r,r);\nU1 = rand(n1,r);\nU2 = rand(n2,r);\nU3 = rand(n3,r);\nX = 
nmodeproduct(X,U1,1);\nX = nmodeproduct(X,U2,2);\nX = nmodeproduct(X,U3,3);\np = 0.5;\nomega = find(rand(n1*n2*n3,1)<p);\nM = zeros(n1,n2,n3);\nM(omega) = X(omega);\nlambda = [1 1 1];\n[Xhat,err,iter] = lrtcR_snn(M,omega,lambda,opts);\nerr\niter\n\n\n%% Tensor RPCA based on tensor nuclear norm minimization (trpca_tnn)\nn1 = 50;\nn2 = n1;\nn3 = n1;\nr = 0.1*n1 % tubal rank\nL1 = randn(n1,r,n3)/n1;\nL2 = randn(r,n2,n3)/n2;\nL = tprod(L1,L2); % low rank part\n\np = 0.1;\nm = p*n1*n2*n3;\ntemp = rand(n1*n2*n3,1);\n[~,I] = sort(temp);\nI = I(1:m);\nOmega = zeros(n1,n2,n3);\nOmega(I) = 1;\nE = sign(rand(n1,n2,n3)-0.5);\nS = Omega.*E; % sparse part, S = P_Omega(E)\n\nXn = L+S;\nlambda = 1/sqrt(n3*max(n1,n2));\n\ntic\n[Lhat,Shat] = trpca_tnn(Xn,lambda,opts);\n\nRES_L = norm(L(:)-Lhat(:))/norm(L(:))\nRES_S = norm(S(:)-Shat(:))/norm(S(:))\ntrank = tubalrank(Lhat)\n\n\n\n%% low-rank tensor completion based on tensor nuclear norm minimization (lrtc_tnn)\nn1 = 50;\nn2 = n1;\nn3 = n1;\nr = 0.1*n1 % tubal rank\nL1 = randn(n1,r,n3)/n1;\nL2 = randn(r,n2,n3)/n2;\nX = tprod(L1,L2); % low rank part\np = 0.5;\nomega = find(rand(n1*n2*n3,1)<p);\nM = zeros(n1,n2,n3);\nM(omega) = X(omega);\n\n[Xhat,obj,err,iter] = lrtc_tnn(M,omega,opts);\n\nerr\niter\nRSE = norm(X(:)-Xhat(:))/norm(X(:))\ntrank = tubalrank(Xhat)\n\n\n\n%% regularized low-rank tensor completion based on tensor nuclear norm minimization (lrtcR_tnn) \nn1 = 50;\nn2 = n1;\nn3 = n1;\nr = 0.1*n1 % tubal rank\nL1 = randn(n1,r,n3)/n1;\nL2 = randn(r,n2,n3)/n2;\nX = tprod(L1,L2); % low rank part\np = 0.5;\nomega = find(rand(n1*n2*n3,1)<p);\nM = zeros(n1,n2,n3);\nM(omega) = X(omega);\n\nlambda = 0.5;\n[Xhat,Ehat,obj,err,iter] = lrtcR_tnn(M,omega,lambda,opts);\nerr\niter\n\n\n%% low-rank tensor recovery from Gaussian measurements based on tensor nuclear norm minimization (lrtr_Gaussian_tnn)\nn1 = 30;\nn2 = n1; \nn3 = 5;\nr = 0.2*n1; % tubal rank\nX = tprod(randn(n1,r,n3),randn(r,n2,n3)); % size: n1*n2*n3\n\nm = 3*r*(n1+n2-r)*n3+1; % 
number of measurements\nn = n1*n2*n3;\nA = randn(m,n)/sqrt(m);\n\nb = A*X(:);\nXsize.n1 = n1;\nXsize.n2 = n2;\nXsize.n3 = n3;\n\nopts.DEBUG = 1;\n[Xhat,obj,err,iter]  = lrtr_Gaussian_tnn(A,b,Xsize,opts);\n\nRSE = norm(Xhat(:)-X(:))/norm(X(:))\ntrank = tubalrank(Xhat)\n\n"
  },
  {
    "path": "example_sparse_models.m",
    "content": "%\n% References:\n%\n% C. Lu. A Library of ADMM for Sparse and Low-rank Optimization. National University of Singapore, June 2016.\n% https://github.com/canyilu/LibADMM.\n% C. Lu, J. Feng, S. Yan, Z. Lin. A Unified Alternating Direction Method of Multipliers by Majorization \n% Minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, pp. 527-541, 2018\n%\n\n\naddpath(genpath(cd))\nclear\n\n%% Examples for testing the sparse models\n% For detailed description of the sparse models, please refer to the Manual.\n\n\n%% generate toy data\nd = 10;\nna = 200;\nnb = 100;\n\nA = randn(d,na);\nX = randn(na,nb);\nB = A*X;\nb = B(:,1);\n\nopts.tol = 1e-6; \nopts.max_iter = 1000;\nopts.rho = 1.1;\nopts.mu = 1e-4;\nopts.max_mu = 1e10;\nopts.DEBUG = 0;\n\n%% l1\n[X2,obj,err,iter] = l1(A,B,opts);\niter\nobj\nerr\nstem(X2(:,1))\n\n%% group l1\ng_num = 5;\ng_len = round(na/g_num);\nfor i = 1 : g_num-1\n    G{i} = (i-1)*g_len+1 : i*g_len;\nend\nG{g_num} = (g_num-1)*g_len+1:na;\n\n[X2,obj,err,iter] = groupl1(A,B,G,opts);\niter\nobj\nerr\nstem(X2(:,1))\n\n%% elastic net\nlambda = 0.01;\n[X2,obj,err,iter] = elasticnet(A,B,lambda,opts);\niter\nobj\nerr\nstem(X2(:,1))\n\n%% fused Lasso\nlambda = 0.01;\n[x,obj,err,iter] = fusedl1(A,b,lambda,opts);\niter\nobj\nerr\nstem(x)\n\n%% trace Lasso\n[x,obj,err,iter] = tracelasso(A,b,opts);\niter\nobj\nerr\nstem(x)\n\n%% k-support norm \nk = 10;\n[X,err,iter] = ksupport(A,B,k,opts);\niter\nerr\nstem(X(:,1));\n\n%% --------------------------------------------------------------\n\n%% regularized l1\nlambda = 0.01;\nopts.loss = 'l1'; \n[X,E,obj,err,iter] = l1R(A,B,lambda,opts);\niter\nobj\nerr\nstem(X(:,1)) \n\n%% regularized group Lasso\ng_num = 5;\ng_len = round(na/g_num);\n\nfor i = 1 : g_num-1\n    G{i} = (i-1)*g_len+1 : i*g_len;\nend\nG{g_num} = (g_num-1)*g_len+1:na;\nlambda = 1;\nopts.loss = 'l1'; \n[X,E,obj,err,iter] = groupl1R(A,B,G,lambda,opts);\niter\nobj\nerr\nstem(X(:,1))\n \n%% regularized 
elastic net\nlambda1 = 10;\nlambda2 = 10;\nopts.loss = 'l1'; \n[X,E,obj,err,iter] = elasticnetR(A,B,lambda1,lambda2,opts);\niter\nobj\nerr\nstem(X(:,1))\n% stem(E(:,1))\n\n%% regularized fused Lasso\nlambda1 = 10;\nlambda2 = 10;\nopts.loss = 'l1';\n[X,E,obj,err,iter] = fusedl1R(A,b,lambda1,lambda2,opts);\niter\nobj\nerr\nstem(X(:,1))\nstem(E(:,1))\n\n\n%% regularized trace Lasso\nlambda = 0.1;\nopts.loss = 'l1'; \ntic\n[x,e,obj,err,iter] = tracelassoR(A,b,lambda,opts);\ntoc\niter\nobj\nerr\nstem(x)\n\n%% regularized k-support norm\nlambda = 0.1;\nk = 10;\n[X,E,err,iter] = ksupportR(A,B,lambda,k,opts);\niter\nerr\nstem(X(:,1));\n\n"
  },
  {
    "path": "proximal_operators/cappedsimplexprojection.cpp",
    "content": "#include <iostream>\n#include <vector>\n#include <algorithm>\n#include \"mex.h\"\n\nusing namespace std;\n\nstruct mypair\n{\n  double number;\n  int index;\n  \n  void setval(double n, int i)\n  {\n    number=n;\n    index=i;\n  }\n  \n};\n\nbool mycompare(mypair l, mypair r)\n{\n  return (l.number<r.number);\n}\n\nvoid cappedsimplexprojection(int N, double * y, double s, double * x, double * e)\n{\n  int i,j;\n  \n  if ((s<0)||(s>N)){\n    cout<<\"impossible sum constraint!\\n\"<<endl;\n    exit(-1);\n  }\n  \n  if (s==0){\n    *e=0;\n    for(i=0;i<N;i++){\n      x[i]=0;\n      (*e)+=y[i]*y[i];\n    }\n    (*e)*=0.5;\n    return;\n  }\n  \n  if (s==N){\n    *e=0;\n    for(i=0;i<N;i++){\n      x[i]=1;\n      (*e)+=(1-y[i])*(1-y[i]);\n    }\n    (*e)*=0.5;\n    return;\n  }\n  \n  // Sort y into ascending order.\n  vector<mypair> v(N);\n  for(i=0;i<N;i++){\n    v[i].setval(y[i],i);\n  }\n  sort(v.begin(),v.end(),mycompare);\n  \n//   double T[N+1];  T[0]=0;\n//   malloc(sizeof(double)*N)\n//   double *T;\n//   T=(double*)malloc(N+1);\n//   T[0]=0;\n\n  double *T;\n  T = new double[N+1];\n  T[0]=0;\n\n      \n\n  // Compute partial sums.\n  for(i=1; i<=N; i++) T[i]=T[i-1]+v[i-1].number;\n  \n  double gamma;\n  // i is the number of 0's in the solution.\n  // j is the number of (0,1)'s in the solution.\n  bool flag=false;\n  for(i=0;i<=N;i++){\n    \n    // i==j\n    if ((i+s)==N)\n      if((i==0) || (v[i].number>=v[i-1].number+1)){\n        j=i;\n        flag=true;\n        break;\n      }\n    \n    // i<j\n    for(j=i+1;j<=N;j++){\n      gamma=(s+j-N+T[i]-T[j])/(j-i);\n      //cout<<\"gamma=\"<<gamma<<endl;\n       \n      if (i==0)\n        if (j==N) {\n          if ((v[i].number+gamma>0) && (v[j-1].number+gamma<1)) {flag=true;break;}\n        }\n        else {\n          if ((v[i].number+gamma>0) && (v[j-1].number+gamma<1) && (v[j].number+gamma>=1)) {flag=true; break;}\n        }\n      else\n        if (j==N) {\n          if 
((v[i-1].number+gamma<=0) && (v[i].number+gamma>0) && (v[j-1].number+gamma<1)) {flag=true;break;}\n        }\n        else {\n          if ((v[i-1].number+gamma<=0) && (v[i].number+gamma>0) && (v[j-1].number+gamma<1) && (v[j].number+gamma>=1)) {flag=true;break;}\n        }\n    }\n    \n    if(flag) break;\n  }\n  \n  // get the solution in original order.\n  *e=0;\n  int k;\n  for(k=0;k<i;k++){\n    x[v[k].index]=0;\n    (*e)+=(v[k].number)*(v[k].number);\n  }\n  \n  for(k=i;k<j;k++){\n    x[v[k].index]=v[k].number+gamma;\n    (*e)+=gamma*gamma;\n  }\n  \n  for(k=j;k<N;k++){\n    x[v[k].index]=1;\n    (*e)+=(1-v[k].number)*(1-v[k].number);\n  }\n  \n  // free the partial-sum buffer to avoid leaking memory on every MEX call\n  delete [] T;\n  \n  (*e)*=0.5;\n}\n\nvoid mexFunction( int nlhs, mxArray *plhs[],\n        int nrhs, const mxArray *prhs[])\n{\n  \n  /* check for proper number of arguments */\n  if(nrhs!=2)\n    mexErrMsgIdAndTxt(\"projection:invalidNumInputs\", \"Two inputs (y,s) required.\");\n  \n  int M=mxGetM(prhs[0]);\n  int N=mxGetN(prhs[0]);\n  if((M!=1)&&(N!=1))\n    mexErrMsgIdAndTxt(\"projection:invalidDimensions\", \"First argument y needs to be a vector.\");\n  int Length=(N>M)?N:M;\n  \n  plhs[0] = mxCreateDoubleMatrix((mwSize)M, (mwSize)N, mxREAL);\n  plhs[1] = mxCreateDoubleMatrix((mwSize)1, (mwSize)1, mxREAL);\n  \n  double * y=mxGetPr(prhs[0]);\n  double s=mxGetScalar(prhs[1]);\n  double * x=mxGetPr(plhs[0]);\n  double * e =mxGetPr(plhs[1]);\n  \n  cappedsimplexprojection(Length, y, s, x, e);\n}\n\n/*\n * int main(int argc,char * argv[])\n * {\n *\n * int N=6;\n *\n * double y[6]={0.5377,    1.8339,    -2.2588,    0.8622,    0.3188,   -1.3077};\n * double s=10;\n * double d[6]={0.2785,    0.5469,    0.9575,    0.9649,    0.1576,    0.9706};\n *\n * double x[6];\n * double alpha;\n *\n * cappedsimplexprojection(N, y, s, d, x, &alpha);\n *\n * cout<<alpha<<endl;\n *\n * for(int i=0;i<N;i++)\n * cout<<x[i]<<\"   \";\n * cout<<endl;\n * }\n */\n\n\n\n"
  },
  {
    "path": "proximal_operators/cappedsimplexprojection_matlab.m",
    "content": "function [x,e]= cappedsimplexprojection_matlab(y0,k)\n\n% This subroutine solves the capped simplex projection problem\n% min 0.5||x-y0||^2, s.t. 0<=x<=1, sum x_i = k;\n% Reference: Weiran Wang, Canyi Lu, Projection onto the Capped Simplex, arXiv:1503.01002.\n \n\nn=length(y0);\nx=zeros(n,1);\n\nif (k<0) || (k>n)\n  error('the sum constraint is infeasible!\\n');\nend\n\nif k==0\n  e=0.5*sum((x-y0).^2);\n  return;\nend\n\nif k==n\n  x=ones(n,1);\n  e=0.5*sum((x-y0).^2);\n  return;\nend\n[y,idx]=sort(y0,'ascend');\n\n% Test the possibility that the solution is integral (a==b).\nif k==round(k)\n  b=n-k;\n  if y(b+1)-y(b)>=1\n    x(idx(b+1:end))=1;\n    e=0.5*sum((x-y0).^2);\n    return;\n  end\nend\n\n% Assume a=0.\ns=cumsum(y);\ny=[y;inf];\nfor b=1:n\n  % Hypothesized gamma.\n  gamma = (k+b-n-s(b)) / b;\n  if ((y(1)+gamma)>0) && ((y(b)+gamma)<1) && ((y(b+1)+gamma)>=1)\n    xtmp=[y(1:b)+gamma; ones(n-b,1)];\n    x(idx)=xtmp;\n    e=0.5*sum((x-y0).^2);\n    return;\n  end\nend\n\n% Now a>=1;\nfor a=1:n\n  for b=a+1:n\n    % Hypothesized gamma.\n    gamma = (k+b-n+s(a)-s(b))/(b-a);\n    if ((y(a)+gamma)<=0) && ((y(a+1)+gamma)>0) && ((y(b)+gamma)<1) && ((y(b+1)+gamma)>=1)\n      xtmp=[zeros(a,1); y(a+1:b)+gamma; ones(n-b,1)];\n      x(idx)=xtmp;\n      e=0.5*sum((x-y0).^2);\n      return;\n    end\n  end\nend\n\n\n"
  },
  {
    "path": "proximal_operators/flsa.c",
    "content": "#include <stdlib.h>\n#include <stdio.h>\n#include <time.h>\n#include <mex.h>\n#include <math.h>\n#include \"matrix.h\"\n\n#include \"flsa.h\"\n\n\n/*\n\n  Functions contained in \"flsa.h\"\n\n1. The algorithm for sloving (1) with a given (labmda1, lambda2)\n \n  void flsa(double *x, double *z, double *info,\n\t\t  double * v, double *z0, \n\t\t  double lambda1, double lambda2, int n, \n\t\t  int maxStep, double tol, int tau, int flag)\n*/\n\n\n/*\n\n  We solve the Fused Lasso Signal Approximator (FLSA) problem:\n\n     min_x  1/2 \\|x-v\\|^2  + lambda1 * \\|x\\|_1 + lambda2 * \\|A x\\|_1,      (1)\n\n  It can be shown that, if x* is the solution to\n\n     min_x  1/2 \\|x-v\\|^2  + lambda2 \\|A x\\|_1,                            (2)\n\n  then \n     x**= sgn(x*) max(|x*|-lambda_1, 0)                                    (3)\n\n  is the solution to (1).\n\n  By some derivation (see the description in sfa.h), (2) can be solved by\n\n     x*= v - A^T z*,\n\n  where z* is the optimal solution to\n\n     min_z  1/2  z^T A AT z - < z, A v>,\n\t\tsubject to  \\|z\\|_{infty} \\leq lambda2                             (4)\n*/\n\n\n\n/*\n\n\n  In flsa, we solve (1) corresponding to a given (lambda1, lambda2)\n\n  void flsa(double *x, double *z, double *gap,\n\t\t  double * v, double *z0, \n\t\t  double lambda1, double lambda2, int n, \n\t\t  int maxStep, double tol, int flag)\n\n  Output parameters:\n      x:        the solution to problem (1)\n\t  z:        the solution to problem (4)\n\t  infor:    the information about running the subgradient finding algorithm\n\t                 infor[0] = gap:         the computed gap (either the duality gap\n\t                                            or the summation of the absolute change of the adjacent solutions)\n\t\t\t\t\t infor[1] = steps:       the number of iterations\n\t\t\t\t\t infor[2] = lambad2_max: the maximal value of lambda2_max\n\t\t\t\t\t infor[3] = numS:        the number of elements in the support 
set\n\t\t\t\t\t\t\t\t\n  Input parameters:\n      v:        the input vector to be projected\n\t  z0:       a guess of the solution of z\n\n\t  lambad1:  the regularization parameter\n\t  labmda2:  the regularization parameter\n\t  n:        the length of v and x\n\n      maxStep:  the maximal allowed iteration steps\n\t  tol:      the tolerance parameter\n\t  flag:     the flag for initialization and deciding calling sfa\n                     switch (flag)\n\t\t\t\t\t     >0: sfa\n\t\t\t\t\t\t <0: sfa_ls\n\n                     switch ( abs(flag))\n\t\t\t\t\t     case 1, 2, 3, or 4: \n\t\t\t\t\t\t               z0 is a \"good\" starting point \n\t\t\t\t\t\t               (such as the warm-start of the previous solution,\n\t\t\t\t\t\t\t\t\t   or the user want to test the performance of this starting point;\n\t\t\t\t\t\t\t\t\t   the starting point shall be further projected to the L_{infty} ball,\n\t\t\t\t\t\t\t\t\t   to make sure that it is feasible)\n\n\t\t\t\t\t\t case 11, 12, 13, or 14: z0 is a \"random\" guess, and thus not used\n\t\t\t\t\t\t               (we shall initialize z with zero if lambda2 is less than 0.5 *zMax\n\t\t\t\t\t\t\t\t\t         and otherwise initialize z with zero with the solution of the linear system;\n\t\t\t\t\t\t\t\t\t\t\t this solution is projected to the L_{infty} ball)\n\n*/\n\n\n/*\n\nWe write the wrapper for calling from Matlab\n\nvoid flsa(double *x, double *z, double *gap,\n\t\t  double * v, double *z0, \n\t\t  double lambda1, double lambda2, int n, \n\t\t  int maxStep, double tol, int flag)\n*/\n\n\nvoid mexFunction (int nlhs, mxArray* plhs[], int nrhs, const mxArray* prhs[])\n{\n    /*set up input arguments */\n    double* v=            mxGetPr(prhs[0]);\n\tdouble* z0=           mxGetPr(prhs[1]);\n\n\tdouble lambda1=       mxGetScalar(prhs[2]);\n\tdouble lambda2=       mxGetScalar(prhs[3]);\n    int     n=   (int )   mxGetScalar(prhs[4]);\n\n\tint    maxStep= (int) mxGetScalar(prhs[5]);\n\tdouble tol=           
mxGetScalar(prhs[6]);\n\tint    tau=     (int) mxGetScalar(prhs[7]);\n\tint    flag= (int)    mxGetScalar(prhs[8]);\n\t\n    \n    double *x, *z, *infor;\n    /* set up output arguments */\n    plhs[0] = mxCreateDoubleMatrix( n, 1, mxREAL); \t\n    plhs[1] = mxCreateDoubleMatrix( n-1, 1, mxREAL); \n\tplhs[2] = mxCreateDoubleMatrix( 1, 4, mxREAL);\n    x=  mxGetPr(plhs[0]);\n\tz=  mxGetPr(plhs[1]);\n\tinfor=mxGetPr(plhs[2]);\n\n\tflsa(x, z, infor,\n\t\t  v, z0, \n\t\t  lambda1, lambda2, n, \n\t\t  maxStep, tol, tau, flag);\n}\n\n"
  },
  {
    "path": "proximal_operators/flsa.h",
    "content": "#include <stdlib.h>\n#include <stdio.h>\n#include <time.h>\n#include <mex.h>\n#include <math.h>\n#include \"matrix.h\"\n\n#include \"sfa.h\"\n\n\n/*\n\nFiles contained in this header file sfa.h:\n\n1. Algorithms for solving the linear system A A^T z0 = Av (see the description of A from the following context)\n\n  void Thomas(double *zMax, double *z0, \n              double * Av, int nn)\n\n  void Rose(double *zMax, double *z0, \n            double * Av, int nn)\n\n  int supportSet(double *x, double *v, double *z, \n                 double *g, int * S, double lambda, int nn)\n\n  void dualityGap(double *gap, double *z, \n                  double *g, double *s, double *Av, \n\t\t\t\t  double lambda, int nn)\n\n  void dualityGap2(double *gap, double *z, \n                  double *g, double *s, double *Av, \n\t\t\t\t  double lambda, int nn)\n\n\n2. The Subgraident Finding Algorithm (SFA) for solving problem (4) (refer to the description of the problem for detail) \n  \n  int sfa(double *x,     double *gap,\n\t\t double *z,     double *z0,   double * v,   double * Av, \n\t\t double lambda, int nn,       int maxStep,\n\t\t double *s,     double *g,\n\t\t double tol,    int tau,       int flag)\n\n  int sfa_special(double *x,     double *gap,\n\t\t double *z,     double * v,   double * Av, \n\t\t double lambda, int nn,       int maxStep,\n\t\t double *s,     double *g,\n\t\t double tol,    int tau)\n\n  int sfa_one(double *x,     double *gap,\n\t\t double *z,     double * v,   double * Av, \n\t\t double lambda, int nn,       int maxStep,\n\t\t double *s,     double *g,\n\t\t double tol,    int tau)\n\n\n*/\n\n/*\n\n  In this file, we solve the Fused Lasso Signal Approximator (FLSA) problem:\n\n     min_x  1/2 \\|x-v\\|^2  + lambda1 * \\|x\\|_1 + lambda2 * \\|A x\\|_1,      (1)\n\n  It can be shown that, if x* is the solution to\n\n     min_x  1/2 \\|x-v\\|^2  + lambda2 \\|A x\\|_1,                            (2)\n\n  then \n     x**= sgn(x*) 
max(|x*|-lambda_1, 0)                                    (3)\n\n  is the solution to (1).\n\n  By some derivation (see the description in sfa.h), (2) can be solved by\n\n     x*= v - A^T z*,\n\n  where z* is the optimal solution to\n\n     min_z  1/2  z^T A AT z - < z, A v>,\n\t\tsubject to  \\|z\\|_{infty} \\leq lambda2                             (4)\n*/\n\n\n\n/*\n\n  In flsa, we solve (1) corresponding to a given (lambda1, lambda2)\n\n  void flsa(double *x, double *z, double *gap,\n\t\t  double * v, double *z0, \n\t\t  double lambda1, double lambda2, int n, \n\t\t  int maxStep, double tol, int flag)\n\n  Output parameters:\n      x:        the solution to problem (1)\n\t  z:        the solution to problem (4)\n\t  infor:    the information about running the subgradient finding algorithm\n\t                 infor[0] = gap:         the computed gap (either the duality gap\n\t                                            or the summation of the absolute change of the adjacent solutions)\n\t\t\t\t\t infor[1] = steps:       the number of iterations\n\t\t\t\t\t infor[2] = lambad2_max: the maximal value of lambda2_max\n\t\t\t\t\t infor[3] = numS:        the number of elements in the support set\n\t\t\t\t\t\t\t\t\n  Input parameters:\n      v:        the input vector to be projected\n\t  z0:       a guess of the solution of z\n\n\t  lambad1:  the regularization parameter\n\t  labmda2:  the regularization parameter\n\t  n:        the length of v and x\n\n      maxStep:  the maximal allowed iteration steps\n\t  tol:      the tolerance parameter\n\t  tau:      the program sfa is checked every tau iterations for termination\n\t  flag:     the flag for initialization and deciding calling sfa\n                     switch ( flag )\n\t\t\t\t\t     1-4, 11-14: sfa\n\n                     switch ( flag )\n\t\t\t\t\t     case 1, 2, 3, or 4: \n\t\t\t\t\t\t               z0 is a \"good\" starting point \n\t\t\t\t\t\t               (such as the warm-start of the previous 
solution,\n\t\t\t\t\t\t\t\t\t   or the user want to test the performance of this starting point;\n\t\t\t\t\t\t\t\t\t   the starting point shall be further projected to the L_{infty} ball,\n\t\t\t\t\t\t\t\t\t   to make sure that it is feasible)\n\n\t\t\t\t\t\t case 11, 12, 13, or 14: z0 is a \"random\" guess, and thus not used\n\t\t\t\t\t\t               (we shall initialize z as follows:\n\t\t\t\t\t\t\t\t\t             if lambda2 >= 0.5 * lambda_2^max, we initialize the solution of the linear system;\n\t\t\t\t\t\t\t\t\t\t\t\t if lambda2 <  0.5 * lambda_2^max, we initialize with zero\n\t\t\t\t\t\t\t\t\t\t\t this solution is projected to the L_{infty} ball)\n\n                     switch( flag )\n\t\t\t\t\t     5, 15: sfa_special\n\n                     switch( flag )\n\t\t\t\t\t     5:  z0 is a good starting point\n\t\t\t\t\t\t 15: z0 is a bad starting point, use the solution of the linear system\n\n\n                     switch( flag )\n\t\t\t\t\t     6, 16: sfa_one\n\n                     switch( flag )\n\t\t\t\t\t     6:  z0 is a good starting point\n\t\t\t\t\t\t 16: z0 is a bad starting point, use the solution of the linear system\n\n  Revision made on October 31, 2009.\n  The input variable z0 is not modified after calling sfa. 
For this sake, we allocate a new variable zz to replace z0.\n*/\n\n\n\nvoid flsa(double *x, double *z, double *infor,\n\t\t  double * v, double *z0, \n\t\t  double lambda1, double lambda2, int n, \n\t\t  int maxStep, double tol, int tau, int flag){\n\n\tint i, nn=n-1, m;\n\tdouble zMax, temp;\n\tdouble *Av, *g, *s;\n\tint iterStep, numS;\n\tdouble gap;\n\tdouble *zz; /*to replace z0, so that z0 shall not revised after */\n\n\t\n    Av=(double *) malloc(sizeof(double)*nn);\n\n\t/*\n\tCompute Av= A*v                  (n=4, nn=3)\n\t\t\t                                         A= [ -1  1  0  0;\n\t\t\t\t\t\t\t\t\t\t\t\t          0  -1 1  0;\n\t\t\t\t\t\t\t\t\t\t\t\t\t      0  0  -1 1]\n\t*/\n\n\tfor (i=0;i<nn; i++)\n\t\tAv[i]=v[i+1]-v[i];\n\n\t/*\n\tSovlve the linear system via Thomas's algorithm or Rose's algorithm\n        B * z0 = Av\n\t*/\n\n    Thomas(&zMax, z, Av, nn);\n\n\t /*\n\tRose(&zMax, z, Av, nn);\n\t*/\n\n\n\t/*\n\tprintf(\"\\n zMax=%2.5f\\n\",zMax);\n\t*/\n\n\n\t/*\n\tWe consider two cases:\n\t   1) lambda2 >= zMax, which leads to a solution with same entry values\n\t   2) lambda2 < zMax, which needs to first run sfa, and then perform soft thresholding\n\t*/\n\n\n\t/*\n\tFirst case: lambda2 >= zMax\n\t*/\n\tif (lambda2 >= zMax){\n\t\t\n\t\ttemp=0;\n\t\tm=n%5;\n\t\tif (m!=0){\n\t\t\tfor (i=0;i<m;i++)\n\t\t\t\ttemp+=v[i];\n\t\t}\t\t\n\t\tfor (i=m;i<n;i+=5){\n\t\t\ttemp += v[i] + v[i+1] + v[i+2] + v[i+3] + v[i+4];\n\t\t}\n\t\ttemp/=n; \n\t\t/* temp is the mean value of v*/\n\n\n\t\t/*\n\t\tsoft thresholding by lambda1\n\t\t*/\n\t\tif (temp> lambda1)\n\t\t\ttemp= temp-lambda1;\n\t\telse\n\t\t\tif (temp < -lambda1)\n\t\t\t\ttemp= temp+lambda1;\n\t\t\telse\n\t\t\t\ttemp=0;\n\n\t\tm=n%7;\n\t\tif (m!=0){\n\t\t\tfor (i=0;i<m;i++)\n\t\t\t\tx[i]=temp;\n\t\t}\n\t\tfor (i=m;i<n;i+=7){\n\t\t\tx[i]   =temp;\n\t\t\tx[i+1] =temp;\n\t\t\tx[i+2] =temp;\n\t\t\tx[i+3] =temp;\n\t\t\tx[i+4] =temp;\n\t\t\tx[i+5] =temp;\n\t\t\tx[i+6] 
=temp;\n\t\t}\n\t\t\n\t\tgap=0;\n\n\t\tfree(Av);\n\n\t\tinfor[0]= gap;\n\t\tinfor[1]= 0;\n\t\tinfor[2]=zMax;\n\t\tinfor[3]=0;\n\n\t\treturn;\n\t}\n\n\n\t/*\n\tSecond case: lambda2 < zMax\n\n    We need to call sfa for computing x, and then do soft thresholding\n\n    Before calling sfa, we need to allocate memory for g and s, \n\t           and initialize z and z0.\n\t*/\n\n\n\t/*\n\tAllocate memory for g and s\n\t*/\n\n\tg    =(double *) malloc(sizeof(double)*nn),\n\ts    =(double *) malloc(sizeof(double)*nn);\n\n\n\n\tm=flag /10;\n\t/* \n\n\tIf m=0, then this shows that, z0 is a \"good\" starting point. (m=1-6)\n\n\tOtherwise (m=11-16), we shall set z as either the solution to the linear system.\n\t                                       or the zero point\n    \n\t*/\n\tif (m==0){\n\t\tfor (i=0;i<nn;i++){\n\t\t\tif (z0[i] > lambda2)\n\t\t\t\tz[i]=lambda2;\n\t\t\telse\n\t\t\t\tif (z0[i]<-lambda2)\n\t\t\t\t\tz[i]=-lambda2;\n\t\t\t\telse\n\t\t\t\t\tz[i]=z0[i];\t\n\t\t}\n\t}\n\telse{\n\t\tif (lambda2 >= 0.5 * zMax){\n\t\t\tfor (i=0;i<nn;i++){\n\t\t\t\tif (z[i] > lambda2)\n\t\t\t\t\tz[i]=lambda2;\n\t\t\t\telse\n\t\t\t\t\tif (z[i]<-lambda2)\n\t\t\t\t\t\tz[i]=-lambda2;\n\t\t\t}\n\t\t}\n\t\telse{\n\t\t\tfor (i=0;i<nn;i++)\n\t\t\t\tz[i]=0;\n\n\t\t}\n\t}\n\t\n\tflag=flag %10;  /*\n\t                flag is now in [1:6]\n\t\t\t\t\t\n\t\t\t\t\tfor sfa, i.e., flag in [1:4], we need initialize z0 with zero\n\t                */\n\n\tif (flag>=1 && flag<=4){\n\t\tzz    =(double *) malloc(sizeof(double)*nn);\n\n\t\tfor (i=0;i<nn;i++)\n\t\t\tzz[i]=0;\n\t}\n\n\t/*\n\tcall sfa, sfa_one, or sfa_special to compute z, for finding the subgradient\n\t                                         and x\n\t*/\n\t\n\tif (flag==6)\n\t\titerStep=sfa_one(x, &gap, &numS,\n\t\t            z,  v,   Av, \n\t\t           lambda2, nn,  maxStep,\n\t\t\t\t   s, g,\n\t\t           tol, tau);\n\telse\n\t\tif (flag==5)\n\t\t\titerStep=sfa_special(x, &gap, &numS,\n\t\t\t            z,  v,   Av, \n\t\t          
      lambda2, nn,  maxStep,\n\t\t\t\t        s, g,\n\t\t                tol, tau);\n\t\telse{\n\t\t\titerStep=sfa(x, &gap, &numS,\n\t\t\t    z, zz,   v,  Av, \n\t\t        lambda2, nn, maxStep,\n\t\t        s,  g,\n\t\t        tol,tau, flag);\n\n\t\t\tfree (zz);\n\t\t\t/*free the variable zz*/\n\t\t}\n\t\t\n\n\t/*\n\tsoft thresholding by lambda1\n\t*/\n\n\tfor(i=0;i<n;i++)\n\t\tif (x[i] > lambda1)\n\t\t\tx[i]-=lambda1;\n\t\telse\n\t\t\tif (x[i]<-lambda1)\n\t\t\t\tx[i]+=lambda1;\n\t\t\telse\n\t\t\t\tx[i]=0;\n\n\t\n\tfree(Av);\n\tfree(g);\n\tfree(s);\n\n\tinfor[0]=gap;\n\tinfor[1]=iterStep;\n\tinfor[2]=zMax;\n\tinfor[3]=numS;\n}\n\n"
  },
  {
    "path": "proximal_operators/project_box.m",
    "content": "function x = project_box(b,l,u)\n\n% Project a point onto a box\n% min_x ||x-b||_2, s.t. l <= x <= u\n%\n% version 1.0 - 18/06/2016\n%\n% Written by Canyi Lu (canyilu@gmail.com)\n% \n\nx = max(l,min(b,u));"
  },
  {
    "path": "proximal_operators/project_fantope.m",
    "content": "function X = project_fantope(Q,k)\n\n% Project a point onto the Fantope\n% Q - a symmetric matrix\n%\n% min_X ||X-Q||_F, s.t. 0 \\preceq X \\preceq I, Tr(X)=k.\n%\n% version 1.0 - 18/06/2016\n%\n% Written by Canyi Lu (canyilu@gmail.com)\n% \n\n[U,D] = eig(Q);\nDr = cappedsimplexprojection(diag(D),k);\n% Dr = cappedsimplexprojection_matlab(diag(D),k);\nX = U*diag(Dr)*U';"
  },
  {
    "path": "proximal_operators/project_simplex.m",
    "content": "function X = project_simplex(B)\n\n% Project each row of a matrix onto the probability simplex\n% min_X ||X-B||_F\n% s.t. Xe=e, X>=0, where e is the all-one vector.\n%\n% ---------------------------------------------\n% Input:\n%       B       -    n*d matrix\n%\n% Output:\n%       X       -    n*d matrix\n% \n\n[n,m] = size(B);\nA = repmat(1:m,n,1);\nB_sort = sort(B,2,'descend');\ncum_B = cumsum(B_sort,2);\nsigma = B_sort-(cum_B-1)./A;\ntmp = sigma>0;\nidx = sum(tmp,2);\ntmp = B_sort-sigma;\nsigma = diag(tmp(:,idx));\nsigma = repmat(sigma,1,m);\nX = max(B-sigma,0);"
  },
  {
    "path": "proximal_operators/prox_elasticnet.m",
    "content": "function x = prox_elasticnet(b,lambda1,lambda2)\n\n% The proximal operator of the elastic net\n% \n% min_x lambda1*||x||_1+0.5*lambda2*||x||_2^2+0.5*||x-b||_2^2\n%\n% version 1.0 - 18/06/2016\n%\n% Written by Canyi Lu (canyilu@gmail.com)\n% \n\nx = (max(0,b-lambda1)+min(0,b+lambda1))/(lambda2+1);"
  },
  {
    "path": "proximal_operators/prox_gl1.m",
    "content": "function x = prox_gl1(b,G,lambda)\n\n% The proximal operator of the group l1 norm\n% \n% min_x lambda*\\sum_{g in G} ||x_g||_2+0.5*||x-b||_2^2\n% ---------------------------------------------\n% Input:\n%       b       -    d*1 vector\n%       G       -    a cell array indicating a partition of 1:d\n%\n% Output:\n%       x       -    d*1 vector\n% \n% version 1.0 - 18/06/2016\n%\n% Written by Canyi Lu (canyilu@gmail.com)\n% \n\nx = zeros(size(b));\nfor i = 1 : length(G)\n    nxg = norm(b(G{i}));\n    if nxg > lambda  \n        x(G{i}) = b(G{i})*(1-lambda/nxg);\n    end\nend"
  },
  {
    "path": "proximal_operators/prox_ksupport.m",
    "content": "function B = prox_ksupport(v,k,lambda)\n\n% The proximal operator of the k support norm of a vector\n%\n% min_x 0.5*lambda*||x||_{ksp}^2+0.5*||x-v||_2^2\n%\n% version 1.0 - 27/06/2016\n%\n% Written by Hanjiang Lai\n%\n% Reference: \n% Lai H, Pan Y, Lu C, et al. Efficient k-support matrix pursuit, ECCV, 2014: 617-631.\n% \n\nL = 1/lambda;\nd = length(v);\nif k >= d\n    B = L*v/(1+L);\n    return;\nelseif k <= 1\n    k = 1;\nend\n\n[z, ind] = sort(abs(v), 'descend');\nz = z*L;\nar = cumsum(z);\nz(d+1) = -inf;\n\ndiff = 0;\nerr = inf;\nfound = false;\nfor r=k-1:-1:0\n    [l,T] = bsearch(z,ar,k-r,d,diff,k,r,L);\n    if ( ((L+1)*T >= (l-k+(L+1)*r+L+1)*z(k-r)) && ...\n            (((k-r-1 == 0) || (L+1)*T < (l-k+(L+1)*r+L+1)*z(k-r-1)) ) )\n        found = true;\n        break;\n    end\n    diff = diff + z(k-r);\n    if k-r-1 == 0\n        err_tmp = max(0,(l-k+(L+1)*r+L+1)*z(k-r) - (L+1)*T);\n    else\n        err_tmp = max(0,(l-k+(L+1)*r+L+1)*z(k-r) -(L+1)*T) + max(0, - (l-k+(L+1)*r+L+1)*z(k-r-1) + (L+1)*T);\n    end\n    if err > err_tmp\n        err_r = r; err_l = l; err_T = T; err = err_tmp;\n    end\nend\n\n\nif found == false\n    r = err_r; l = err_l; T = err_T;\nend\n\n%  fprintf('r = %d, l = %d \\n',r,l);\n\np(1:k-r-1) = z(1:k-r-1)/(L+1);\np(k-r:l) = T / (l-k+(L+1)*r+L+1);\np(l+1:d) = z(l+1:d);\np = p';\n\n% [dummy, rev]=sort(ind,'ascend');\nrev(ind) = 1:d;\np = sign(v) .* p(rev);\nB = v - 1/L*p;\nend\n\nfunction [l,T] = bsearch(z,array,low,high,diff,k,r,L)\nif z(low) == 0\n    l = low;\n    T = 0;\n    return;\nend\n%z(mid) * tmp - (array(mid) - diff) > 0\n%z(mid+1) * tmp - (array(mid+1) - diff) <= 0\nwhile( low < high )\n    mid = floor( (low + high)/2 ) + 1;\n    tmp = (mid - k + r + 1 + L*(r+1));\n    if z(mid) * tmp - (array(mid) - diff) > 0\n        low = mid;\n    else\n        high = mid - 1;\n    end\nend\nl = low;\nT = array(low) - diff;\nend\n\n\n"
  },
  {
    "path": "proximal_operators/prox_l1.m",
    "content": "function x = prox_l1(b,lambda)\n\n% The proximal operator of the l1 norm\n% \n% min_x lambda*||x||_1+0.5*||x-b||_2^2\n%\n% version 1.0 - 18/06/2016\n%\n% Written by Canyi Lu (canyilu@gmail.com)\n% \n\nx = max(0,b-lambda)+min(0,b+lambda);"
  },
  {
    "path": "proximal_operators/prox_l21.m",
    "content": "function X = prox_l21(B,lambda)\n\n% The proximal operator of the l21 norm of a matrix\n% The l21 norm is the sum of the l2 norms of the columns of a matrix \n%\n% min_X lambda*||X||_{2,1}+0.5*||X-B||_F^2\n%\n% version 1.0 - 18/06/2016\n%\n% Written by Canyi Lu (canyilu@gmail.com)\n%\n\nX = zeros(size(B));\nfor i = 1 : size(X,2)\n    nxi = norm(B(:,i));\n    if nxi > lambda  \n        X(:,i) = (1-lambda/nxi)*B(:,i);\n    end\nend"
  },
  {
    "path": "proximal_operators/prox_nuclear.m",
    "content": "function [X,nuclearnorm] = prox_nuclear(B,lambda)\n\n% The proximal operator of the nuclear norm of a matrix\n% \n% min_X lambda*||X||_*+0.5*||X-B||_F^2\n%\n% version 1.0 - 18/06/2016\n%\n% Written by Canyi Lu (canyilu@gmail.com)\n% \n\n[U,S,V] = svd(B,'econ');\nS = diag(S);\nsvp = length(find(S>lambda));\nif svp>=1\n    S = S(1:svp)-lambda;\n    X = U(:,1:svp)*diag(S)*V(:,1:svp)';\n    nuclearnorm = sum(S);\nelse\n    X = zeros(size(B));\n    nuclearnorm = 0;\nend"
  },
  {
    "path": "proximal_operators/prox_tnn.m",
    "content": "function [X,tnn,trank] = prox_tnn(Y,rho)\n\n% The proximal operator of the tensor nuclear norm of a 3 way tensor\n%\n% min_X rho*||X||_*+0.5*||X-Y||_F^2\n%\n% Y     -    n1*n2*n3 tensor\n%\n% X     -    n1*n2*n3 tensor\n% tnn   -    tensor nuclear norm of X\n% trank -    tensor tubal rank of X\n%\n% version 2.1 - 14/06/2018\n%\n% Written by Canyi Lu (canyilu@gmail.com)\n%\n%\n% References: \n% Canyi Lu, Tensor-Tensor Product Toolbox. Carnegie Mellon University. \n% June, 2018. https://github.com/canyilu/tproduct.\n%\n% Canyi Lu, Jiashi Feng, Yudong Chen, Wei Liu, Zhouchen Lin and Shuicheng\n% Yan, Tensor Robust Principal Component Analysis with A New Tensor Nuclear\n% Norm, arXiv preprint arXiv:1804.03728, 2018\n%\n\n[n1,n2,n3] = size(Y);\nX = zeros(n1,n2,n3);\nY = fft(Y,[],3);\ntnn = 0;\ntrank = 0;\n        \n% first frontal slice\n[U,S,V] = svd(Y(:,:,1),'econ');\nS = diag(S);\nr = length(find(S>rho));\nif r>=1\n    S = S(1:r)-rho;\n    X(:,:,1) = U(:,1:r)*diag(S)*V(:,1:r)';\n    tnn = tnn+sum(S);\n    trank = max(trank,r);\nend\n% i=2,...,halfn3\nhalfn3 = round(n3/2);\nfor i = 2 : halfn3\n    [U,S,V] = svd(Y(:,:,i),'econ');\n    S = diag(S);\n    r = length(find(S>rho));\n    if r>=1\n        S = S(1:r)-rho;\n        X(:,:,i) = U(:,1:r)*diag(S)*V(:,1:r)';\n        tnn = tnn+sum(S)*2;\n        trank = max(trank,r);\n    end\n    X(:,:,n3+2-i) = conj(X(:,:,i));\nend\n\n% if n3 is even\nif mod(n3,2) == 0\n    i = halfn3+1;\n    [U,S,V] = svd(Y(:,:,i),'econ');\n    S = diag(S);\n    r = length(find(S>rho));\n    if r>=1\n        S = S(1:r)-rho;\n        X(:,:,i) = U(:,1:r)*diag(S)*V(:,1:r)';\n        tnn = tnn+sum(S);\n        trank = max(trank,r);\n    end\nend\ntnn = tnn/n3;\nX = ifft(X,[],3);\n"
  },
  {
    "path": "readme.txt",
    "content": "LibADMM: A Library of ADMM for Sparse and Low-rank Optimization\r\n\r\n\r\nThis package solves several sparse and low-rank optimization problems by the M-ADMM method proposed in our work:\r\nC. Lu, J. Feng, S. Yan, Z. Lin. A Unified Alternating Direction Method of Multipliers by Majorization Minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, pp. 527-541, 2018.\r\n\r\n\r\nThe folder \"LibADMM\" contains three subfolders:\r\n\r\n1. algorithms: the main solvers.\r\n2. proximal_operators: the proximal operators of several functions used in the subproblems of M-ADMM.\r\n3. tensor_tools: some basic tools for tensors.\r\n\r\nBesides the subfolders, we also provide three functions, \"example_sparse_models.m\", \"example_low_rank_matrix_models.m\", and \"example_low_rank_tensor_models.m\", which give examples for all the solvers implemented in this package.\r\n\r\nWe also suggest reading the manual at https://canyilu.github.io/publications/2016-software-LibADMM.pdf.\r\n\r\nFor any problems, please contact Canyi Lu (canyilu@gmail.com).\r\n\r\n\r\nVersion 1.0 (Jun, 2016)\r\n\r\nVersion 1.1 (Jun, 2018)\r\n- add a new model for low-rank tensor recovery from Gaussian measurements based on the tensor nuclear norm, and the corresponding function lrtr_Gaussian_tnn.m\r\n- update several functions to improve efficiency, including prox_tnn.m, tprod.m, tran.m, tubalrank.m, and nmodeproduct.m\r\n- update the three example functions: example_sparse_models.m, example_low_rank_matrix_models.m, and example_low_rank_tensor_models.m\r\n- remove the tests on image data and some unnecessary functions\r\n\r\n\r\n"
  },
  {
    "path": "tensor_tools/Fold.m",
    "content": "function [X] = Fold(X, dim, i)\r\ndim = circshift(dim, [1-i, 1-i]);\r\nX = shiftdim(reshape(X, dim), length(dim)+1-i);"
  },
  {
    "path": "tensor_tools/Unfold.m",
    "content": "function [X] = Unfold( X, dim, i )\r\nX = reshape(shiftdim(X,i-1), dim(i), []);"
  },
  {
    "path": "tensor_tools/nmodeproduct.m",
    "content": "function B = nmodeproduct(A,M,n)\n% Calculates the n-mode product of a tensor A and a matrix M\n% \n% B = nmodeproduct(A,M,n)\n% \n% B = A (x)_n M, according to the definition in De Lathauwer (2000)\n%\n% with:\n% A:    (I_1 x I_2 x .. I_n x .. I_N), where n is in [1..N]\n% M:    (J   x I_n)\n% B:    (I_1 x I_2 x .. J x   .. I_N)\n%\n% note: \"(x)_n\" denotes the n-mode product between the tensor and the matrix\n% \n% v0.001 2009 by Fabian Schneiter\n% \n\n% check inputs:\ndimvec = size(A);\nn = fix(n);\nif (length(dimvec)<n || n<1)\n    error('nmodeproduct: n is not within the order range of tensor A');\nend\nif (size(M,2) ~= dimvec(n))\n    error('nmodeproduct: dimension n of tensor A is not equal to dimension 2 of matrix M');\nend\n\n% shift A so that dimension n becomes the first dimension, the one to be replaced\nAsh = shiftdim(A,n-1);\n\n% save the target dimensions of B; we replace the 1st dimension, because\n% that's the one affected by the matrix multiplication,\n% i.e. this dimension changes from I_n to J\ndimvecB = size(Ash);\ndimvecB(1) = size(M,1);\n\n% multiply while flattening: Ash(:,:) unfolds the tensor into a matrix whose\n% columns are the mode-n fibers; multiplying by M maps each fiber from\n% length I_n to length J\nB = M*Ash(:,:);\n\n% wrap the flattened result back into the previously saved tensor shape\nB = reshape(B,dimvecB);\n\n% shift the dimensions back, so that only dimension n has changed from I_n to J\nB = shiftdim(B,length(dimvecB)-n+1);"
  },
  {
    "path": "tensor_tools/tprod.m",
    "content": "function C = tprod(A,B)\r\n\r\n% Tensor-tensor product of two 3 way tensors: C = A*B\r\n% A - n1*n2*n3 tensor\r\n% B - n2*l*n3  tensor\r\n% C - n1*l*n3  tensor\r\n%\r\n% version 2.0 - 09/10/2017\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n%\r\n%\r\n% References: \r\n% Canyi Lu, Tensor-Tensor Product Toolbox. Carnegie Mellon University. \r\n% June, 2018. https://github.com/canyilu/tproduct.\r\n%\r\n% Canyi Lu, Jiashi Feng, Yudong Chen, Wei Liu, Zhouchen Lin and Shuicheng\r\n% Yan, Tensor Robust Principal Component Analysis with A New Tensor Nuclear\r\n% Norm, arXiv preprint arXiv:1804.03728, 2018\r\n%\r\n\r\n[n1,n2,n3] = size(A);\r\n[m1,m2,m3] = size(B);\r\n\r\nif n2 ~= m1 || n3 ~= m3 \r\n    error('Inner tensor dimensions must agree.');\r\nend\r\n\r\nA = fft(A,[],3);\r\nB = fft(B,[],3);\r\nC = zeros(n1,m2,n3);\r\n\r\n% first frontal slice\r\nC(:,:,1) = A(:,:,1)*B(:,:,1);\r\n% i=2,...,halfn3\r\nhalfn3 = round(n3/2);\r\nfor i = 2 : halfn3\r\n    C(:,:,i) = A(:,:,i)*B(:,:,i);\r\n    C(:,:,n3+2-i) = conj(C(:,:,i));\r\nend\r\n\r\n% if n3 is even\r\nif mod(n3,2) == 0\r\n    i = halfn3+1;\r\n    C(:,:,i) = A(:,:,i)*B(:,:,i);\r\nend\r\nC = ifft(C,[],3);"
  },
  {
    "path": "tensor_tools/tran.m",
    "content": "function Xt = tran(X)\r\n\r\n% conjugate transpose of a 3 way tensor \r\n% X  - n1*n2*n3 tensor\r\n% Xt - n2*n1*n3  tensor\r\n%\r\n% version 1.0 - 18/06/2016\r\n%\r\n% Written by Canyi Lu (canyilu@gmail.com)\r\n% \r\n%\r\n% References: \r\n% Canyi Lu, Tensor-Tensor Product Toolbox. Carnegie Mellon University. \r\n% June, 2018. https://github.com/canyilu/tproduct.\r\n%\r\n% Canyi Lu, Jiashi Feng, Yudong Chen, Wei Liu, Zhouchen Lin and Shuicheng\r\n% Yan, Tensor Robust Principal Component Analysis with A New Tensor Nuclear\r\n% Norm, arXiv preprint arXiv:1804.03728, 2018\r\n%\r\n\r\n[n1,n2,n3] = size(X);\r\nXt = zeros(n2,n1,n3);\r\nXt(:,:,1) = X(:,:,1)';\r\nfor i = 2 : n3\r\n    Xt(:,:,i) = X(:,:,n3-i+2)';\r\nend"
  },
  {
    "path": "tensor_tools/tubalrank.m",
    "content": "function trank = tubalrank(X,tol)\n\n% The tensor tubal rank of a 3 way tensor\n%\n% X     -    n1*n2*n3 tensor\n% trank -    tensor tubal rank of X\n%\n% version 2.0 - 14/06/2018\n%\n% Written by Canyi Lu (canyilu@gmail.com)\n%\n%\n% References: \n% Canyi Lu, Tensor-Tensor Product Toolbox. Carnegie Mellon University. \n% June, 2018. https://github.com/canyilu/tproduct.\n%\n% Canyi Lu, Jiashi Feng, Yudong Chen, Wei Liu, Zhouchen Lin and Shuicheng\n% Yan, Tensor Robust Principal Component Analysis with A New Tensor Nuclear\n% Norm, arXiv preprint arXiv:1804.03728, 2018\n%\n\nX = fft(X,[],3);\n[n1,n2,n3] = size(X);\ns = zeros(min(n1,n2),1);\n\n% i=1\ns = s + svd(X(:,:,1),'econ');\n% i=2,...,halfn3\nhalfn3 = round(n3/2);\nfor i = 2 : halfn3\n    s = s + svd(X(:,:,i),'econ')*2;\nend\n% if n3 is even\nif mod(n3,2) == 0\n    i = halfn3+1;\n    s = s + svd(X(:,:,i),'econ');\nend\ns = s/n3;\n\nif nargin==1\n   tol = max(n1,n2) * eps(max(s));\nend\ntrank = sum(s > tol);\n"
  }
]