Publications

This is my full list of publications. For selected publications representative of my work, go here. Please also see my Google Scholar profile, which is sometimes more up to date.

Preprints

1. Cavazos, J. S., Fessler, J. A., & Balzano, L. (2025). ALPCAHUS: Subspace Clustering for Heteroscedastic Data (arXiv:2505.18918). arXiv. https://doi.org/10.48550/arXiv.2505.18918

2. Kwon, S. M., Xu, A. S., Yaras, C., Balzano, L., & Qu, Q. (2025). Out-of-Distribution Generalization of In-Context Learning: A Low-Dimensional Subspace Perspective (arXiv:2505.14808). arXiv. https://doi.org/10.48550/arXiv.2505.14808

3. Balzano, L., Ding, T., Haeffele, B. D., Kwon, S. M., Qu, Q., Wang, P., Wang, Z., & Yaras, C. (2025). An Overview of Low-Rank Structures in the Training and Adaptation of Large Models (arXiv:2503.19859). arXiv. https://doi.org/10.48550/arXiv.2503.19859

4. Li, Y., Balzano, L., Needell, D., & Lyu, H. (2024). Convergence and complexity of block majorization-minimization for constrained block-Riemannian optimization (arXiv:2312.10330). arXiv. https://doi.org/10.48550/arXiv.2312.10330

5. Yaras, C., Wang, P., Hu, W., Zhu, Z., Balzano, L., & Qu, Q. (2023). The Law of Parsimony in Gradient Descent for Learning Deep Linear Networks (arXiv:2306.01154). arXiv. https://doi.org/10.48550/arXiv.2306.01154

6. Blocker, C. J., Raja, H., Fessler, J. A., & Balzano, L. (2023). Dynamic Subspace Estimation with Grassmannian Geodesics (arXiv:2303.14851). arXiv. https://doi.org/10.48550/arXiv.2303.14851

7. Ritchie, A., Balzano, L., Kessler, D., Sripada, C. S., & Scott, C. (2022). Supervised PCA: A Multiobjective Approach (arXiv:2011.05309). arXiv. https://doi.org/10.48550/arXiv.2011.05309

8. Tarzanagh, D. A., Balzano, L., & Hero, A. O. (2021). Fair Structure Learning in Heterogeneous Graphical Models (arXiv:2112.05128). arXiv. https://doi.org/10.48550/arXiv.2112.05128

9. Sattar, Y., Du, Z., Tarzanagh, D. A., Balzano, L., Ozay, N., & Oymak, S. (2021). Identification and Adaptive Control of Markov Jump Systems: Sample Complexity and Regret Bounds (arXiv:2111.07018). arXiv. https://doi.org/10.48550/arXiv.2111.07018
Xiv%22%2C%22archiveID%22%3A%22arXiv%3A2111.07018%22%2C%22date%22%3A%222021-11-12%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2111.07018%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2111.07018%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22J2WEX3C9%22%5D%2C%22dateModified%22%3A%222022-06-30T16%3A15%3A58Z%22%7D%7D%5D%7D
Cavazos, J. S., Fessler, J. A., & Balzano, L. (2025). ALPCAHUS: Subspace Clustering for Heteroscedastic Data (No. arXiv:2505.18918). arXiv. https://doi.org/10.48550/arXiv.2505.18918
Kwon, S. M., Xu, A. S., Yaras, C., Balzano, L., & Qu, Q. (2025). Out-of-Distribution Generalization of In-Context Learning: A Low-Dimensional Subspace Perspective (No. arXiv:2505.14808). arXiv. https://doi.org/10.48550/arXiv.2505.14808
Balzano, L., Ding, T., Haeffele, B. D., Kwon, S. M., Qu, Q., Wang, P., Wang, Z., & Yaras, C. (2025). An Overview of Low-Rank Structures in the Training and Adaptation of Large Models (No. arXiv:2503.19859). arXiv. https://doi.org/10.48550/arXiv.2503.19859
Li, Y., Balzano, L., Needell, D., & Lyu, H. (2024). Convergence and complexity of block majorization-minimization for constrained block-Riemannian optimization (No. arXiv:2312.10330). arXiv. https://doi.org/10.48550/arXiv.2312.10330
Yaras, C., Wang, P., Hu, W., Zhu, Z., Balzano, L., & Qu, Q. (2023). The Law of Parsimony in Gradient Descent for Learning Deep Linear Networks (No. arXiv:2306.01154). arXiv. https://doi.org/10.48550/arXiv.2306.01154
Blocker, C. J., Raja, H., Fessler, J. A., & Balzano, L. (2023). Dynamic Subspace Estimation with Grassmannian Geodesics (No. arXiv:2303.14851). arXiv. https://doi.org/10.48550/arXiv.2303.14851
Ritchie, A., Balzano, L., Kessler, D., Sripada, C. S., & Scott, C. (2022). Supervised PCA: A Multiobjective Approach (No. arXiv:2011.05309). arXiv. https://doi.org/10.48550/arXiv.2011.05309
Tarzanagh, D. A., Balzano, L., & Hero, A. O. (2021). Fair Structure Learning in Heterogeneous Graphical Models (No. arXiv:2112.05128). arXiv. https://doi.org/10.48550/arXiv.2112.05128
Sattar, Y., Du, Z., Tarzanagh, D. A., Balzano, L., Ozay, N., & Oymak, S. (2021). Identification and Adaptive Control of Markov Jump Systems: Sample Complexity and Regret Bounds (No. arXiv:2111.07018). arXiv. https://doi.org/10.48550/arXiv.2111.07018

Published

2025

Gilman, K., Burer, S., & Balzano, L. (2025). A Semidefinite Relaxation for Sums of Heterogeneous Quadratic Forms on the Stiefel Manifold. SIAM Journal on Matrix Analysis and Applications, 1091–1116. https://doi.org/10.1137/23M1545136
Yaras, C., Xu, A. S., Abillama, P., Lee, C., & Balzano, L. (2025). MonarchAttention: Zero-Shot Conversion to Fast, Hardware-Aware Structured Attention (No. arXiv:2505.18698). Accepted to NeurIPS 2025. https://doi.org/10.48550/arXiv.2505.18698
Gilman, K., Hong, D., Fessler, J. A., & Balzano, L. (2025). Streaming Heteroscedastic Probabilistic PCA with Missing Data. Transactions on Machine Learning Research. https://openreview.net/forum?id=lb2rPLuP9X
Wang, P., Jiang, R., Kong, Q., & Balzano, L. (2025). A Proximal Difference-of-Convex Algorithm for Sample Average Approximation of Chance Constrained Programming. INFORMS Journal on Computing. https://doi.org/10.1287/ijoc.2024.0648
Wang, P., Li, X., Yaras, C., Zhu, Z., Balzano, L., Hu, W., & Qu, Q. (2025). Understanding Deep Representation Learning via Layerwise Feature Compression and Discrimination. Journal of Machine Learning Research, 26(220), 1–61. http://jmlr.org/papers/v26/24-0047.html
Hong, D., & Balzano, L. (2025). Optimal sample acquisition for optimally weighted PCA from heterogeneous quality sources. IEEE Signal Processing Letters, 1–5. https://doi.org/10.1109/LSP.2025.3550280
Salazar Cavazos, J., Fessler, J. A., & Balzano, L. (2025). ALPCAH: Subspace Learning for Sample-Wise Heteroscedastic Data. IEEE Transactions on Signal Processing, 73, 876–886. https://doi.org/10.1109/TSP.2025.3537867

2024

Hume, J., & Balzano, L. (2024, November 19). A Spectral Framework for Tracking Communities in Evolving Networks. The Third Learning on Graphs Conference. https://openreview.net/forum?id=es9LIeVa9s
Yaras, C., Wang, P., Balzano, L., & Qu, Q. (2024, June 6). Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation. Forty-first International Conference on Machine Learning. https://openreview.net/forum?id=uDkXoZMzBv
Liu, H., Wang, P., Huang, L., Qu, Q., & Balzano, L. (2024, June 6). Symmetric Matrix Completion with ReLU Sampling. Forty-first International Conference on Machine Learning. https://openreview.net/forum?id=VxI0gInNlh
Li, Y., Balzano, L., Needell, D., & Lyu, H. (2024, June 6). Convergence and Complexity Guarantee for Inexact First-order Riemannian Optimization Algorithms. Forty-first International Conference on Machine Learning. https://openreview.net/forum?id=7KtFQnF368
Kwon, S. M., Zhang, Z., Song, D., Balzano, L., & Qu, Q. (2024). Efficient Low-Dimensional Compression of Overparameterized Models. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, 1009–1017. https://proceedings.mlr.press/v238/min-kwon24a.html
Tarzanagh, D. A., Nazari, P., Hou, B., Shen, L., & Balzano, L. (2024). Online Bilevel Optimization: Regret Analysis of Online Alternating Gradient Methods. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, 2854–2862. https://proceedings.mlr.press/v238/ataee-tarzanagh24a.html
Geelen, R., Balzano, L., Wright, S., & Willcox, K. (2024). Learning physics-based reduced-order models from data using nonlinear manifolds. Chaos: An Interdisciplinary Journal of Nonlinear Science, 34(3), 033122. https://doi.org/10.1063/5.0170105
Wang, P., Li, X., Yaras, C., Zhu, Z., Balzano, L., Hu, W., & Qu, Q. (2024, January 3). Understanding Hierarchical Representations in Deep Networks via Feature Compression and Discrimination. Conference on Parsimony and Learning (Recent Spotlight Track). https://openreview.net/forum?id=Ovuu8LpGZu
Kwon, S. M., Zhang, Z., Song, D., Balzano, L., & Qu, Q. (2024, January 3). Efficient Low-Dimensional Compression of Overparameterized Networks. Conference on Parsimony and Learning (Recent Spotlight Track). https://openreview.net/forum?id=1AVb9oEdK7
l%22%3A%22https%3A%5C%2F%5C%2Fopenreview.net%5C%2Fforum%3Fid%3D1AVb9oEdK7%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%5D%2C%22dateModified%22%3A%222024-03-19T15%3A12%3A32Z%22%7D%7D%2C%7B%22key%22%3A%22BTVYVGYK%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Yaras%20et%20al.%22%2C%22parsedDate%22%3A%222024-01-03%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BYaras%2C%20C.%2C%20Wang%2C%20P.%2C%20Hu%2C%20W.%2C%20Zhu%2C%20Z.%2C%20Balzano%2C%20L.%2C%20%26amp%3B%20Qu%2C%20Q.%20%282024%2C%20January%203%29.%20%26lt%3Bi%26gt%3BInvariant%20Low-Dimensional%20Subspaces%20in%20Gradient%20Descent%20for%20Learning%20Deep%20Linear%20Networks%26lt%3B%5C%2Fi%26gt%3B.%20Conference%20on%20Parsimony%20and%20Learning%20%28Recent%20Spotlight%20Track%29.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fopenreview.net%5C%2Fforum%3Fid%3DoSzCKf1I5N%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fopenreview.net%5C%2Fforum%3Fid%3DoSzCKf1I5N%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Invariant%20Low-Dimensional%20Subspaces%20in%20Gradient%20Descent%20for%20Learning%20Deep%20Linear%20Networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Can%22%2C%22lastName%22%3A%22Yaras%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Peng%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wei%22%2C%22lastName%22%3A%22Hu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhihui%22%2C%22lastName%22%3A%22Zhu%22%7D%2C%7B%22creato
rType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Qing%22%2C%22lastName%22%3A%22Qu%22%7D%5D%2C%22abstractNote%22%3A%22Over%20the%20past%20few%20years%2C%20an%20extensively%20studied%20phenomenon%20in%20training%20deep%20networks%20is%20the%20implicit%20bias%20of%20gradient%20descent%20towards%20parsimonious%20solutions.%20In%20this%20work%2C%20we%20investigate%20this%20phenomenon%20by%20narrowing%20our%20focus%20to%20deep%20linear%20networks.%20Through%20our%20analysis%2C%20we%20reveal%20a%20surprising%20%60%60law%20of%20parsimony%26%23039%3B%26%23039%3B%20in%20the%20learning%20dynamics%20when%20the%20data%20possesses%20low-dimensional%20structures.%20Specifically%2C%20we%20show%20that%20the%20evolution%20of%20gradient%20descent%20starting%20from%20orthogonal%20initialization%20only%20affects%20a%20minimal%20portion%20of%20singular%20vector%20spaces%20across%20all%20weight%20matrices.%20In%20other%20words%2C%20the%20learning%20process%20happens%20only%20within%20a%20small%20invariant%20subspace%20of%20each%20weight%20matrix%2C%20despite%20the%20fact%20that%20all%20weight%20parameters%20are%20updated%20throughout%20training.%20This%20simplicity%20in%20learning%20dynamics%20could%20have%20significant%20implications%20for%20both%20efficient%20training%20and%20a%20better%20understanding%20of%20deep%20networks.%20First%2C%20the%20analysis%20enables%20us%20to%20considerably%20improve%20training%20efficiency%20by%20taking%20advantage%20of%20the%20low-dimensional%20structure%20in%20learning%20dynamics.%20We%20can%20construct%20smaller%2C%20equivalent%20deep%20linear%20networks%20without%20sacrificing%20the%20benefits%20associated%20with%20the%20wider%20counterparts.%20Second%2C%20it%20allows%20us%20to%20better%20understand%20deep%20representation%20learning%20by%20elucidating%20the%20linear%20progressive%20separation%20and%20concentration%20of%20representations%20from%20sh
allow%20to%20deep%20layers.%22%2C%22proceedingsTitle%22%3A%22%22%2C%22conferenceName%22%3A%22Conference%20on%20Parsimony%20and%20Learning%20%28Recent%20Spotlight%20Track%29%22%2C%22date%22%3A%222024%5C%2F01%5C%2F03%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fopenreview.net%5C%2Fforum%3Fid%3DoSzCKf1I5N%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%5D%2C%22dateModified%22%3A%222024-03-19T15%3A12%3A20Z%22%7D%7D%5D%7D
Hume, J., & Balzano, L. (2024, November 19). A Spectral Framework for Tracking Communities in Evolving Networks. The Third Learning on Graphs Conference. https://openreview.net/forum?id=es9LIeVa9s
Yaras, C., Wang, P., Balzano, L., & Qu, Q. (2024, June 6). Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation. Forty-first International Conference on Machine Learning. https://openreview.net/forum?id=uDkXoZMzBv
Liu, H., Wang, P., Huang, L., Qu, Q., & Balzano, L. (2024, June 6). Symmetric Matrix Completion with ReLU Sampling. Forty-first International Conference on Machine Learning. https://openreview.net/forum?id=VxI0gInNlh
Li, Y., Balzano, L., Needell, D., & Lyu, H. (2024, June 6). Convergence and Complexity Guarantee for Inexact First-order Riemannian Optimization Algorithms. Forty-first International Conference on Machine Learning. https://openreview.net/forum?id=7KtFQnF368
Kwon, S. M., Zhang, Z., Song, D., Balzano, L., & Qu, Q. (2024). Efficient Low-Dimensional Compression of Overparameterized Models. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, 1009–1017. https://proceedings.mlr.press/v238/min-kwon24a.html
Tarzanagh, D. A., Nazari, P., Hou, B., Shen, L., & Balzano, L. (2024). Online Bilevel Optimization: Regret Analysis of Online Alternating Gradient Methods. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, 2854–2862. https://proceedings.mlr.press/v238/ataee-tarzanagh24a.html
Geelen, R., Balzano, L., Wright, S., & Willcox, K. (2024). Learning physics-based reduced-order models from data using nonlinear manifolds. Chaos: An Interdisciplinary Journal of Nonlinear Science, 34(3), 033122. https://doi.org/10.1063/5.0170105
Wang, P., Li, X., Yaras, C., Zhu, Z., Balzano, L., Hu, W., & Qu, Q. (2024, January 3). Understanding Hierarchical Representations in Deep Networks via Feature Compression and Discrimination. Conference on Parsimony and Learning (Recent Spotlight Track). https://openreview.net/forum?id=Ovuu8LpGZu
Kwon, S. M., Zhang, Z., Song, D., Balzano, L., & Qu, Q. (2024, January 3). Efficient Low-Dimensional Compression of Overparameterized Networks. Conference on Parsimony and Learning (Recent Spotlight Track). https://openreview.net/forum?id=1AVb9oEdK7
Yaras, C., Wang, P., Hu, W., Zhu, Z., Balzano, L., & Qu, Q. (2024, January 3). Invariant Low-Dimensional Subspaces in Gradient Descent for Learning Deep Linear Networks. Conference on Parsimony and Learning (Recent Spotlight Track). https://openreview.net/forum?id=oSzCKf1I5N

2023

Newton, R., Du, Z., Seiler, P., & Balzano, L. (2023). Optimality of POD for Data-Driven LQR With Low-Rank Structures. IEEE Control Systems Letters, 8, 85–90. https://doi.org/10.1109/LCSYS.2023.3344147
Geelen, R., Balzano, L., & Willcox, K. (2023). Learning Latent Representations in High-Dimensional State Spaces Using Polynomial Manifold Constructions. 2023 62nd IEEE Conference on Decision and Control (CDC), 4960–4965. https://doi.org/10.1109/CDC49753.2023.10384209
Yaras, C., Wang, P., Hu, W., Zhu, Z., Balzano, L., & Qu, Q. (2023, November 7). Invariant Low-Dimensional Subspaces in Gradient Descent for Learning Deep Matrix Factorizations. NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning. https://openreview.net/forum?id=4pPnQqUMLS
Newton, R., Du, Z., Balzano, L., & Seiler, P. (2023). Manifold Optimization for Data Driven Reduced-Order Modeling. 2023 59th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 1–6. https://doi.org/10.1109/Allerton58177.2023.10313500
Cavazos, J. S., Fessler, J. A., & Balzano, L. (2023). ALPCAH: Sample-wise Heteroscedastic PCA with Tail Singular Value Regularization. 2023 International Conference on Sampling Theory and Applications (SampTA), 1–6. https://doi.org/10.1109/SampTA59647.2023.10301206
Soleymani, M., Liu, Q., Mahdavifar, H., & Balzano, L. (2023). Matrix Completion over Finite Fields: Bounds and Belief Propagation Algorithms. 2023 IEEE International Symposium on Information Theory (ISIT), 1166–1171. https://doi.org/10.1109/ISIT54713.2023.10206551
Xu, A. S., Balzano, L., & Fessler, J. A. (2023). HeMPPCAT: Mixtures of Probabilistic Principal Component Analysers for Data with Heteroscedastic Noise. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. https://doi.org/10.1109/ICASSP49357.2023.10094719
99621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Hong%20et%20al.%22%2C%22parsedDate%22%3A%222023-03-31%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BHong%2C%20D.%2C%20Yang%2C%20F.%2C%20Fessler%2C%20J.%20A.%2C%20%26amp%3B%20Balzano%2C%20L.%20%282023%29.%20Optimally%20Weighted%20PCA%20for%20High-Dimensional%20Heteroscedastic%20Data.%20%26lt%3Bi%26gt%3BSIAM%20Journal%20on%20Mathematics%20of%20Data%20Science%26lt%3B%5C%2Fi%26gt%3B%2C%20%26lt%3Bi%26gt%3B5%26lt%3B%5C%2Fi%26gt%3B%281%29%2C%20222%26%23x2013%3B250.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1137%5C%2F22M1470244%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1137%5C%2F22M1470244%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Optimally%20Weighted%20PCA%20for%20High-Dimensional%20Heteroscedastic%20Data%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22David%22%2C%22lastName%22%3A%22Hong%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fan%22%2C%22lastName%22%3A%22Yang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jeffrey%20A.%22%2C%22lastName%22%3A%22Fessler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22date%22%3A%222023-03-31%22%2C%22section%22%3A%22%22%2C%22partNumber%22%3A%22%22%2C%22partTitle%22%3A%22%22%2C%22DOI%22%3A%2210.1137%5C%2F22M1470244%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fepubs-siam-org.proxy.lib.umich.edu%5C%2Fdoi%5C%2Fabs%5C%2F10.1137%5C%2F22M1470244%22%2C%22PMID%22%3A%22%22
%2C%22PMCID%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22427SEM27%22%2C%22HJQ26QYG%22%5D%2C%22dateModified%22%3A%222024-03-08T19%3A58%3A40Z%22%7D%7D%5D%7D
Newton, R., Du, Z., Seiler, P., & Balzano, L. (2023). Optimality of POD for Data-Driven LQR With Low-Rank Structures. IEEE Control Systems Letters, 8, 85–90. https://doi.org/10.1109/LCSYS.2023.3344147
Geelen, R., Balzano, L., & Willcox, K. (2023). Learning Latent Representations in High-Dimensional State Spaces Using Polynomial Manifold Constructions. 2023 62nd IEEE Conference on Decision and Control (CDC), 4960–4965. https://doi.org/10.1109/CDC49753.2023.10384209
Yaras, C., Wang, P., Hu, W., Zhu, Z., Balzano, L., & Qu, Q. (2023, November 7). Invariant Low-Dimensional Subspaces in Gradient Descent for Learning Deep Matrix Factorizations. NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning. https://openreview.net/forum?id=4pPnQqUMLS
Newton, R., Du, Z., Balzano, L., & Seiler, P. (2023). Manifold Optimization for Data Driven Reduced-Order Modeling. 2023 59th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 1–6. https://doi.org/10.1109/Allerton58177.2023.10313500
Cavazos, J. S., Fessler, J. A., & Balzano, L. (2023). ALPCAH: Sample-wise Heteroscedastic PCA with Tail Singular Value Regularization. 2023 International Conference on Sampling Theory and Applications (SampTA), 1–6. https://doi.org/10.1109/SampTA59647.2023.10301206
Soleymani, M., Liu, Q., Mahdavifar, H., & Balzano, L. (2023). Matrix Completion over Finite Fields: Bounds and Belief Propagation Algorithms. 2023 IEEE International Symposium on Information Theory (ISIT), 1166–1171. https://doi.org/10.1109/ISIT54713.2023.10206551
Xu, A. S., Balzano, L., & Fessler, J. A. (2023). HeMPPCAT: Mixtures of Probabilistic Principal Component analysers for data with heteroscedastic noise. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. https://doi.org/10.1109/ICASSP49357.2023.10094719
Hong, D., Yang, F., Fessler, J. A., & Balzano, L. (2023). Optimally Weighted PCA for High-Dimensional Heteroscedastic Data. SIAM Journal on Mathematics of Data Science, 5(1), 222–250. https://doi.org/10.1137/22M1470244

2022

Wang, P., Liu, H., Yaras, C., Balzano, L., & Qu, Q. (2022, November 26). Linear Convergence Analysis of Neural Collapse with Unconstrained Features. OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop). https://openreview.net/forum?id=WC9im-M_y5
Naik, R., Trivedi, N., Tarzanagh, D. A., & Balzano, L. (2022). Truncated Matrix Completion – An Empirical Study. 2022 30th European Signal Processing Conference (EUSIPCO), 847–851. https://doi.org/10.23919/EUSIPCO55093.2022.9909952
Wang, P., Liu, H., So, A. M.-C., & Balzano, L. (2022). Convergence and Recovery Guarantees of the K-Subspaces Method for Subspace Clustering. Proceedings of the 39th International Conference on Machine Learning, 22884–22918. https://proceedings.mlr.press/v162/wang22r.html
Sattar, Y., Du, Z., Tarzanagh, D. A., Oymak, S., Balzano, L., & Ozay, N. (2022). Certainty Equivalent Quadratic Control for Markov Jump Systems. 2022 American Control Conference (ACC), 2871–2878. https://doi.org/10.23919/ACC53348.2022.9867208
Du, Z., Sattar, Y., Tarzanagh, D. A., Balzano, L., Ozay, N., & Oymak, S. (2022). Data-Driven Control of Markov Jump Systems: Sample Complexity and Regret Bounds. 2022 American Control Conference (ACC), 4901–4908. https://doi.org/10.23919/ACC53348.2022.9867863
Du, Z., Ozay, N., & Balzano, L. (2022). Clustering-based Mode Reduction for Markov Jump Systems. Proceedings of The 4th Annual Learning for Dynamics and Control Conference, 689–701. https://proceedings.mlr.press/v168/du22a.html
Balzano, L. (2022). On the equivalence of Oja's algorithm and GROUSE. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, 7014–7030. https://proceedings.mlr.press/v151/balzano22a.html
Zhang, D., & Balzano, L. (2022). Convergence of a Grassmannian Gradient Descent Algorithm for Subspace Estimation From Undersampled Data. University of Michigan Technical Report. https://doi.org/10.7302/4151
ver%20two%20cases%2C%20where%20%24A_t%24%20is%20Gaussian%20or%20a%20subset%20of%20rows%20of%20the%20identity%20matrix.%20We%20propose%20an%20adaptive%20stepsize%20scheme%20that%20depends%20only%20on%20the%20sampled%20data%20and%20algorithm%20outputs.%20We%20prove%20that%20with%20fully%20sampled%20data%2C%20the%20stepsize%20scheme%20maximizes%20the%20improvement%20of%20our%20convergence%20metric%20at%20each%20iteration%2C%20and%20this%20method%20converges%20from%20any%20random%20initialization%20to%20the%20true%20subspace%2C%20despite%20the%20non-convex%20formulation%20and%20orthogonality%20constraints.%20For%20the%20case%20of%20undersampled%20data%2C%20we%20establish%20monotonic%20expected%20improvement%20on%20the%20defined%20convergence%20metric%20for%20each%20iteration%20with%20high%20probability.%22%2C%22date%22%3A%222022-02-20%22%2C%22section%22%3A%22%22%2C%22partNumber%22%3A%22%22%2C%22partTitle%22%3A%22%22%2C%22DOI%22%3A%2210.7302%5C%2F4151%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Fdeepblue.lib.umich.edu%5C%2Fhandle%5C%2F2027.42%5C%2F171760%22%2C%22PMID%22%3A%22%22%2C%22PMCID%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en_US%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%5D%2C%22dateModified%22%3A%222022-04-15T19%3A23%3A24Z%22%7D%7D%2C%7B%22key%22%3A%22SHTV82B7%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Yaras%20et%20al.%22%2C%22parsedDate%22%3A%222022%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BYaras%2C%20C.%2C%20Wang%2C%20P.%2C%20Zhu%2C%20Z.%2C%20Balzano%2C%20L.%2C%20%26amp%3B%20Qu%2C%20Q.%20%282022%29.%20%26lt%3Bi%26gt%3BNeural%20Collapse%20with%20Normalized%20Features%3A%20A%20Geometric%20Analysis%20over%20the%20Riemannian%20Manifold%26lt%3B%5C%
2Fi%26gt%3B.%20Advances%20in%20Neural%20Information%20Processing%20Systems.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fopenreview.net%5C%2Fforum%3Fid%3DZvh6lF5b26N%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fopenreview.net%5C%2Fforum%3Fid%3DZvh6lF5b26N%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Neural%20Collapse%20with%20Normalized%20Features%3A%20A%20Geometric%20Analysis%20over%20the%20Riemannian%20Manifold%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Can%22%2C%22lastName%22%3A%22Yaras%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Peng%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhihui%22%2C%22lastName%22%3A%22Zhu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Qing%22%2C%22lastName%22%3A%22Qu%22%7D%5D%2C%22abstractNote%22%3A%22When%20training%20overparameterized%20deep%20networks%20for%20classification%20tasks%2C%20it%20has%20been%20widely%20observed%20that%20the%20learned%20features%20exhibit%20a%20so-called%20%26quot%3Bneural%20collapse%26%23039%3B%26quot%3B%20phenomenon.%20More%20specifically%2C%20for%20the%20output%20features%20of%20the%20penultimate%20layer%2C%20for%20each%20class%20the%20within-class%20features%20converge%20to%20their%20means%2C%20and%20the%20means%20of%20different%20classes%20exhibit%20a%20certain%20tight%20frame%20structure%2C%20which%20is%20also%20aligned%20with%20the%20last%20layer%26%23039%3Bs%20classifier.%20As%20feature%20normalization%20in%20the%20last%20layer%20becomes%20a%20common%20practice%20in%20modern%20representation%20learning%2C%20in%20this%20work%20we%20theoretically%20justify%20the%20neural%20collapse%20phenomenon%20under%20norma
lized%20features.%20Based%20on%20an%20unconstrained%20feature%20model%2C%20we%20simplify%20the%20empirical%20loss%20function%20in%20a%20multi-class%20classification%20task%20into%20a%20nonconvex%20optimization%20problem%20over%20the%20Riemannian%20manifold%20by%20constraining%20all%20features%20and%20classifiers%20over%20the%20sphere.%20In%20this%20context%2C%20we%20analyze%20the%20nonconvex%20landscape%20of%20the%20Riemannian%20optimization%20problem%20over%20the%20product%20of%20spheres%2C%20showing%20a%20benign%20global%20landscape%20in%20the%20sense%20that%20the%20only%20global%20minimizers%20are%20the%20neural%20collapse%20solutions%20while%20all%20other%20critical%20points%20are%20strict%20saddle%20points%20with%20negative%20curvature.%20Experimental%20results%20on%20practical%20deep%20networks%20corroborate%20our%20theory%20and%20demonstrate%20that%20better%20representations%20can%20be%20learned%20faster%20via%20feature%20normalization.%20Code%20for%20our%20experiments%20can%20be%20found%20at%20https%3A%5C%2F%5C%2Fgithub.com%5C%2Fcjyaras%5C%2Fnormalized-neural-collapse.%22%2C%22proceedingsTitle%22%3A%22%22%2C%22conferenceName%22%3A%22Advances%20in%20Neural%20Information%20Processing%20Systems%22%2C%22date%22%3A%222022%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fopenreview.net%5C%2Fforum%3Fid%3DZvh6lF5b26N%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22ZA8QMDGD%22%2C%22HJQ26QYG%22%5D%2C%22dateModified%22%3A%222022-12-05T21%3A10%3A29Z%22%7D%7D%2C%7B%22key%22%3A%22SY6JUJEX%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Du%20et%20al.%22%2C%22parsedDate%22%3A%222022%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3
D%26quot%3Bcsl-entry%26quot%3B%26gt%3BDu%2C%20Z.%2C%20Balzano%2C%20L.%2C%20%26amp%3B%20Ozay%2C%20N.%20%282022%29.%20Mode%20Reduction%20for%20Markov%20Jump%20Systems.%20%26lt%3Bi%26gt%3BIEEE%20Open%20Journal%20of%20Control%20Systems%26lt%3B%5C%2Fi%26gt%3B%2C%201%26%23x2013%3B19.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FOJCSYS.2022.3212613%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FOJCSYS.2022.3212613%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Mode%20Reduction%20for%20Markov%20Jump%20Systems%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhe%22%2C%22lastName%22%3A%22Du%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Necmiye%22%2C%22lastName%22%3A%22Ozay%22%7D%5D%2C%22abstractNote%22%3A%22Switched%20systems%20are%20capable%20of%20modeling%20processes%20with%20underlying%20dynamics%20that%20may%20change%20abruptly%20over%20time.%20To%20achieve%20accurate%20modeling%20in%20practice%2C%20one%20may%20need%20a%20large%20number%20of%20modes%2C%20but%20this%20may%20in%20turn%20increase%20the%20model%20complexity%20drastically.%20Existing%20work%20on%20reducing%20system%20complexity%20mainly%20considers%20state%20space%20reduction%2C%20whereas%20reducing%20the%20number%20of%20modes%20is%20less%20studied.%20In%20this%20work%2C%20we%20consider%20Markov%20jump%20linear%20systems%20%28MJSs%29%2C%20a%20special%20class%20of%20switched%20systems%20where%20the%20active%20mode%20switches%20according%20to%20a%20Markov%20chain%2C%20and%20several%20issues%20associated%20with%20its%20mode%20complexity.%20Specifically%2C%20inspired%20by%20clustering%20techniques%20from%20unsupervised%20learning%2C%20we%20are%20able%20to
%20construct%20a%20reduced%20MJS%20with%20fewer%20modes%20that%20approximates%20the%20original%20MJS%20well%20under%20various%20metrics.%20Furthermore%2C%20both%20theoretically%20and%20empirically%2C%20we%20show%20how%20one%20can%20use%20the%20reduced%20MJS%20to%20analyze%20stability%20and%20design%20controllers%20with%20significant%20reduction%20in%20computational%20cost%20while%20achieving%20guaranteed%20accuracy.%22%2C%22date%22%3A%222022%22%2C%22section%22%3A%22%22%2C%22partNumber%22%3A%22%22%2C%22partTitle%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FOJCSYS.2022.3212613%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22%22%2C%22PMID%22%3A%22%22%2C%22PMCID%22%3A%22%22%2C%22ISSN%22%3A%222694-085X%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22427SEM27%22%5D%2C%22dateModified%22%3A%222022-11-04T20%3A36%3A09Z%22%7D%7D%2C%7B%22key%22%3A%224V8L32TJ%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Gilman%20et%20al.%22%2C%22parsedDate%22%3A%222022%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BGilman%2C%20K.%2C%20Tarzanagh%2C%20D.%20A.%2C%20%26amp%3B%20Balzano%2C%20L.%20%282022%29.%20Grassmannian%20Optimization%20for%20Online%20Tensor%20Completion%20and%20Tracking%20With%20the%20t-SVD.%20%26lt%3Bi%26gt%3BIEEE%20Transactions%20on%20Signal%20Processing%26lt%3B%5C%2Fi%26gt%3B%2C%20%26lt%3Bi%26gt%3B70%26lt%3B%5C%2Fi%26gt%3B%2C%202152%26%23x2013%3B2167.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FTSP.2022.3164837%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FTSP.2022.3164837%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22
%3A%22journalArticle%22%2C%22title%22%3A%22Grassmannian%20Optimization%20for%20Online%20Tensor%20Completion%20and%20Tracking%20With%20the%20t-SVD%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kyle%22%2C%22lastName%22%3A%22Gilman%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Davoud%20Ataee%22%2C%22lastName%22%3A%22Tarzanagh%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%5D%2C%22abstractNote%22%3A%22We%20propose%20a%20new%20fast%20streaming%20algorithm%20for%20the%20tensor%20completion%20problem%20of%20imputing%20missing%20entries%20of%20a%20low-tubal-rank%20tensor%20using%20the%20tensor%20singular%20value%20decomposition%20%28t-SVD%29%20algebraic%20framework.%20We%20show%20the%20t-SVD%20is%20a%20specialization%20of%20the%20well-studied%20block-term%20decomposition%20for%20third-order%20tensors%2C%20and%20we%20present%20an%20algorithm%20under%20this%20model%20that%20can%20track%20changing%20free%20submodules%20from%20incomplete%20streaming%202-D%20data.%20The%20proposed%20algorithm%20uses%20principles%20from%20incremental%20gradient%20descent%20on%20the%20Grassmann%20manifold%20of%20subspaces%20to%20solve%20the%20tensor%20completion%20problem%20with%20linear%20complexity%20and%20constant%20memory%20in%20the%20number%20of%20time%20samples.%20We%20provide%20a%20local%20expected%20linear%20convergence%20result%20for%20our%20algorithm.%20Our%20empirical%20results%20are%20competitive%20in%20accuracy%20but%20much%20faster%20in%20compute%20time%20than%20state-of-the-art%20tensor%20completion%20algorithms%20on%20real%20applications%20to%20recover%20temporal%20chemo-sensing%20and%20MRI%20data%20under%20limited%20sampling.%22%2C%22date%22%3A%222022%22%2C%22section%22%3A%22%22%2C%22partNumber%22%3A%22%22%2C%22partTitle%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FTSP.2022.3164837%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22%22%2C%22PMID%22%3A%22%22%2C%22P
MCID%22%3A%22%22%2C%22ISSN%22%3A%221941-0476%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22427SEM27%22%5D%2C%22dateModified%22%3A%222022-06-30T16%3A12%3A26Z%22%7D%7D%5D%7D
Wang, P., Liu, H., Yaras, C., Balzano, L., & Qu, Q. (2022, November 26). Linear Convergence Analysis of Neural Collapse with Unconstrained Features. OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop). https://openreview.net/forum?id=WC9im-M_y5
Naik, R., Trivedi, N., Tarzanagh, D. A., & Balzano, L. (2022). Truncated Matrix Completion - An Empirical Study. 2022 30th European Signal Processing Conference (EUSIPCO), 847–851. https://doi.org/10.23919/EUSIPCO55093.2022.9909952
Wang, P., Liu, H., So, A. M.-C., & Balzano, L. (2022). Convergence and Recovery Guarantees of the K-Subspaces Method for Subspace Clustering. Proceedings of the 39th International Conference on Machine Learning, 22884–22918. https://proceedings.mlr.press/v162/wang22r.html
Sattar, Y., Du, Z., Tarzanagh, D. A., Oymak, S., Balzano, L., & Ozay, N. (2022). Certainty Equivalent Quadratic Control for Markov Jump Systems. 2022 American Control Conference (ACC), 2871–2878. https://doi.org/10.23919/ACC53348.2022.9867208
Du, Z., Sattar, Y., Tarzanagh, D. A., Balzano, L., Ozay, N., & Oymak, S. (2022). Data-Driven Control of Markov Jump Systems: Sample Complexity and Regret Bounds. 2022 American Control Conference (ACC), 4901–4908. https://doi.org/10.23919/ACC53348.2022.9867863
Du, Z., Ozay, N., & Balzano, L. (2022). Clustering-based Mode Reduction for Markov Jump Systems. Proceedings of The 4th Annual Learning for Dynamics and Control Conference, 689–701. https://proceedings.mlr.press/v168/du22a.html
Balzano, L. (2022). On the equivalence of Oja’s algorithm and GROUSE. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, 7014–7030. https://proceedings.mlr.press/v151/balzano22a.html
Zhang, D., & Balzano, L. (2022). Convergence of a Grassmannian Gradient Descent Algorithm for Subspace Estimation From Undersampled Data. University of Michigan Technical Report. https://doi.org/10.7302/4151
Yaras, C., Wang, P., Zhu, Z., Balzano, L., & Qu, Q. (2022). Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold. Advances in Neural Information Processing Systems. https://openreview.net/forum?id=Zvh6lF5b26N
Du, Z., Balzano, L., & Ozay, N. (2022). Mode Reduction for Markov Jump Systems. IEEE Open Journal of Control Systems, 1–19. https://doi.org/10.1109/OJCSYS.2022.3212613
Gilman, K., Tarzanagh, D. A., & Balzano, L. (2022). Grassmannian Optimization for Online Tensor Completion and Tracking With the t-SVD. IEEE Transactions on Signal Processing, 70, 2152–2167. https://doi.org/10.1109/TSP.2022.3164837

2021

Lipor, J., Hong, D., Tan, Y. S., & Balzano, L. (2021). Subspace clustering using ensembles of K-subspaces. Information and Inference: A Journal of the IMA, 10(1), 73–107. https://doi.org/10.1093/imaiai/iaaa031
Ongie, G., Pimentel-Alarcón, D., Balzano, L., Willett, R., & Nowak, R. D. (2021). Tensor Methods for Nonlinear Matrix Completion. SIAM Journal on Mathematics of Data Science, 253–279. https://doi.org/10.1137/20M1323448
Hong, D., Gilman, K., Balzano, L., & Fessler, J. A. (2021). HePPCAT: Probabilistic PCA for Data With Heteroscedastic Noise. IEEE Transactions on Signal Processing, 69, 4819–4834. https://doi.org/10.1109/TSP.2021.3104979

2020

20and%20tensor%20singular%20value%20decomposition%20%28t-SVD%29%20algebraic%20framework.%20We%20also%20demonstrate%20TOUCAN%26%23039%3Bs%20ability%20to%20track%20changing%20free%20submodules%20from%20highly%20incomplete%20streaming%202-D%20data.%20TOUCAN%20uses%20principles%20from%20incremental%20gradient%20descent%20on%20the%20Grassmann%20manifold%20to%20solve%20the%20tensor%20completion%20problem%20with%20linear%20complexity%20and%20constant%20memory%20in%20the%20number%20of%20time%20samples.%20We%20compare%20our%20results%20to%20state-of-the-art%20batch%20tensor%20completion%20algorithms%20and%20matrix%20completion%20algorithms.%20We%20show%20our%20results%20on%20real%20applications%20to%20recover%20temporal%20MRI%20data%20under%20limited%20sampling.%22%2C%22proceedingsTitle%22%3A%22ICASSP%202020%20-%202020%20IEEE%20International%20Conference%20on%20Acoustics%2C%20Speech%20and%20Signal%20Processing%20%28ICASSP%29%22%2C%22conferenceName%22%3A%22ICASSP%202020%20-%202020%20IEEE%20International%20Conference%20on%20Acoustics%2C%20Speech%20and%20Signal%20Processing%20%28ICASSP%29%22%2C%22date%22%3A%22May%202020%22%2C%22DOI%22%3A%2210.1109%5C%2FICASSP40776.2020.9053199%22%2C%22ISBN%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22WUTP7CH6%22%2C%22ZA8QMDGD%22%5D%2C%22dateModified%22%3A%222020-05-13T15%3A54%3A56Z%22%7D%7D%2C%7B%22key%22%3A%22T9FXM6GF%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Lipor%20and%20Balzano%22%2C%22parsedDate%22%3A%222020-03-13%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLipor%2C%20J.%2C%20%26amp%3B%20Balzano%2C%20L.%20%282020%29.%20Clustering%20quality%20metrics%20
for%20subspace%20clustering.%20%26lt%3Bi%26gt%3BPattern%20Recognition%26lt%3B%5C%2Fi%26gt%3B%2C%20107328.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1016%5C%2Fj.patcog.2020.107328%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1016%5C%2Fj.patcog.2020.107328%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Clustering%20quality%20metrics%20for%20subspace%20clustering%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22John%22%2C%22lastName%22%3A%22Lipor%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%5D%2C%22abstractNote%22%3A%22We%20study%20the%20problem%20of%20clustering%20validation%2C%20i.e.%2C%20clustering%20evaluation%20without%20knowledge%20of%20ground-truth%20labels%2C%20for%20the%20increasingly-popular%20framework%20known%20as%20subspace%20clustering.%20Existing%20clustering%20quality%20metrics%20%28CQMs%29%20rely%20heavily%20on%20a%20notion%20of%20distance%20between%20points%2C%20but%20common%20metrics%20fail%20to%20capture%20the%20geometry%20of%20subspace%20clustering.%20We%20propose%20a%20novel%20point-to-point%20pseudometric%20for%20points%20lying%20on%20a%20union%20of%20subspaces%20and%20show%20how%20this%20allows%20for%20the%20application%20of%20existing%20CQMs%20to%20the%20subspace%20clustering%20problem.%20We%20provide%20theoretical%20and%20empirical%20justification%20for%20the%20proposed%20point-to-point%20distance%2C%20and%20then%20demonstrate%20on%20a%20number%20of%20common%20benchmark%20datasets%20that%20our%20proposed%20methods%20generally%20outperform%20existing%20graph-based%20CQMs%20in%20terms%20of%20choosing%20the%20best%20clustering%20and%20the%20number%20of%20clusters.%22%2C%22date%22%3A%22March%2013%2C%202020%22%2C%22section%22%3A%22%22%2C%22partNumber%22%3A%22%22%
2C%22partTitle%22%3A%22%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.patcog.2020.107328%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS003132032030131X%22%2C%22PMID%22%3A%22%22%2C%22PMCID%22%3A%22%22%2C%22ISSN%22%3A%220031-3203%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22427SEM27%22%2C%226JKB3X7P%22%5D%2C%22dateModified%22%3A%222020-03-13T21%3A48%3A31Z%22%7D%7D%2C%7B%22key%22%3A%226ECWDABM%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Lyu%20et%20al.%22%2C%22parsedDate%22%3A%222020%22%2C%22numChildren%22%3A3%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLyu%2C%20H.%2C%20Needell%2C%20D.%2C%20%26amp%3B%20Balzano%2C%20L.%20%282020%29.%20Online%20matrix%20factorization%20for%20Markovian%20data%20and%20applications%20to%20Network%20Dictionary%20Learning.%20%26lt%3Bi%26gt%3BJournal%20of%20Machine%20Learning%20Research%26lt%3B%5C%2Fi%26gt%3B%2C%20%26lt%3Bi%26gt%3B21%26lt%3B%5C%2Fi%26gt%3B%28251%29%2C%201%26%23x2013%3B49.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttp%3A%5C%2F%5C%2Fjmlr.org%5C%2Fpapers%5C%2Fv21%5C%2F20-444.html%26%23039%3B%26gt%3Bhttp%3A%5C%2F%5C%2Fjmlr.org%5C%2Fpapers%5C%2Fv21%5C%2F20-444.html%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Online%20matrix%20factorization%20for%20Markovian%20data%20and%20applications%20to%20Network%20Dictionary%20Learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hanbaek%22%2C%22lastName%22%3A%22Lyu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Deanna%22%2C%22lastName%22
%3A%22Needell%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22date%22%3A%222020%22%2C%22section%22%3A%22%22%2C%22partNumber%22%3A%22%22%2C%22partTitle%22%3A%22%22%2C%22DOI%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Fjmlr.org%5C%2Fpapers%5C%2Fv21%5C%2F20-444.html%22%2C%22PMID%22%3A%22%22%2C%22PMCID%22%3A%22%22%2C%22ISSN%22%3A%221533-7928%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22427SEM27%22%5D%2C%22dateModified%22%3A%222020-12-20T20%3A04%3A05Z%22%7D%7D%2C%7B%22key%22%3A%22JTMLMDUI%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Thong%20et%20al.%22%2C%22parsedDate%22%3A%222020%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BThong%2C%20T.%2C%20Wang%2C%20Y.%2C%20Brooks%2C%20M.%20D.%2C%20Lee%2C%20C.%20T.%2C%20Scott%2C%20C.%2C%20Balzano%2C%20L.%2C%20Wicha%2C%20M.%20S.%2C%20%26amp%3B%20Colacino%2C%20J.%20A.%20%282020%29.%20Hybrid%20Stem%20Cell%20States%3A%20Insights%20Into%20the%20Relationship%20Between%20Mammary%20Development%20and%20Breast%20Cancer%20Using%20Single-Cell%20Transcriptomics.%20%26lt%3Bi%26gt%3BFrontiers%20in%20Cell%20and%20Developmental%20Biology%26lt%3B%5C%2Fi%26gt%3B%2C%20%26lt%3Bi%26gt%3B8%26lt%3B%5C%2Fi%26gt%3B.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.3389%5C%2Ffcell.2020.00288%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.3389%5C%2Ffcell.2020.00288%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Hybrid%20Stem%20Cell%20States%3A%20Ins
ights%20Into%20the%20Relationship%20Between%20Mammary%20Development%20and%20Breast%20Cancer%20Using%20Single-Cell%20Transcriptomics%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tasha%22%2C%22lastName%22%3A%22Thong%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yutong%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%20D.%22%2C%22lastName%22%3A%22Brooks%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Christopher%20T.%22%2C%22lastName%22%3A%22Lee%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Clayton%22%2C%22lastName%22%3A%22Scott%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Max%20S.%22%2C%22lastName%22%3A%22Wicha%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Justin%20A.%22%2C%22lastName%22%3A%22Colacino%22%7D%5D%2C%22abstractNote%22%3A%22Similarities%20between%20stem%20cells%20and%20cancer%20cells%20have%20implicated%20a%20role%20for%20mammary%20stem%20cells%20in%20breast%20carcinogenesis.%20Recent%20evidence%20suggests%20that%20normal%20breast%20stem%20cells%20exist%20in%20multiple%20phenotypic%20states%3A%20epithelial%2C%20mesenchymal%2C%20and%20hybrid%20epithelial%5C%2Fmesenchymal.%20The%20proportion%20of%20cells%20in%20these%20states%20vary%20between%20individuals%2C%20suggesting%20that%20state%20dynamics%20may%20be%20influenced%20by%20genetics%20or%20environment.%20Conditional%20reprogramming%20%28CR%29%2C%20an%20in%20vitro%20method%20of%20expanding%20patient%20derived%20tissue%20samples%2C%20promotes%20the%20rapid%20induction%20of%20a%20stem-like%20state%20in%20the%20absence%20of%20genetic%20manipulation.%20The%20goal%20of%20this%20study%20was%20to%20use%20single-cell%20RNA%20sequencing%20to%20quantify%20the%20cell%20state%20distributions%20of%20normal%20human%20ma
mmary%20%28NM%29%20cells%20isolated%20from%20patients%20undergoing%20voluntary%20reduction%20mammoplasty%20%28n%3D3%29%20before%20and%20after%20CR%2C%20investigating%20stem%20cell%20populations%2C%20and%20identifying%20gene%20and%20pathway%20drivers%20of%20stem%20cell%20phenotypes.%20Unbiased%20clustering%20revealed%20that%20post-CR%2C%20myoepithelial%20and%20luminal%20cell%20populations%20are%20retained%2C%20while%20fibroblast%2C%20endothelial%2C%20and%20immune%20cell%20populations%20are%20depleted.%20Compared%20to%20NM%20cells%2C%20CR%20cells%20show%20higher%20expression%20of%20an%20embryonic%20stem%20cell%20gene%20signature%20and%20differentially%20express%20stem%20cell%20and%20cancer%20related%20genes%20%28LGALS1%2C%20SKA2%2C%20MKI67%2C%20HJURP%2C%20BIRC5%2C%20CCNB1%2C%20and%20BUB1%29.%20Pseudotime%20analysis%20and%20alignment%20to%20a%20mouse%20single-cell%20transcriptome%20atlas%20spanning%20mammary%20gland%20development%20revealed%20that%20NM%20cells%20align%20most%20closely%20to%20adult%20mouse%20cells%20and%20CR%20cells%20align%20across%20the%20trajectory%20with%20a%20population%20aligning%20to%20the%20embryonic%20mouse%20cells.%20We%20identified%20the%20emergence%20of%20three%20hybrid%20populations%2C%20a%20KRT14%2B%5C%2FKRT18%2B%20population%20%28L%5C%2FB%29%2C%20consistent%20with%20luminal%20progenitor%20cells%2C%20an%20EPCAM%2B%5C%2FVIM%2B%20%28E%5C%2FM%29%20population%2C%20associated%20with%20cells%20undergoing%20the%20epithelial%20to%20mesenchymal%20transition%2C%20and%20a%20quadruple%20positive%20hybrid%20population%2C%20expressing%20all%20four%20markers.%20Pseudotime%20analysis%20and%20alignment%20to%20the%20mouse%20developmental%20trajectory%20revealed%20that%20E%5C%2FM%20hybrids%20are%20the%20most%20developmentally%20immature%2C%20aligning%20along%20both%20luminal%20and%20basal%20developmental%20trajectories.%20Finally%2C%20pseudotime%20analysis%20and%20alignment%20of%20bulk%20breast%20tumors%20from%20the%20cancer%20genome%20atlas%20%28TCGA%29%2C%
20revealed%20that%20breast%20cancer%20subtypes%20express%20distinct%20developmental%20signatures%2C%20with%20basal%20tumors%20expressing%20the%20most%20%5Cu201cdevelopmentally%20immature%5Cu201d%20phenotype.%20These%20results%20highlight%20phenotypic%20plasticity%20of%20normal%20mammary%20stem%20cells%20and%20provide%20insight%20into%20the%20relationship%20between%20hybrid%20cell%20populations%2C%20stemness%2C%20and%20cancer.%22%2C%22date%22%3A%222020%22%2C%22section%22%3A%22%22%2C%22partNumber%22%3A%22%22%2C%22partTitle%22%3A%22%22%2C%22DOI%22%3A%2210.3389%5C%2Ffcell.2020.00288%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Fwww.frontiersin.org%5C%2Farticles%5C%2F10.3389%5C%2Ffcell.2020.00288%5C%2Ffull%3F%26utm_source%3DEmail_to_authors_%26utm_medium%3DEmail%26utm_content%3DT1_11.5e1_author%26utm_campaign%3DEmail_publication%26field%3D%26journalName%3DFrontiers_in_Cell_and_Developmental_Biology%26id%3D520074%22%2C%22PMID%22%3A%22%22%2C%22PMCID%22%3A%22%22%2C%22ISSN%22%3A%222296-634X%22%2C%22language%22%3A%22English%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22427SEM27%22%5D%2C%22dateModified%22%3A%222020-05-13T15%3A55%3A56Z%22%7D%7D%5D%7D
Bower, A., & Balzano, L. (2020). Preference Modeling with Context-Dependent Salient Features. International Conference on Machine Learning, 1067–1077. https://proceedings.mlr.press/v119/bower20a.html
Gilman, K., & Balzano, L. (2020). Online Tensor Completion and Free Submodule Tracking With The T-SVD. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3282–3286. https://doi.org/10.1109/ICASSP40776.2020.9053199
Lipor, J., & Balzano, L. (2020). Clustering quality metrics for subspace clustering. Pattern Recognition, 107328. https://doi.org/10.1016/j.patcog.2020.107328
Lyu, H., Needell, D., & Balzano, L. (2020). Online matrix factorization for Markovian data and applications to Network Dictionary Learning. Journal of Machine Learning Research, 21(251), 1–49. http://jmlr.org/papers/v21/20-444.html
Thong, T., Wang, Y., Brooks, M. D., Lee, C. T., Scott, C., Balzano, L., Wicha, M. S., & Colacino, J. A. (2020). Hybrid Stem Cell States: Insights Into the Relationship Between Mammary Development and Breast Cancer Using Single-Cell Transcriptomics. Frontiers in Cell and Developmental Biology, 8. https://doi.org/10.3389/fcell.2020.00288

2019

Du, Z., Ozay, N., & Balzano, L. (2019). Mode Clustering for Markov Jump Systems. 2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 126–130. https://doi.org/10.1109/CAMSAP45676.2019.9022650
Hong, D., Balzano, L., & Fessler, J. A. (2019). Probabilistic PCA for Heteroscedastic Data. 2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 26–30. https://doi.org/10.1109/CAMSAP45676.2019.9022436
Hong, D., Lei, S., Mathieu, J. L., & Balzano, L. (2019). Exploration of tensor decomposition applied to commercial building baseline estimation. 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 1–5. https://doi.org/10.1109/GlobalSIP45357.2019.8969417
Ritchie, A., Scott, C., Balzano, L., Kessler, D., & Sripada, C. S. (2019). Supervised Principal Component Analysis Via Manifold Optimization. 2019 IEEE Data Science Workshop (DSW), 6–10. https://doi.org/10.1109/DSW.2019.8755587
Wang, Y., Thong, T., Saligrama, V., Colacino, J., Balzano, L., & Scott, C. (2019). A gene filter for comparative analysis of single-cell RNA-sequencing trajectory datasets. bioRxiv, 637488. https://doi.org/10.1101/637488
22%2C%22collections%22%3A%5B%22DZFDBB6V%22%5D%2C%22dateModified%22%3A%222019-05-17T19%3A58%3A30Z%22%7D%7D%2C%7B%22key%22%3A%22VFH24EK4%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Eftekhari%20et%20al.%22%2C%22parsedDate%22%3A%222019%22%2C%22numChildren%22%3A4%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BEftekhari%2C%20A.%2C%20Ongie%2C%20G.%2C%20Balzano%2C%20L.%2C%20%26amp%3B%20Wakin%2C%20M.%20B.%20%282019%29.%20Streaming%20Principal%20Component%20Analysis%20From%20Incomplete%20Data.%20%26lt%3Bi%26gt%3BJournal%20of%20Machine%20Learning%20Research%26lt%3B%5C%2Fi%26gt%3B%2C%20%26lt%3Bi%26gt%3B20%26lt%3B%5C%2Fi%26gt%3B%2886%29%2C%201%26%23x2013%3B62.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttp%3A%5C%2F%5C%2Fjmlr.org%5C%2Fpapers%5C%2Fv20%5C%2F16-627.html%26%23039%3B%26gt%3Bhttp%3A%5C%2F%5C%2Fjmlr.org%5C%2Fpapers%5C%2Fv20%5C%2F16-627.html%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Streaming%20Principal%20Component%20Analysis%20From%20Incomplete%20Data%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Armin%22%2C%22lastName%22%3A%22Eftekhari%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Gregory%22%2C%22lastName%22%3A%22Ongie%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%20B.%22%2C%22lastName%22%3A%22Wakin%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22date%22%3A%222019%22%2C%22section%22%3A%22%22%2C%22partNumber%22%3A%22%22%2C%22partTitle%22%3A%22%22%2C%22DOI%22%3A%22%22%2C%22citationKey%22%3
A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Fjmlr.org%5C%2Fpapers%5C%2Fv20%5C%2F16-627.html%22%2C%22PMID%22%3A%22%22%2C%22PMCID%22%3A%22%22%2C%22ISSN%22%3A%221533-7928%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22427SEM27%22%2C%22WUTP7CH6%22%5D%2C%22dateModified%22%3A%222024-06-25T14%3A38%3A14Z%22%7D%7D%2C%7B%22key%22%3A%228YCSPWJE%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Gilman%20and%20Balzano%22%2C%22parsedDate%22%3A%222019%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BGilman%2C%20K.%2C%20%26amp%3B%20Balzano%2C%20L.%20%282019%29.%20Panoramic%20Video%20Separation%20with%20Online%20Grassmannian%20Robust%20Subspace%20Estimation.%20%26lt%3Bi%26gt%3BProceedings%20of%20the%20IEEE%20International%20Conference%20on%20Computer%20Vision%20Workshops%26lt%3B%5C%2Fi%26gt%3B.%20Proceedings%20of%20the%20IEEE%20International%20Conference%20on%20Computer%20Vision%20Workshops.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttp%3A%5C%2F%5C%2Fopenaccess.thecvf.com%5C%2Fcontent_ICCVW_2019%5C%2Fhtml%5C%2FRSL-CV%5C%2FGilman_Panoramic_Video_Separation_with_Online_Grassmannian_Robust_Subspace_Estimation_ICCVW_2019_paper.html%26%23039%3B%26gt%3Bhttp%3A%5C%2F%5C%2Fopenaccess.thecvf.com%5C%2Fcontent_ICCVW_2019%5C%2Fhtml%5C%2FRSL-CV%5C%2FGilman_Panoramic_Video_Separation_with_Online_Grassmannian_Robust_Subspace_Estimation_ICCVW_2019_paper.html%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Panoramic%20Video%20Separation%20with%20Online%20Grassmannian%20Robust%20Subspace%20Estimation%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22
firstName%22%3A%22Kyle%22%2C%22lastName%22%3A%22Gilman%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%20IEEE%20International%20Conference%20on%20Computer%20Vision%20Workshops%22%2C%22conferenceName%22%3A%22Proceedings%20of%20the%20IEEE%20International%20Conference%20on%20Computer%20Vision%20Workshops%22%2C%22date%22%3A%222019%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Fopenaccess.thecvf.com%5C%2Fcontent_ICCVW_2019%5C%2Fhtml%5C%2FRSL-CV%5C%2FGilman_Panoramic_Video_Separation_with_Online_Grassmannian_Robust_Subspace_Estimation_ICCVW_2019_paper.html%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22WUTP7CH6%22%2C%227V94DTC4%22%2C%22ZA8QMDGD%22%2C%22UIWU664R%22%5D%2C%22dateModified%22%3A%222019-11-08T13%3A56%3A47Z%22%7D%7D%5D%7D
Du, Z., Ozay, N., & Balzano, L. (2019). Mode Clustering for Markov Jump Systems. 2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 126–130. https://doi.org/10.1109/CAMSAP45676.2019.9022650
Hong, D., Balzano, L., & Fessler, J. A. (2019). Probabilistic PCA for Heteroscedastic Data. 2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 26–30. https://doi.org/10.1109/CAMSAP45676.2019.9022436
Hong, D., Lei, S., Mathieu, J. L., & Balzano, L. (2019). Exploration of tensor decomposition applied to commercial building baseline estimation. 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 1–5. https://doi.org/10.1109/GlobalSIP45357.2019.8969417
Ritchie, A., Scott, C., Balzano, L., Kessler, D., & Sripada, C. S. (2019). Supervised Principal Component Analysis Via Manifold Optimization. 2019 IEEE Data Science Workshop (DSW), 6–10. https://doi.org/10.1109/DSW.2019.8755587
Wang, Y., Thong, T., Saligrama, V., Colacino, J., Balzano, L., & Scott, C. (2019). A gene filter for comparative analysis of single-cell RNA-sequencing trajectory datasets. bioRxiv, 637488. https://doi.org/10.1101/637488
Eftekhari, A., Ongie, G., Balzano, L., & Wakin, M. B. (2019). Streaming Principal Component Analysis From Incomplete Data. Journal of Machine Learning Research, 20(86), 1–62. http://jmlr.org/papers/v20/16-627.html
Gilman, K., & Balzano, L. (2019). Panoramic Video Separation with Online Grassmannian Robust Subspace Estimation. Proceedings of the IEEE International Conference on Computer Vision Workshops. http://openaccess.thecvf.com/content_ICCVW_2019/html/RSL-CV/Gilman_Panoramic_Video_Separation_with_Online_Grassmannian_Robust_Subspace_Estimation_ICCVW_2019_paper.html

2018

Gitlin, A., Tao, B., Balzano, L., & Lipor, J. (2018). Improving K-Subspaces via Coherence Pursuit. IEEE Journal of Selected Topics in Signal Processing, 12(6), 1575–1588. https://doi.org/10.1109/JSTSP.2018.2869363
Zhang, D., Zhao, T., & Balzano, L. (2018). Information Maximization Auto-Encoding. https://openreview.net/forum?id=SyVpB2RqFX
Hong, D., Balzano, L., & Fessler, J. A. (2018). Asymptotic performance of PCA for high-dimensional heteroscedastic data. Journal of Multivariate Analysis, 167, 435–452. https://doi.org/10.1016/j.jmva.2018.06.002
Hong, D., Malinas, R. P., Fessler, J. A., & Balzano, L. (2018). Learning Dictionary-Based Unions of Subspaces for Image Denoising. 2018 26th European Signal Processing Conference (EUSIPCO), 1597–1601. https://doi.org/10.23919/EUSIPCO.2018.8553117
Ledva, G. S., Balzano, L., & Mathieu, J. L. (2018). Exploring Connections Between a Multiple Model Kalman Filter and Dynamic Fixed Share with Applications to Demand Response. 2018 IEEE Conference on Control Technology and Applications (CCTA), 217–223. https://doi.org/10.1109/CCTA.2018.8511493
Ongie, G., Hong, D., Zhang, D., & Balzano, L. (2018). Online Estimation of Coherent Subspaces with Adaptive Sampling. 2018 IEEE Statistical Signal Processing Workshop (SSP), 841–845. https://doi.org/10.1109/SSP.2018.8450830
Zhang, D., Katz-Samuels, J., Figueiredo, M. A. T., & Balzano, L. (2018). Simultaneous Sparsity and Parameter Tying for Deep Learning Using Ordered Weighted ℓ1 Regularization. 2018 IEEE Statistical Signal Processing Workshop (SSP), 65–69. https://doi.org/10.1109/SSP.2018.8450819
2%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22ZA8QMDGD%22%5D%2C%22dateModified%22%3A%222020-05-13T16%3A00%3A50Z%22%7D%7D%2C%7B%22key%22%3A%22NR6ZND2K%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhang%20et%20al.%22%2C%22parsedDate%22%3A%222018-04-30%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZhang%2C%20D.%2C%20Wang%2C%20H.%2C%20Figueiredo%2C%20M.%2C%20%26amp%3B%20Balzano%2C%20L.%20%282018%29.%20Learning%20to%20Share%3A%20Simultaneous%20Parameter%20Tying%20and%20Sparsification%20in%20Deep%20Learning.%20%26lt%3Bi%26gt%3BInternational%20Conference%20on%20Learning%20Representations%20%28ICLR%29%26lt%3B%5C%2Fi%26gt%3B.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fopenreview.net%5C%2Fforum%3Fid%3DrypT3fb0b%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fopenreview.net%5C%2Fforum%3Fid%3DrypT3fb0b%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Learning%20to%20Share%3A%20Simultaneous%20Parameter%20Tying%20and%20Sparsification%20in%20Deep%20Learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dejiao%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Haozhu%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Mario%22%2C%22lastName%22%3A%22Figueiredo%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%5D%2C%22abstractNote%22%3A%22Deep%20neural%20networks%20%28DNNs%29%20usually%20contain%20millions%2C%20maybe%20billions%2C%20of%20para
meters%5C%2Fweights%2C%20making%20both%20storage%20and%20computation%20very%20expensive.%20This%20has%20motivated%20a%20large%20body%20of%20work%20to%20reduce...%22%2C%22date%22%3A%222018%5C%2F04%5C%2F30%22%2C%22section%22%3A%22%22%2C%22partNumber%22%3A%22%22%2C%22partTitle%22%3A%22%22%2C%22DOI%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fopenreview.net%5C%2Fforum%3Fid%3DrypT3fb0b%22%2C%22PMID%22%3A%22%22%2C%22PMCID%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22ZA8QMDGD%22%2C%22F7QF4T2Q%22%5D%2C%22dateModified%22%3A%222018-04-16T15%3A19%3A53Z%22%7D%7D%2C%7B%22key%22%3A%22BMJGJNQT%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Bower%20et%20al.%22%2C%22parsedDate%22%3A%222018-04%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BBower%2C%20A.%2C%20Jain%2C%20L.%2C%20%26amp%3B%20Balzano%2C%20L.%20%282018%29.%20The%20Landscape%20of%20Non-Convex%20Quadratic%20Feasibility.%20%26lt%3Bi%26gt%3B2018%20IEEE%20International%20Conference%20on%20Acoustics%2C%20Speech%20and%20Signal%20Processing%20%28ICASSP%29%26lt%3B%5C%2Fi%26gt%3B%2C%203974%26%23x2013%3B3978.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FICASSP.2018.8461868%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FICASSP.2018.8461868%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22The%20Landscape%20of%20Non-Convex%20Quadratic%20Feasibility%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Amanda%22%2C%22lastName%22%3A%22Bower%22%7D%
2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lalit%22%2C%22lastName%22%3A%22Jain%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%5D%2C%22abstractNote%22%3A%22Motivated%20by%20applications%20such%20as%20ordinal%20embedding%20and%20collaborative%20ranking%2C%20we%20formulate%20homogeneous%20quadratic%20feasibility%20as%20an%20unconstrained%2C%20non-convex%20minimization%20problem.%20Our%20work%20aims%20to%20understand%20the%20landscape%20%28local%20minimizers%20and%20global%20minimizers%29%20of%20the%20non-convex%20objective%2C%20which%20corresponds%20to%20hinge%20losses%20arising%20from%20quadratic%20constraints.%20Under%20certain%20assumptions%2C%20we%20give%20necessary%20conditions%20for%20non-global%2C%20local%20minimizers%20of%20our%20objective%20and%20additionally%20show%20that%20in%20two%20dimensions%2C%20every%20local%20minimizer%20is%20a%20global%20minimizer.%20Empirically%2C%20we%20demonstrate%20that%20finding%20feasible%20points%20by%20solving%20the%20unconstrained%20optimization%20problem%20with%20stochastic%20gradient%20descent%20works%20reliably%20by%20utilizing%20large%20initializations.%22%2C%22proceedingsTitle%22%3A%222018%20IEEE%20International%20Conference%20on%20Acoustics%2C%20Speech%20and%20Signal%20Processing%20%28ICASSP%29%22%2C%22conferenceName%22%3A%222018%20IEEE%20International%20Conference%20on%20Acoustics%2C%20Speech%20and%20Signal%20Processing%20%28ICASSP%29%22%2C%22date%22%3A%22April%202018%22%2C%22DOI%22%3A%2210.1109%5C%2FICASSP.2018.8461868%22%2C%22ISBN%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22ZA8QMDGD%22%5D%2C%22dateModified%22%3A%222020-05-13T15%3A59%3A41Z%22%7D%7D%2C%7B%22key%22%3A%22YBKIR8XD%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Du%20et%20al.%22%2C%22parsedDate%22%3A%222018-01-01%22%2C%22num
Children%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BDu%2C%20Z.%2C%20Balzano%2C%20L.%2C%20%26amp%3B%20Ozay%2C%20N.%20%282018%29.%20A%20Robust%20Algorithm%20for%20Online%20Switched%20System%20Identification.%20%26lt%3Bi%26gt%3BIFAC-PapersOnLine%26lt%3B%5C%2Fi%26gt%3B%2C%20%26lt%3Bi%26gt%3B51%26lt%3B%5C%2Fi%26gt%3B%2815%29%2C%20293%26%23x2013%3B298.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1016%5C%2Fj.ifacol.2018.09.150%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1016%5C%2Fj.ifacol.2018.09.150%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20Robust%20Algorithm%20for%20Online%20Switched%20System%20Identification%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhe%22%2C%22lastName%22%3A%22Du%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Necmiye%22%2C%22lastName%22%3A%22Ozay%22%7D%5D%2C%22abstractNote%22%3A%22In%20this%20paper%2C%20we%20consider%20the%20problem%20of%20online%20identification%20of%20Switched%20Au-toRegressive%20eXogenous%20%28SARX%29%20systems%2C%20where%20the%20goal%20is%20to%20estimate%20the%20parameters%20of%20each%20subsystem%20and%20identify%20the%20switching%20sequence%20as%20data%20are%20obtained%20in%20a%20streaming%20fashion.%20We%20propose%20a%20two-step%20algorithm%3A%20%28i%29%20every%20time%20we%20receive%20new%20data%2C%20we%20first%20assign%20this%20data%20to%20one%20candidate%20subsystem%20based%20on%20a%20novel%20robust%20criterion%20that%20incorporates%20both%20the%20residual
%20error%20and%20an%20upper%20bound%20of%20subsystem%20estimation%20error%2C%20and%20%28ii%29%20we%20use%20a%20randomized%20algorithm%20to%20update%20the%20parameter%20estimate%20of%20chosen%20candidate.%20We%20provide%20a%20theoretical%20guarantee%20on%20the%20local%20convergence%20of%20our%20algorithm.%20Though%20our%20theory%20only%20guarantees%20convergence%20with%20a%20good%20initialization%2C%20simulation%20results%20show%20that%20even%20with%20random%20initialization%2C%20our%20algorithm%20still%20has%20excellent%20performance.%20Finally%2C%20we%20show%2C%20through%20simulations%2C%20that%20our%20algorithm%20outperforms%20existing%20methods%20and%20exhibits%20robust%20performance.%22%2C%22date%22%3A%22January%201%2C%202018%22%2C%22section%22%3A%22%22%2C%22partNumber%22%3A%22%22%2C%22partTitle%22%3A%22%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.ifacol.2018.09.150%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS2405896318318111%22%2C%22PMID%22%3A%22%22%2C%22PMCID%22%3A%22%22%2C%22ISSN%22%3A%222405-8963%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22ZA8QMDGD%22%5D%2C%22dateModified%22%3A%222020-06-07T18%3A03%3A58Z%22%7D%7D%2C%7B%22key%22%3A%22ZDIV8IP3%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Ledva%20et%20al.%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A4%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLedva%2C%20G.%20S.%2C%20Balzano%2C%20L.%2C%20%26amp%3B%20Mathieu%2C%20J.%20L.%20%282018%29.%20Real-Time%20Energy%20Disaggregation%20of%20a%20Distribution%20Feeder%26%23x2019%3Bs%20Demand%20Using%20Online%20Learning.%20%26lt%3Bi%26gt%3BIEEE%20Transactions%20on%20Power%20Systems%26lt%3B%5C%2Fi%26gt%3B%2
C%201%26%23x2013%3B1.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FTPWRS.2018.2800535%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FTPWRS.2018.2800535%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Real-Time%20Energy%20Disaggregation%20of%20a%20Distribution%20Feeder%27s%20Demand%20Using%20Online%20Learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22G.%20S.%22%2C%22lastName%22%3A%22Ledva%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22L.%22%2C%22lastName%22%3A%22Balzano%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22J.%20L.%22%2C%22lastName%22%3A%22Mathieu%22%7D%5D%2C%22abstractNote%22%3A%22Though%20distribution%20system%20operators%20have%20been%20adding%20more%20sensors%20to%20their%20networks%2C%20they%20still%20often%20lack%20an%20accurate%20real-time%20picture%20of%20the%20behavior%20of%20distributed%20energy%20resources%20such%20as%20demand%20responsive%20electric%20loads%20and%20residential%20solar%20generation.%20Such%20information%20could%20improve%20system%20reliability%2C%20economic%20efficiency%2C%20and%20environmental%20impact.%20Rather%20than%20installing%20additional%2C%20costly%20sensing%20and%20communication%20infrastructure%20to%20obtain%20additional%20real-time%20information%2C%20it%20may%20be%20possible%20to%20use%20existing%20sensing%20capabilities%20and%20leverage%20knowledge%20about%20the%20system%20to%20reduce%20the%20need%20for%20new%20infrastructure.%20In%20this%20paper%2C%20we%20disaggregate%20a%20distribution%20feeder%26%23039%3Bs%20demand%20measurements%20into%3A%201%29%20the%20demand%20of%20a%20population%20of%20air%20conditioners%2C%20and%202%29%20the%20demand%20of%20the%20remaining%20loads%20connected%20to%20the%20feeder.%20We%20use%20an%20online%20learnin
g%20algorithm%2C%20Dynamic%20Fixed%20Share%20%28DFS%29%2C%20that%20uses%20the%20real-time%20distribution%20feeder%20measurements%20as%20well%20as%20models%20generated%20from%20historical%20building-%20and%20device-level%20data.%20We%20develop%20two%20implementations%20of%20the%20algorithm%20and%20conduct%20case%20studies%20using%20real%20demand%20data%20from%20households%20and%20commercial%20buildings%20to%20investigate%20the%20effectiveness%20of%20the%20algorithm.%20The%20case%20studies%20demonstrate%20that%20DFS%20can%20effectively%20perform%20online%20disaggregation%20and%20the%20choice%20and%20construction%20of%20models%20included%20in%20the%20algorithm%20affects%20its%20accuracy%2C%20which%20is%20comparable%20to%20that%20of%20a%20set%20of%20Kalman%20filters.%22%2C%22date%22%3A%222018%22%2C%22section%22%3A%22%22%2C%22partNumber%22%3A%22%22%2C%22partTitle%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FTPWRS.2018.2800535%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22%22%2C%22PMID%22%3A%22%22%2C%22PMCID%22%3A%22%22%2C%22ISSN%22%3A%220885-8950%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%5D%2C%22dateModified%22%3A%222024-06-25T14%3A38%3A33Z%22%7D%7D%2C%7B%22key%22%3A%22FKPFDDE8%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Balzano%20et%20al.%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A4%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BBalzano%2C%20L.%2C%20Chi%2C%20Y.%2C%20%26amp%3B%20Lu%2C%20Y.%20M.%20%282018%29.%20Streaming%20PCA%20and%20Subspace%20Tracking%3A%20The%20Missing%20Data%20Case.%20%26lt%3Bi%26gt%3BProceedings%20of%20the%20IEEE%26lt%3B%5C%2Fi%26gt%3B%2C%201%26%23x2013%3B18.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.or
g%5C%2F10.1109%5C%2FJPROC.2018.2847041%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FJPROC.2018.2847041%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Streaming%20PCA%20and%20Subspace%20Tracking%3A%20The%20Missing%20Data%20Case%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22L.%22%2C%22lastName%22%3A%22Balzano%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Y.%22%2C%22lastName%22%3A%22Chi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Y.%20M.%22%2C%22lastName%22%3A%22Lu%22%7D%5D%2C%22abstractNote%22%3A%22For%20many%20modern%20applications%20in%20science%20and%20engineering%2C%20data%20are%20collected%20in%20a%20streaming%20fashion%20carrying%20time-varying%20information%2C%20and%20practitioners%20need%20to%20process%20them%20with%20a%20limited%20amount%20of%20memory%20and%20computational%20resources%20in%20a%20timely%20manner%20for%20decision%20making.%20This%20often%20is%20coupled%20with%20the%20missing%20data%20problem%2C%20such%20that%20only%20a%20small%20fraction%20of%20data%20attributes%20are%20observed.%20These%20complications%20impose%20significant%2C%20and%20unconventional%2C%20constraints%20on%20the%20problem%20of%20streaming%20principal%20component%20analysis%20%28PCA%29%20and%20subspace%20tracking%2C%20which%20is%20an%20essential%20building%20block%20for%20many%20inference%20tasks%20in%20signal%20processing%20and%20machine%20learning.%20This%20survey%20article%20reviews%20a%20variety%20of%20classical%20and%20recent%20algorithms%20for%20solving%20this%20problem%20with%20low%20computational%20and%20memory%20complexities%2C%20particularly%20those%20applicable%20in%20the%20big%20data%20regime%20with%20missing%20data.%20We%20illustrate%20that%20streaming%20PCA%20and%20subspace%20tracking%20algorithms%20can%20be%20understood%20through%20algebraic%20and%20geometri
c%20perspectives%2C%20and%20they%20need%20to%20be%20adjusted%20carefully%20to%20handle%20missing%20data.%20Both%20asymptotic%20and%20nonasymptotic%20convergence%20guarantees%20are%20reviewed.%20Finally%2C%20we%20benchmark%20the%20performance%20of%20several%20competitive%20algorithms%20in%20the%20presence%20of%20missing%20data%20for%20both%20well-conditioned%20and%20ill-conditioned%20systems.%22%2C%22date%22%3A%222018%22%2C%22section%22%3A%22%22%2C%22partNumber%22%3A%22%22%2C%22partTitle%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FJPROC.2018.2847041%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22%22%2C%22PMID%22%3A%22%22%2C%22PMCID%22%3A%22%22%2C%22ISSN%22%3A%220018-9219%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22427SEM27%22%2C%227V94DTC4%22%2C%22HJQ26QYG%22%5D%2C%22dateModified%22%3A%222024-06-25T14%3A38%3A24Z%22%7D%7D%2C%7B%22key%22%3A%22HNSM7SLA%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Ledva%20et%20al.%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLedva%2C%20G.%20S.%2C%20Du%2C%20Z.%2C%20Balzano%2C%20L.%2C%20%26amp%3B%20Mathieu%2C%20J.%20L.%20%282018%29.%20Disaggregating%20Load%20by%20Type%20from%20Distribution%20System%20Measurements%20in%20Real%20Time.%20In%20S.%20Meyn%2C%20T.%20Samad%2C%20I.%20Hiskens%2C%20%26amp%3B%20J.%20Stoustrup%20%28Eds.%29%2C%20%26lt%3Bi%26gt%3BEnergy%20Markets%20and%20Responsive%20Grids%26lt%3B%5C%2Fi%26gt%3B%20%28Vol.%20162%2C%20pp.%20413%26%23x2013%3B437%29.%20Springer%20New%20York.%20https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-1-4939-7822-9_17%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22bookSection%22%2C%22title%22%3A%22Disaggregating%20Lo
ad%20by%20Type%20from%20Distribution%20System%20Measurements%20in%20Real%20Time%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Sean%22%2C%22lastName%22%3A%22Meyn%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Tariq%22%2C%22lastName%22%3A%22Samad%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Ian%22%2C%22lastName%22%3A%22Hiskens%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Jakob%22%2C%22lastName%22%3A%22Stoustrup%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Gregory%20S.%22%2C%22lastName%22%3A%22Ledva%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhe%22%2C%22lastName%22%3A%22Du%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Johanna%20L.%22%2C%22lastName%22%3A%22Mathieu%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22bookTitle%22%3A%22Energy%20Markets%20and%20Responsive%20Grids%22%2C%22date%22%3A%222018%22%2C%22originalDate%22%3A%22%22%2C%22originalPublisher%22%3A%22%22%2C%22originalPlace%22%3A%22%22%2C%22format%22%3A%22%22%2C%22ISBN%22%3A%22978-1-4939-7821-2%20978-1-4939-7822-9%22%2C%22DOI%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Flink.springer.com%5C%2F10.1007%5C%2F978-1-4939-7822-9_17%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%5D%2C%22dateModified%22%3A%222021-10-08T01%3A25%3A19Z%22%7D%7D%2C%7B%22key%22%3A%22JBK99EYA%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Ongie%20et%20al.%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26g
t%3BOngie%2C%20G.%2C%20Murthy%2C%20N.%2C%20Balzano%2C%20L.%2C%20%26amp%3B%20Fessler%2C%20J.%20A.%20%282018%29.%20A%20Memory-efficient%20Algorithm%20for%20Large-scale%20Sparsity%20Regularized%20Image%20Reconstruction.%20%26lt%3Bi%26gt%3BProceedings%20of%20the%20International%20Conference%20on%20Image%20Formation%20in%20X-Ray%20Computed%20Tomography%26lt%3B%5C%2Fi%26gt%3B.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttp%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1904.00423%26%23039%3B%26gt%3Bhttp%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1904.00423%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20Memory-efficient%20Algorithm%20for%20Large-scale%20Sparsity%20Regularized%20Image%20Reconstruction%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Greg%22%2C%22lastName%22%3A%22Ongie%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Naveen%22%2C%22lastName%22%3A%22Murthy%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jeffrey%20A.%22%2C%22lastName%22%3A%22Fessler%22%7D%5D%2C%22abstractNote%22%3A%22We%20derive%20a%20memory-efficient%20first-order%20variable%20splitting%20algorithm%20for%20convex%20image%20reconstruction%20problems%20with%20non-smooth%20regularization%20terms.%20The%20algorithm%20is%20based%20on%20a%20primal-dual%20approach%2C%20where%20one%20of%20the%20dual%20variables%20is%20updated%20using%20a%20step%20of%20the%20Frank-Wolfe%20algorithm%2C%20rather%20than%20the%20typical%20proximal%20point%20step%20used%20in%20other%20primal-dual%20algorithms.%20We%20show%20in%20certain%20cases%20this%20results%20in%20an%20algorithm%20with%20far%20less%20memory%20demand%20than%20other%20first-order%20methods%20based%20on%20proximal%20mappings.%20We%20demonstrate%20th
e%20algorithm%20on%20the%20problem%20of%20sparse-view%20X-ray%20computed%20tomography%20%28CT%29%20reconstruction%20with%20non-smooth%20edge-preserving%20regularization%20and%20show%20competitive%20run-time%20with%20other%20state-of-the-art%20algorithms%20while%20using%20much%20less%20memory.%22%2C%22date%22%3A%222018%22%2C%22section%22%3A%22%22%2C%22partNumber%22%3A%22%22%2C%22partTitle%22%3A%22%22%2C%22DOI%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1904.00423%22%2C%22PMID%22%3A%22%22%2C%22PMCID%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%5D%2C%22dateModified%22%3A%222020-01-09T15%3A05%3A18Z%22%7D%7D%5D%7D
Gitlin, A., Tao, B., Balzano, L., & Lipor, J. (2018). Improving K-Subspaces via Coherence Pursuit. IEEE Journal of Selected Topics in Signal Processing, 12(6), 1575–1588. https://doi.org/10.1109/JSTSP.2018.2869363
Zhang, D., Zhao, T., & Balzano, L. (2018). Information Maximization Auto-Encoding. https://openreview.net/forum?id=SyVpB2RqFX
Hong, D., Balzano, L., & Fessler, J. A. (2018). Asymptotic performance of PCA for high-dimensional heteroscedastic data. Journal of Multivariate Analysis, 167, 435–452. https://doi.org/10.1016/j.jmva.2018.06.002
Hong, D., Malinas, R. P., Fessler, J. A., & Balzano, L. (2018). Learning Dictionary-Based Unions of Subspaces for Image Denoising. 2018 26th European Signal Processing Conference (EUSIPCO), 1597–1601. https://doi.org/10.23919/EUSIPCO.2018.8553117
Ledva, G. S., Balzano, L., & Mathieu, J. L. (2018). Exploring Connections Between a Multiple Model Kalman Filter and Dynamic Fixed Share with Applications to Demand Response. 2018 IEEE Conference on Control Technology and Applications (CCTA), 217–223. https://doi.org/10.1109/CCTA.2018.8511493
Ongie, G., Hong, D., Zhang, D., & Balzano, L. (2018). Online Estimation of Coherent Subspaces with Adaptive Sampling. 2018 IEEE Statistical Signal Processing Workshop (SSP), 841–845. https://doi.org/10.1109/SSP.2018.8450830
Zhang, D., Katz-Samuels, J., Figueiredo, M. A. T., & Balzano, L. (2018). Simultaneous Sparsity and Parameter Tying for Deep Learning Using Ordered Weighted ℓ1 Regularization. 2018 IEEE Statistical Signal Processing Workshop (SSP), 65–69. https://doi.org/10.1109/SSP.2018.8450819
Zhang, D., Wang, H., Figueiredo, M., & Balzano, L. (2018). Learning to Share: Simultaneous Parameter Tying and Sparsification in Deep Learning. International Conference on Learning Representations (ICLR). https://openreview.net/forum?id=rypT3fb0b
Bower, A., Jain, L., & Balzano, L. (2018). The Landscape of Non-Convex Quadratic Feasibility. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3974–3978. https://doi.org/10.1109/ICASSP.2018.8461868
Du, Z., Balzano, L., & Ozay, N. (2018). A Robust Algorithm for Online Switched System Identification. IFAC-PapersOnLine, 51(15), 293–298. https://doi.org/10.1016/j.ifacol.2018.09.150
Ledva, G. S., Balzano, L., & Mathieu, J. L. (2018). Real-Time Energy Disaggregation of a Distribution Feeder’s Demand Using Online Learning. IEEE Transactions on Power Systems, 1–1. https://doi.org/10.1109/TPWRS.2018.2800535
Balzano, L., Chi, Y., & Lu, Y. M. (2018). Streaming PCA and Subspace Tracking: The Missing Data Case. Proceedings of the IEEE, 1–18. https://doi.org/10.1109/JPROC.2018.2847041
Ledva, G. S., Du, Z., Balzano, L., & Mathieu, J. L. (2018). Disaggregating Load by Type from Distribution System Measurements in Real Time. In S. Meyn, T. Samad, I. Hiskens, & J. Stoustrup (Eds.), Energy Markets and Responsive Grids (Vol. 162, pp. 413–437). Springer New York. https://doi.org/10.1007/978-1-4939-7822-9_17
Ongie, G., Murthy, N., Balzano, L., & Fessler, J. A. (2018). A Memory-efficient Algorithm for Large-scale Sparsity Regularized Image Reconstruction. Proceedings of the International Conference on Image Formation in X-Ray Computed Tomography. http://arxiv.org/abs/1904.00423

2017

Zhang, D., Sun, Y., Eriksson, B., & Balzano, L. (2017). Deep Unsupervised Clustering Using Mixture of Autoencoders. arXiv:1712.07788 [Cs, Stat]. http://arxiv.org/abs/1712.07788
Ongie, G., Dewangan, S., Fessler, J. A., & Balzano, L. (2017). Online dynamic MRI reconstruction via robust subspace tracking. 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 1180–1184. https://doi.org/10.1109/GlobalSIP.2017.8309147
ational%20demand.%20The%20dramatic%20memory%20savings%20allows%20robust%20subspace-based%20methods%20to%20be%20applied%20to%20much%20larger%20datasets%20than%20previously%20allowed.%22%2C%22proceedingsTitle%22%3A%222017%20IEEE%20Global%20Conference%20on%20Signal%20and%20Information%20Processing%20%28GlobalSIP%29%22%2C%22conferenceName%22%3A%222017%20IEEE%20Global%20Conference%20on%20Signal%20and%20Information%20Processing%20%28GlobalSIP%29%22%2C%22date%22%3A%222017-11%22%2C%22DOI%22%3A%2210.1109%5C%2FGlobalSIP.2017.8309147%22%2C%22ISBN%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%5D%2C%22dateModified%22%3A%222021-12-09T22%3A01%3A04Z%22%7D%7D%2C%7B%22key%22%3A%22DTH4F5EN%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Lipor%20et%20al.%22%2C%22parsedDate%22%3A%222017-10%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLipor%2C%20J.%2C%20Wong%2C%20B.%20P.%2C%20Scavia%2C%20D.%2C%20Kerkez%2C%20B.%2C%20%26amp%3B%20Balzano%2C%20L.%20%282017%29.%20Distance-Penalized%20Active%20Learning%20Using%20Quantile%20Search.%20%26lt%3Bi%26gt%3BIEEE%20Transactions%20on%20Signal%20Processing%26lt%3B%5C%2Fi%26gt%3B%2C%20%26lt%3Bi%26gt%3B65%26lt%3B%5C%2Fi%26gt%3B%2820%29%2C%205453%26%23x2013%3B5465.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FTSP.2017.2731323%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FTSP.2017.2731323%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Distance-Penalized%20Active%20Learning%20Using%2
0Quantile%20Search%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22J.%22%2C%22lastName%22%3A%22Lipor%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22B.%20P.%22%2C%22lastName%22%3A%22Wong%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22D.%22%2C%22lastName%22%3A%22Scavia%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22B.%22%2C%22lastName%22%3A%22Kerkez%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22L.%22%2C%22lastName%22%3A%22Balzano%22%7D%5D%2C%22abstractNote%22%3A%22Adaptive%20sampling%20theory%20has%20shown%20that%2C%20with%20proper%20assumptions%20on%20the%20signal%20class%2C%20algorithms%20exist%20to%20reconstruct%20a%20signal%20in%20%24mathbb%20R%5Ed%24%20with%20an%20optimal%20number%20of%20samples.%20We%20generalize%20this%20problem%20to%20the%20case%20of%20spatial%20signals%2C%20where%20the%20sampling%20cost%20is%20a%20function%20of%20both%20the%20number%20of%20samples%20taken%20and%20the%20distance%20traveled%20during%20estimation.%20This%20is%20motivated%20by%20our%20work%20studying%20regions%20of%20low%20oxygen%20concentration%20in%20the%20Great%20Lakes.%20We%20show%20that%20for%20one-dimensional%20threshold%20classifiers%2C%20a%20tradeoff%20between%20the%20number%20of%20samples%20taken%20and%20distance%20traveled%20can%20be%20achieved%20using%20a%20generalization%20of%20binary%20search%2C%20which%20we%20refer%20to%20as%20quantile%20search.%20We%20characterize%20both%20the%20estimation%20error%20after%20a%20fixed%20number%20of%20samples%20and%20the%20distance%20traveled%20in%20the%20noiseless%20case%2C%20as%20well%20as%20the%20estimation%20error%20in%20the%20case%20of%20noisy%20measurements.%20We%20illustrate%20our%20results%20in%20both%20simulations%20and%20experiments%20and%20show%20that%20our%20method%20outperforms%20existing%20algorithms%20in%20a%20large%20range%20of%20sampling%20scenarios.%22%2C%22date%22%3A%22October%202017%22%2C%22section%
22%3A%22%22%2C%22partNumber%22%3A%22%22%2C%22partTitle%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FTSP.2017.2731323%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22%22%2C%22PMID%22%3A%22%22%2C%22PMCID%22%3A%22%22%2C%22ISSN%22%3A%221053-587X%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22427SEM27%22%2C%22UIWU664R%22%2C%22E36HLPSJ%22%5D%2C%22dateModified%22%3A%222018-02-16T18%3A44%3A47Z%22%7D%7D%2C%7B%22key%22%3A%223D4RNA88%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Pimentel-Alarc%5Cu00f3n%20et%20al.%22%2C%22parsedDate%22%3A%222017-10%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BPimentel-Alarc%26%23xF3%3Bn%2C%20D.%2C%20Ongie%2C%20G.%2C%20Balzano%2C%20L.%2C%20Willett%2C%20R.%2C%20%26amp%3B%20Nowak%2C%20R.%20%282017%29.%20Low%20algebraic%20dimension%20matrix%20completion.%20%26lt%3Bi%26gt%3B2017%2055th%20Annual%20Allerton%20Conference%20on%20Communication%2C%20Control%2C%20and%20Computing%20%28Allerton%29%26lt%3B%5C%2Fi%26gt%3B%2C%20790%26%23x2013%3B797.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FALLERTON.2017.8262820%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FALLERTON.2017.8262820%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Low%20algebraic%20dimension%20matrix%20completion%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22D.%22%2C%22lastName%22%3A%22Pimentel-Alarc%5Cu00f3n%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22G.%22%2C%22lastName%22%3A%22Ongie%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22fi
rstName%22%3A%22L.%22%2C%22lastName%22%3A%22Balzano%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22R.%22%2C%22lastName%22%3A%22Willett%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22R.%22%2C%22lastName%22%3A%22Nowak%22%7D%5D%2C%22abstractNote%22%3A%22Low%20rank%20matrix%20completion%20%28LRMC%29%20has%20received%20tremendous%20attention%20in%20recent%20years.%20The%20low%20rank%20assumption%20means%20that%20the%20columns%20%28or%20rows%29%20of%20the%20matrix%20to%20be%20completed%20are%20points%20on%20a%20low-dimensional%20linear%20variety.%20This%20paper%20extends%20this%20thinking%20to%20cases%20where%20the%20columns%20are%20points%20on%20low-dimensional%20nonlinear%20algebraic%20varieties.%20While%20others%20have%20recently%20studied%20matrix%20completion%20in%20such%20settings%2C%20existing%20results%20focus%20mainly%20on%20algorithms%20and%20experiments%2C%20without%20supporting%20theory.%20This%20paper%20proposes%20a%20new%20approach%20to%20what%20we%20call%20Low%20Algebraic-Dimension%20Matrix%20Completion%20%28LADMC%29.%20We%20propose%20a%20new%20LADMC%20algorithm%20that%20leverages%20existing%20LRMC%20methods%20on%20a%20tensorized%20representation%20of%20the%20data.%20We%20also%20provide%20a%20formal%20mathematical%20justification%20for%20the%20success%20of%20our%20method.%20In%20particular%2C%20the%20new%20algorithm%20can%20succeed%20in%20many%20cases%20where%20traditional%20LRMC%20is%20guaranteed%20to%20fail.%20We%20also%20provide%20experimental%20results%20showing%20that%20the%20new%20approach%20significantly%20outperforms%20existing%20state-of-the-art%20methods%20for%20matrix%20completion%20in%20many%20situations.%22%2C%22proceedingsTitle%22%3A%222017%2055th%20Annual%20Allerton%20Conference%20on%20Communication%2C%20Control%2C%20and%20Computing%20%28Allerton%29%22%2C%22conferenceName%22%3A%222017%2055th%20Annual%20Allerton%20Conference%20on%20Communication%2C%20Control%2C%20and%20Computing%20%28Allerton%29%22%2C%22
date%22%3A%22October%202017%22%2C%22DOI%22%3A%2210.1109%5C%2FALLERTON.2017.8262820%22%2C%22ISBN%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22WUTP7CH6%22%2C%22ZA8QMDGD%22%2C%226JKB3X7P%22%5D%2C%22dateModified%22%3A%222018-02-16T18%3A27%3A05Z%22%7D%7D%2C%7B%22key%22%3A%223Z788T46%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Ongie%20et%20al.%22%2C%22parsedDate%22%3A%222017-07-17%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BOngie%2C%20G.%2C%20Willett%2C%20R.%2C%20Nowak%2C%20R.%20D.%2C%20%26amp%3B%20Balzano%2C%20L.%20%282017%29.%20Algebraic%20Variety%20Models%20for%20High-Rank%20Matrix%20Completion.%20%26lt%3Bi%26gt%3BProceedings%20of%20the%2034th%20International%20Conference%20on%20Machine%20Learning%26lt%3B%5C%2Fi%26gt%3B%2C%202691%26%23x2013%3B2700.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fproceedings.mlr.press%5C%2Fv70%5C%2Fongie17a.html%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fproceedings.mlr.press%5C%2Fv70%5C%2Fongie17a.html%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Algebraic%20Variety%20Models%20for%20High-Rank%20Matrix%20Completion%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Greg%22%2C%22lastName%22%3A%22Ongie%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rebecca%22%2C%22lastName%22%3A%22Willett%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Robert%20D.%22%2C%22lastName%22%3A%22Nowak%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22fi
rstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%5D%2C%22abstractNote%22%3A%22We%20consider%20a%20non-linear%20generalization%20of%20low-rank%20matrix%20completion%20to%20the%20case%20where%20the%20data%20belongs%20to%20an%20algebraic%20variety%2C%20i.e.%2C%20each%20data%20point%20is%20a%20solution%20to%20a%20system%20of%20polynomial%20equations.%20In%20this%20case%20the%20original%20matrix%20is%20possibly%20high-rank%2C%20but%20it%20becomes%20low-rank%20after%20mapping%20each%20column%20to%20a%20higher%20dimensional%20space%20of%20monomial%20features.%20Algebraic%20varieties%20capture%20a%20range%20of%20well-studied%20linear%20models%2C%20including%20affine%20subspaces%20and%20their%20union%2C%20but%20also%20quadratic%20and%20higher%20degree%20curves%20and%20surfaces.%20We%20study%20the%20sampling%20requirements%20for%20a%20general%20variety%20model%20with%20a%20focus%20on%20the%20union%20of%20affine%20subspaces.%20We%20propose%20an%20efficient%20matrix%20completion%20algorithm%20that%20minimizes%20a%20convex%20or%20non-convex%20surrogate%20of%20the%20rank%20of%20the%20lifted%20matrix.%20Our%20algorithm%20uses%20the%20well-known%20%5Cu201ckernel%20trick%5Cu201d%20to%20avoid%20working%20directly%20with%20the%20high-dimensional%20lifted%20data%20matrix%20and%20scales%20efficiently%20with%20data%20size.%20We%20show%20the%20proposed%20algorithm%20is%20able%20to%20recover%20synthetically%20generated%20data%20up%20to%20the%20predicted%20sampling%20complexity%20bounds.%20The%20algorithm%20also%20outperforms%20standard%20techniques%20in%20experiments%20with%20real%20data.%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%2034th%20International%20Conference%20on%20Machine%20Learning%22%2C%22conferenceName%22%3A%22International%20Conference%20on%20Machine%20Learning%22%2C%22date%22%3A%222017-07-17%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fproceedings.mlr.press%5C%2Fv70%5C%2Fongie17a.html%
22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%5D%2C%22dateModified%22%3A%222024-05-03T10%3A43%3A05Z%22%7D%7D%2C%7B%22key%22%3A%22DGINJBUB%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Lipor%20and%20Balzano%22%2C%22parsedDate%22%3A%222017-07-17%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLipor%2C%20J.%2C%20%26amp%3B%20Balzano%2C%20L.%20%282017%29.%20Leveraging%20Union%20of%20Subspace%20Structure%20to%20Improve%20Constrained%20Clustering.%20%26lt%3Bi%26gt%3BProceedings%20of%20the%2034th%20International%20Conference%20on%20Machine%20Learning%26lt%3B%5C%2Fi%26gt%3B%2C%202130%26%23x2013%3B2139.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fproceedings.mlr.press%5C%2Fv70%5C%2Flipor17a.html%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fproceedings.mlr.press%5C%2Fv70%5C%2Flipor17a.html%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Leveraging%20Union%20of%20Subspace%20Structure%20to%20Improve%20Constrained%20Clustering%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22John%22%2C%22lastName%22%3A%22Lipor%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%5D%2C%22abstractNote%22%3A%22Many%20clustering%20problems%20in%20computer%20vision%20and%20other%20contexts%20are%20also%20classification%20problems%2C%20where%20each%20cluster%20shares%20a%20meaningful%20label.%20Subspace%20clustering%20algorithms%20in%20particular%20are%20often%20applied%20to%20problems%20that%20fit%20this%20description%2C%20for%20example%2
0with%20face%20images%20or%20handwritten%20digits.%20While%20it%20is%20straightforward%20to%20request%20human%20input%20on%20these%20datasets%2C%20our%20goal%20is%20to%20reduce%20this%20input%20as%20much%20as%20possible.%20We%20present%20a%20pairwise-constrained%20clustering%20algorithm%20that%20actively%20selects%20queries%20based%20on%20the%20union-of-subspaces%20model.%20The%20central%20step%20of%20the%20algorithm%20is%20in%20querying%20points%20of%20minimum%20margin%20between%20estimated%20subspaces%3B%20analogous%20to%20classifier%20margin%2C%20these%20lie%20near%20the%20decision%20boundary.%20We%20prove%20that%20points%20lying%20near%20the%20intersection%20of%20subspaces%20are%20points%20with%20low%20margin.%20Our%20procedure%20can%20be%20used%20after%20any%20subspace%20clustering%20algorithm%20that%20outputs%20an%20affinity%20matrix.%20We%20demonstrate%20on%20several%20datasets%20that%20our%20algorithm%20drives%20the%20clustering%20error%20down%20considerably%20faster%20than%20the%20state-of-the-art%20active%20query%20algorithms%20on%20datasets%20with%20subspace%20structure%20and%20is%20competitive%20on%20other%20datasets.%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%2034th%20International%20Conference%20on%20Machine%20Learning%22%2C%22conferenceName%22%3A%22International%20Conference%20on%20Machine%20Learning%22%2C%22date%22%3A%222017-07-17%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fproceedings.mlr.press%5C%2Fv70%5C%2Flipor17a.html%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%5D%2C%22dateModified%22%3A%222024-05-03T10%3A42%3A28Z%22%7D%7D%2C%7B%22key%22%3A%22KP7GFVDR%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Pimentel-Alarc%5Cu00f3n%20et%20al.%22%2C%22parsedDate%22%3A%222017-07%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3
D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BPimentel-Alarc%26%23xF3%3Bn%2C%20D.%2C%20Balzano%2C%20L.%2C%20Marcia%2C%20R.%2C%20Nowak%2C%20R.%2C%20%26amp%3B%20Willett%2C%20R.%20%282017%29.%20Mixture%20regression%20as%20subspace%20clustering.%20%26lt%3Bi%26gt%3B2017%20International%20Conference%20on%20Sampling%20Theory%20and%20Applications%20%28SampTA%29%26lt%3B%5C%2Fi%26gt%3B%2C%20456%26%23x2013%3B459.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FSAMPTA.2017.8024386%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FSAMPTA.2017.8024386%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Mixture%20regression%20as%20subspace%20clustering%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22D.%22%2C%22lastName%22%3A%22Pimentel-Alarc%5Cu00f3n%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22L.%22%2C%22lastName%22%3A%22Balzano%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22R.%22%2C%22lastName%22%3A%22Marcia%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22R.%22%2C%22lastName%22%3A%22Nowak%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22R.%22%2C%22lastName%22%3A%22Willett%22%7D%5D%2C%22abstractNote%22%3A%22In%20this%20paper%20we%20show%20that%20observations%20in%20a%20mixture%20can%20be%20modeled%20using%20a%20union%20of%20subspaces%2C%20and%20hence%20mixture%20regression%20can%20be%20posed%20as%20a%20subspace%20clustering%20problem.%20This%20allows%20to%20perform%20mixture%20regression%20even%20in%20the%20presence%20of%20missing%20data.%20We%20illustrate%20this%20using%20a%20state-of-the-art%20subspace%20clustering%20algorithm%20for%20incomplete%20data%
20to%20perform%20mixed%20linear%20regression%20on%20gene%20functional%20data.%20Our%20approach%20outperforms%20existing%20methods%20on%20this%20task.%22%2C%22proceedingsTitle%22%3A%222017%20International%20Conference%20on%20Sampling%20Theory%20and%20Applications%20%28SampTA%29%22%2C%22conferenceName%22%3A%222017%20International%20Conference%20on%20Sampling%20Theory%20and%20Applications%20%28SampTA%29%22%2C%22date%22%3A%22July%202017%22%2C%22DOI%22%3A%2210.1109%5C%2FSAMPTA.2017.8024386%22%2C%22ISBN%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22ZA8QMDGD%22%2C%226JKB3X7P%22%5D%2C%22dateModified%22%3A%222017-09-08T10%3A23%3A22Z%22%7D%7D%2C%7B%22key%22%3A%22AC6TPEZD%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Eftekhari%20et%20al.%22%2C%22parsedDate%22%3A%222017-06%22%2C%22numChildren%22%3A5%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BEftekhari%2C%20A.%2C%20Balzano%2C%20L.%2C%20%26amp%3B%20Wakin%2C%20M.%20B.%20%282017%29.%20What%20to%20Expect%20When%20You%20Are%20Expecting%20on%20the%20Grassmannian.%20%26lt%3Bi%26gt%3BIEEE%20Signal%20Processing%20Letters%26lt%3B%5C%2Fi%26gt%3B%2C%20%26lt%3Bi%26gt%3B24%26lt%3B%5C%2Fi%26gt%3B%286%29%2C%20872%26%23x2013%3B876.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FLSP.2017.2684784%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FLSP.2017.2684784%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22What%20to%20Expect%20When%20You%20Are%20Expecting%20on%20the%20Grassmannian%22%2C%22crea
tors%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22A.%22%2C%22lastName%22%3A%22Eftekhari%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22L.%22%2C%22lastName%22%3A%22Balzano%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22M.%20B.%22%2C%22lastName%22%3A%22Wakin%22%7D%5D%2C%22abstractNote%22%3A%22Consider%20an%20incoming%20sequence%20of%20vectors%2C%20all%20belonging%20to%20an%20unknown%20subspace%20%24textS%24%2C%20and%20each%20with%20many%20missing%20entries.%20In%20order%20to%20estimate%20%24textS%24%2C%20it%20is%20common%20to%20partition%20the%20data%20into%20blocks%20and%20iteratively%20update%20the%20estimate%20of%20%24textS%24%20with%20each%20new%20incoming%20measurement%20block.%20In%20this%20letter%2C%20we%20investigate%20a%20rather%20basic%20question%3A%20Is%20it%20possible%20to%20identify%20%24textS%24%20by%20averaging%20the%20range%20of%20the%20partially%20observed%20incoming%20measurement%20blocks%20on%20the%20Grassmannian%3F%20We%20show%20that%2C%20in%20general%2C%20the%20span%20of%20the%20incoming%20blocks%20is%20in%20fact%20a%20biased%20estimator%20of%20%24textS%24%20when%20data%20suffer%20from%20erasures%2C%20and%20we%20find%20an%20upper%20bound%20for%20this%20bias.%20We%20reach%20this%20conclusion%20by%20examining%20the%20defining%20optimization%20program%20for%20the%20Fr%5Cu00e9chet%20expectation%20on%20the%20Grassmannian%2C%20and%20with%20the%20aid%20of%20a%20sharp%20perturbation%20bound%20and%20standard%20large%20deviation%20results.%22%2C%22date%22%3A%22June%202017%22%2C%22section%22%3A%22%22%2C%22partNumber%22%3A%22%22%2C%22partTitle%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FLSP.2017.2684784%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22%22%2C%22PMID%22%3A%22%22%2C%22PMCID%22%3A%22%22%2C%22ISSN%22%3A%221070-9908%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22427SEM27%22%5D%2C%22dateModified%22%3A%222024-06-25T14%3A37%3A05Z%22%7D%7D%2C%7B%22key%22%3A%228JQPEQ5
X%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhang%20and%20Balzano%22%2C%22parsedDate%22%3A%222017-03%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZhang%2C%20D.%2C%20%26amp%3B%20Balzano%2C%20L.%20%282017%29.%20Matched%20subspace%20detection%20using%20compressively%20sampled%20data.%20%26lt%3Bi%26gt%3B2017%20IEEE%20International%20Conference%20on%20Acoustics%2C%20Speech%20and%20Signal%20Processing%20%28ICASSP%29%26lt%3B%5C%2Fi%26gt%3B%2C%204601%26%23x2013%3B4605.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FICASSP.2017.7953028%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FICASSP.2017.7953028%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Matched%20subspace%20detection%20using%20compressively%20sampled%20data%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22D.%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22L.%22%2C%22lastName%22%3A%22Balzano%22%7D%5D%2C%22abstractNote%22%3A%22We%20consider%20the%20problem%20of%20detecting%20whether%20a%20high%20dimensional%20signal%20lies%20in%20a%20given%20low%20dimensional%20subspace%20using%20only%20a%20few%20compressive%20measurements%20of%20it.%20By%20leveraging%20modern%20random%20matrix%20theory%2C%20we%20show%20that%2C%20even%20when%20we%20are%20short%20on%20information%2C%20a%20reliable%20detector%20can%20be%20constructed%20via%20a%20properly%20defined%20measure%20of%20energy%20of%20the%20signal%20outside%20the%20subspace.%20Our%20results%20extend%20those%20in%2
0%5B1%5D%20to%20a%20more%20general%20sampling%20framework.%20Moreover%2C%20the%20test%20statistic%20we%20define%20is%20much%20simpler%20than%20that%20required%20by%20%5B1%5D%2C%20and%20it%20results%20in%20more%20efficient%20computation%2C%20which%20is%20crucial%20for%20high-dimensional%20data%20processing.%22%2C%22proceedingsTitle%22%3A%222017%20IEEE%20International%20Conference%20on%20Acoustics%2C%20Speech%20and%20Signal%20Processing%20%28ICASSP%29%22%2C%22conferenceName%22%3A%222017%20IEEE%20International%20Conference%20on%20Acoustics%2C%20Speech%20and%20Signal%20Processing%20%28ICASSP%29%22%2C%22date%22%3A%22March%202017%22%2C%22DOI%22%3A%2210.1109%5C%2FICASSP.2017.7953028%22%2C%22ISBN%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22WUTP7CH6%22%2C%22ZA8QMDGD%22%5D%2C%22dateModified%22%3A%222017-08-31T15%3A56%3A42Z%22%7D%7D%2C%7B%22key%22%3A%22B3LAG3NC%22%2C%22library%22%3A%7B%22id%22%3A1399621%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Ganti%20et%20al.%22%2C%22parsedDate%22%3A%222017-02-13%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BGanti%2C%20R.%2C%20Rao%2C%20N.%2C%20Balzano%2C%20L.%2C%20Willett%2C%20R.%2C%20%26amp%3B%20Nowak%2C%20R.%20%282017%2C%20February%2013%29.%20On%20Learning%20High%20Dimensional%20Structured%20Single%20Index%20Models.%20%26lt%3Bi%26gt%3BThirty-First%20AAAI%20Conference%20on%20Artificial%20Intelligence%26lt%3B%5C%2Fi%26gt%3B.%20Thirty-First%20AAAI%20Conference%20on%20Artificial%20Intelligence.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.aaai.org%5C%2Focs%5C%2Findex.php%5C%2FAAAI%5C%2FAAAI17%5C%2Fpaper%5C%2Fview%5C%2F14480%26%23039%3B%26gt%3B
https%3A%5C%2F%5C%2Fwww.aaai.org%5C%2Focs%5C%2Findex.php%5C%2FAAAI%5C%2FAAAI17%5C%2Fpaper%5C%2Fview%5C%2F14480%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22On%20Learning%20High%20Dimensional%20Structured%20Single%20Index%20Models%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ravi%22%2C%22lastName%22%3A%22Ganti%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Nikhil%22%2C%22lastName%22%3A%22Rao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rebecca%22%2C%22lastName%22%3A%22Willett%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Robert%22%2C%22lastName%22%3A%22Nowak%22%7D%5D%2C%22abstractNote%22%3A%22Single%20Index%20Models%20%28SIMs%29%20are%20simple%20yet%20flexible%20semi-parametric%20models%20for%20machine%20learning%2C%20where%20the%20response%20variable%20is%20modeled%20as%20a%20monotonic%20function%20of%20a%20linear%20combination%20of%20features.%20Estimation%20in%20this%20context%20requires%20learning%20both%20the%20feature%20weights%20and%20the%20nonlinear%20function%20that%20relates%20features%20to%20observations.%20While%20methods%20have%20been%20described%20to%20learn%20SIMs%20in%20the%20low%20dimensional%20regime%2C%20a%20method%20that%20can%20efficiently%20learn%20SIMs%20in%20high%20dimensions%2C%20and%20under%20general%20structural%20assumptions%2C%20has%20not%20been%20forthcoming.%20In%20this%20paper%2C%20we%20propose%20computationally%20efficient%20algorithms%20for%20SIM%20inference%20in%20high%20dimensions%20with%20structural%20constraints.%20Our%20general%20approach%20specializes%20to%20sparsity%2C%20group%20sparsity%2C%20and%20low-rank%20assumptions%20among%20others.%20Experiments%20show%20that%20the%20proposed%20method%20enjoys%20superior%20
Zhang, D., Sun, Y., Eriksson, B., & Balzano, L. (2017). Deep Unsupervised Clustering Using Mixture of Autoencoders. arXiv:1712.07788 [cs, stat]. http://arxiv.org/abs/1712.07788
Ongie, G., Dewangan, S., Fessler, J. A., & Balzano, L. (2017). Online dynamic MRI reconstruction via robust subspace tracking. 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 1180–1184. https://doi.org/10.1109/GlobalSIP.2017.8309147
Lipor, J., Wong, B. P., Scavia, D., Kerkez, B., & Balzano, L. (2017). Distance-Penalized Active Learning Using Quantile Search. IEEE Transactions on Signal Processing, 65(20), 5453–5465. https://doi.org/10.1109/TSP.2017.2731323
Pimentel-Alarcón, D., Ongie, G., Balzano, L., Willett, R., & Nowak, R. (2017). Low algebraic dimension matrix completion. 2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 790–797. https://doi.org/10.1109/ALLERTON.2017.8262820
Ongie, G., Willett, R., Nowak, R. D., & Balzano, L. (2017). Algebraic Variety Models for High-Rank Matrix Completion. Proceedings of the 34th International Conference on Machine Learning, 2691–2700. https://proceedings.mlr.press/v70/ongie17a.html
Lipor, J., & Balzano, L. (2017). Leveraging Union of Subspace Structure to Improve Constrained Clustering. Proceedings of the 34th International Conference on Machine Learning, 2130–2139. https://proceedings.mlr.press/v70/lipor17a.html
Pimentel-Alarcón, D., Balzano, L., Marcia, R., Nowak, R., & Willett, R. (2017). Mixture regression as subspace clustering. 2017 International Conference on Sampling Theory and Applications (SampTA), 456–459. https://doi.org/10.1109/SAMPTA.2017.8024386
Eftekhari, A., Balzano, L., & Wakin, M. B. (2017). What to Expect When You Are Expecting on the Grassmannian. IEEE Signal Processing Letters, 24(6), 872–876. https://doi.org/10.1109/LSP.2017.2684784
Zhang, D., & Balzano, L. (2017). Matched subspace detection using compressively sampled data. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4601–4605. https://doi.org/10.1109/ICASSP.2017.7953028
Ganti, R., Rao, N., Balzano, L., Willett, R., & Nowak, R. (2017, February 13). On Learning High Dimensional Structured Single Index Models. Thirty-First AAAI Conference on Artificial Intelligence. https://www.aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14480
Ongie, G., Hong, D., Zhang, D., & Balzano, L. (2017). Enhanced Online Subspace Estimation via Adaptive Sensing. Asilomar Conference on Signals, Systems, and Computers. https://pdfs.semanticscholar.org/ba2f/61c45e92ae471552d55a8350f7211b02e6b0.pdf

2016

Kennedy, R., Balzano, L., Wright, S. J., & Taylor, C. J. (2016). Online algorithms for factorization-based structure from motion. Computer Vision and Image Understanding, 150, 139–152. https://doi.org/10.1016/j.cviu.2016.04.011
Hong, D., Balzano, L., & Fessler, J. A. (2016). Towards a theoretical analysis of PCA for heteroscedastic data. 2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 496–503. https://doi.org/10.1109/ALLERTON.2016.7852272
Xiao, P., & Balzano, L. (2016). Online sparse and orthogonal subspace estimation from partial information. 2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 284–291. https://doi.org/10.1109/ALLERTON.2016.7852242
Pimentel-Alarcón, D., Balzano, L., Marcia, R., Nowak, R., & Willett, R. (2016). Group-sparse subspace clustering with missing data. 2016 IEEE Statistical Signal Processing Workshop (SSP), 1–5. https://doi.org/10.1109/SSP.2016.7551734
Zhang, D., & Balzano, L. (2016). Global Convergence of a Grassmannian Gradient Descent Algorithm for Subspace Estimation. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, 1460–1468. https://proceedings.mlr.press/v51/zhang16b.html
Pimentel-Alarcón, D., Balzano, L., & Nowak, R. (2016). Necessary and sufficient conditions for sketched subspace clustering. Allerton Conference on Communication, Control, and Computing. https://danielpimentel.github.io/pdfs/sketchedSC.pdf

2015

aper%5C%2F5916-matrix-completion-under-monotonic-single-index-models%26%23039%3B%26gt%3Bhttp%3A%5C%2F%5C%2Fpapers.nips.cc%5C%2Fpaper%5C%2F5916-matrix-completion-under-monotonic-single-index-models%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Matrix%20Completion%20Under%20Monotonic%20Single%20Index%20Models%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ravi%20Sastry%22%2C%22lastName%22%3A%22Ganti%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Balzano%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rebecca%22%2C%22lastName%22%3A%22Willett%22%7D%5D%2C%22abstractNote%22%3A%22Eletronic%20Proceedings%20of%20Neural%20Information%20Processing%20Systems%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%20conference%20for%20Advances%20in%20Neural%20Information%20Processing%20Systems%22%2C%22conferenceName%22%3A%22Advances%20in%20Neural%20Information%20Processing%20Systems%22%2C%22date%22%3A%222015%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Fpapers.nips.cc%5C%2Fpaper%5C%2F5916-matrix-completion-under-monotonic-single-index-models%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22DZFDBB6V%22%2C%22WUTP7CH6%22%2C%2284TPD646%22%2C%22ZA8QMDGD%22%2C%22F7QF4T2Q%22%5D%2C%22dateModified%22%3A%222015-12-29T15%3A07%3A01Z%22%7D%7D%5D%7D
Lipor, J., & Balzano, L. (2015). Margin-based active subspace clustering. 2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 377–380. https://doi.org/10.1109/CAMSAP.2015.7383815
Ledva, G., Balzano, L., & Mathieu, J. (2015, September). Inferring the behavior of distributed energy resources with online learning. Proceedings of the Allerton Conference on Communication, Control, and Computing.
Lipor, J., Balzano, L., Kerkez, B., & Scavia, D. (2015). Quantile search: A distance-penalized active learning algorithm for spatial sampling. 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), 1241–1248. https://doi.org/10.1109/ALLERTON.2015.7447150
Ganti, R. S., Balzano, L., & Willett, R. (2015). Matrix Completion Under Monotonic Single Index Models. Advances in Neural Information Processing Systems, 1864–1872. http://papers.nips.cc/paper/5916-matrix-completion-under-monotonic-single-index-models

2014

Kennedy, R., Taylor, C. J., & Balzano, L. (2014). Online completion of ill-conditioned low-rank matrices. 2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 507–511. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=7032169
Balzano, L., & Wright, S. J. (2014). Local Convergence of an Algorithm for Subspace Identification from Partial Data. Foundations of Computational Mathematics, 1–36. http://link.springer.com/article/10.1007/s10208-014-9227-7
He, J., Zhang, D., Balzano, L., & Tao, T. (2014). Iterative Grassmannian optimization for robust image alignment. Image and Vision Computing, 32(10), 800–813. http://www.sciencedirect.com/science/article/pii/S0262885614000523
Pimentel, D., Nowak, R., & Balzano, L. (2014). On the sample complexity of subspace clustering with missing data. 2014 IEEE Workshop on Statistical Signal Processing (SSP), 280–283. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6884630
Lipor, J., & Balzano, L. (2014). Robust blind calibration via total least squares. 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4244–4248. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6854402&tag=1
Kennedy, R., Balzano, L., Wright, S. J., & Taylor, C. J. (2014). Online algorithms for factorization-based structure from motion. 2014 IEEE Winter Conference on Applications of Computer Vision (WACV), 37–44. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6836120
Brown, S. G., Russell-Graham, A., Xiao, P., & Balzano, L. (2014). Determination of Trends in Ozone in the Mid-Atlantic Using Non-Negative Matrix Factorization. AGU Fall Meeting Abstracts. http://adsabs.harvard.edu/abs/2014AGUFM.A23E3307B
He, J., Balzano, L., & Szlam, A. (2014). Online Robust Background Modeling via Alternating Grassmannian Optimization. In Background Modeling and Foreground Detection for Video Surveillance (pp. 16-1–16-26). Chapman and Hall/CRC. http://dx.doi.org/10.1201/b17223-24

2013

Balzano, L., & Wright, S. J. (2013). On GROUSE and incremental SVD. 2013 IEEE 5th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 1–4. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6713992
He, J., Zhang, D., Balzano, L., & Tao, T. (2013, April). Iterative Online Subspace Learning for Robust Image Alignment. Proceedings of the IEEE Conference on Face and Gesture Recognition. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6553759

2012

He, J., Balzano, L., & Szlam, A. (2012). Incremental gradient on the Grassmannian for online foreground and background separation in subsampled video. Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, 1568–1575. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6247848
Tan, V. Y., Balzano, L., & Draper, S. C. (2012). Rank minimization over finite fields: Fundamental limits and coding-theoretic interpretations. IEEE Transactions on Information Theory, 58(4), 2018–2039. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6094216
Eriksson, B., Balzano, L., & Nowak, R. (2012). High rank matrix completion. Proc. of Intl. Conf. on Artificial Intell. and Stat. http://jmlr.csail.mit.edu/proceedings/papers/v22/eriksson12/eriksson12.pdf
Balzano, L., Szlam, A., Recht, B., & Nowak, R. (2012). K-subspaces with missing data. Statistical Signal Processing Workshop (SSP), 2012 IEEE, 612–615. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6319774

2011

Balzano, L., Nowak, R., & Roughan, M. (2011). On the success of network inference using a Markov routing model. 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3108–3111. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5946353
Tan, V. Y., Balzano, L., & Draper, S. C. (2011). Rank minimization over finite fields. Information Theory Proceedings (ISIT), 2011 IEEE International Symposium on, 1195–1199. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6033722

2010

Balzano, L., Nowak, R., & Recht, B. (2010). Online identification and tracking of subspaces from highly incomplete information. Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on, 704–711. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5706976
Balzano, L., Recht, B., & Nowak, R. (2010). High-dimensional matched subspace detection when data are missing. Information Theory Proceedings (ISIT), 2010 IEEE International Symposium on, 1638–1642. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5513344

2009

Ni, K., Ramanathan, N., Chehade, M. N. H., Balzano, L., Nair, S., Zahedi, S., Kohler, E., Pottie, G., Hansen, M., & Srivastava, M. (2009). Sensor network data fault types. ACM Transactions on Sensor Networks (TOSN), 5(3), 25. http://dl.acm.org/citation.cfm?id=1525863

2008

Balzano, L., & Nowak, R. (2008). Blind Calibration of Networks of Sensors: Theory and Algorithms. In V. Saligrama (Ed.), Networked Sensing Information and Control (pp. 9–37). Springer US. http://link.springer.com/chapter/10.1007/978-0-387-68845-9_1
Ganeriwal, S., Balzano, L. K., & Srivastava, M. B. (2008). Reputation-based framework for high integrity sensor networks. ACM Transactions on Sensor Networks (TOSN), 4(3), 15. http://dl.acm.org/citation.cfm?id=1362546

2007

Balzano, L., & Nowak, R. (2007). Blind calibration of sensor networks. Proceedings of the 6th International Conference on Information Processing in Sensor Networks, 79–88. http://dl.acm.org/citation.cfm?id=1236372

2004

Gambiroza, V., Yuan, P., Balzano, L., Liu, Y., Sheafor, S., & Knightly, E. (2004). Design, analysis, and implementation of DVSR: a fair high-performance protocol for packet rings. IEEE/ACM Transactions on Networking, 12(1), 85–102. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1268081

Thesis

Balzano, L. (2012). Handling Missing Data in High-Dimensional Subspace Modeling (Doctoral dissertation). May 2012.