Sparsity and Non-Linearity

A great deal of work in sparse signal processing focuses on linear measurement models. Applications, however, often involve fundamental nonlinearities that must be modeled directly. Our work in this area studies algebraic variety models, monotonic nonlinear measurement functions (single index models), pairwise comparison and ranking data, and sparsity in deep networks; the sketches below illustrate the first two of these ideas.
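As a minimal sketch of the tensorization idea behind "Tensor Methods for Nonlinear Matrix Completion" (Ongie et al. 2021), assuming only NumPy: columns drawn from a union of subspaces can be full rank in their original coordinates, yet the second-order tensorized representation, formed by taking the Kronecker product of each column with itself, is low rank, so standard low-rank completion machinery applies in the lifted space. The dimensions and subspace counts here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, K, pts = 10, 2, 8, 30  # ambient dim, subspace dim, # subspaces, points per subspace

# Columns drawn from a union of K random r-dimensional subspaces of R^n.
X = np.hstack([rng.standard_normal((n, r)) @ rng.standard_normal((r, pts))
               for _ in range(K)])

# Second-order tensorization: lift each column x to kron(x, x) in R^(n^2).
T = np.column_stack([np.kron(x, x) for x in X.T])

print("rank of X:", np.linalg.matrix_rank(X))  # = n = 10: full rank, classical LRMC fails
print("rank of T:", np.linalg.matrix_rank(T))  # <= K * r(r+1)/2 = 24, far below n^2 = 100
```

The single index model work (Ganti et al. 2015, 2017) models each observation as a monotonic function of a linear combination of features, y = g(⟨w, x⟩), with both the link g and the structured (e.g., sparse) weights w unknown. The sketch below is hypothetical and is not the papers' algorithms; it shows how the two pieces can be estimated with off-the-shelf tools, relying on the classical fact that for Gaussian features and a monotone link, a (sparse) linear fit recovers the index direction up to scale, after which the link is a one-dimensional isotonic fit.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, d, k = 1000, 50, 5

w = np.zeros(d)
w[:k] = 1.0 / np.sqrt(k)           # sparse, unit-norm index vector
X = rng.standard_normal((n, d))
y = np.tanh(2.0 * X @ w)           # unknown monotone link g(t) = tanh(2t)

# For Gaussian features and monotone g, E[y*x] is proportional to w
# (Brillinger/Stein), so a sparse linear fit estimates the index direction.
w_hat = Lasso(alpha=0.02).fit(X, y).coef_
w_hat /= np.linalg.norm(w_hat)
top = np.argsort(-np.abs(w_hat))[:k]
print("top-k support recovered:", set(top) == set(range(k)))
print("cosine with true index:", round(float(w_hat @ w), 3))

# With the index direction fixed, the monotone link is a 1-D isotonic fit.
z = X @ w_hat
g_hat = IsotonicRegression(out_of_bounds="clip").fit(z, y)
print("isotonic link fit R^2:", round(g_hat.score(z, y), 3))
```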

Ongie, Greg, Daniel Pimentel-Alarcón, Laura Balzano, Rebecca Willett, and Robert D. Nowak. 2021. “Tensor Methods for Nonlinear Matrix Completion.” SIAM Journal on Mathematics of Data Science 3 (1): 253–79. https://doi.org/10.1137/20M1323448.
Zhang, Dejiao, Haozhu Wang, Mario Figueiredo, and Laura Balzano. 2018. “Learning to Share: Simultaneous Parameter Tying and Sparsification in Deep Learning.” In International Conference on Learning Representations (ICLR). https://openreview.net/forum?id=rypT3fb0b.
Ganti, Ravi, Nikhil Rao, Laura Balzano, Rebecca Willett, and Robert Nowak. 2017. “On Learning High Dimensional Structured Single Index Models.” In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence. https://www.aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14480.
Ganti, Ravi Sastry, Laura Balzano, and Rebecca Willett. 2015. “Matrix Completion Under Monotonic Single Index Models.” In Advances in Neural Information Processing Systems, 1864–72. http://papers.nips.cc/paper/5916-matrix-completion-under-monotonic-single-index-models.