TY - JOUR
T1 - Magnum
T2 - Tackling high-dimensional structures with self-organization
AU - Mao, Poyuan
AU - Tham, Yikfoong
AU - Zhang, Heng
AU - Vargas, Danilo Vasconcellos
N1 - Publisher Copyright:
© 2023 Elsevier B.V.
PY - 2023/9/14
Y1 - 2023/9/14
N2 - A big challenge in dealing with real-world problems is scalability. In fact, this is partially the reason behind the success of deep learning over other learning paradigms. Here, we tackle the scalability of a novel learning paradigm proposed in 2021 based solely on self-organizing principles. This paradigm consists only of dynamical equations which self-organize with the input to create attractor-repeller points related to the patterns found in data. To achieve scalability for such a system, we propose the Magnum algorithm, which utilizes many self-organizing subsystems (Inertia-SyncMap), each covering a subset of the problem's variables. The main idea is that by merging Inertia-SyncMaps, Magnum builds a consensus of variable correlations over time, enabling it to accurately predict the structure of large groups of variables. Experiments show that Magnum surpasses or ties with other unsupervised algorithms on all of the high-dimensional chunking problems, each with distinct types of shapes and structural features. Moreover, Inertia-SyncMap alone outperforms or ties with other unsupervised algorithms in six out of seven basic chunking problems. Thus, this work sheds light on how self-organizing learning paradigms can be scaled up to deal with high-dimensional structures and compete with current learning paradigms.
AB - A big challenge in dealing with real-world problems is scalability. In fact, this is partially the reason behind the success of deep learning over other learning paradigms. Here, we tackle the scalability of a novel learning paradigm proposed in 2021 based solely on self-organizing principles. This paradigm consists only of dynamical equations which self-organize with the input to create attractor-repeller points related to the patterns found in data. To achieve scalability for such a system, we propose the Magnum algorithm, which utilizes many self-organizing subsystems (Inertia-SyncMap), each covering a subset of the problem's variables. The main idea is that by merging Inertia-SyncMaps, Magnum builds a consensus of variable correlations over time, enabling it to accurately predict the structure of large groups of variables. Experiments show that Magnum surpasses or ties with other unsupervised algorithms on all of the high-dimensional chunking problems, each with distinct types of shapes and structural features. Moreover, Inertia-SyncMap alone outperforms or ties with other unsupervised algorithms in six out of seven basic chunking problems. Thus, this work sheds light on how self-organizing learning paradigms can be scaled up to deal with high-dimensional structures and compete with current learning paradigms.
UR - http://www.scopus.com/inward/record.url?scp=85163963890&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85163963890&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2023.126508
DO - 10.1016/j.neucom.2023.126508
M3 - Article
AN - SCOPUS:85163963890
SN - 0925-2312
VL - 550
JO - Neurocomputing
JF - Neurocomputing
M1 - 126508
ER -