I'm a research scientist at LG AI Research in South Korea.
At LG AI Research, I've worked on reasoning, out-of-distribution extrapolation, and neural functionals. I completed my bachelor's and master's studies at KAIST, where I was fortunate to be advised by Prof. Eunho Yang.
My long-term research vision is to develop highly adaptive, general-purpose AI systems that continuously improve their capabilities and expand their skill sets through "active engagement" with a dynamic, ever-changing multimodal world. Such systems should become more effective over time, responding to unpredictable environments and developing human-like problem-solving abilities. My research spans neuro-symbolic learning, language-agent reasoning and acting, reinforcement learning, and cost-efficient test-time adaptation in out-of-distribution scenarios. My past work has drawn inspiration from neuroscience and cognitive science, and these interests continue to inform my perspective on building adaptable AI systems.
My research interests include the following topics:
Post-Training of Language Agents for Decision Making in Out-of-Distribution Scenarios
Assimilate Your Neuron: Decision Tree-Based Optimization for Training LLM Agents in Dynamic Action Space (ongoing; co-first author)
Applications may include intelligence for physical environments that require control, such as robotics; modality-agnostic, concept-based multi-hop reasoning; and real-world scenarios with scarce observations, such as forecasting, simulation, and scientific modeling.
A latent identifying neural functional along parameter manifold for modality-agnostic generalized INR
keywords: generalization, extrapolation, neural functional, parameter space
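For context, an implicit neural representation (INR) is a coordinate MLP fit to a single signal, and its trained weights are exactly the kind of parameter-space object a neural functional takes as input. Below is a minimal illustrative sketch of fitting one such INR in PyTorch; it shows only the INR part, not this project's neural-functional or parameter-manifold components, and the architecture and signal are placeholders.

```python
import torch
import torch.nn as nn

class INR(nn.Module):
    """A coordinate MLP: maps input coordinates to signal values."""
    def __init__(self, in_dim=1, hidden=64, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords):
        return self.net(coords)

# Fit a single 1D signal; the trained weights of `model` are the
# parameter-space object a neural functional would operate on.
x = torch.linspace(0, 1, 256).unsqueeze(-1)
y = torch.sin(4 * torch.pi * x)
model = INR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
```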
A graph-truncated attention regularizer outperforms a vanilla Transformer by 7 percentage points in top-1 accuracy on a single-turn translation task
keyword: graph transformer
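As a rough illustration of the idea behind graph-truncated attention, the sketch below masks attention scores so each token attends only to its neighbors in a given graph. The function, tensor shapes, and chain-graph example are hypothetical and not this project's actual regularizer.

```python
import torch
import torch.nn.functional as F

def graph_truncated_attention(q, k, v, adj):
    """q, k, v: (batch, seq, dim); adj: (batch, seq, seq), 1 where attention is allowed."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5           # (batch, seq, seq)
    scores = scores.masked_fill(adj == 0, float("-inf"))  # truncate to graph neighbors
    return F.softmax(scores, dim=-1) @ v

# Toy usage: a chain graph where each token attends to itself and its immediate neighbors.
b, n, d = 2, 5, 16
q = k = v = torch.randn(b, n, d)
adj = torch.eye(n) + torch.diag(torch.ones(n - 1), 1) + torch.diag(torch.ones(n - 1), -1)
out = graph_truncated_attention(q, k, v, adj.expand(b, n, n))
print(out.shape)  # torch.Size([2, 5, 16])
```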
Community Assessment of the Predictability of Cancer Protein and Phosphoprotein Levels from Genomics and Transcriptomics
Mi Yang, Francesca Petralia, Zhi Li, Hongyang Li, Weiping Ma, Xiaoyu Song, Sunkyu Kim, Heewon Lee, Han Yu, Bora Lee, Seohui Bae, Eunji Heo, Jan Kaczmarczyk, Piotr Stępniak, Michał Warchoł, Thomas Yu, Anna P Calinawan, Paul C Boutros, Samuel H Payne, Boris Reva, Tunde Aderinwale, Ebrahim Afyounian, Piyush Agrawal, Mehreen Ali, Alicia Amadoz, Francisco Azuaje, John Bachman, Sherry Bhalla, José Carbonell-Caballero, Priyanka Chakraborty, Kumardeep Chaudhary, Yonghwa Choi, Yoonjung Choi, Cankut Çubuk, Sandeep Kumar Dhanda, Joaquín Dopazo, Laura L Elo, Ábel Fóthi, Olivier Gevaert, Kirsi Granberg, Russell Greiner, Marta R Hidalgo, Vivek Jayaswal, Hwisang Jeon, Minji Jeon, Sunil V Kalmady, Yasuhiro Kambara, Jaewoo Kang, Keunsoo Kang, Tony Kaoma, Harpreet Kaur, Hilal Kazan, Devishi Kesar, Juha Kesseli, Daehan Kim, Keonwoo Kim, Sang-Yoon Kim, Sajal Kumar, Yunpeng Liu, Roland Luethy, Swapnil Mahajan, Mehrad Mahmoudian, Arnaud Muller, Petr V Nazarov, Hien Nguyen, Matti Nykter, Shujiro Okuda, Sungsoo Park, Gajendra Pal Singh Raghava, Jagath C Rajapakse, Tommi Rantapero, Hobin Ryu, Francisco Salavert, Sohrab Saraei, Ruby Sharma, Ari Siitonen, Artem Sokolov, Kartik Subramanian, Veronika Suni, Tomi Suomi, Léon-Charles Tranchevent, Salman Sadullah Usmani, Tommi Välikangas, Roberto Vega, Hua Zhong, Emily Boja, Henry Rodriguez, Gustavo Stolovitzky, Yuanfang Guan, Pei Wang, David Fenyö, Julio Saez-Rodriguez
Cell Systems, 2020
project page / paper
Ranked top-3 on the leaderboard of the NCI-CPTAC DREAM Machine Learning Challenge 2018 (team: DEARGENpg)
keywords: sparse, high-dimensional data, gradient-boosted tree models
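For illustration, here is a minimal sketch of the general approach those keywords describe: gradient-boosted trees fit to sparse, high-dimensional features. The synthetic data, library choice, and hyperparameters are placeholders, not the team's actual challenge pipeline.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2000))                  # few samples, many features
X[rng.random(X.shape) < 0.7] = 0.0                # mostly-zero (sparse) measurements
y = X[:, :10].sum(axis=1) + rng.normal(scale=0.1, size=200)  # signal in a handful of features

model = HistGradientBoostingRegressor(max_iter=200, learning_rate=0.05, random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```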