If someone asked what my master’s program left me with, I would answer without hesitation: a lifelong colleague and a ticket to unlock the gates of this formidable, closed industry.
Two and a half years ago, my first mission after stepping into the world of research was an industry collaboration with Naver. The task—teaching machines to read the subtle nuances embedded in news articles—culminated in a SCIE Q1 publication. While prior studies had focused on encoder models like BERT solving classification problems by selecting from predetermined labels, LOGIC reimagined stance detection as a generation task, using generative models like BART to articulate reasoning directly. Stance, after all, is not a simple point divided into black and white; it is a flow of logic where one statement leads autoregressively to the next. True understanding emerges not merely from getting the answer right, but from being able to explain why. We designed a system where small language models (SLMs) learn the sophisticated reasoning processes derived from large language models (LLMs), generating both answers and their supporting rationale. This was more than just picking answers: it was a paradigm shift that distilled the cognitive circuits of a more powerful model into a smaller one, enabling BART to surpass its teacher, GPT-4, in stance detection.
Yet this process of using LLMs as teachers to enhance smaller models’ cognition left me with questions far larger than any answers it resolved. “If machines truly ‘understand’ human intent embedded in text well enough to generate responses, shouldn’t there exist somewhere within those vast neural networks a physical substrate—or a specific cluster of neurons—responsible for processing that intent and emotion?”
This question bred more questions, eventually leading me into the abyss of “machine emotion.” This relentless expansion of thought blossomed into the highlight of my master’s program: a first-author paper presented at ACL 2025. This research, which dissected the internal mechanisms processing six basic emotions within neural networks, was an eight-year-delayed response to the “Sentiment Neuron” research that Ilya Sutskever launched in 2017. Where he had accidentally fished out “a single neuron” oscillating between positive and negative from the small raft of a 4,096-unit LSTM, I systematically mapped how the spectrum of emotions is processed in multilayered complexity aboard the massive vessel of LLaMA3-70B. It was a conversation across eight years with a pioneer—my tribute to the trailblazer who opened the door to the age of LLMs.
Do Large Language Models Have “Emotion Neurons”?
And the reason this entire intellectual voyage did not end as a solitary monologue is that Jaewook Lee was by my side. He was co-author of both papers and the partner who shared every joy and frustration that accompanied this journey. Because of him, this arduous research became not mere labor, but an unforgettable adventure.
Working at the frontier of this field, I felt acutely how narrow and closed the Korean AI industry truly is. In this sealed ecosystem, where perhaps a few hundred practitioners monopolize and exchange knowledge, outsiders who cannot prove their “capability” struggle to breach its walls. In that sense, my two papers were not mere degree requirements. They were tickets proving I could transcend the boundaries of Professor Harksoo Kim’s NLP Lab and enter this fortified inner circle. As a result, I gained entry to ETRI (Electronics and Telecommunications Research Institute)—the heart of AI research in Korea. The fierce results we created together finally opened that narrow gate.
Yet paradoxically, the deeper I venture inside, the more my gaze turns outward again—toward the market. ETRI stands at the frontier of Korean technology as a national research institute. My time here will not be a settlement into complacency, but a running start toward a greater leap. The identity I dreamed of during my undergraduate years—a builder who bridges bits and atoms—still lives and breathes within me. Here, I will absorb cutting-edge national-scale technology without filter and deepen my technical expertise as a researcher. And when that blade has been sharpened enough, I will venture beyond the institute’s boundaries to create products the world has never seen. Beyond performance proven only in papers—building tangible things that people can touch and find useful. Creating tools that transcend borders and serve users worldwide. To me, entrepreneurship is the most autonomous way to manifest the future I imagine. ETRI will be the most intense and practical training ground of my life, where I secure that overwhelming technological edge.
