Abstract: Stability of stochastically modeled reaction networks

Continuous-time Markov chains are widely used to model biochemical systems in which the intrinsic noise of the system plays an important role in its dynamical behavior. Such a stochastic model is stable when the time evolution of the associated probability distribution converges to a limiting distribution. Although stability is one of the most fundamental mathematical concepts for understanding these models, its significance is often undervalued in more applied research fields. In this talk, we begin with background on stochastic processes for biochemical reaction systems modeled as jump-type Markov chains. We then go through a couple of novel computational and analytical methods for analyzing such Markov chains, and we examine how the stability of the Markov chain was used to develop those methods. With interesting examples, we further discuss the importance of studying the rate of convergence to the limiting distribution, i.e., the rate of stabilization, which is yet another important concept that is often overlooked in practical research.
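As a concrete illustration of the kind of stability discussed above (not a method from the talk itself): for the birth-death reaction network ∅ → S at rate κ and S → ∅ at rate γ per molecule, the stationary distribution is known to be Poisson(κ/γ). A minimal Gillespie-style simulation, sketched below with assumed parameter values κ = 10 and γ = 1, shows the time-averaged copy number of a long trajectory settling near κ/γ:

```python
# Minimal sketch: Gillespie simulation of the birth-death network
#   ∅ -> S  (propensity kappa),   S -> ∅  (propensity gamma * x).
# The stationary distribution is Poisson(kappa/gamma), so a long
# time average of the copy number should stabilize near kappa/gamma.
import math
import random

def gillespie_birth_death(kappa, gamma, x0, t_end, seed=0):
    """Simulate the CTMC jump by jump; return the time-averaged count."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    area = 0.0  # integral of x(t) dt over [0, t_end]
    while t < t_end:
        birth = kappa          # propensity of ∅ -> S
        death = gamma * x      # propensity of S -> ∅
        total = birth + death
        # exponentially distributed waiting time to the next jump
        dt = min(-math.log(rng.random()) / total, t_end - t)
        area += x * dt
        t += dt
        if t >= t_end:
            break
        # choose which reaction fires, proportionally to its propensity
        if rng.random() * total < birth:
            x += 1
        else:
            x -= 1
    return area / t_end

avg = gillespie_birth_death(kappa=10.0, gamma=1.0, x0=0, t_end=5000.0)
print(round(avg, 2))  # close to kappa / gamma = 10
```

Running the trajectory longer tightens the time average around κ/γ; how quickly it tightens is exactly the rate-of-stabilization question raised at the end of the abstract.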