Industry Spotlight Session: Corporate Risk Management


A workshop on Time Series Analysis and Forecasting Using Python was conducted for students with the objective of introducing fundamental concepts of time series and demonstrating their practical implementation in Python. The session was designed as a beginner-level workshop, accessible to participants with basic familiarity with Python programming and requiring no prior knowledge of time series analysis.

The workshop commenced with an introduction to the concept of a time series, defined as a sequence of data points recorded at regular intervals of time, such as daily temperature readings, stock prices, or sales figures. The distinction between time series models and regression models was explained, with particular emphasis on autocorrelation, a defining feature of time series data: unlike regression models, which assume independence among observations, time series data exhibit dependence on past values.

The session then covered the four key components of a time series: trend, seasonality, cyclic components, and irregular variations (noise). Through graphical illustrations and examples, participants learned how trend represents long-term movement, seasonality reflects predictable recurring patterns, cyclic components denote longer-term but irregular fluctuations, and noise represents random, unpredictable disturbances. This foundational understanding was essential before moving on to decomposition techniques.

Participants were then introduced to time series decomposition using both additive and multiplicative models. The distinction between the two approaches was explained in terms of whether seasonal fluctuations remain constant over time or grow in proportion to the trend. Using Python's statsmodels library, the process of decomposing a time series into trend, seasonal, and residual components was demonstrated. Emphasis was placed on interpreting the residuals and ensuring they resemble white noise, the mark of an effective decomposition.

The workshop then progressed to smoothing techniques, beginning with moving averages. The concept of window size and its effect on smoothing was demonstrated both manually and through a Python implementation; it was shown that increasing the window size produces a smoother curve, enabling better visualization of the underlying trend. Participants also explored exponential smoothing methods, including Simple Exponential Smoothing, Holt's Linear Trend Method, and the Holt-Winters method, each suited to a different type of data: stable data, data with a trend, and data with both trend and seasonality, respectively. The role of the smoothing parameter (alpha) in weighting recent versus past observations was discussed in detail.

A significant portion of the workshop focused on the concept of stationarity, an essential requirement for advanced forecasting models. The characteristics of stationary and non-stationary series were explained graphically, highlighting constant mean and variance as key properties. Statistical tests such as the Augmented Dickey-Fuller (ADF) test and the KPSS test were introduced to formally verify stationarity. When non-stationarity was detected, participants learned how to apply differencing in Python to transform the series into a stationary form.

Following this, the workshop introduced Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF) plots. The conceptual difference between the two was explained, particularly how PACF removes intermediate lag influence. Participants learned how to interpret significant spikes in these plots to determine the appropriate order parameters for ARIMA modeling. The session culminated with the step-by-step implementation of the ARIMA (AutoRegressive Integrated Moving Average) model in Python. Participants learned how to determine the values of p, d, and q using ACF, PACF, and differencing. The model fitting process was demonstrated, followed by residual diagnostics to assess model adequacy. The importance of residual normality and constant variance was emphasized. Forecasting was then performed, along with confidence interval visualization.

The workshop concluded with discussions on model selection criteria such as AIC, BIC, and HQIC for comparing alternative models. Participants were also briefly introduced to advanced extensions such as seasonal ARIMA and GARCH models for handling volatility in financial time series. Overall, the workshop successfully integrated theoretical concepts with practical Python implementation, equipping participants with foundational skills in time series analysis and forecasting.

