**Khder Alakkari ^{1}***

^{1} Department of Statistics and Programming, Faculty of Economics, Tishreen University, Latakia, P.O. Box 2230, Syria

**E-mail:**

khderalakkari1990@gmail.com

**Received:** 16/09/2024 **Acceptance:** 06/10/2024 **Available Online:** 07/10/2024 **Published:** 01/01/2025

Manuscript link

http://dx.doi.org/10.30493/DAS.2024.478968

**Abstract**

This research examined the effectiveness of Autoregressive Integrated Moving Average (ARIMA), Neural Network Autoregressive (NNAR), and eXtreme Gradient Boosting (XGBoost) models in nowcasting and forecasting the agricultural GDP of Syria, utilizing delayed time series data spanning from 1963 to 2022. The aim was to determine the most appropriate model for accurately representing the intrinsic complexities and delays present in the data. The approach included an examination of descriptive statistics, autocorrelation functions, and the execution of stationarity tests. The evaluation of model performance was conducted through the use of RMSE, MSE, and MAPE metrics. The findings revealed that the NNAR (3,2) model surpassed both ARIMA and XGBoost, showing the lowest error metrics and illustrating its capacity to effectively capture non-linear relationships within the agricultural GDP series. The exceptional performance observed can be ascribed to the NNAR model’s adaptable framework, which integrates autoregressive elements with neural networks. Projections extending to 2030, produced through the NNAR model, indicated a possible decrease in agricultural GDP, underscoring the difficulties faced by the Syrian agricultural sector. The research suggests the necessity of ongoing monitoring, regular data updates, and additional analysis to enhance these forecasts and guide strategies for the sector’s recovery and growth. It is essential to address the issue of delayed data publication by the Syrian Central Bureau of Statistics to improve the timeliness and accuracy of future economic analyses and forecasts.

**Keywords:** GDP, Agricultural, Nowcasting, Forecasting, NNAR, Machine Learning

**Introduction**

Agricultural Gross Domestic Product (GDP), a crucial economic metric, denotes the aggregate value of products and services generated by a nation’s agriculture industry. This data is fundamentally a time series, displaying patterns, seasonality, and volatility affected by climate change, technological progress, and governmental actions. Delayed time series introduce additional complexity by creating latencies between event occurrence and observation, thereby affecting the precision of both nowcasting and forecasting. Analyzing and forecasting agricultural GDP amid such delays necessitates advanced methodologies adept at managing non-stationarity and elucidating the intricate connections within the data.

Time series analysis is essential for comprehending and forecasting the behavior of dynamic systems in diverse fields, such as economics, finance, and environmental research. Time series forecasting has been widely utilized through traditional statistical methods, such as Autoregressive Integrated Moving Average (ARIMA) models. ARIMA models proficiently encapsulate the linear relationships and autocorrelation inherent in the data [1][2]. However, these models may struggle to adapt to the intricate linkages and nonlinear patterns commonly found in real-world data. Hybrid models, exemplified by the Neural Network Autoregressive (NNAR) model, have arisen as a more versatile approach by integrating the nonlinear capabilities of neural networks with the benefits of autoregressive models [3][4]. NNAR models are adept at illustrating the complex dynamics of time series data, particularly in contexts marked by substantial nonlinearity [5]. The capacity to handle high-dimensional data and attain remarkable predictive accuracy has contributed to the popularity of machine learning methods such as eXtreme Gradient Boosting (XGBoost) [6]. XGBoost, founded on the principles of gradient boosting decision trees, incrementally constructs an ensemble of weak learners to minimize a loss function, with various regularization techniques employed to avert overfitting [7].

The agriculture sector in Syria has been a critical component of the economy, significantly contributing to employment and GDP. This sector was substantially affected by the persistent crisis, which led to a decline in the overall productivity. However, Syria’s agricultural production function exhibits a long-term increase in returns to scale, with fixed capital being the most productive factor [8].

This study aims to evaluate the efficacy of ARIMA, NNAR, and XGBoost models in predicting and forecasting Syrian agricultural GDP using lagged time series data. The optimal model for precisely depicting the complexity and delays in the data is identified through a thorough evaluation methodology incorporating measures such as Root Mean Squared Error (RMSE), Mean Squared Error (MSE), and Mean Absolute Percentage Error (MAPE). This research enhances the existing literature by examining the utilization of these models concerning agricultural GDP, offering significant insights for policymakers, economists, and researchers interested in comprehending and forecasting the dynamics of this vital economic sector. The findings reported in this study will improve the accuracy of agricultural GDP nowcasting and forecasting tools, thereby facilitating better decision-making in resource allocation, policy formulation, and risk management. This study builds on prior research by thoroughly investigating the influence of delayed observations on the efficacy of various forecasting models. For that purpose, the delayed time series data will be statistically analyzed, the autocorrelation structure will be investigated, and lead-lag correlations that may affect the accuracy of the forecast will be identified.

**Materials and Methods**

**Dataset**

This section provides a comprehensive overview of the statistical tools and methodologies employed to achieve optimal predictions of the agricultural GDP series in Syria from 1963 to 2022. The significance of this dataset lies in supporting both real-time (nowcast) and prospective (forecast) estimates of the variable. The data utilized in this study is sourced from the Syrian Central Bureau of Statistics – National Accounts Division (Supplementary Table 1) [9].

A time series that displays trend and volatility is deemed non-stationary, affecting its values at various temporal points and rendering it unsuitable for long-term predictions. The augmented Dickey-Fuller test for time series is expressed by the following equation [1]:

$$\Delta y_t = c + \alpha t + \delta y_{t-1} + \sum_{i=1}^{p} \beta_i \Delta y_{t-i} + \varepsilon_t$$

where *c* is a constant, *α* is the coefficient on a time trend, and *p* is the lag order of the autoregressive process. The augmented Dickey–Fuller (ADF) test is carried out under the null hypothesis *δ* = 0 (not stationary) against the alternative of *δ* < 0 (stationary). If the null hypothesis is not rejected, the first difference is performed to make the series stationary:

$$\Delta y_t = y_t - y_{t-1}$$
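The study applies this differencing step in R before model fitting; purely as an illustration of the transformation itself, a minimal pure-Python sketch (function name and toy series are hypothetical):

```python
def difference(series, d=1):
    """Apply first differencing d times: each pass replaces y_t with y_t - y_{t-1}."""
    for _ in range(d):
        series = [series[i] - series[i - 1] for i in range(1, len(series))]
    return series

# A trending toy series becomes (closer to) stationary after one difference
trend = [50, 54, 59, 65, 72, 80, 89]
print(difference(trend))        # successive growth increments
print(difference(trend, d=2))   # change in the increments
```

Each pass shortens the series by one observation, which is why the degree of differencing *d* is kept as small as the ADF test allows.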

**ARIMA model**

Time series forecasting frequently employs ARIMA models, which characterize the autocorrelation present in the data [2]. These models are designated as Auto Regressive Integrated Moving Average (*p*, *d*, *q*). The augmented Dickey–Fuller (ADF) test ascertains (*d*), while the partial autocorrelation function and the autocorrelation function guide the choice of (*p*) and (*q*), respectively.

The ARIMA (1,1,0) model with drift consists of an autoregressive term of order 1 (AR(1)), integration of order 1 (I(1)), and no moving average term (MA(0)). The drift represents a constant term in the differenced series. With *y*_{t} representing the time series value at time *t*, the ARIMA (1,1,0) model with drift can be written as:

$$\Delta y_t = c + \phi_1 \Delta y_{t-1} + \varepsilon_t$$

where *c* is the drift (constant term), *ϕ*_{1} is the autoregressive coefficient, *ε*_{t} is the error term (white noise) with mean zero and constant variance *σ*^{2}, and Δ*y*_{t-1} = *y*_{t-1} − *y*_{t-2} is formed from the lagged values of the series. The model parameters were selected using the auto.arima function, which automates the process of identifying the best-fitting ARIMA model by minimizing information criteria such as the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC). The function systematically explores different combinations of autoregressive (AR), integrated (I), and moving average (MA) terms, choosing the model that best fits the training data. The automatic model selection process aims to minimize the following criterion [10]:

$$\mathrm{AIC} = -2\ln(\hat{L}) + 2k$$

where *L̂* is the likelihood of the model and *k* is the number of parameters in the model. The auto.arima function compares models by varying the parameters *p*, *d*, and *q*, which represent the autoregressive order, the degree of differencing, and the moving average order, respectively. The function identifies the model that minimizes AIC, balancing complexity and goodness of fit.
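The study relies on R's auto.arima for this search; purely as an illustration of the least-squares fit and the AIC criterion it automates, a pure-Python sketch for a single fixed ARIMA(1,1,0)-with-drift specification (function name, Gaussian-approximation AIC, and toy series are all hypothetical, not the study's implementation):

```python
import math

def fit_arima_110(series):
    """Least-squares fit of delta_y_t = c + phi * delta_y_{t-1} + e_t,
    plus a Gaussian-approximation AIC for model comparison."""
    d = [series[i] - series[i - 1] for i in range(1, len(series))]  # I(1) step
    x, y = d[:-1], d[1:]          # lagged and current differences
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    phi = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    c = my - phi * mx             # drift estimate
    rss = sum((yi - (c + phi * xi)) ** 2 for xi, yi in zip(x, y))
    k = 3                         # parameters: c, phi, sigma^2
    aic = n * math.log(rss / n) + 2 * k  # log-likelihood up to an additive constant
    return phi, c, aic
```

auto.arima performs this kind of fit-and-score for many (*p*, *d*, *q*) candidates and keeps the specification with the smallest AIC.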

**NNAR model**

The NNAR (Neural Network Autoregressive) model is a sophisticated hybrid statistical and machine learning tool that integrates the capabilities of autoregressive models with the adaptability and nonlinearity of neural networks. This method is particularly beneficial for time series forecasting, as it elucidates intricate patterns and dynamics.

The NNAR model enhances the conventional autoregressive model by integrating neural network components, enabling it to identify nonlinear correlations within the data. The fundamental concept is utilizing historical data to forecast future values, wherein prior observations are inputted into a neural network that generates the predictions [5]. It is delineated by the subsequent components:

**Autoregressive element: **In the conventional autoregressive model, the future value of a time series is articulated as a linear combination of preceding values. For the AR(*p*) model:

$$y_t = c + \sum_{i=1}^{p} \phi_i y_{t-i} + \epsilon_t$$

where *y*_{t} represents the value at time *t*, *ϕ*_{i} denotes the coefficients, and *ϵ*_{t} signifies the error term.

**Neural Network Component:** The NNAR model substitutes the linear combination with a neural network that uses historical values as input to generate the future value. This enables the model to identify more intricate, nonlinear associations. The NNAR (*p*,*k*,*m*) model can be formally articulated in terms of: *p*: number of delayed observations (autoregressive variables), *k*: number of hidden layers, and *m*: number of neurons in each hidden layer (Fig. 1).

**For the NNAR (3,2,1) model: ***p* = 3: the model uses the last 3 observations to predict the next value. *k* = 2: there are two hidden layers. *m* = 1: each hidden layer contains one neuron (Fig. 2). According to these components, the general formula for the NNAR (3,2,1) model is:

$$y_t = f(y_{t-1}, y_{t-2}, y_{t-3}) + \epsilon_t$$

where *f* is the neural network function. The neural network structure can be detailed as follows:

Input layer: comprises the preceding three values (*y*_{t-1}, *y*_{t-2}, *y*_{t-3}). Hidden layer: includes a neuron that receives input from the preceding three observations, processes it via the Rectified Linear Unit (ReLU) activation function, and outputs *x* if *x* is positive; otherwise, it outputs 0. ReLU activation guarantees non-linearity, sparsity, and efficient gradient propagation. The function is defined as follows:

$$\mathrm{ReLU}(x) = \max(0, x)$$

Mathematically, the operations within the network can be described in terms of hidden layer calculations:

$$h_j = g\left(\sum_{i} w_{ji}\, y_{t-i} + b_j\right)$$

where *g* denotes the activation function, *w*_{ji} represents the weights, and *b*_{j} signifies the bias. The mathematical representation based on the computations of the output layer is:

$$\hat{y}_t = \sum_{j} v_j h_j + c$$

where *v*_{j} represents the weights linking the hidden layer to the output layer, and *c* is the bias term of the output cell. The NNAR (3,2,1) model adeptly integrates the historical autoregressive element with the robust pattern recognition skills of neural networks. This combination enables the capture of both linear and nonlinear interactions.
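To make the forward pass concrete, a pure-Python sketch of a network with a single ReLU hidden neuron (one layer only, for brevity; the model above stacks two). All weights and inputs are illustrative placeholders, not values estimated in the study, which fit the network in R:

```python
def relu(x):
    # ReLU activation: passes positive values, zeroes out negatives
    return x if x > 0 else 0.0

def nnar_forward(lags, w, b, v, c):
    """One-hidden-neuron forward pass:
    y_hat = v * ReLU(sum(w_i * y_{t-i}) + b) + c."""
    h = relu(sum(wi * xi for wi, xi in zip(w, lags)) + b)
    return v * h + c

# Hypothetical weights applied to the three lagged inputs y_{t-1..t-3}
y_hat = nnar_forward(lags=[1.0, 2.0, 3.0],
                     w=[0.5, 0.3, 0.2], b=0.1, v=2.0, c=0.5)
print(round(y_hat, 3))  # forecast from the toy network
```

Training consists of choosing *w*, *b*, *v*, and *c* to minimize the squared error between these outputs and the observed series.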

**XGBoost model**

XGBoost (eXtreme Gradient Boosting) is a scalable machine learning framework for tree boosting, predicated on the ideas of gradient boosting decision trees (GBDT). It has emerged as one of the most efficient algorithms for regression, classification, and ranking problems, aiming to improve both computational efficiency and predictive accuracy. The objective of XGBoost, akin to other gradient boosting techniques, is to minimize a loss function by incrementally incorporating new models that rectify the errors of preceding models [7]. This can be expressed mathematically as:

$$y_i = \sum_{k=1}^{K} f_k(x_i) + \epsilon_i$$

where the sum of the trees yields the predicted value for observation *i*, *f*_{k} signifies a weak learner (specifically a decision tree in the context of XGBoost), *x*_{i} represents the input features for observation *i*, *K* indicates the number of iterations (trees), and *ϵ*_{i} is the error term. Each subsequent tree, *f*_{k}, is trained to minimize the residuals of the preceding model. The objective function in XGBoost integrates the loss function *L* (which assesses the model’s fit to the data) and regularization terms Ω(*f*_{k}) (which regulate model complexity to prevent overfitting) [10]:

$$\mathrm{Obj} = \sum_{i=1}^{n} L(y_i, \hat{y}_i) + \sum_{k=1}^{K} \Omega(f_k)$$

where *L*(*y*_{i}, *ŷ*_{i}) denotes the loss function, specifically the squared error for regression in this context, and

$$\Omega(f_k) = \gamma T + \frac{1}{2}\lambda \sum_{j=1}^{T} w_j^2$$

serves as the regularization term for tree *k*, where *T* denotes the number of leaves, *w*_{j}^{2} signifies the squared leaf weights, and *γ* and *λ* are the regularization parameters. The squared error loss is frequently employed for regression problems:

$$L(y_i, \hat{y}_i) = (y_i - \hat{y}_i)^2$$

where *y*_{i} represents the actual value and *ŷ*_{i} denotes the predicted value.

XGBoost employs an additive boosting methodology. In each iteration, a new decision tree is fitted to the residuals of the preceding tree:

$$r_i^{(k)} = y_i - \hat{y}_i^{(k-1)}$$

where *r*_{i}^{(k)} is the residual for observation *i* at iteration *k*, and *ŷ*_{i}^{(k-1)} represents the forecast from the preceding iteration. The new tree *f*_{k} is constructed to minimize these residuals, and the model’s forecast is revised as follows:

$$\hat{y}_i^{(k)} = \hat{y}_i^{(k-1)} + \eta f_k(x_i)$$

where *η* is the learning rate, controlling the contribution of each tree. XGBoost uses both L1 and L2 regularization to control overfitting: L1 regularization (Lasso) adds a penalty proportional to the absolute value of the leaf weights |*w*_{j}|, whereas L2 regularization (Ridge) adds a penalty proportional to the square of the leaf weights *w*_{j}^{2}. These regularizations are incorporated into the objective function to prevent overfitting by limiting the complexity of the trees [11].

XGBoost builds trees using a greedy algorithm that splits the data at each node to maximize the reduction in loss. It uses a process called “pruning” to stop growing trees when no significant improvement in the objective function is observed.
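As an illustration of the additive loop described above (fit a tree to the residuals, then update the forecast scaled by the learning rate *η*), a minimal pure-Python sketch using depth-1 decision stumps. This is a hypothetical simplification that omits XGBoost's regularization, pruning, and second-order gradients; the study itself used the R xgboost package:

```python
def fit_stump(x, r):
    """Greedy depth-1 tree: pick the split minimizing squared error of residuals r."""
    best = None
    for s in sorted(set(x))[:-1]:
        left = [ri for xi, ri in zip(x, r) if xi <= s]
        right = [ri for xi, ri in zip(x, r) if xi > s]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = sum((ri - ml) ** 2 for ri in left) + sum((ri - mr) ** 2 for ri in right)
        if best is None or err < best[0]:
            best = (err, s, ml, mr)
    return best[1], best[2], best[3]

def boost(x, y, rounds=50, eta=0.3):
    """Additive boosting: each round fits a stump to the current residuals
    and updates predictions by eta times the stump's leaf values."""
    pred = [sum(y) / len(y)] * len(y)                 # start from the mean
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]  # r_i = y_i - y_hat_i
        s, ml, mr = fit_stump(x, resid)
        pred = [pi + eta * (ml if xi <= s else mr) for xi, pi in zip(x, pred)]
    return pred
```

Each round shrinks the training residuals; the learning rate trades convergence speed against overfitting, which is why XGBoost pairs it with explicit regularization.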

**Model evaluation metrics**

For evaluating the model performance, the following metrics are used:

Root Mean Squared Error (RMSE):

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n}(y_t - \hat{y}_t)^2}$$

Mean Squared Error (MSE):

$$\mathrm{MSE} = \frac{1}{n}\sum_{t=1}^{n}(y_t - \hat{y}_t)^2$$

Mean Absolute Percentage Error (MAPE):

$$\mathrm{MAPE} = \frac{100}{n}\sum_{t=1}^{n}\left|\frac{y_t - \hat{y}_t}{y_t}\right|$$

where *n* is the number of observations, *y*_{t} is the actual value, and *ŷ*_{t} is the predicted value.

The implementation of this research leveraged the statistical programming language R, specifically employing the RStudio environment. Several essential packages were utilized to facilitate the analysis and model development, including: forecast, xgboost, tseries, ggplot2, dplyr, Metrics, and caret.
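The three metrics are straightforward to compute; a pure-Python sketch for illustration (the study used R's Metrics package; the toy values below are hypothetical):

```python
import math

def mse(actual, pred):
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    return math.sqrt(mse(actual, pred))

def mape(actual, pred):
    # Mean absolute percentage error; assumes no zero actual values
    return 100 / len(actual) * sum(abs((a - p) / a) for a, p in zip(actual, pred))

actual, pred = [100, 200, 300], [110, 190, 330]
print(rmse(actual, pred), mse(actual, pred), mape(actual, pred))
```

Note that MAPE is scale-free while RMSE and MSE inherit the units of the series, which is why all three are reported together when comparing models.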

**Results and Discussion **

This section involves the analysis of the time series of agricultural GDP in Syria and the estimation of the optimal model for data prediction. The initial step involves examining descriptive statistics and the trajectory of the variable’s evolution through the analysis of autocorrelation functions, followed by a comparison of the performance metrics of the employed models, identifying the attributes of the optimal model, and utilizing it for predictions up to the year 2030.

The descriptive statistics (Table 1) offer a preliminary analysis of the agricultural GDP data in Syria from 1963 to 2022, expressed in millions of Syrian pounds at constant prices. The mean agricultural GDP during this period was approximately 143,287.1 million Syrian pounds, whereas the median was 122,226.7 million Syrian pounds, suggesting a marginally right-skewed distribution. The significant disparity between the minimum (50,080.52) and maximum (293,756.0) values underscores considerable variations in agricultural production over time (Fig. 3). The standard deviation of 71,398.47 highlights the considerable variability within the dataset. The positive skewness (0.55) indicates that the distribution features a longer tail on the right, signifying the occurrence of periods with very high agricultural GDP levels. The kurtosis score (2.08) lies somewhat below the normal distribution’s kurtosis of 3, indicating a slightly flatter peak than normal. The Jarque-Bera test for normality produced a p-value of 0.08, somewhat exceeding the standard significance threshold of 0.05. This indicates that although there is some evidence of deviation from normality, it is insufficient to unequivocally reject the null hypothesis of a normal distribution. The descriptive statistics indicate a time series marked by considerable variability, a propensity for positive skewness, and a moderately flat peak, establishing a basis for further study and modeling of Syria’s agricultural GDP series.

An evident increasing trend in agricultural GDP can be seen, particularly from the early 1970s to the mid-2010s. This period was characterized by consistent growth, punctuated by occasional increases and declines. However, the series exhibits significant volatility, particularly in the late 2010s and early 2020s (Fig. 3). The unpredictability is illustrated by the substantial decline in agricultural GDP that occurred subsequent to 2015, which coincided with the Syrian conflict. The graph illustrates the instability and disruption that resulted from the conflict, as the agriculture sector experienced a substantial decline during this period. Thus, the series appears to be non-stationary, suggesting that its statistical characteristics, such as mean and variance, vary over time, particularly as a result of the substantial decline observed in recent years. The non-stationarity underscores the importance of employing appropriate statistical methods to effectively evaluate and forecast the series.

Key insights into the temporal dependencies of the agricultural GDP series were elucidated by the Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF) (Fig. 4). The ACF demonstrates a gradual decline, which suggests that past GDP values have a substantial impact on current values, in line with the fundamental principles of time series analysis [12]; a strong positive autocorrelation is therefore present at lower lags. In contrast, the PACF exhibits a sharp cutoff after the first lag, with subsequent lags largely remaining within the significance bounds. This indicates that the immediately preceding value is the primary channel through which past values exert a direct influence [12]. These patterns indicate that the agricultural GDP series is not random, exhibits temporal dependencies, and could potentially be modeled using an AR(1) process. The ACF’s gradual decay also points to possible non-stationarity, suggesting that differencing may be necessary to attain stationarity prior to the application of forecasting models.
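The gradual ACF decay described for a trending series can be reproduced with a few lines of pure Python; the sketch below is illustrative only (the study computed ACF/PACF in R):

```python
def sample_acf(series, max_lag):
    """Sample autocorrelation function for lags 1..max_lag."""
    n = len(series)
    mean = sum(series) / n
    c0 = sum((v - mean) ** 2 for v in series) / n   # lag-0 autocovariance
    out = []
    for k in range(1, max_lag + 1):
        ck = sum((series[t] - mean) * (series[t - k] - mean)
                 for t in range(k, n)) / n
        out.append(ck / c0)
    return out

# A simple trending series shows slowly decaying positive autocorrelation
print(sample_acf(list(range(1, 11)), 3))
```

The slow, roughly linear decline of these coefficients is the signature of non-stationarity that motivates differencing before ARIMA fitting.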

The NNAR (3,2) model demonstrated superior performance across all three metrics, recording the lowest values for MAPE (5.70%), MSE (0.617), and RMSE (11857.71) (Fig. 5). This indicates that the NNAR model is particularly adept at identifying the fundamental patterns and dynamics present in the agricultural GDP time series. This effectiveness can be attributed to its capacity to handle non-linear relationships and its implementation of a neural network framework. The ARIMA (1,1,0) model exhibits a satisfactory level of accuracy, although it presents higher error values in comparison to NNAR. The straightforward nature of this model may restrict its capacity to identify intricate relationships present in the data. The XGBoost model, however, demonstrated significantly elevated error values, suggesting a less favorable performance in this particular context. This may be due to the model’s possible sensitivity to noise or overfitting, particularly in light of the dataset’s relatively small sample size. The NNAR model demonstrates superior performance, which can be attributed to its flexible framework that effectively combines the advantages of autoregressive models and neural networks. The autoregressive component identifies the natural temporal dependencies present in the time series, whereas the neural network component facilitates the learning of intricate non-linear relationships within the data. This integrated method demonstrates notable advantages for time series of moderate length, specifically those containing between 30 and 100 observations, while effectively managing the autocorrelation patterns commonly found in these series.

Nine simulated trajectories (Series 1-9) of projected Syrian agricultural GDP, as forecasted by NNAR, were visualized (Fig. 6 A). The trajectories illustrate the intrinsic uncertainty associated with NNAR forecasting, revealing a range of possible future scenarios. The variation noted beyond 2022 suggests an increasing level of uncertainty as the forecast period progresses further into the future. The NNAR (3,2) forecasts (Fig. 6. B) offer a clearer representation of the model’s predictions. The black line illustrates the historical data, whereas the blue line represents the point forecast produced by the NNAR model. The shaded blue regions denote prediction intervals, highlighting the uncertainty linked to the forecast. The darker blue region represents a greater degree of confidence (80% to 95%), whereas the lighter blue region denotes a lesser degree of confidence. The prediction intervals expand as the forecast horizon lengthens, indicating an increase in uncertainty regarding future GDP values. Both figures provide important insights into the projected path of agricultural GDP and the related uncertainties, facilitating informed decision-making and risk evaluation.

Table 2 presents the projected values of Syrian agricultural GDP as anticipated by the NNAR model (up to 2030), together with the associated uncertainty estimates. The “Point Forecast” column presents the most likely forecast of agricultural GDP for each year, determined using the NNAR model. The “Lo 80” and “Hi 80” columns indicate the 80% prediction interval, defining the range in which the actual GDP value is expected to fall with 80% confidence. The “Lo 95” and “Hi 95” columns represent the 95% prediction interval, providing a broader range with heightened confidence. Forecasts indicate that the agricultural GDP is projected to attain 60,397.07 million Syrian pounds by the year 2030. The anticipated reduction in agricultural GDP underscores possible difficulties confronting the Syrian agricultural sector.

The delay in data publication by the Syrian Central Bureau of Statistics presents a significant challenge for conducting timely economic analysis and forecasting. Although the title of the table refers to “nowcasting,” it is important to recognize that the forecasts reach beyond the most recent data point, which is likely from 2022. The forecasts appear to be grounded in the historical data that is accessible, along with the presumption that the existing trends and patterns will persist moving forward. Interpreting these forecasts requires careful consideration, acknowledging the inherent limitations linked to data delays and the possibility of unexpected events influencing future agricultural GDP.

The estimated average annual growth rate, based on the point forecasts for the period 2022-2030, is -4.33%. The anticipated reduction in agricultural GDP underscores possible difficulties confronting the agricultural sector in Syria. Ongoing monitoring, the incorporation of updated data, and additional analysis are essential for enhancing these forecasts and guiding effective strategies aimed at supporting the recovery and sustainable growth of the agricultural sector.

The NNAR model’s superior performance in forecasting Syrian agricultural GDP is consistent with earlier research that has shown the effectiveness of hybrid models in addressing the intricate dynamics found in time series data. Previous studies have indicated that NNAR models surpass traditional ARIMA models in predicting water treatment plant influent characteristics [3], underscoring their capacity to handle non-linear relationships frequently encountered in real-world data. Similarly, the advantages of NNAR compared to traditional modeling methods were highlighted in forecasting COVID-19 data [4], demonstrating its effectiveness for time series analysis. Other studies further supported this idea by illustrating the efficacy of NNAR models in forecasting food grain production in India, surpassing the performance of conventional ARIMA, SutteARIMA, and Holt-Winters approaches [5]. This study’s comparison goes beyond traditional time series models to encompass machine learning approaches such as XGBoost, an algorithm recognized for its predictive accuracy across multiple domains. Although XGBoost has demonstrated potential in time series applications, including the forecasting of hemorrhagic fever with renal syndrome [6] and rainfall patterns [7], the current study emphasizes the possible limitations of relying solely on data-driven methods when addressing complex economic data, which is marked by delayed observations and intrinsic structural factors. This research offers a distinctive contribution by focusing on the management of delayed data in the context of agricultural GDP, in contrast to studies such as [13][14], which have investigated NNAR applications for up-to-date GDP modeling and Bitcoin forecasting. This approach is particularly relevant in situations like Syria, where there are considerable limitations in data availability, which is one of the main challenges identified by [15] in predicting regional unemployment.
Although neural network models are often regarded as superior in capturing non-linear relationships within time series, they may face obstacles in making precise future predictions due to inconsistencies in the data, as illustrated by the example of rainfall data [16]. Consequently, it is essential to select models that align with the characteristics of the datasets at hand in order to obtain more accurate predictions.

**Conclusions**

This research addressed the challenge of delayed time series data, a prevalent issue in economic datasets, revealing a distinctive statistical aspect. The NNAR model demonstrates a notable capacity to capture the delayed effects of historical agricultural GDP values on future trends. Its accurate forecasts, even in the presence of data lags, underscore its robustness and appropriateness for practical applications, particularly in contexts where data availability may be limited. The emphasis on delayed time series and the proven effectiveness of NNAR in these contexts distinguishes this study, providing important insights for economic forecasting and decision making in situations with limited data.

**References**

- Leybourne SJ. Testing for Unit Roots Using Forward and Reverse Dickey‐Fuller Regressions. Oxf. Bull. Econ. Stat. 1995;57(4):559–71. DOI
- Stellwagen E, Tashman L. ARIMA: The Models of Box and Jenkins. Foresight: Int. J. Appl. Forecast. 2013(30).
- Maleki A, Nasseri S, Aminabad MS, Hadi M. Comparison of ARIMA and NNAR Models for Forecasting Water Treatment Plant’s Influent Characteristics. KSCE J. Civ. Eng. 2018;22(9):3233–45. DOI
- Daniyal M, Tawiah K, Muhammadullah S, Opoku-Ameyaw K. Comparison of Conventional Modeling Techniques with the Neural Network Autoregressive Model (NNAR): Application to COVID-19 Data. J. Healthc. Eng. 2022;2022:1–9. DOI
- Ahmar AS, Singh PK, Ruliana R, Pandey AK, Gupta S. Comparison of ARIMA, SutteARIMA, and Holt-Winters, and NNAR Models to Predict Food Grain in India. Forecasting. 2023;5(1):138–52. DOI
- Lv CX, An SY, Qiao BJ, Wu W. Time series analysis of hemorrhagic fever with renal syndrome in mainland China by using an XGBoost forecasting model. BMC Infect. Dis. 2021;21(1). DOI
- Mishra P, Al Khatib AM, Yadav S, Ray S, Lama A, Kumari B, Sharma D, Yadav R. Modeling and forecasting rainfall patterns in India: a time series analysis with XGBoost algorithm. Environ. Earth Sci. 2024;83(6):163. DOI
- Draibati Y, Mohammad M, Atwez M. Estimating Agricultural Production Function in Syria using Autoregressive Distributed Lag Approach (ARDL). J. Agric. Econ. Soc. Sci. 2020;11(12):1101–7. DOI
- Syrian Central Bureau of Statistics. National Accounts Division. 2024.
- Vrieze SI. Model selection and psychological theory: A discussion of the differences between the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Psychol. Methods. 2012;17(2):228–43. DOI
- Budholiya K, Shrivastava SK, Sharma V. An optimized XGBoost based diagnostic system for effective prediction of heart disease. J. King Saud Univ. – Comput. Inf. Sci. 2022;34(7):4514–23. DOI
- Box GE, Jenkins GM, Reinsel GC, Ljung GM. Time series analysis: forecasting and control. John Wiley & Sons; 2015. DOI
- Almarashi AM, Daniyal M, Jamal F. Modelling the GDP of KSA using linear and non-linear NNAR and hybrid stochastic time series models. Abonazel MR, editor. PLOS ONE. 2024;19(2):e0297180. DOI
- Šestanović T. Sveobuhvatan pristup predviđanju Bitcoina pomoću neuronskih mreža. Ekon. Pregl. 2024;75(1):62–85. DOI
- Madaras S. Forecasting the regional unemployment rate based on the Box-Jenkins methodology vs. the Artificial Neural Network approach. Case study of Brașov and Harghita counties. In Forum on Economics and Business. Hungarian Economists’ Society of Romania. 2018;21(135):66-78.
- Chukwueloka EH, Nwosu AO. Modelling and Prediction of Rainfall in the North-Central Region of Nigeria Using ARIMA and NNETAR Model. Climate Change Impacts on Nigeria. Springer Climate (SPCL). 2023;91–114. DOI

Cite this article:

Alakkari, K. Machine learning-based modeling of Syrian agricultural GDP trends: A comparative analysis. *DYSONA – Applied Science*, 2025;6(1): 86-95. doi: 10.30493/das.2024.478968.1125