ENHANCING NONLINEAR EQUATION SOLUTIONS THROUGH THE COMBINATION OF VARIANT NEWTON’S AND HALLEY’S METHODS
Kazhal H. Mohammed Ali1, *Bayda Gh. Fathi1
1Department of Mathematics, College of Science, University of Zakho, Zakho, Kurdistan Region, Iraq
*Corresponding author email: kazhal.mohammed@uoz.edu.krd
Received: 24 Apr 2025; Accepted: 22 Jun 2025; Published: 8 Oct 2025. https://doi.org/10.25271/sjuoz.2025.13.4.1594
ABSTRACT:
This work presents a new iterative method for solving single-variable nonlinear equations. The method achieves ninth-order convergence with only six function and derivative evaluations per step, offering both accuracy and lower computational cost. Unlike slower bracketing methods, it builds on faster open methods, though these may sometimes fail to converge. By blending ideas from Newton's and Halley's methods, the new approach provides strong performance, as shown by a detailed convergence analysis and MATLAB tests. Compared to existing techniques, it finds solutions in fewer steps and less time, making it especially effective for difficult nonlinear problems.
KEYWORDS: Newton’s Method, Variant of Newton’s Method, Halley’s Method, Efficiency Index, Nonlinear Equations.
1. INTRODUCTION
Iterative root-finding algorithms are indispensable across engineering, physics, and applied mathematics, underpinning models from nonlinear structural analysis to parameter estimation in dynamical systems (Soomro et al., 2023; Naseem et al., 2022). Bracketing methods, such as the bisection algorithm, guarantee convergence, but only at a linear rate, making them impractical for high-precision requirements (Goodman et al., 2017). Open methods, such as Newton's method (NM), achieve quadratic convergence but may diverge if the initial estimate is poor or if derivative evaluations are expensive (Kumar et al., 2013). Various mathematical models have been developed for solving differential equations, including the Successive Approximation Method (Sabali et al., 2021), the Adomian Decomposition Method (Azzo et al., 2022), and the Residual Power Series Method (Manaa et al., 2021).
Halley's method (HM) mitigates this by incorporating second derivatives to attain cubic convergence, but the extra derivative computation can outweigh its faster convergence in practice (Elhasadi, 2007). To reduce sensitivity to starting guesses while retaining high convergence order, variants such as the Weerakoon–Fernando third-order scheme (Weerakoon et al., 2000) and sixth-order Halley-type modifications (Noor et al., 2007) have been proposed; however, each entails trade-offs between per-iteration cost and overall efficiency. Silalahi et al. (2017) introduced a method, known as NIH, that combines the Halley method, the Newton method, and the Newton inverse method.
In this paper, we propose the Variant Newton–Halley Method (VNHM), which combines a third-order Newton-type predictor with a Halley-type corrector to achieve ninth-order convergence. Moreover, we prove VNHM's convergence order and compute its efficiency index via a detailed Taylor series analysis. Furthermore, we demonstrate through MATLAB experiments on a suite of benchmark functions that VNHM consistently reduces the number of iterations and CPU time compared to Newton's method, Halley's method, and the Weerakoon–Fernando variant. Although VNHM can reach very high accuracy in just a few steps when its required derivatives are cheap to compute, it becomes overly complex and expensive when those derivatives are hard to obtain or noisy. Because it requires both first- and second-order derivatives at every iteration, and its convergence guarantee applies only when the starting point is fairly close to the true solution, it is less well suited to cases where derivative information is costly or unreliable, or where only a rough answer is needed. The results were compared with those obtained from the methods in (Silalahi et al., 2017; Weerakoon et al., 2000).
The remainder of this paper is organized as follows. Section 2 reviews NM, HM, Weerakoon–Fernando variant, and NIH before introducing VNHM. Section 3 develops the convergence analysis. Section 4 describes the test functions and their known roots. Section 5 presents numerical comparisons of iteration counts, execution times, and accuracy. Section 6 concludes and outlines directions for extending VNHM to complex‐root problems.
2. ITERATIVE METHODS
This section will introduce the fundamental ideas behind the NM, HM, VNM, and NIH. Furthermore, the VNHM will be presented.
Newton's Method (NM)
For the nonlinear scalar equation $f(x) = 0$, NM is among the most effective root-finding methods (Madhu et al., 2016). The method's quadratic convergence rate makes it likely the most widely used approach for solving nonlinear equations, although it can be weakened by a poor initial guess. Moreover, the method requires the function's derivative, which is not always simple, or even possible, to compute, or which may not be expressible in terms of elementary functions (McDonough, 2007; Kusni et al., 2016). Newton's method may converge more quickly than many other methods, but performance comparison requires taking both convergence speed and cost per iteration into account (Azure et al., 2019). The general form of the NM is:

$x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)}$, $\quad f'(x_n) \neq 0$, $\quad n = 0, 1, 2, \dots$ (1)
Algorithm (NM) (Tasiu et al., 2020)
Given a sufficiently smooth function $f$ with $f'(x) \neq 0$ on an interval containing the root.
Input: Initial approximation $x_0$, error tolerance Tol, and the maximum number of iterations $N$.
Output: An approximate root, or a message of failure if the tolerance is not met within $N$ iterations.
Step 1: Set $n = 0$.
Step 2: Compute $x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)}$.
Step 3: If $|x_{n+1} - x_n| <$ Tol, then return $x_{n+1}$ as the approximate solution and stop.
Step 4: Set $n = n + 1$; if $n < N$, go to Step 2; otherwise, report failure.
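A minimal MATLAB sketch of this algorithm is given below; it is our illustration rather than the authors' code, and the names newton, f, and df (handles for $f$ and $f'$) are assumptions made for the example.

% Newton's method (1), stopping when |x_{n+1} - x_n| < tol.
function [x, iter] = newton(f, df, x0, tol, N)
    x = x0;
    for iter = 1:N
        xnew = x - f(x)/df(x);      % Newton update (1)
        if abs(xnew - x) < tol      % stopping rule of Algorithm (NM)
            x = xnew;
            return
        end
        x = xnew;
    end
    error('NM failed to converge within %d iterations', N);
end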
A Variant of Newton’s Method (VNM)
In 2000, Weerakoon and Fernando showed that a method with third-order convergence results from the derivation of NM via the indefinite integral of the function's derivative, in which the relevant area is approximated by a rectangle (Weerakoon et al., 2000). Their modification reduces the local truncation error by using a trapezoid rather than a rectangle to approximate this integral. Iterations can be performed without computing the function's second or higher derivatives, which is the VNM's most significant feature. The general form of the VNM is:

$x_{n+1} = x_n - \dfrac{2 f(x_n)}{f'(x_n) + f'(y_n)}$, (2)

where $y_n = x_n - \dfrac{f(x_n)}{f'(x_n)}$ is obtained using the standard Newton iteration.
Algorithm (VNM)
Given a function $f$ with a simple root $\alpha$, so that $f(\alpha) = 0$ and $f'(\alpha) \neq 0$.
Input: Initial approximation $x_0$, error tolerance Tol, and an optional maximum number of iterations $N$.
Output: Return $x_{n+1}$ as the root approximation, or return 'no convergence' when the tolerance criterion is not met within $N$ iterations.
Step 1: Set $n = 0$.
Step 2: Compute the Newton predictor $y_n = x_n - \dfrac{f(x_n)}{f'(x_n)}$.
Step 3: Calculate the corrector $x_{n+1} = x_n - \dfrac{2 f(x_n)}{f'(x_n) + f'(y_n)}$.
Step 4: If $|x_{n+1} - x_n| <$ Tol, then return $x_{n+1}$ and terminate.
Step 5: Set $n = n + 1$; if $n < N$, go to Step 2; otherwise, report failure.
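In the same spirit, a hedged MATLAB sketch of the VNM step (2) follows; the names vnm, f, and df are again our illustrative choices.

% Weerakoon-Fernando variant (2): Newton predictor + averaged-derivative corrector.
function [x, iter] = vnm(f, df, x0, tol, N)
    x = x0;
    for iter = 1:N
        fx = f(x);  dfx = df(x);
        y = x - fx/dfx;                    % Newton predictor
        xnew = x - 2*fx/(dfx + df(y));     % corrector (2): trapezoidal average of f'
        if abs(xnew - x) < tol
            x = xnew;
            return
        end
        x = xnew;
    end
    error('VNM failed to converge within %d iterations', N);
end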
Halley’s Iteration Method (HM)
Halley's method is a third-order root-finding algorithm closely related to Newton's method (NM). Whereas NM uses the tangent-line approximation of $f$ to achieve quadratic convergence, Halley's method incorporates second-derivative information to accelerate convergence to cubic order (Scavo et al., 1995). Given the current iterate $x_n$, Halley's update is:

$x_{n+1} = x_n - \dfrac{2 f(x_n) f'(x_n)}{2 [f'(x_n)]^2 - f(x_n) f''(x_n)}$. (3)

Both NM and HM belong to a wider family of explicit iterative schemes that exploit successively higher derivatives to improve convergence order (Yasir Abdul-Hassan, 2016).
Algorithm (HM) (Thota et al., 2023)
Given a sufficiently smooth function $f$ and a simple root $\alpha$ of $f$, so that $f(\alpha) = 0$, $f'(\alpha) \neq 0$, and $2[f'(x_n)]^2 - f(x_n) f''(x_n) \neq 0$.
Input: An initial guess $x_0$, an error tolerance Tol, and a maximum number of iterations $N$.
Output: An approximate root, or a message of failure if no convergence is achieved within $N$ iterations.
Step 1: Set $n = 0$.
Step 2: For the given $x_n$, calculate Halley's update
$x_{n+1} = x_n - \dfrac{2 f(x_n) f'(x_n)}{2 [f'(x_n)]^2 - f(x_n) f''(x_n)}$.
Step 3: If $|x_{n+1} - x_n| <$ Tol, then return $x_{n+1}$ and stop.
Step 4: Set $n = n + 1$; if $n < N$, go to Step 2; otherwise, report failure.
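A corresponding MATLAB sketch of Halley's update (3) is shown below (our illustration; halley, f, df, and d2f, handles for $f$, $f'$, and $f''$, are assumed names).

% Halley's method (3): cubically convergent, needs f, f', and f''.
function [x, iter] = halley(f, df, d2f, x0, tol, N)
    x = x0;
    for iter = 1:N
        fx = f(x);  d1 = df(x);  d2 = d2f(x);
        xnew = x - 2*fx*d1/(2*d1^2 - fx*d2);   % Halley update (3)
        if abs(xnew - x) < tol
            x = xnew;
            return
        end
        x = xnew;
    end
    error('HM failed to converge within %d iterations', N);
end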
Combination of the Newton Method, the Newton Inverse Method, and the Halley Method (NIH)
This approach solves nonlinear equations by combining the Newton method, the Newton inverse method, and the Halley method, as introduced by Silalahi et al. (2017).
Algorithm (NIH) (Silalahi et al., 2017)
Given a sufficiently smooth function $f$.
Input: An initial guess $x_0$, an error tolerance Tol, and a maximum number of iterations $N$.
Output: An approximate root, or a message of failure if convergence is not achieved within $N$ iterations.
Step 1: Set $n = 0$.
Step 2: For the given $x_n$, compute the Newton and Newton-inverse intermediate approximations of (Silalahi et al., 2017).
Step 3: Evaluate the Halley step at the resulting point to obtain $x_{n+1}$.
Step 4: If $|x_{n+1} - x_n| <$ Tol or the maximum number of iterations is reached, terminate; otherwise, set $n = n + 1$ and return to Step 2.
Proposed Variant Newton–Halley Method (VNHM)
The Proposed Variant Newton–Halley Method (VNHM) combines the strengths of predictor–corrector techniques with the fast convergence of higher-order iterative schemes. The method is developed through a detailed Taylor series analysis. VNHM begins with a Variant Newton Method (VNM) step to ensure stability, followed by a Halley-type corrector to enhance the convergence rate without sacrificing accuracy. The main objective is to provide a method that is both efficient and reliable, while keeping the computational requirements reasonable. One full iteration reads

$x_{n+1} = z_n - \dfrac{2 f(z_n) f'(z_n)}{2 [f'(z_n)]^2 - f(z_n) f''(z_n)}$, (4)

where $y_n = x_n - \dfrac{f(x_n)}{f'(x_n)}$ and $z_n = x_n - \dfrac{2 f(x_n)}{f'(x_n) + f'(y_n)}$.

In this section, we provide all the essential steps and explanations needed for full understanding and transparency, as is standard for introducing a new algorithm in numerical analysis.
Derivation of the Method:
Suppose $f$ is smooth enough and has a simple root $\alpha$ (i.e., $f(\alpha) = 0$ and $f'(\alpha) \neq 0$). Start with an initial guess $x_0$ close to the root. The method proceeds in three clear steps:
Step 1: Newton's Predictor.
$y_n = x_n - \dfrac{f(x_n)}{f'(x_n)}$.
This is the usual Newton step, giving a better estimate of the root.
Step 2: Variant Newton (VNM) Correction.
$z_n = x_n - \dfrac{2 f(x_n)}{f'(x_n) + f'(y_n)}$.
Here, the derivative values at $x_n$ and $y_n$ are averaged (an approach inspired by the Weerakoon–Fernando method) to obtain a more stable, higher-order update without needing second derivatives. This step often improves the estimate considerably, especially when Newton's step is unstable.
Step 3: Halley's High-Order Corrector.
$x_{n+1} = z_n - \dfrac{2 f(z_n) f'(z_n)}{2 [f'(z_n)]^2 - f(z_n) f''(z_n)}$.
Finally, Halley's formula is applied at $z_n$. Since $z_n$ is already a good approximation, applying Halley's step here delivers even higher accuracy, usually more than is possible with either Newton or VNM alone.
Algorithm (VNHM)
Given $f$ with $f''$ continuous and $\alpha$ a simple root of $f$.
Input: Initial guess $x_0$, tolerance Tol, and a maximum number of iterations $N$.
Output: An approximate root, or a message of failure if no convergence is achieved within $N$ iterations.
Step 1: Set $n = 0$.
Step 2: For the given $x_n$, calculate the predictor steps
$y_n = x_n - \dfrac{f(x_n)}{f'(x_n)}$, $\quad z_n = x_n - \dfrac{2 f(x_n)}{f'(x_n) + f'(y_n)}$.
Step 3: Evaluate the Halley correction step
$x_{n+1} = z_n - \dfrac{2 f(z_n) f'(z_n)}{2 [f'(z_n)]^2 - f(z_n) f''(z_n)}$.
Step 4: If $|x_{n+1} - x_n| <$ Tol, then return $x_{n+1}$ as the approximate solution and stop.
Step 5: Set $n = n + 1$. If $n \le N$, go to Step 2; otherwise, report that the algorithm failed to converge.
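The complete scheme (4) can be sketched in MATLAB as follows (again our illustration, with assumed handle names f, df, and d2f); with the intermediate values cached as below, each pass performs exactly the six function and derivative evaluations counted in Table 4.

% VNHM (4): Newton predictor, VNM correction, Halley corrector.
function [x, iter] = vnhm(f, df, d2f, x0, tol, N)
    x = x0;
    for iter = 1:N
        fx = f(x);  dfx = df(x);
        y = x - fx/dfx;                              % Step 1: Newton predictor
        z = x - 2*fx/(dfx + df(y));                  % Step 2: VNM correction
        fz = f(z);  dfz = df(z);  d2fz = d2f(z);
        xnew = z - 2*fz*dfz/(2*dfz^2 - fz*d2fz);     % Step 3: Halley corrector
        if abs(xnew - x) < tol                       % |x_{n+1} - x_n| < Tol
            x = xnew;
            return
        end
        x = xnew;
    end
    error('VNHM failed to converge within %d iterations', N);
end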
Remarks and Limitations:
Stability: As with all open (non-bracketing) methods, VNHM carries no global convergence guarantee. If the iteration starts too far from the root, the method may fail to converge or may diverge entirely.
Applicability: VNHM is most useful when high precision is required and both first and second derivatives are readily available. If calculating derivatives is expensive or error-prone, this method may not be ideal.
3. CONVERGENCE ANALYSIS

In this section, we present the convergence analysis of the new three-step iterative method (4) for solving nonlinear equations.

Theorem: Let $\alpha$ be a simple zero of a function $f$ that is continuously differentiable up to order eight on an open interval containing $\alpha$. If the initial guess $x_0$ is chosen sufficiently close to $\alpha$, then the three-step VNHM iteration (4) converges to $\alpha$ with ninth-order accuracy.

Proof: Write $e_n = x_n - \alpha$ and $c_k = \dfrac{f^{(k)}(\alpha)}{k!\, f'(\alpha)}$ for $k = 2, 3, 4, \dots$ Since $f(\alpha) = 0$ and $f'(\alpha) \neq 0$, Taylor's theorem around the simple root $\alpha$ gives

$f(x_n) = f'(\alpha)\left[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + O(e_n^5)\right]$. (5)

This expansion forms the basis for our error-recurrence analysis. Differentiating (5) with respect to $x$, we obtain

$f'(x_n) = f'(\alpha)\left[1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3 + O(e_n^4)\right]$. (6)

Dividing (5) by (6) and expanding the quotient with the binomial series $(1 + t)^{-1} = 1 - t + t^2 - \dots$, where all terms of order four and higher are collected in the remainder, we get

$\dfrac{f(x_n)}{f'(x_n)} = e_n - c_2 e_n^2 + 2(c_2^2 - c_3) e_n^3 + O(e_n^4)$. (7)

When (7) is substituted into the Newton predictor $y_n = x_n - f(x_n)/f'(x_n)$ of (4), the result is

$y_n - \alpha = c_2 e_n^2 + 2(c_3 - c_2^2) e_n^3 + O(e_n^4)$, (8)

which shows that Newton's iteration (1) converges with order 2, with asymptotic error constant $c_2$. Now we determine how $z_n$ converges to $\alpha$. The Taylor expansion of $f'$ around $\alpha$, evaluated at $y_n$ and combined with (8), gives

$f'(y_n) = f'(\alpha)\left[1 + 2c_2^2 e_n^2 + O(e_n^3)\right]$. (9)

Adding (6) and (9),

$f'(x_n) + f'(y_n) = f'(\alpha)\left[2 + 2c_2 e_n + (3c_3 + 2c_2^2) e_n^2 + O(e_n^3)\right]$. (10)

A further binomial expansion then yields

$\dfrac{2 f(x_n)}{f'(x_n) + f'(y_n)} = e_n - \left(c_2^2 + \dfrac{c_3}{2}\right) e_n^3 + O(e_n^4)$, (11)

so that the VNM corrector $z_n$ of (4) satisfies

$z_n - \alpha = \left(c_2^2 + \dfrac{c_3}{2}\right) e_n^3 + O(e_n^4)$. (12)

This demonstrates clearly that the iteration (2) has third-order convergence. Now set $\varepsilon_n = z_n - \alpha$ and expand $f$, $f'$, and $f''$ around $\alpha$ at the point $z_n$:

$f(z_n) = f'(\alpha)\left[\varepsilon_n + c_2 \varepsilon_n^2 + c_3 \varepsilon_n^3 + O(\varepsilon_n^4)\right]$, (13)
$f'(z_n) = f'(\alpha)\left[1 + 2c_2 \varepsilon_n + 3c_3 \varepsilon_n^2 + O(\varepsilon_n^3)\right]$, (14)
$f''(z_n) = f'(\alpha)\left[2c_2 + 6c_3 \varepsilon_n + O(\varepsilon_n^2)\right]$. (15)

Substituting (13)–(15) into the Halley corrector of (4) gives, for the numerator and the denominator,

$2 f(z_n) f'(z_n) = [f'(\alpha)]^2 \left[2\varepsilon_n + 6c_2 \varepsilon_n^2 + (4c_2^2 + 8c_3)\varepsilon_n^3 + O(\varepsilon_n^4)\right]$, (16)
$2 [f'(z_n)]^2 - f(z_n) f''(z_n) = [f'(\alpha)]^2 \left[2 + 6c_2 \varepsilon_n + (6c_2^2 + 6c_3)\varepsilon_n^2 + O(\varepsilon_n^3)\right]$. (17)

Dividing (16) by (17) via the binomial expansion,

$\dfrac{2 f(z_n) f'(z_n)}{2 [f'(z_n)]^2 - f(z_n) f''(z_n)} = \varepsilon_n + (c_3 - c_2^2)\varepsilon_n^3 + O(\varepsilon_n^4)$. (18)

Subtracting $\alpha$ from both sides of the Halley step and using (18), we obtain

$e_{n+1} = x_{n+1} - \alpha = (c_2^2 - c_3)\,\varepsilon_n^3 + O(\varepsilon_n^4)$. (19)

Repeating the same algebraic expansion shows that every lower-order contribution drops out, so the first nonzero term is obtained by substituting (12) into (19):

$e_{n+1} = (c_2^2 - c_3)\left(c_2^2 + \dfrac{c_3}{2}\right)^3 e_n^9 + O(e_n^{10})$. (20)

This shows that the order of convergence of our new proposed method (VNHM) defined in (4) is nine. This completes the proof.
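The ninth-order rate can also be checked numerically. Because order nine exhausts double precision after about two steps, the sketch below (our illustration; it assumes the Symbolic Math Toolbox for variable-precision arithmetic, and the handle names are ours) estimates the computational order of convergence on $f_1(x) = x^3 + 4x^2 - 10$ from the test set of the next section:

% Computational order of convergence (COC) of VNHM (4) on f1, x0 = 1.
digits(1000);                          % vpa precision; doubles saturate immediately
f   = @(x) x^3 + 4*x^2 - 10;
df  = @(x) 3*x^2 + 8*x;
d2f = @(x) 6*x + 8;
X = vpa(1);                            % iterate history, starting from x0 = 1
for n = 1:4
    x = X(end);
    y = x - f(x)/df(x);                % Newton predictor
    z = x - 2*f(x)/(df(x) + df(y));    % VNM correction
    X(end+1) = z - 2*f(z)*df(z)/(2*df(z)^2 - f(z)*d2f(z));  % Halley corrector
end
d = abs(diff(X));                      % successive step lengths |x_{n+1} - x_n|
coc = double(log(d(4)/d(3))/log(d(3)/d(2)))   % should approach 9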
4. TESTING FUNCTIONS

We used the same test functions as (Weerakoon et al., 2000) and display the approximate zero found, up to the 14th decimal place:

$f_1(x) = x^3 + 4x^2 - 10$, $\alpha = 1.36523001341410$;
$f_2(x) = \sin^2(x) - x^2 + 1$, $\alpha = 1.40449164821534$;
$f_3(x) = x^2 - e^x - 3x + 2$, $\alpha = 0.25753028543986$;
$f_4(x) = \cos(x) - x$, $\alpha = 0.73908513321516$;
$f_5(x) = (x - 1)^3 - 1$, $\alpha = 2$;
$f_6(x) = x^3 - 10$, $\alpha = 2.15443469003188$;
$f_7(x) = x e^{x^2} - \sin^2(x) + 3\cos(x) + 5$, $\alpha = -1.20764782713092$;
$f_8(x)$ as given in (Weerakoon et al., 2000), $\alpha = 4.62210416355283$;
$f_9(x) = e^{x^2 + 7x - 30} - 1$, $\alpha = 3$.
5. NUMERICAL RESULTS

This study was conducted using the following hardware and software: a personal computer with an Intel(R) Core(TM) i7-10870H CPU @ 2.20 GHz and 16 GB of RAM, running MATLAB on the Windows 11 Ultimate 64-bit operating system. We now solve the test equations with the new algorithm derived in this paper and compare it with NM, HM, VNM, and NIH in terms of the number of iterations, execution time, and accuracy at the set precision. The tolerance is Tol $= 10^{-14}$.
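The comparison harness can be organized along the following lines (a sketch under our assumptions: the method sketches from Section 2 are saved as newton.m, vnm.m, halley.m, and vnhm.m on the MATLAB path; NIH is omitted here because its update formulas are given in Silalahi et al. (2017)):

% Illustrative timing/iteration comparison on f1(x) = x^3 + 4x^2 - 10, x0 = -0.5.
f   = @(x) x.^3 + 4*x.^2 - 10;
df  = @(x) 3*x.^2 + 8*x;
d2f = @(x) 6*x + 8;
tol = 1e-14;  N = 500;  x0 = -0.5;
tic; [xN, kN] = newton(f, df, x0, tol, N);       tN = toc;
tic; [xH, kH] = halley(f, df, d2f, x0, tol, N);  tH = toc;
tic; [xV, kV] = vnm(f, df, x0, tol, N);          tV = toc;
tic; [xW, kW] = vnhm(f, df, d2f, x0, tol, N);    tW = toc;
fprintf('NM:   %3d iterations, %.6f s\n', kN, tN);
fprintf('HM:   %3d iterations, %.6f s\n', kH, tH);
fprintf('VNM:  %3d iterations, %.6f s\n', kV, tV);
fprintf('VNHM: %3d iterations, %.6f s\n', kW, tW);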
Number of Iterations:
Table 1 presents the number of iterations required by each method. The results show that VNHM requires the fewest total iterations across all test cases, with only 63 iterations in total, fewer than NM, HM, VNM, or NIH. It is also important to note that the number of iterations depends on both the chosen tolerance and the initial starting point; when the initial guess is closer to the actual root, fewer iterations are generally needed.
Table 1: Comparison of the number of iterations of each method.
| Function | $x_0$ | NM  | HM  | VNM | NIH | VNHM |
| $f_1$    | -0.5  | 132 | 74  | 7   | 7   | 4    |
|          | 1     | 6   | 4   | 4   | 3   | 3    |
|          | 2     | 6   | 4   | 4   | 3   | 3    |
|          | -0.3  | 54  | 53  | 7   | 11  | 4    |
| $f_2$    | 1     | 7   | 6   | 5   | 3   | 3    |
|          | 3     | 7   | 7   | 4   | 3   | 3    |
| $f_3$    | 2     | 6   | 5   | 5   | 3   | 3    |
|          | 3     | 7   | 5   | 5   | 3   | 4    |
| $f_4$    | 1     | 5   | 4   | 3   | 3   | 2    |
|          | 1.7   | 5   | 4   | 4   | 3   | 3    |
|          | -0.3  | 6   | 5   | 4   | 3   | 3    |
| $f_5$    | 3.5   | 8   | 5   | 6   | 3   | 4    |
|          | 2.5   | 7   | 5   | 5   | 3   | 3    |
| $f_6$    | 1.5   | 7   | 4   | 5   | 3   | 3    |
| $f_7$    | -2    | 9   | 5   | 6   | 4   | 4    |
| $f_8$    | 5     | 10  | 7   | 6   | 4   | 5    |
| $f_9$    | 3.5   | 13  | 7   | 9   | 5   | 5    |
|          | 3.25  | 9   | 6   | 7   | 4   | 4    |
| Total    |       | 304 | 210 | 96  | 71  | 63   |
Execution Time:
Table 2 displays each method's execution time. Based on the computational results, the VNHM method has the smallest total running time.
Table 2: Execution time (s) for each method across test functions.
| Function | $x_0$ | NM       | HM       | VNM      | NIH      | VNHM     |
| $f_1$    | -0.5  | 0.012174 | 0.006913 | 0.004003 | 0.005636 | 0.003387 |
|          | 1     | 0.004597 | 0.001927 | 0.002460 | 0.003301 | 0.001864 |
|          | 2     | 0.003197 | 0.003522 | 0.002494 | 0.001724 | 0.002106 |
|          | -0.3  | 0.003729 | 0.007959 | 0.003999 | 0.010023 | 0.002390 |
| $f_2$    | 1     | 0.005388 | 0.004280 | 0.004652 | 0.012623 | 0.002643 |
|          | 3     | 0.003272 | 0.006936 | 0.002767 | 0.003364 | 0.002234 |
| $f_3$    | 2     | 0.002747 | 0.003331 | 0.003039 | 0.003406 | 0.002665 |
|          | 3     | 0.005315 | 0.002674 | 0.003690 | 0.002218 | 0.002550 |
| $f_4$    | 1     | 0.002221 | 0.002055 | 0.003536 | 0.003405 | 0.002452 |
|          | 1.7   | 0.002353 | 0.002799 | 0.003601 | 0.002561 | 0.002967 |
|          | -0.3  | 0.002911 | 0.003183 | 0.003014 | 0.002931 | 0.002227 |
| $f_5$    | 3.5   | 0.004415 | 0.003599 | 0.003602 | 0.002140 | 0.003596 |
|          | 2.5   | 0.003077 | 0.002941 | 0.002604 | 0.002100 | 0.002663 |
| $f_6$    | 1.5   | 0.003304 | 0.002422 | 0.002262 | 0.003735 | 0.002177 |
| $f_7$    | -2    | 0.004583 | 0.006338 | 0.004292 | 0.004053 | 0.004003 |
| $f_8$    | 5     | 0.005394 | 0.007633 | 0.009630 | 0.004031 | 0.007925 |
| $f_9$    | 3.5   | 0.002997 | 0.004680 | 0.003668 | 0.003145 | 0.004239 |
|          | 3.25  | 0.003444 | 0.004051 | 0.005279 | 0.003543 | 0.005348 |
| Total    |       | 0.075118 | 0.077243 | 0.068592 | 0.073939 | 0.057436 |
Accuracy:
Table 3 presents the computed root values obtained by each method for the selected nonlinear equations. The results show that all five methods generally converge to the expected root values across most test functions. For the benchmark function $f_8$, however, the root values found by NM, HM, and VNM (all approximately 3.437471) differ notably from the value obtained by both VNHM and NIH (approximately 4.622104), which is closer to the true solution reported by Weerakoon and Fernando. This comparison highlights that, while all methods perform similarly on standard cases, both VNHM and NIH demonstrate superior accuracy when solving more challenging equations. These findings confirm the robustness of VNHM, particularly for difficult problems where traditional methods may fail to reach the correct root.
Table 3: Accuracy of computed root values for each method.
| Function | $x_0$ | NM | HM | VNM | NIH | VNHM |
| $f_1$ | -0.5 | 1.36523001341410 | 1.36523001341410 | 1.36523001341410 | 1.36523001341410 | 1.36523001341410 |
|       | 1    | 1.36523001341410 | 1.36523001341410 | 1.36523001341410 | 1.36523001341410 | 1.36523001341410 |
|       | 2    | 1.36523001341410 | 1.36523001341410 | 1.36523001341410 | 1.36523001341410 | 1.36523001341410 |
|       | -0.3 | 1.36523001341410 | 1.36523001341410 | 1.36523001341410 | 1.36523001341410 | 1.36523001341410 |
| $f_2$ | 1    | 1.40449164821534 | 1.40449164821534 | 1.40449164821534 | 1.40449164821534 | 1.40449164821534 |
|       | 3    | 1.40449164821534 | 1.40449164821534 | 1.40449164821534 | 1.40449164821534 | 1.40449164821534 |
| $f_3$ | 2    | 0.25753028543986 | 0.25753028543986 | 0.25753028543986 | 0.25753028543986 | 0.25753028543986 |
|       | 3    | 0.25753028543986 | 0.25753028543986 | 0.25753028543986 | 0.25753028543986 | 0.25753028543986 |
| $f_4$ | 1    | 0.73908513321516 | 0.73908513321516 | 0.73908513321516 | 0.73908513321516 | 0.73908513321516 |
|       | 1.7  | 0.73908513321516 | 0.73908513321516 | 0.73908513321516 | 0.73908513321516 | 0.73908513321516 |
|       | -0.3 | 0.73908513321516 | 0.73908513321516 | 0.73908513321516 | 0.73908513321516 | 0.73908513321516 |
| $f_5$ | 3.5  | 2 | 2 | 2 | 2 | 2 |
|       | 2.5  | 2 | 2 | 2 | 2 | 2 |
| $f_6$ | 1.5  | 2.15443469003188 | 2.15443469003188 | 2.15443469003188 | 2.15443469003188 | 2.15443469003188 |
| $f_7$ | -2   | -1.20764782713092 | -1.20764782713092 | -1.20764782713092 | -1.20764782713092 | -1.20764782713092 |
| $f_8$ | 5    | 3.43747174342177 | 3.43747174342177 | 3.43747174342177 | 4.62210416355284 | 4.62210416355284 |
| $f_9$ | 3.5  | 3 | 3 | 3 | 3 | 3 |
|       | 3.25 | 3 | 3 | 3 | 3 | 3 |
Comparisons of Efficiency Index:
The term "efficiency index" compares the performance of different iterative methods. It depends on the order of convergence and the number of functional evaluations of the iterative process. If $r$ denotes the order of convergence and $m$ denotes the number of functional evaluations of an iterative method, then the efficiency index E.I. is defined as:

E.I. $= r^{1/m}$.

On this basis, NM (Nazeer et al., 2016) has an efficiency of $2^{1/2} \approx 1.414$. HM (Noor et al., 2007) has an order of convergence of three, and the number of functional evaluations required for this method is three, so its efficiency is $3^{1/3} \approx 1.442$. The VNM likewise has an efficiency of $3^{1/3} \approx 1.442$. Each VNHM iteration requires two evaluations of the function ($f(x_n)$ and $f(z_n)$), three evaluations of its first derivative ($f'(x_n)$, $f'(y_n)$, and $f'(z_n)$), and one evaluation of its second derivative ($f''(z_n)$); thus, this method has six functional evaluations in total, i.e., $m = 6$. Also, in the earlier section, it was proven that the order of convergence of the VNHM is nine, i.e., $r = 9$. Thus, the efficiency index of VNHM is:

E.I. $= 9^{1/6} \approx 1.442$.
Table 4: Efficiency indices of the compared iterative methods.
| Method           | Number of function and derivative evaluations | Efficiency index |
| NM (quadratic)   | 2 | $2^{1/2} \approx 1.414$ |
| HM (3rd order)   | 3 | $3^{1/3} \approx 1.442$ |
| VNM (3rd order)  | 3 | $3^{1/3} \approx 1.442$ |
| VNHM (9th order) | 6 | $9^{1/6} \approx 1.442$ |
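The efficiency-index column follows directly from E.I. $= r^{1/m}$; the one-line MATLAB check below simply restates the orders and evaluation counts from Table 4:

% Efficiency indices E.I. = r^(1/m) for NM, HM, VNM, VNHM (Table 4).
r  = [2 3 3 9];          % orders of convergence
m  = [2 3 3 6];          % function/derivative evaluations per iteration
EI = r.^(1./m)           % = [1.4142  1.4422  1.4422  1.4422]

Note that $9^{1/6} = 3^{1/3}$, so VNHM reaches ninth-order convergence at the same per-iteration efficiency index as HM and VNM, while requiring far fewer iterations in practice (Table 1).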
Real-World Applications:
VNHM's ability to find roots with very high precision in just a few steps makes it highly suitable for real-time control problems in robotics, where fast and accurate solutions are needed (Martin, 2019). The method is also advantageous for tuning nonlinear stiffness curves in structural analysis (Engelberger, 2014) and for solving transcendental equations in optical design, such as determining resonant frequencies in photonic crystals (Reddy, 2003). In each of these cases, the combination of rapid convergence and moderate derivative-evaluation cost enables VNHM to outperform traditional Newton- or Halley-based methods.
Figure 1: Reduction of the relative error from iteration 1 to iteration 5.
Figure 2: Reduction of the relative error from iteration 1 to iteration 5.
Figures 1 and 2 illustrate the convergence rates of the five iterative methods for solving nonlinear equations. As shown, the VNHM consistently achieves high accuracy in the fewest steps across all test problems. In practical terms, this demonstrates that an effective combination of iterative techniques can significantly reduce computation time and enhance robustness for a wide range of equations.
6. CONCLUSIONS
In this paper, we combined the Halley method (HM) and the Variant Newton method (VNM) to construct the Variant Newton–Halley Method (VNHM) for solving nonlinear equations. We have shown that the proposed method has ninth-order convergence. Using several test examples, the performance and efficiency of the VNHM have been analyzed. Tables 1, 2, 3, and 4 show that the proposed iterative algorithm performs best among the well-known existing iterative algorithms compared, in terms of accuracy, speed, number of iterations, efficiency index, and computational order of convergence. The relative error also decreases fastest among the compared methods, as shown in Figures 1 and 2. The VNHM is effective for real-valued nonlinear equations but is currently not applicable to complex roots. Future work will focus on extending the method to complex roots.
Acknowledgment:
The authors would like to express their sincere gratitude to the Science Journal of the University of Zakho for its continued support and for providing a platform to share this research. We also thank the anonymous reviewers for their valuable comments and suggestions, which helped improve the quality of this paper.
Ethical Approval:
This study does not involve human participants or animals, and therefore, ethical approval was not required.
Declarations:
Authors' contribution: K.H.M.: Methodology, Writing original draft, Formal analysis, Validation, Software. B.Gh.F.: Resources, Acquisition, Formal Analysis, Investigation, Software Review.
Funding: This work received no external funding.
Availability of data and materials: Data sharing does not apply to this article, as no data sets were generated or analyzed during the current study.
Competing interests: The authors declare that they have no financial or other competing interests.
REFERENCES
Azure, I., Aloliga, G., & Doabil, L. (2019). Comparative Study of Numerical Methods for Solving Non-linear Equations Using Manual Computation. Mathematics Letters, 5(4), 41. DOI: 10.11648/j.ml.20190504.11
Azzo, S. M., & Manaa, S. A. (2022). Sumudu-Decomposition Method to Solve Generalized Hirota-Satsuma Coupled KdV System. Science Journal of University of Zakho, 10(2), 43–47. DOI: 10.25271/sjuoz.2022.10.2.879
Elhasadi, O. I. (2007). Newton's and Halley's methods for real polynomials [Master's thesis, University of Guelph]. University of Guelph Atrium. https://atrium.lib.uoguelph.ca/xmlui/handle/10214/1002
Engelberger, J. (2014). Springer Handbook of Robotics.
Goodman, R. H., & Wróbel, J. K. (2017). High-Order Bisection Method for Computing Invariant Manifolds of Two-Dimensional Maps. International Journal of Bifurcation and Chaos, 21(7), 2017–2042. DOI: 10.1142/S0218127411029604
Kumar, M., Singh, A. K., & Srivastava, A. (2013). Various Newton-type iterative methods for solving nonlinear equations. Journal of the Egyptian Mathematical Society, 21(3), 334–339. DOI: 10.1016/j.joems.2013.03.001
Kusni, A., & Shamsul, A. (2016). Numerical Study of Some Iterative Methods for Solving Nonlinear Equations. International Journal of Engineering Science Invention, 5(2), 1–10. Retrieved from www.ijesi.org
Madhu, K., & Jayaraman, J. (2016). Higher order methods for nonlinear equations and their basins of attraction. Mathematics, 4(2), 1–20. DOI: 10.3390/math4020022
Manaa, S. A., Easif, F. H., & Murad, J. J. (2021). Residual Power Series Method for Solving Klein-Gordon Schrödinger Equation. Science Journal of University of Zakho, 9(2), 123–127. DOI: 10.25271/sjuoz.2021.9.2.810
Martin, O. J. F. (2019). Molding the flow of light with metasurfaces. 2019 URSI Asia-Pacific Radio Science Conference, AP-RASC 2019, 32–43. DOI: 10.23919/URSIAP-RASC.2019.8738549
McDonough, J. M. (2007). Lectures in Basic Computational Numerical Analysis. University of Kentucky. Retrieved from http://www.engr.uky.edu/~acfd/egr537-lctrs.pdf
Naseem, A., Rehman, M. A., & Abdeljawad, T. (2022). A Novel Root-Finding Algorithm with Engineering Applications and its Dynamics via Computer Technology. IEEE Access, 10(1), 19677–19684. DOI: 10.1109/ACCESS.2022.3150775
Nazeer, W., & Tanveer, M. (2016). Modified Golbabai and Javidi's Method (MGJM) for Solving Nonlinear Functions with Convergence of Order Six.
Noor, M. A., Khan, W. A., & Hussain, A. (2007). A new modified Halley method without second derivatives for nonlinear equation. Applied Mathematics and Computation, 189(2), 1268–1273. DOI: 10.1016/j.amc.2006.12.011
Reddy, J. N. (2003). Mechanics of Laminated Composite Plates and Shells. Mechanics of Laminated Composite Plates and Shells. DOI: 10.1201/b12409
Sabali, A. J., Manaa, S. A., & Easif, F. H. (2021). New Successive Approximation Methods for Solving Strongly Nonlinear Jaulent-Miodek Equations. Science Journal of University of Zakho, 9(4), 193–197. DOI: 10.25271/sjuoz.2021.9.4.869
Scavo, T. R., & Thoo, J. B. (1995). On the Geometry of Halley’s Method. The American Mathematical Monthly, 102(5), 417–426. DOI: 10.1080/00029890.1995.12004594
Soomro, S. A., Shaikh, A. A., Qureshi, S., & Ali, B. (2023). A Modified Hybrid Method For Solving Non-Linear Equations With Computational Efficiency. VFAST Transactions on Mathematics, 11(2), 126–137. DOI: 10.21015/vtm.v11i2.1620
Silalahi, B. P., Laila, R., & Sitanggang, I. S. (2017). A combination method for solving nonlinear equations. IOP Conference Series: Materials Science and Engineering, 166(1), 012011.
Tasiu, A. R., Abbas, A., Alhassan, M. N., & Umar, A. N. (2020). Comparative Study on Some Methods of Handling Nonlinear Equations. Anale. Seria Informatică, XVIII(2), 2–5.
Thota, S., Gemechu, T., & Ayoade, A. A. (2023). On New Hybrid Root-Finding Algorithms for Solving Transcendental Equations Using Exponential and Halley's Methods. Ural Mathematical Journal, 9(1), 176–186. DOI: 10.15826/umj.2023.1.016
Weerakoon, S., & Fernando, T. (2000). A variant of Newton's method with accelerated third-order convergence. Applied Mathematics Letters, 13(8), 87–93. DOI: 10.1016/S0893-9659(00)00100-2
Yasir Abdul-Hassan, N. (2016). New Predictor-Corrector Iterative Methods with Twelfth-Order Convergence for Solving Nonlinear Equations. American Journal of Applied Mathematics, 4(4), 175. DOI: 10.11648/j.ajam.20160404.12
This is an open access under a CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/)