Abstract
In the current era, the theory of vagueness and multi-criteria group decision making (MCGDM) techniques are extensively applied by researchers in diverse fields like recruitment policies, financial investment, design of complex circuits, clinical diagnosis of disease, material management, etc. Recently, the trapezoidal neutrosophic number (TNN) has drawn major attention from researchers, as it plays an essential role in capturing the vagueness and uncertainty of daily-life problems. In this article, we derive and establish new logarithmic operational laws for the trapezoidal neutrosophic number (TNN), where the logarithmic base μ is a positive real number. Using these logarithmic operational laws, the logarithmic trapezoidal neutrosophic weighted arithmetic aggregation (Larm) operator and the logarithmic trapezoidal neutrosophic weighted geometric aggregation (Lgeo) operator are introduced. Furthermore, a new MCGDM approach is demonstrated with the help of the logarithmic operational laws and aggregation operators, which has been successfully deployed to solve a numerical problem. We show the stability and reliability of the proposed technique through a sensitivity analysis. Finally, a comparative analysis with existing methods is presented to validate the rationality and efficiency of the proposed technique.
1 Introduction
Professor Zadeh [1] introduced the notion of fuzzy set theory to capture the vagueness and uncertainty of realistic problems, which was extended into intuitionistic fuzzy set (IFS) theory by Professor Atanassov [2]. To capture the uncertainty, inconsistency and indeterminacy of data in real-life problems, Professor Smarandache [3] introduced the neutrosophic set (NS) as an extension of IFS, which contains a truth membership function (μ), an indeterminacy membership function (ι) and a falsity membership function (σ). Recently, researchers have introduced pentagonal [4], hexagonal [5] and heptagonal [6] fuzzy numbers and their applications in different fields. Wang et al. manifested the conception of the single-valued neutrosophic set (SVNS) [7] and the interval neutrosophic set (INS) [8], which are subclasses of NSs, and many other recent works [9,10,11,12] have improved and brought innovation to the NS theory. Liu and Yuan [13] proposed the idea of the triangular intuitionistic fuzzy number (TIFN), which combines the triangular fuzzy number and the intuitionistic fuzzy number. Qin et al. [14] proposed a TODIM-based multi-criteria decision-making (MCDM) method for TIFNs. Ye [15] introduced the trapezoidal intuitionistic fuzzy number (TrIFN) and solved an MCDM problem in this environment. Ye [16] manifested the novel idea of the trapezoidal neutrosophic number (TNN) by mixing the concepts of SVNS and trapezoidal fuzzy number, and utilized it to solve an MCDM problem in the trapezoidal neutrosophic (TN) arena. It is to be noted that both trapezoidal fuzzy numbers and neutrosophic numbers are important and effective tools in the field of uncertainty. The concept of TNN can therefore be used fruitfully in the field of uncertainty to capture impreciseness and indeterminacy in a rigorous way. In this direction, Jana et al. [17] have already defined interval trapezoidal neutrosophic numbers and applied them to solve an MCGDM problem. The single-valued trapezoidal neutrosophic number (SVTNN) [18] is another extension of SVNS. In an SVTNN, each component is presented in the form of a trapezoidal number that has a truth membership degree, an indeterminacy membership degree and a falsity membership degree. Deli and Subas [19] manifested a ranking technique for TNNs and displayed a multi-attribute decision-making (MADM) procedure. Liang et al. [20] initiated score, accuracy and certainty functions of single-valued trapezoidal neutrosophic numbers (SVTNNs) by using the center of gravity. Biswas et al. [21] defined a cosine similarity measure for trapezoidal fuzzy neutrosophic numbers and presented an MADM method based on it. Pramanik and Mallick [22] structured a VIKOR technique for multi-attribute group decision making (MAGDM) in the trapezoidal neutrosophic environment. Biswas et al. [23] gave the idea of a TOPSIS method for MADM in the TN environment, whereas Sahin et al. [24] presented some weighted arithmetic and geometric operators in the SVTN environment and gave their application to an MCDM problem. Abdel-Basset et al. [25] defined type 2 neutrosophic numbers (T2NN) and manifested a T2NN-TOPSIS technique to deal with a decision-making problem. Recently, Chakraborty et al. [26,27,28,29] initiated the geometrical concept of the pentagonal neutrosophic number and its applications in the operations research, networking and graph theory arenas.
In this article, we have introduced new logarithmic operational laws for TNN where the logarithmic base μ is a positive real number and subsequently developed logarithmic trapezoidal neutrosophic weighted arithmetic aggregation (Larm) operator and logarithmic trapezoidal neutrosophic weighted geometric aggregation (Lgeo) operator which have been used to construct a new scheme of MCGDM process.
1.1 Motivation
In the current decade, researchers in the neutrosophic arena are mainly interested in operator-based MCDM problems. In the field of aggregation, a key activity is to design new operational laws. The four essential operational laws of TNNs, namely addition, multiplication, scalar multiplication and power, have been characterized by Ye [16]. Recently, Haque et al. [30] introduced an exponential operational law in which the bases are crisp numbers and the exponents are TNNs. Moreover, the logarithmic operational law is a fundamental operational law in the field of aggregation. Li [31] presented logarithmic operational laws for IFNs and developed the corresponding aggregation operators. Garg [32] set forward a logarithmic operational law for SVNSs and applied it to an MADM problem. Garg [33] defined the logarithmic operational law for Pythagorean fuzzy numbers and developed the corresponding aggregation operators and an MCDM technique to solve real-life problems. From the literature survey, we could not find any logarithmic operational law for TNNs to date. To address this research gap, in this research article we define a logarithmic operational law for TNNs. Furthermore, we successfully adopt the proposed logarithmic operations to develop new aggregation formulas that aggregate the uncertain information provided by different decision makers in an MCGDM process. Finally, we suggest an MCGDM strategy with the help of our defined operational laws and the corresponding aggregation operators, namely Larm and Lgeo.
1.2 Novelties
Lots of work has already been established in the TN environment. Researchers have built different formulations of TNNs and applied them in different fields, yet much work remains to be done in this arena. In this article, we attempt to address the following points:
-
i)
To define a new logarithmic operational law (LOL) for TNNs, which is a useful supplement to the existing operational laws, and to analyze its algebraic properties.
-
ii)
To introduce new operators like logarithmic trapezoidal neutrosophic weighted arithmetic aggregation (Larm) and logarithmic trapezoidal neutrosophic weighted geometric aggregation (Lgeo) operators.
-
iii)
To propose an MCGDM strategy in the TN environment.
-
iv)
To demonstrate the proposed method by solving a numerical problem based on a real-life scenario.
-
v)
To perform a sensitivity analysis showing the utility and efficiency of the designed method.
1.3 Structure of the paper
The remainder of the article is organized in several sections. Section 2 presents some fundamental definitions related to SVNSs and TNNs. In Section 3, we introduce the new logarithmic operational law for TNNs and briefly discuss its algebraic properties. In Section 4, we develop two aggregation operators based on our defined logarithmic operational law. In Section 5, an MCGDM method is manifested using our defined operational laws and the related aggregation operators. A numerical problem is taken to exhibit the applicability of the defined logarithmic operational law, and a sensitivity analysis is performed to show the utility of the designed method in Section 6. Finally, we conclude our results in Section 7.
2 Mathematical preliminaries
Basic definitions and operations related to SVNSs and TNNs are presented as follows:
Definition 2.1
Let S be a universal set. Then a set of the form
$$ \widetilde{N}= \lbrace \langle s, \mu(s), \iota(s), \sigma(s)\rangle : s \in S \rbrace $$
is said to be a single-valued neutrosophic set (SVNS) [3] on S, where \( \mu : S \rightarrow [0,~1]\), \( \iota : S \rightarrow [0,~1]\) and \( \sigma : S \rightarrow [0,~1] \) with the condition 0 ≤ μ(s) + ι(s) + σ(s) ≤ 3. Here, μ(s), ι(s) and σ(s) are called the truth-membership function, indeterminacy-membership function and falsity-membership function, respectively, of the element s to the set \(\widetilde{N}\). For convenience, we represent this SVNS as \(\widetilde {N}= \langle \mu , ~\iota , ~\sigma \rangle \), where μ, ι, σ ∈ [0, 1] and 0 ≤ μ + ι + σ ≤ 3, and call it a single-valued neutrosophic number (SVNN).
Definition 2.2
Let S be a universal set. Then a trapezoidal neutrosophic set \(\widetilde {A}\) is defined by Ye [16] in the following form:
$$ \widetilde{A}= \lbrace \langle s, T(s), I(s), F(s)\rangle : s \in S \rbrace, $$
where T(s) ⊂ [0, 1], I(s) ⊂ [0, 1], F(s) ⊂ [0, 1] are three trapezoidal fuzzy numbers \( T(s)=\left ({\alpha }(s),~{\beta }(s), ~{\gamma }(s), ~{\mu }(s)\right ): S \rightarrow [0,~1]\), \( I(s)= \left ({\lambda }(s),~{\mu }(s), {\kappa }(s), ~{\iota }(s)\right ) : S \rightarrow [0,~1]\) and \( F(s)= \left ({\phi }(s),~{\rho }(s),~{\psi }(s), {\sigma }(s)\right ) : S \rightarrow [0,~1] \) with the condition 0 ≤ μ(s) + ι(s) + σ(s) ≤ 3 for all s ∈ S. Here, T(s), I(s) and F(s) are called the truth-membership function, indeterminacy-membership function and falsity-membership function, respectively, of the element s to the set \(\widetilde {A}\). For convenience, we represent the set as \({\widetilde A}= \left \lbrace \langle ({a},{b},{c},{d}),({k},{l},{m},{n}),({x},{y},{v},{w})\rangle \right . \): \(\left . ~0\leq {d} + {n} + {w}\leq 3 \right \rbrace \) and call it a trapezoidal neutrosophic number (TNN).
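For illustration, a TNN of the above form can be stored programmatically as three quadruples together with the constraint that the fourth components sum to at most 3; a minimal Python sketch (the class and field names are ours, not part of [16]):

```python
from dataclasses import dataclass
from typing import Tuple

Quad = Tuple[float, float, float, float]  # a trapezoidal membership: four points in [0, 1]

@dataclass(frozen=True)
class TNN:
    """Trapezoidal neutrosophic number <(a,b,c,d), (l,m,n,p), (x,y,v,w)>."""
    truth: Quad          # (a, b, c, d)
    indeterminacy: Quad  # (l, m, n, p)
    falsity: Quad        # (x, y, v, w)

    def __post_init__(self) -> None:
        for quad in (self.truth, self.indeterminacy, self.falsity):
            if not all(0.0 <= q <= 1.0 for q in quad):
                raise ValueError("every component must lie in [0, 1]")
        # the fourth components must satisfy 0 <= d + p + w <= 3 (Definition 2.2)
        if not 0.0 <= self.truth[3] + self.indeterminacy[3] + self.falsity[3] <= 3.0:
            raise ValueError("d + p + w must lie in [0, 3]")

# an illustrative TNN (values chosen only for demonstration)
A = TNN((0.5, 0.6, 0.7, 0.8), (0.1, 0.2, 0.3, 0.4), (0.2, 0.3, 0.4, 0.5))
```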
Proposition 2.1
Let \(\widetilde {A}_{k}= \left \langle ({a}_{k},{b}_{k}, {c}_{k}, {d}_{k}),({l}_{k}, {m}_{k}, {n}_{k},{p}_{k}),({x}_{k}, {y}_{k}, {v}_{k}, {w}_{k} )\right \rangle \) (k = 1, 2) be any two TNNs. Then, we have the following operational rules [16]:
-
i)
\(\widetilde {A}_{1} \bigoplus \widetilde {A}_{2}=\left \langle \left ({a}_{1} + {a}_{2} - {a}_{1}{a}_{2}, {b}_{1} + {b}_{2} -{b}_{1}{b}_{2},{c}_{1} \right .\right .\left .+ {c}_{2} -{c}_{1}{c}_{2}, {d}_{1} + {d}_{2} -{d}_{1}{d}_{2}\right ),\left ({l}_{1}{l}_{2},{m}_{1}{m}_{2}, {n}_{1}{n}_{2}, {p}_{1}{p}_{2} \right ),\left .\left ({x}_{1}{x}_{2}, {y}_{1}{y}_{2}, {v}_{1}{v}_{2}, {w}_{1} {w}_{2} \right ) \right \rangle \)
-
ii)
\( \widetilde {A}_{1} \bigotimes \widetilde {A}_{2}=\left \langle \left ({a}_{1}{a}_{2}, {b}_{1}{b}_{2}, {c}_{1}{c}_{2},{d}_{1}{d}_{2}\right )\right .,\left ({l}_{1} + {l}_{2}- {l}_{1}{l}_{2},{m}_{1} + {m}_{2}-{m}_{1}{m}_{2}, {n}_{1} + {n}_{2} \right .\left .-{n}_{1}{n}_{2}, {p}_{1} + {p}_{2} -{p}_{1}{p}_{2} \right ),\left ({x}_{1} + {x}_{2} - {x}_{1}{x}_{2}, {y}_{1}+ {y}_{2} -{y}_{1}{y}_{2}, {v}_{1}+ {v}_{2} \right .\left .\left .- {v}_{1}{v}_{2}, {w}_{1} +{w}_{2} - {w}_{1}{w}_{2} \right ) \right \rangle \)
-
iii)
\( {\mu }\widetilde {A}_{1} = \left \langle \left (1-(1-{a}_{1})^{\mu }, 1-(1-{b}_{1})^{\mu }, 1-(1-{c}_{1})^{\mu }, 1-(1-{d}_{1})^{\mu }\right ),\left ({{l}_{1}}^{\mu }, {{m}_{1}}^{\mu }, {{n}_{1}}^{\mu }, {{p}_{1}}^{\mu }\right ), \left ({{x}_{1}}^{\mu }, {{y}_{1}}^{\mu }, {{v}_{1}}^{\mu }, {{w}_{1}}^{\mu }\right )\right \rangle \)
-
iv)
\( (\widetilde {A}_{1})^{\mu } = \left \langle \left ({{a}_{1}}^{\mu }, {{b}_{1}}^{\mu }, {{c}_{1}}^{\mu }, {{d}_{1}}^{\mu }\right ),\left (1-(1-{l}_{1})^{\mu }, 1-(1-{{m}_{1}})^{\mu }, 1-(1-{{n}_{1}})^{\mu }, 1-(1-{{p}_{1}})^{\mu }\right ),\left (1-(1-{{x}_{1}})^{\mu }, 1-(1-{{y}_{1}})^{\mu }, 1-(1-{{v}_{1}})^{\mu }, 1-(1-{{w}_{1}})^{\mu }\right ) \right \rangle \)
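The four rules above act componentwise, so they translate directly into code. The following Python sketch implements them (helper names are ours; for compactness a TNN is kept here as a plain triple of truth, indeterminacy and falsity quadruples rather than the dataclass above):

```python
from typing import Tuple

Quad = Tuple[float, float, float, float]
TNN = Tuple[Quad, Quad, Quad]  # (truth, indeterminacy, falsity) quadruples

def tnn_add(A1: TNN, A2: TNN) -> TNN:
    """A1 (+) A2 as in Proposition 2.1(i)."""
    (T1, I1, F1), (T2, I2, F2) = A1, A2
    return (tuple(t1 + t2 - t1 * t2 for t1, t2 in zip(T1, T2)),
            tuple(i1 * i2 for i1, i2 in zip(I1, I2)),
            tuple(f1 * f2 for f1, f2 in zip(F1, F2)))

def tnn_mul(A1: TNN, A2: TNN) -> TNN:
    """A1 (x) A2 as in Proposition 2.1(ii)."""
    (T1, I1, F1), (T2, I2, F2) = A1, A2
    return (tuple(t1 * t2 for t1, t2 in zip(T1, T2)),
            tuple(i1 + i2 - i1 * i2 for i1, i2 in zip(I1, I2)),
            tuple(f1 + f2 - f1 * f2 for f1, f2 in zip(F1, F2)))

def tnn_scalar(k: float, A: TNN) -> TNN:
    """k * A as in Proposition 2.1(iii), k > 0."""
    T, I, F = A
    return (tuple(1 - (1 - t) ** k for t in T),
            tuple(i ** k for i in I),
            tuple(f ** k for f in F))

def tnn_power(A: TNN, k: float) -> TNN:
    """A ** k as in Proposition 2.1(iv), k > 0."""
    T, I, F = A
    return (tuple(t ** k for t in T),
            tuple(1 - (1 - i) ** k for i in I),
            tuple(1 - (1 - f) ** k for f in F))
```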
Definition 2.3
Let \(\widetilde {A}_{s}= \langle ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}),({l}_{s}, {m}_{s}, {n}_{s},{p}_{s}),({x}_{s}, {y}_{s}, {v}_{s}, {w}_{s} )\rangle \) (s = 1, 2,⋯ , p) be any collection of TNNs. Then the trapezoidal neutrosophic number weighted arithmetic averaging (TNNWAA) operator is defined in [16] as
$$ TNNWAA(\widetilde{A}_{1}, \widetilde{A}_{2},\cdots,\widetilde{A}_{p})=\bigoplus\limits_{s=1}^{p}\phi_{s}\widetilde{A}_{s}, $$
where ϕs (s = 1, 2,⋯ , p) is the weight of \(\widetilde A_{s}~(s=1, 2,\cdots ,p)\) with ϕs ∈ [0, 1] and \(\displaystyle \sum \limits ^{p}_{s=1}\phi _ s=1\).
Definition 2.4
Let \(\widetilde {A}_{s}= \left \langle ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}),\right .\left .({l}_{s}, {m}_{s}, {n}_{s},{p}_{s}),({x}_{s}, {y}_{s}, {v}_{s}, {w}_{s} )\right \rangle \), (s = 1, 2,⋯ , p) be a collection of TNNs. Then the trapezoidal neutrosophic number weighted geometric averaging (TNNWGA) operator is defined in [16] as
$$ TNNWGA(\widetilde{A}_{1}, \widetilde{A}_{2},\cdots,\widetilde{A}_{p})=\bigotimes\limits_{s=1}^{p}(\widetilde{A}_{s})^{\phi_{s}}, $$
where ϕs (s = 1, 2,⋯ , p) is the weight of \(\widetilde A_{s}~(s=1, 2,\cdots ,p)\) with ϕs ∈ [0, 1] and \(\displaystyle {\sum }^{p}_{s=1}\phi _{s}=1\).
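Both operators can be computed by folding the rules of Proposition 2.1 (scalar multiplication with ⊕ for TNNWAA, the power law with ⊗ for TNNWGA), which yields product closed forms; a hedged Python sketch with our own helper names, again storing a TNN as a triple of quadruples:

```python
import math
from typing import List, Sequence, Tuple

Quad = Tuple[float, float, float, float]
TNN = Tuple[Quad, Quad, Quad]  # (truth, indeterminacy, falsity) quadruples

def tnnwaa(tnns: List[TNN], weights: Sequence[float]) -> TNN:
    """Fold of phi_s * A_s under (+) of Proposition 2.1 (weights sum to 1)."""
    T = tuple(1 - math.prod((1 - A[0][j]) ** w for A, w in zip(tnns, weights)) for j in range(4))
    I = tuple(math.prod(A[1][j] ** w for A, w in zip(tnns, weights)) for j in range(4))
    F = tuple(math.prod(A[2][j] ** w for A, w in zip(tnns, weights)) for j in range(4))
    return (T, I, F)

def tnnwga(tnns: List[TNN], weights: Sequence[float]) -> TNN:
    """Fold of A_s ** phi_s under (x) of Proposition 2.1 (weights sum to 1)."""
    T = tuple(math.prod(A[0][j] ** w for A, w in zip(tnns, weights)) for j in range(4))
    I = tuple(1 - math.prod((1 - A[1][j]) ** w for A, w in zip(tnns, weights)) for j in range(4))
    F = tuple(1 - math.prod((1 - A[2][j]) ** w for A, w in zip(tnns, weights)) for j in range(4))
    return (T, I, F)
```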
2.1 Application of aggregation operators
Aggregation operators are mainly used in MCDM/MCGDM techniques to aggregate the input values of certain alternatives under different criteria. Suppose we want to evaluate an alternative under different criteria in which the computational entities are in the form of TNNs. Then we need a technique to aggregate all the evaluation values into a single value in the form of a TNN. For this purpose, we use the aggregation operators TNNWAA & TNNWGA introduced by Ye [16]. Since TNN is another environment in the neutrosophic field, these aggregation operators have a crucial impact on MCDM/MCGDM techniques in the TN environment. Here, we present the following example to demonstrate the application of the above-mentioned aggregation operators:
Example 2.1
Suppose someone wants to buy a new mobile phone based on the criteria of camera quality, graphics and RAM services. Let the available alternatives be the mobile companies X1, X2 and X3, which are evaluated under the following criteria:
-
1)
Y1 indicates the camera quality.
-
2)
Y2 indicates the graphics quality services.
-
3)
Y3 indicates the RAM quality services.
whose weight vector is (0.33, 0.32, 0.35). Figure 1 shows the schematic diagram of the application of aggregation operators.
The input values of the decision making problem in TN environment are given in the following matrix

Now, if we use the operator TNNWAA on the above decision matrix, then we get the evaluation values of the alternatives as follows:
Again, if we use the operator TNNWGA, we get
From the above example, it is observed that after utilizing the aggregation operators, we get the evaluation values of the alternatives in aggregated form. Now, if we apply the score function [16], then we observe that mobile company X3 is the best option in the presence of the three underlying criteria. Based on the above example, we observe that if we want to evaluate some alternatives under different criteria in the TN environment, then we first need to apply the aggregation operators to convert the system into a single decision matrix. After that, utilizing a fruitful crispification technique, we can get the associated crisp value of each alternative. Finally, the best alternative can be determined by taking the highest crisp value among the finite alternatives.
2.2 De-Neutrosophication of a TNN
De-Neutrosophication is the technique by which an appreciable crisp result is generated from a neutrosophic number. In the neutrosophic environment, researchers are highly devoted to converting a TNN into a crisp number through various methods and techniques. Here, we use the Removal Area Technique (RAT) to calculate the de-Neutrosophication value of a TNN, which is defined as follows:
Definition 2.1.1
Let \(\widetilde {A}= \langle ({a},{b}, {c}, {d}),({l}, {m}, {n},{p}),({x}, {y}, {v}, {w} )\rangle \) be any TNN, then the de-Neutrosophication value of \(\widetilde {A}\) (utilizing Removal Area technique) is given by Chakraborty et al. [10] as
Definition 2.1.2
Let \(\widetilde {A}_{1}\) and \(\widetilde {A}_{2}\) be any two TNNs, then the ranking technique is defined as follows
-
i)
If \(D_{Neu}(\widetilde {A}_{1})>D_{Neu}(\widetilde {A}_{2})\), then \( \widetilde {A}_{1}>\widetilde {A}_{2}\)
-
ii)
If \(D_{Neu}(\widetilde {A}_{1})<D_{Neu}(\widetilde {A}_{2})\), then \( \widetilde {A}_{1}<\widetilde {A}_{2}\).
3 Logarithmic operational law for TNN
In this section, the logarithmic function on TNNs is defined and studied, where the base μ is considered to be a positive real number. Let \(\widetilde {A}\) be a TNN and μ > 0 be a real number. Since, in the real field, \(\log _{\mu }0\) and \(\log _{1}x\) are undefined (x being a real number), we assume that \( \widetilde A \neq 0\), \( \widetilde A \neq \left \langle [0, 0, 0, 0],[1, 1, 1, 1],[1, 1, 1, 1] \right \rangle \) and μ≠ 1. We define the logarithm of a TNN as follows:
Definition 3.1
Let V be a universal set and let \(\widetilde {A}= \left \langle ({a},{b}, {c}, {d}),({l}, {m}, {n},{p}),({x}, {y}, {v}, {w} )\right \rangle \) be any TNN. Then, for \(0< \mu \leq \min \left ({a},{b},{c},{d}, 1-{l}, 1-{m}, 1-{n}, 1-{p}, 1-{x}, 1-{y}, 1-{v}, 1-{w}\right )< 1\), we define
$$ \log_{\mu}\widetilde{A} = \left\langle \left[1-\log_{\mu}{a}, 1-\log_{\mu}{b}, 1-\log_{\mu}{c}, 1-\log_{\mu}{d}\right],\left[\log_{\mu}(1-{l}),\log_{\mu}(1-{m}),\log_{\mu}(1-{n}),\log_{\mu}(1-{p})\right],\left[\log_{\mu}(1-{x}),\log_{\mu}(1-{y}),\log_{\mu}(1-{v}),\log_{\mu}(1-{w})\right]\right\rangle. $$
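A small computational sketch of this operational law, applied componentwise as in the proofs below (function and variable names are ours; the TNN and the base are illustrative):

```python
import math
from typing import Tuple

Quad = Tuple[float, float, float, float]
TNN = Tuple[Quad, Quad, Quad]  # (truth, indeterminacy, falsity) quadruples

def tnn_log(A: TNN, mu: float) -> TNN:
    """log_mu(A) for 0 < mu <= min(a, ..., 1-w) < 1, taken componentwise."""
    T, I, F = A
    bound = min(min(T), min(1 - i for i in I), min(1 - f for f in F))
    if not 0.0 < mu <= bound < 1.0:
        raise ValueError("base must satisfy 0 < mu <= min(a,...,1-w) < 1")
    return (tuple(1 - math.log(t, mu) for t in T),
            tuple(math.log(1 - i, mu) for i in I),
            tuple(math.log(1 - f, mu) for f in F))

# illustrative values only
A = ((0.5, 0.6, 0.7, 0.8), (0.1, 0.2, 0.3, 0.4), (0.2, 0.3, 0.4, 0.5))
print(tnn_log(A, 0.4))  # every resulting component stays in [0, 1] (cf. Theorem 3.1)
```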
Here, we shall discuss some elementary Properties of \(\log _{\mu } \widetilde {A}\) which are as follows:
Theorem 3.1
Let \(\widetilde {A}= \left \langle ({a},{b}, {c}, {d}),({l}, {m}, {n},{p}),\right .\left .({x}, {y}, {v}, {w} )\right \rangle \) be a TNN. Then \(\log _{\mu } \widetilde {A}\) is a TNN.
Proof
Let \(\widetilde {A}= \left \langle ({a},{b}, {c}, {d}),({l}, {m}, {n},{p}),({x}, {y}, {v}, {w} )\right \rangle \) be a TNN. Then a, b, c, d, l, m, n, p, x, y, v, w ∈ [0, 1] with 0 ≤ d + p + w ≤ 3. □
Case 1
When 0 < μ ≤ min (a, b, c, d, 1 − l, 1 − m, 1 − n, 1 − p, 1 − x, 1 − y, 1 − v, 1 − w) < 1, then we have
\( 0 \leq \log _{\mu }{a},\log _{\mu }{b},\log _{\mu }{c},\log _{\mu }{d},\log _{\mu }(1-{l}),\log _{\mu }(1-{m}),\log _{\mu }(1-{n}),\log _{\mu }(1-{p}),\log _{\mu }(1-{x}),\log _{\mu }(1-{y}),\log _{\mu }(1-{v}),\log _{\mu }(1-{w})\leq 1\)
Hence, \(0 \leq 1- \log _{\mu }{a}, 1- \log _{\mu }{b}, 1- \log _{\mu }{c}, 1- \log _{\mu }{d},\log _{\mu }(1-{l}),\log _{\mu }(1-{m}),\log _{\mu }(1-{n}),\log _{\mu }(1-{p}),\log _{\mu }(1-{x}),\log _{\mu }(1-{y}),\log _{\mu }(1-{v}),\log _{\mu }(1-{w})\leq 1\) and \(0\leq (1-\log _{\mu }{d})+\log _{\mu }(1-{p})+\log _{\mu }(1-{w})\leq 3\).
Thus, \( \log _{\mu }\widetilde {A}\) is a TNN.
Case 2
When \( 0<\frac {1}{\mu } \leq \) min (a, b, c, d, 1 − l, 1 − m, 1 − n, 1 − p, 1 − x, 1 − y, 1 − v, 1 − w) < 1, then proceeding in a similar way as in Case 1 above, we can prove that \(\log _{\mu }\widetilde {A}\) is a TNN.
Thus, we conclude that \(\log _{\mu }\widetilde {A}\) is a TNN.
Theorem 3.2
Let \(\widetilde {A}= \langle ({a},{b}, {c}, {d}),({l}, {m}, {n},{p}),\) (x, y, v, w)〉 be any TNN and 0 < μ ≤ min (a, b, c, d, 1 − l, 1 − m, 1 − n, 1 − p, 1 − x, 1 − y, 1 − v, 1 − w) < 1, then
-
i)
\(\mu ^{\log _{\mu }\widetilde {A}}= \widetilde {A}\)
-
ii)
\(\log _{\mu }\mu ^{\widetilde {A}}=\widetilde {A}\)
Proof
-
i)
Using the Properties 2.1 and the Definition 3.1, we get
$$ \begin{array}{@{}rcl@{}} \mu^{\log_{\mu}\widetilde{A}} &=& \left\langle \left[ \mu^{1-(1-\log_{\mu}{a})},\mu^{1-(1-\log_{\mu}{b})},\mu^{1-(1-\log_{\mu}{c})},\mu^{1-(1-\log_{\mu}{d})}\right],\left[1-\mu^{\log_{\mu}(1-{l})}, 1-\mu^{\log_{\mu}(1-{m})}, 1-\mu^{\log_{\mu}(1-{n})},\right.\right.\\ &&\left.\left. 1-\mu^{\log_{\mu}(1-{p})}\right],\left[1-\mu^{\log_{\mu}(1-{x})}, 1-\mu^{\log_{\mu}(1-{y})}, 1-\mu^{\log_{\mu}(1-{v})}, 1-\mu^{\log_{\mu}(1-{w})}\right]\right\rangle\\ &=&\left\langle \left[\mu^{\log_{\mu}{a}},\mu^{\log_{\mu}{b}},\mu^{\log_{\mu}{c}},\mu^{\log_{\mu}{d}}\right],\left[1-(1-{l}), 1-(1-{m}), 1-(1-{n}), 1-(1-{p})\right],\left[1-(1-{x}),\right.\right.\\ &&\left.\left.1-(1-{y}), 1-(1-{v}), 1-(1-{w})\right]\right\rangle\\ &=& \left\langle ({a},{b}, {c}, {d}),({l}, {m}, {n},{p}),({x}, {y}, {v}, {w} )\right\rangle\\ &=&\widetilde{A}. \end{array} $$ -
ii)
Again utilizing Properties 2.1 and the Definition 3.2, we get
$$ \begin{array}{@{}rcl@{}} \log_{\mu}\mu^{\widetilde{A}} &=& \log_{\mu}\left\langle [\mu^{1-{a}},\mu^{1-{b}},\mu^{1-{c}},\mu^{1-{d}}],[1-\mu^{{l}}, 1-\mu^{{m}}, 1-\mu^{{n}}, 1-\mu^{{p}}],[1-\mu^{{x}}, 1-\mu^{{y}}, 1-\mu^{{v}}, 1-\mu^{{w}}]\right\rangle\\ &= &\left\langle \left[1-\log_{\mu}\mu^{1-{a}}, 1-\log_{\mu}\mu^{1-{b}}, 1-\log_{\mu}\mu^{1-{c}}, 1-\log_{\mu}\mu^{1-{d}}\right],\left[\log_{\mu}(1-(1-\mu^{{l}})),\log_{\mu}(1-(1-\mu^{{m}})),\right.\right.\\ &&\left.\log_{\mu}(1-(1-\mu^{{n}})),\log_{\mu}(1-(1-\mu^{{p}}))\right],\left[\log_{\mu}(1-(1-\mu^{{x}})),\log_{\mu}(1-(1-\mu^{{y}})),\log_{\mu}(1-(1-\mu^{{v}})),\right.\\ &&\left.\left.\log_{\mu}(1-(1-\mu^{{w}}))\right]\right\rangle\\ &=&\left\langle ({a},{b}, {c}, {d}),({l}, {m}, {n},{p}),({x}, {y}, {v}, {w} )\right\rangle\\ &=&\widetilde{A}. \end{array} $$
□
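As a sanity check, the two identities of Theorem 3.2 can be verified numerically using the componentwise forms of \(\log_{\mu}\widetilde{A}\) and \(\mu^{\widetilde{A}}\) appearing in the proof above (the TNN and the base below are illustrative, not from the paper):

```python
import math

# Illustrative TNN <(a,b,c,d),(l,m,n,p),(x,y,v,w)> and an admissible base mu
A = ((0.5, 0.6, 0.7, 0.8), (0.1, 0.2, 0.3, 0.4), (0.2, 0.3, 0.4, 0.5))
mu = 0.4  # satisfies 0 < mu <= min(a, ..., 1-w) = 0.5 < 1

def tnn_log(A, mu):
    # componentwise log_mu(A) as in Definition 3.1
    T, I, F = A
    return (tuple(1 - math.log(t, mu) for t in T),
            tuple(math.log(1 - i, mu) for i in I),
            tuple(math.log(1 - f, mu) for f in F))

def tnn_exp(A, mu):
    # mu^A from the exponential operational law used in the proof above
    T, I, F = A
    return (tuple(mu ** (1 - t) for t in T),
            tuple(1 - mu ** i for i in I),
            tuple(1 - mu ** f for f in F))

B = tnn_exp(tnn_log(A, mu), mu)   # should recover A (Theorem 3.2(i))
C = tnn_log(tnn_exp(A, mu), mu)   # should recover A (Theorem 3.2(ii))
assert all(math.isclose(x, y) for X, Y in zip(A, B) for x, y in zip(X, Y))
assert all(math.isclose(x, y) for X, Y in zip(A, C) for x, y in zip(X, Y))
```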
Theorem 3.3
Let \(\widetilde {A}_{t}= \left \langle ({a}_{t},{b}_{t}, {c}_{t}, {d}_{t}),({l}_{t}, {m}_{t}, {n}_{t},{p}_{t}),\right .\) \(\left .({x}_{t}, {y}_{t}, {v}_{t}, {w}_{t} )\right \rangle \) (t = 1, 2) be any two TNNs and 0 < μ ≤ min (at, bt, ct, dt, 1 − lt, 1 − mt, 1 − nt, 1 − pt, 1 − xt, 1 − yt, 1 − vt, 1 − wt) < 1. Then
-
i)
\( \log _{\mu }{\widetilde {A}_{1}} \bigoplus \log _{\mu }\widetilde {A}_{2} = \log _{\mu }\widetilde {A}_{2}\bigoplus \log _{\mu }\widetilde {A}_{1}\);
-
ii)
\( \log _{\mu }{\widetilde {A}_{1}} \bigotimes \log _{\mu }\widetilde {A}_{2} = \log _{\mu }\widetilde {A}_{2}\bigotimes \log _{\mu }\widetilde {A}_{1}\).
Proof
The proof of the above Theorem follows from Properties 2.1 and Definition 3.1. □
Theorem 3.4
Let \(\widetilde {A}_{t}= \langle ({a}_{t},{b}_{t}, {c}_{t}, {d}_{t}),({l}_{t}, {m}_{t}, {n}_{t},{p}_{t}),\) (xt, yt, vt, wt)〉 (t = 1, 2, 3) be any three TNNs and 0 < μ ≤ min (at, bt, ct, dt, 1 − lt, 1 − mt, 1 − nt, 1 − pt, 1 − xt, 1 − yt, 1 − vt, 1 − wt) < 1. Then
-
i)
\( \log _{\mu }{\widetilde {A}_{1}} \bigoplus \log _{\mu }\widetilde {A}_{2} \bigoplus \log _{\mu }\widetilde {A}_{3} = \log _{\mu }\widetilde {A}_{3} \bigoplus \log _{\mu }\widetilde {A}_{2}\) \(\bigoplus \log _{\mu }\widetilde {A}_{1}\);
-
ii)
\( \log _{\mu }{\widetilde {A}_{1}} \bigotimes \log _{\mu }\widetilde {A}_{2} \bigotimes \log _{\mu }\widetilde {A}_{3} = \log _{\mu }\widetilde {A}_{3} \bigotimes \log _{\mu }\widetilde {A}_{2}\) \(\bigotimes \log _{\mu }\widetilde {A}_{1}\).
Proof
The proof of the above Theorem follows from Properties 2.1 and Definition 3.1. □
Theorem 3.5
Let \(\widetilde {A}_{t}= \left \langle ({a}_{t},{b}_{t}, {c}_{t}, {d}_{t}),~({l}_{t}, {m}_{t}, {n}_{t},{p}_{t})\right .\), \(\left .({x}_{t}, {y}_{t}, {v}_{t}, {w}_{t} )\right \rangle \) (t = 1, 2) be any two TNNs and 0 < μ ≤ min \(\left ({a}_{t},{b}_{t}, {c}_{t}, {d}_{t}, 1-{l}_{t}, 1-{m}_{t}, 1-{n}_{t}, 1\!-{p}_{t}, 1-{x}_{t}, 1-\right .\left .{y}_{t}, 1-{v}_{t}, 1-{w}_{t}\right )< 1\). Then
-
i)
\( k(\log _{\mu }\widetilde {A}_{1} \bigoplus \log _{\mu }\widetilde {A}_{2})= k \log _{\mu }\widetilde {A}_{1} \bigoplus k\log _{\mu }\widetilde {A}_{2}\);
-
ii)
(\( \log _{\mu }\widetilde {A}_{1} \bigotimes \log _{\mu }\widetilde {A}_{2})^{k}=(\log _{\mu }\widetilde {A}_{1})^{k} \bigotimes (\log _{\mu }\widetilde {A}_{2})^{k};\)
-
iii)
\( k_{1} \log _{\mu }\widetilde {A}_{1} \bigoplus k_{2}\log _{\mu }\widetilde {A}_{1}=(k_{1} + k_{2})\log _{\mu }\widetilde {A}_{1}\);
-
iv)
\((\log _{\mu }\widetilde {A}_{1})^{k_{1}}\bigotimes (\log _{\mu }\widetilde {A}_{1})^{k_{2}}= (\log _{\mu }\widetilde {A}_{1})^{k_{1} + k_{2}};\)
-
v)
\(((\log _{\mu }\widetilde {A}_{1})^{k_{1}})^{k_{2}}= (\log _{\mu }\widetilde {A}_{1})^{k_{1}k_{2}},\) where k, k1,& k2 are positive real numbers.
Proof
-
i)
We know,
$$ \begin{array}{lll} &&\log_{\mu}\widetilde{A}_{1}\\ &=&\left\langle\left[1-\log_{\mu}{a}_{1}, 1-\log_{\mu}{b}_{1}, 1-\log_{\mu}{c}_{1}, 1-\log_{\mu}{d}_{1}\right],\left[\log_{\mu}(1-{l}_{1}),\log_{\mu}(1-{m}_{1}),\log_{\mu}(1-{n}_{1}),\log_{\mu}(1-{p}_{1})\right],\right.\\ &&\left.\left[\log_{\mu}(1-{x}_{1}),\log_{\mu}(1-{y}_{1}),\log_{\mu}(1-{v}_{1}),\log_{\mu}(1-{w}_{1})\right]\right\rangle. \end{array} $$$$ \begin{array}{lll} &&\log_{\mu}\widetilde{A}_{2}\\ &=&\left\langle\left[1-\log_{\mu}{a}_{2}, 1-\log_{\mu}{b}_{2}, 1-\log_{\mu}{c}_{2}, 1-\log_{\mu}{d}_{2}\right],\left[\log_{\mu}(1-{l}_{2}),\log_{\mu}(1-{m}_{2}),\log_{\mu}(1-{n}_{2}),\log_{\mu}(1-{p}_{2})\right],\right.\\ &&\left.\left[\log_{\mu}(1-{x}_{2}),\log_{\mu}(1-{y}_{2}),\log_{\mu}(1-{v}_{2}),\log_{\mu}(1-{w}_{2})\right]\right\rangle . \\ &\therefore&\log_{\mu}\widetilde{A}_{1}\bigoplus \log_{\mu} \widetilde{A}_{2} \\ &=& \left\langle \left[1- (\log_{\mu}{a}_{1})(\log_{\mu}{a}_{2}), 1- (\log_{\mu}{b}_{1})(\log_{\mu}{b}_{2}), 1- (\log_{\mu}{c}_{1})(\log_{\mu}{c}_{2}), 1- (\log_{\mu}{d}_{1})(\log_{\mu}{d}_{2})\right],\right.\\ &&[\log_{\mu}(1-{l}_{1})\log_{\mu}(1-{l}_{2}),\log_{\mu}(1-{m}_{1})\log_{\mu}(1-{m}_{2}),\log_{\mu}(1-{n}_{1})\log_{\mu}(1-{n}_{2})\log_{\mu}(1-{p}_{1})\log_{\mu}(1-{p}_{2})],\\ &&\left.[\log_{\mu}(1-{x}_{1})\log_{\mu}(1-{x}_{2}),\log_{\mu}(1-{y}_{1})\log_{\mu}(1-{y}_{2}),\log_{\mu}(1-{v}_{1})\log_{\mu}(1-{v}_{2}),\log_{\mu}(1-{w}_{1})\log_{\mu}(1-{w}_{2})]\right\rangle. \end{array} $$Now for k > 0 we have,
$$ \begin{array}{lll} && k(\log_{\mu} \widetilde{A}_{1}\bigoplus \log_{\mu} \widetilde{A}_{2}), \\ &=& \langle \left[1- ((\log_{\mu}{a}_{1})(\log_{\mu}{a}_{2}))^{k}, 1- ((\log_{\mu}{b}_{1})(\log_{\mu}{b}_{2}))^{k}, 1- ((\log_{\mu}{c}_{1})(\log_{\mu}{c}_{2}))^{k}, 1- ((\log_{\mu}{d}_{1})(\log_{\mu}{d}_{2}))^{k}\right],\\ &&\left[((\log_{\mu}(1-{l}_{1})\log_{\mu}(1-{l}_{2}))^{k},((\log_{\mu}(1-{m}_{1})\log_{\mu}(1-{m}_{2}))^{k},((\log_{\mu}(1-{n}_{1})\log_{\mu}(1-{n}_{2}))^{k},\right.\\ &&\left.((\log_{\mu}(1-{p}_{1})\log_{\mu}(1-{p}_{2}))^{k}\right],\left[(((\log_{\mu}(1-{x}_{1})\log_{\mu}(1-{x}_{2}))^{k},((\log_{\mu}(1-{y}_{1})\log_{\mu}(1-{y}_{2}))^{k},\right.\\ &&\left( (\log_{\mu}(1-{v}_{1})\log_{\mu}(1-{v}_{2}))^{k},((\log_{\mu}(1-{w}_{1})\log_{\mu}(1-{w}_{2}))^{k}\right] \rangle\\ &=&\left\langle[1-(\log_{\mu}{a}_{1})^{k}, 1-(\log_{\mu}{b}_{1})^{k}, 1-(\log_{\mu}{c}_{1})^{k}, 1-(\log_{\mu}{d}_{1})^{k}],[(\log_{\mu}(1-{l}_{1}))^{k},(\log_{\mu}(1-{m}_{1}))^{k},\right.\\ &&\left.\left.(\log_{\mu}(1-{n}_{1}))^{k},(\log_{\mu}(1-{p}_{1}))^{k}\right],\left[(\log_{\mu}(1-{x}_{1}))^{k},(\log_{\mu}(1-{y}_{1}))^{k},(\log_{\mu}(1-{v}_{1}))^{k},(\log_{\mu}(1-{w}_{1}))^{k}\right]\right\rangle\\ && \bigoplus \left\langle[1-(\log_{\mu}{a}_{2})^{k}, 1-(\log_{\mu}{b}_{2})^{k}, 1-(\log_{\mu}{c}_{2})^{k}, 1-(\log_{\mu}{d}_{2})^{k}],[(\log_{\mu}(1-{l}_{2}))^{k},(\log_{\mu}(1-{m}_{2}))^{k},\right.\\ &&\left.\left.(\log_{\mu}(1-{n}_{2}))^{k},(\log_{\mu}(1-{p}_{2}))^{k}\right],\left[(\log_{\mu}(1-{x}_{2}))^{k},(\log_{\mu}(1-{y}_{2}))^{k},(\log_{\mu}(1-{v}_{2}))^{k},(\log_{\mu}(1-{w}_{2}))^{k}\right]\right\rangle \\ &=&k\log_{\mu} \widetilde{A}_{1} \bigoplus k\log_{\mu} \widetilde{A}_{2}. \end{array} $$ -
ii)
This proof is similar to the previous one.
-
iii)
For any k1, k2 > 0, we have
$$ \begin{array}{lll} &&k_{1}\log_{\mu}\widetilde{A}_{1} \bigoplus k_{2}\log_{\mu}\widetilde{A}_{1}\\ &=&\left\langle\left[1-(\log_{\mu}{a}_{1})^{k_{1}}, 1-(\log_{\mu}{b}_{1})^{k_{1}}, 1-(\log_{\mu}{c}_{1})^{k_{1}}, 1-(\log_{\mu}{d}_{1})^{k_{1}}\right],\left[(\log_{\mu}(1-{l}_{1}))^{k_{1}},(\log_{\mu}(1-{m}_{1}))^{k_{1}},\right.\right.\\ &&\left.\left.(\log_{\mu}(1-{n}_{1}))^{k_{1}},(\log_{\mu}(1-{p}_{1}))^{k_{1}}\right],\left[(\log_{\mu}(1-{x}_{1}))^{k_{1}},(\log_{\mu}(1-{y}_{1}))^{k_{1}},(\log_{\mu}(1-{v}_{1}))^{k_{1}},(\log_{\mu}(1-{w}_{1}))^{k_{1}}\right]\right\rangle \end{array} $$$$ \begin{array}{lll} && \bigoplus \left\langle\left[1-(\log_{\mu}{a}_{2})^{k_{1}}, 1-(\log_{\mu}{b}_{2})^{k_{1}}, 1-(\log_{\mu}{c}_{2})^{k_{2}}, 1-(\log_{\mu}{d}_{2})^{k_{2}}\right],\left[(\log_{\mu}(1-{l}_{2}))^{k_{2}},(\log_{\mu}(1-{m}_{2}))^{k_{2}},\right.\right.\\ &&\left.\left.(\log_{\mu}(1-{n}_{2}))^{k_{2}},(\log_{\mu}(1-{p}_{2}))^{k_{2}}\right],\left[(\log_{\mu}(1-{x}_{2}))^{k_{2}},(\log_{\mu}(1-{y}_{2}))^{k_{2}},(\log_{\mu}(1-{v}_{2}))^{k_{2}},(\log_{\mu}(1-{w}_{2}))^{k_{2}}\right]\right\rangle\\ &=&\left\langle\left[1-(\log_{\mu}{a}_{1})^{k_{1} + k_{2}}, 1-(\log_{\mu}{b}_{1})^{k_{1} + k_{2}}, 1-(\log_{\mu}{c}_{1})^{k_{1} + k_{2}}, 1-(\log_{\mu}{d}_{1})^{k_{1} + k_{2}}\right],\left[(\log_{\mu}(1-{l}_{1}))^{k_{1}+ k_{2}},\right.\right.\\ &&\left.(\log_{\mu}(1-{m}_{1}))^{k_{1}+ k_{2}},(\log_{\mu}(1-{n}_{1}))^{k_{1}+ k_{2}},(\log_{\mu}(1-{p}_{1}))^{k_{1}+ k_{2}}\right],\left[(\log_{\mu}(1-{x}_{1}))^{k_{1}+ k_{2}},(\log_{\mu}(1-{y}_{1}))^{k_{1}+ k_{2}},\right.\\ &&\left.\left.(\log_{\mu}(1-{v}_{1}))^{k_{1}+ k_{2}},(\log_{\mu}(1-{w}_{1}))^{k_{1}+ k_{2}}\right]\right\rangle\\ &=&(k_{1} + k_{2})\log_{\mu}\widetilde{A}_{1}. \end{array} $$ -
iv)
Again for any k1, k2 > 0, we get
$$ \begin{array}{lll} &&(\log_{\mu}\widetilde{A}_{1})^{k_{1}} \bigotimes (\log_{\mu}\widetilde{A}_{1})^{k_{2}}\\ &=&\left\langle \left[(1-\log_{\mu}{a}_{1})^{k_{1}},(1-\log_{\mu}{b}_{1})^{k_{1}},(1-\log_{\mu}{c}_{1})^{k_{1}},(1-\log_{\mu}{d}_{1})^{k_{1}}\right],\left[1-(1-\log_{\mu}(1-{l}_{1}))^{k_{1}},\right.\right.\\ &&\left.1-(1-\log_{\mu}(1-{m}_{1}))^{k_{1}}, 1-(1-\log_{\mu}(1-{n}_{1}))^{k_{1}}, 1-(1-\log_{\mu}(1-{p}_{1}))^{k_{1}}\right],\left[1-(1-\log_{\mu}(1-{x}_{1}))^{k_{1}},\right.\\ &&\left.\left.1-(1-\log_{\mu}(1-{y}_{1}))^{k_{1}}, 1-(1-\log_{\mu}(1-{v}_{1}))^{k_{1}}, 1-(1-\log_{\mu}(1-{w}_{1}))^{k_{1}}\right]\right\rangle \bigotimes\left\langle \left[(1-\log_{\mu}{a}_{1})^{k_{2}},(1-\log_{\mu}{b}_{1})^{k_{2}},\right.\right.\\ &&\left.(1-\log_{\mu}{c}_{1})^{k_{2}},(1-\log_{\mu}{d}_{1})^{k_{2}}\right],\left[1-(1-\log_{\mu}(1-{l}_{1}))^{k_{2}}, 1-(1-\log_{\mu}(1-{m}_{1}))^{k_{2}}, 1-(1-\log_{\mu}(1-{n}_{1}))^{k_{2}},\right.\\ &&\left.1-(1-\log_{\mu}(1-{p}_{1}))^{k_{2}}\right],\left[1-(1-\log_{\mu}(1-{x}_{1}))^{k_{2}}, 1-(1-\log_{\mu}(1-{y}_{1}))^{k_{2}}, 1-(1-\log_{\mu}(1-{v}_{1}))^{k_{2}},\right.\\ &&\left.\left.1-(1-\log_{\mu}(1-{w}_{1}))^{k_{2}}\right]\right\rangle\\ &=&\left\langle \left[(1-\log_{\mu}{a}_{1})^{k_{1} +k_{2}},(1-\log_{\mu}{b}_{1})^{k_{1} +k_{2}},(1-\log_{\mu}{c}_{1})^{k_{1} +k_{2}},(1-\log_{\mu}{d}_{1})^{k_{1} +k_{2}}\right],\right.\\ &&\left[1-(1-\log_{\mu}(1-{l}_{1}))^{k_{1} +k_{2}}, 1-(1-\log_{\mu}(1-{m}_{1}))^{k_{1} +k_{2}}\right],\left[1-(1-\log_{\mu}(1-{n}_{1}))^{k_{1} +k_{2}},\right.\\ &&\left.1-(1-\log_{\mu}(1-{p}_{1}))^{k_{1} +k_{2}}\right],\left[1-(1-\log_{\mu}(1-{x}_{1}))^{k_{1} +k_{2}}, 1-(1-\log_{\mu}(1-{y}_{1}))^{k_{1} +k_{2}},\right.\\ &&\left.1-(1-\log_{\mu}(1-{v}_{1}))^{k_{1} +k_{2}}, 1-(1-\log_{\mu}(1-{w}_{1}))^{k_{1} +k_{2}}]\right\rangle\\ &=&{(\log_{\mu}\widetilde{A}_{1})^{k_{1}+ k_{2}}} \end{array} $$ -
v)
The proof of this part is trivial and hence omitted.
□
4 Aggregation operators
In decision-making methods, generally two types of aggregation operators are used, namely the weighted arithmetic averaging operator and the weighted geometric averaging operator. Here, we propose two new aggregation operators for TNNs, namely Larm and Lgeo, as follows:
Definition 4.1
Let \(\widetilde {A}_{t}= \langle ({a}_{t},{b}_{t}, {c}_{t}, {d}_{t}),({l}_{t}, {m}_{t}, {n}_{t},{p}_{t}),\) (xt, yt, vt, wt)〉 (t = 1, 2,⋯ , k) be any collection of TNNs and 0 < μ ≤ min (at, bt, ct, dt, 1 − lt, 1 − mt, 1 − nt, 1 − pt, 1 − xt, 1 − yt, 1 − vt, 1 − wt) < 1. The logarithmic trapezoidal neutrosophic weighted arithmetic aggregation operator \(L_{arm}: {\Gamma }^{k}\rightarrow {\Gamma }\) is defined as
$$ L_{arm}(\widetilde{A}_{1}, \widetilde{A}_{2},\cdots,\widetilde{A}_{k})=\bigoplus\limits_{t=1}^{k}\phi_{t}\log_{\mu}\widetilde{A}_{t}, $$
where ω = (ϕ1, ϕ2,⋯ , ϕk)T is the weight vector with ϕt ≥ 0 and \(\displaystyle \sum \limits _{t=1}^{k}\phi _{t} =1\).
Note 4.1.1
For convenience, we denote \(L_{arm}(\widetilde{A}_{1}, \widetilde {A}_{2},\cdots ,\widetilde {A}_{k}) =L_{arm}.\)
Theorem 4.1
Let \(\widetilde {A}_{s}= \langle ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}),({l}_{s}, {m}_{s}, {n}_{s},{p}_{s}),\) (xs, ys, vs, ws)〉 (s = 1, 2,⋯ , p) be any collection of TNNs. Then the aggregated value by using Larm operator is also TNN and is given by
Proof
To prove the Theorem 4.1, we use mathematical induction on s, where 0 < μs ≤ min (as, bs, cs, ds, 1 − ls, 1 − ms, 1 − ns, 1 − ps, 1 − xs, 1 − ys, 1 − vs, 1 − ws) < 1 (s = 1, 2,⋯ , p). When s = 2, we get
Thus, the Theorem is true for s= 2. Let us assume that the Theorem is true for s = p. Then
Now,
This shows that the Theorem is valid for s = p + 1. Hence, by mathematical induction, the Theorem holds for all positive integral values of s.
Again, if \( 0< \frac {1}{\mu _{s}} \leq \) min \(\left ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}, 1-{l}_{s}, 1-{m}_{s},\right .\)\(\left . 1-{n}_{s}, 1-{p}_{s}, 1-{x}_{s}, 1-{y}_{s}, 1-{v}_{s}, 1-{w}_{s}\right )< 1\), then proceeding in the similar approach as in above case, we also get
□
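Reading the closed form off the induction step above, the Larm operator can be sketched in Python as follows (helper names are ours; the bases μs must satisfy the admissibility condition of Definition 4.1, and a TNN is again a triple of quadruples):

```python
import math
from typing import List, Sequence, Tuple

Quad = Tuple[float, float, float, float]
TNN = Tuple[Quad, Quad, Quad]  # (truth, indeterminacy, falsity) quadruples

def larm(tnns: List[TNN], bases: Sequence[float], weights: Sequence[float]) -> TNN:
    """L_arm = (+)_s phi_s * log_{mu_s}(A_s), folded with the rules of Prop. 2.1."""
    logs = [(
        tuple(math.log(t, mu) for t in T),       # log_mu a_s, ...
        tuple(math.log(1 - i, mu) for i in I),   # log_mu (1 - l_s), ...
        tuple(math.log(1 - f, mu) for f in F),   # log_mu (1 - x_s), ...
    ) for (T, I, F), mu in zip(tnns, bases)]
    T = tuple(1 - math.prod(L[0][j] ** w for L, w in zip(logs, weights)) for j in range(4))
    I = tuple(math.prod(L[1][j] ** w for L, w in zip(logs, weights)) for j in range(4))
    F = tuple(math.prod(L[2][j] ** w for L, w in zip(logs, weights)) for j in range(4))
    return (T, I, F)
```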
4.1 Properties of aggregation operator
In this subsection, the properties of the Larm operator are presented. Here, it is assumed that μ1 = μ2 = ⋯ = μp = μ (say) and 0 < μ ≤ min \(\left ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}, 1-{l}_{s}, 1-{m}_{s}, 1-{n}_{s}, 1-{p}_{s}, 1-{x}_{s}, 1-{y}_{s}, 1-{v}_{s}, 1-{w}_{s}\right )< 1\). Also, let ω = (ϕ1, ϕ2,⋯ , ϕp)T be the weight vector such that ϕs ≥ 0 and \(\displaystyle \sum \limits _{s=1}^{p}\phi _{s}=1.\)
Lemma 4.1.1 (Idempotency of Larm operator)
If \( \widetilde {A}_{s}=\widetilde {A} \), ∀ s, where \(\widetilde {A}=\langle ({a},{b}, {c}, {d}),({l}, {m}, {n},{p}),({x}, {y}, {v}, {w} )\rangle \) then
Proof
Since \( \widetilde {A}_{s}=\widetilde {A} \) for all s, where \(\widetilde {A}=\langle ({a},{b}, {c}, {d}),\) (l, m, n, p),(x, y, v, w)〉 is a TNN, from Theorem 4.1 we get
□
Lemma 4.1.2 (Boundedness of Larm operator)
Let \(\widetilde {A}_{s}= \left \langle \left ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}\right ),\left ({l}_{s}, {m}_{s}, {n}_{s},{p}_{s}\right ),\left ({x}_{s}, {y}_{s}, {v}_{s}, {w}_{s} \right )\right \rangle ,~(s=1, 2,\cdots ,p)\) be any collection of TNNs, and let \(\widetilde {A}_{min} = \left \langle \left [\min \limits {a}_{s},\min \limits {b}_{s}, \min \limits {c}_{s}, \min \limits {d}_{s}\right ],\left [\max \limits {l}_{s}, \max \limits {m}_{s}, \max \limits {n}_{s}, \max \limits {p}_{s}\right ],\left [\max \limits {x}_{s}, \max \limits {y}_{s}, \max \limits {v}_{s}, \max \limits {w}_{s}\right ]\right \rangle \), \(\widetilde {A}_{max}= \left \langle \left [\max \limits {a}_{s},\max \limits {b}_{s},\max \limits {c}_{s},\max \limits {d}_{s}\right ],\left [\min \limits {l}_{s}, \min \limits {m}_{s},\min \limits {n}_{s},\min \limits {p}_{s}\right ],\left [\min \limits {x}_{s},\min \limits {y}_{s},\min \limits {v}_{s},\min \limits {w}_{s}\right ]\right \rangle \), \(\widetilde {A^{-}}=L_{arm}(\widetilde {A}_{min},~\widetilde {A}_{min},~\cdots ,\widetilde {A}_{min})\), \(\widetilde {A^{+}}=L_{arm}(\widetilde {A}_{max},~\widetilde {A}_{max},~\cdots ,\widetilde {A}_{max})\).
Then we have
Proof
The proof of the Lemma follows from the Theorem 4.1 and the Lemma 4.1.1. □
Lemma 4.1.3 (Monotonicity of Larm operator)
Let \(\widetilde {A}_{s}= \left \langle \left ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}\right ),\left ({l}_{s}, {m}_{s}, {n}_{s},{p}_{s}\right ),\left ({x}_{s}, {y}_{s}, {v}_{s}, {w}_{s} \right )\right \rangle \) and \({\widetilde A^{\prime }_{s}}= \left \langle \left ({a^{\prime }_{s}},{b^{\prime }_{s}}, {c^{\prime }_{s}}, {d^{\prime }_{s}}\right ),\left ({l^{\prime }_{s}}, {m^{\prime }_{s}}, {n^{\prime }_{s}},{p^{\prime }_{s}}\right ), \left ({x^{\prime }_{s}}, {y^{\prime }_{s}}, {v^{\prime }_{s}}, {w^{\prime }_{s}}\right )\right \rangle ,\) (s = 1, 2,⋯ , p) be two collection of TNNs. If \(\widetilde {A}_{s}\leq {\widetilde A^{\prime }_{s}}~~\forall ~s\), then
Proof
The proof of above Lemma is similar to the Lemma 4.1.2 and hence omitted. □
Definition 4.2
Let \(\widetilde {A}_{s}= \left \langle \left ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}\right ),\left ({l}_{s}, {m}_{s}, {n}_{s},{p}_{s}\right ),\right .\left .\left ({x}_{s}, {y}_{s}, {v}_{s}, {w}_{s} \right )\right \rangle ,\) (s = 1, 2,⋯ , p) be any collection of TNNs and 0 < μ ≤ min \(\left ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}, 1-{l}_{s}, 1-{m}_{s}, 1-{n}_{s}, 1-{p}_{s}, 1-{x}_{s},\right .\left .1-{y}_{s}, 1-{v}_{s}, 1-{w}_{s}\right )< 1.\) The logarithmic trapezoidal neutrosophic weighted geometric aggregation operator \(L_{geo}: {\Gamma }^{p}\rightarrow {\Gamma }\) is defined as
$$ L_{geo}(\widetilde{A}_{1}, \widetilde{A}_{2},\cdots,\widetilde{A}_{p})=\bigotimes\limits_{s=1}^{p}\left(\log_{\mu}\widetilde{A}_{s}\right)^{\phi_{s}}, $$
where ω = (ϕ1, ϕ2,⋯ , ϕp)T is the weight vector with ϕs ≥ 0 and \(\displaystyle \sum \limits _{s=1}^{p}\phi _{s} =1\).
Theorem 4.2
Let \(\widetilde {A}_{s}= \left \langle \left ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}\right ),\left ({l}_{s}, {m}_{s}, {n}_{s},{p}_{s}\right ),\right .\left .\left ({x}_{s}, {y}_{s}, {v}_{s}, {w}_{s} \right )\right \rangle \) (s = 1, 2,⋯ , p) be any collection of TNNs. Then the aggregated value by using Lgeo operator is also TNN and is given by
Proof
The proof of the Theorem is exactly same as Theorem 4.1 and hence omitted. □
Note 4.2.1
For convenience, we denote \(L_{geo}(\widetilde {A}_{1}, \widetilde {A}_{2},\cdots ,\) \({\widetilde {A}_{p}})=L_{geo}.\)
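In the same spirit as the Larm sketch above, the Lgeo operator, taken as the weighted ⊗-product of the \((\log_{\mu_s}\widetilde{A}_{s})^{\phi_{s}}\), can be sketched as follows (names and the per-element bases are ours; the admissibility condition on the bases is assumed):

```python
import math
from typing import List, Sequence, Tuple

Quad = Tuple[float, float, float, float]
TNN = Tuple[Quad, Quad, Quad]  # (truth, indeterminacy, falsity) quadruples

def lgeo(tnns: List[TNN], bases: Sequence[float], weights: Sequence[float]) -> TNN:
    """L_geo = (x)_s (log_{mu_s} A_s)^{phi_s}, folded with the rules of Prop. 2.1."""
    logs = [(
        tuple(1 - math.log(t, mu) for t in T),   # truth components of log_mu A_s
        tuple(math.log(1 - i, mu) for i in I),   # indeterminacy components
        tuple(math.log(1 - f, mu) for f in F),   # falsity components
    ) for (T, I, F), mu in zip(tnns, bases)]
    T = tuple(math.prod(L[0][j] ** w for L, w in zip(logs, weights)) for j in range(4))
    I = tuple(1 - math.prod((1 - L[1][j]) ** w for L, w in zip(logs, weights)) for j in range(4))
    F = tuple(1 - math.prod((1 - L[2][j]) ** w for L, w in zip(logs, weights)) for j in range(4))
    return (T, I, F)
```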
5 MCGDM technique based on Larm and Lgeo operators
MCGDM is a branch of operational research. In an MCGDM technique, a group of experts/decision-makers is involved in selecting the best alternative from a given set of feasible alternatives with respect to some given criteria. Here, we introduce an MCGDM technique by utilizing the operators Larm & Lgeo, scalar multiplication & addition of TNNs, and the de-Neutrosophication method. In this technique, we consider the influence of the decision-makers' weights in the decision-making procedure. The MCGDM technique is considered as follows:
Let U = {U1, U2,⋯ , Uu} be the set of ‘u’ different alternatives and V = {V1, V2,⋯ , Vv} be the set of ‘v’ different attributes with the associated weight vector ω = (ϕ1, ϕ2,⋯ , ϕv)T, where ϕt ≥ 0 and \(\displaystyle \sum \limits _{t=1}^{v}\phi _{t} =1\). Also, we take the set of decision-makers W = {W1, W2,⋯ , Ww}, whose weight values are assumed to be Ω = {Ω1,Ω2,⋯ ,Ωw}, where Ωk ≥ 0, (k = 1, 2,⋯ , w) and \(\displaystyle \sum \limits _{k=1}^{w}{\Omega }_{k}=1\). The weight values of the decision-makers are assigned according to their judgement ability, thinking ability, knowledge, etc. According to the suitable judgement of the decision-makers, we first construct the decision matrices related with the different alternatives. The evaluated values for the alternatives on the attributes are given as
The associated decision matrix (DM) is characterized as follows:

where r = 1, 2,⋯ , w.
Let the logarithmic base index for TNNs are given by \(\mu ^{r}_{ij}\) (i = 1, 2,⋯ , u), (j = 1, 2,⋯ , v) where \(0< \mu ^{r}_{ij} \leq \) min (aij, bij, cij, dij, 1 − lij, 1 − mij, 1 − nij, 1 − pij, 1 − xij, 1 − yij, 1 − vij, 1 − wij) < 1 which are summarised in the matrix form as follows:

where r = 1, 2,⋯ , w.
Now, our MCGDM technique under TN environment has been executed through the following steps:
-
Step 1: Firstly, we apply the Larm or Lgeo operator on every decision matrix DMr to get a new column matrix \(C_{u\times 1}^{r}\) as follows
$$ \begin{array}{@{}rcl@{}} C_{u\times 1}^{r} &=& L_{arm} (\widetilde{A}_{1}, \widetilde{A}_{2},\cdots,\widetilde{A}_{v})\\ &=& \begin{array}{ll} U_{1}\\ U_{2}\\ \vdots\\ U_{u} \end{array} \left( \begin{array}{ll} \widetilde{A}_{11}^{r}&\\ \widetilde{A}_{21}^{r}&\\ {\vdots} &\\ \widetilde{A}_{u1}^{r}& \end{array}\right), \end{array} $$where the entries of the column matrix \(C_{u\times 1}^{r}\) are the aggregated evaluation values with respect to the different criteria (r = 1, 2, ⋯ , w).
-
Step 2: Here, we obtain overall attribute values \(\widetilde {A}_{s1}\) corresponding to the alternatives Us (s = 1, 2,⋯ , u) after utilizing decision-maker’s (Ωk) weights according to the relation \(\displaystyle \sum \limits _{k=1}^{w} {\Omega }_{k} C_{u\times 1}^{k}\) (scalar multiplication and addition of TNNs) in the form of final decision matrix (DM) as follows
$$ \mathbf{DM} = \begin{array}{ll} U_{1}\\ U_{2}\\ \vdots\\ U_{u} \end{array} \left( \begin{array}{cc} \widetilde{A}_{11}\\ \widetilde{A}_{21}\\ {\vdots} \\ \widetilde{A}_{u1} \end{array}\right)~. $$ -
Step 3: We calculate \(D_{Neu}(\widetilde {A}_{s1})\) of the alternatives Us, (s = 1, 2, ⋯ , u) utilizing de-Neutrosophication technique according to the Definition 2.1.1.
-
Step 4: After getting all the de-Neutrosophication values of the corresponding alternatives, the alternatives are ranked according to Definition 2.1.2 and the best one is selected.
Remark 5.1
The steps of MCGDM technique have been shown pictorially in Fig. 2.
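To make the four steps concrete, the following schematic Python sketch strings them together (the data layout, the helper names and the stand-in crisp score are ours; in the paper, Step 3 uses the removal-area value D_Neu of Definition 2.1.1, whose formula is given in [10], and Step 1 may equally use Lgeo in place of Larm):

```python
import math
from typing import List, Sequence, Tuple

Quad = Tuple[float, float, float, float]
TNN = Tuple[Quad, Quad, Quad]  # (truth, indeterminacy, falsity) quadruples

def larm(row: List[TNN], bases: Sequence[float], weights: Sequence[float]) -> TNN:
    # Step 1 aggregation of one alternative's row of TNN ratings (see Section 4)
    logs = [(tuple(math.log(t, mu) for t in T),
             tuple(math.log(1 - i, mu) for i in I),
             tuple(math.log(1 - f, mu) for f in F)) for (T, I, F), mu in zip(row, bases)]
    return (tuple(1 - math.prod(L[0][j] ** w for L, w in zip(logs, weights)) for j in range(4)),
            tuple(math.prod(L[1][j] ** w for L, w in zip(logs, weights)) for j in range(4)),
            tuple(math.prod(L[2][j] ** w for L, w in zip(logs, weights)) for j in range(4)))

def weighted_sum(tnns: List[TNN], omegas: Sequence[float]) -> TNN:
    # Step 2: sum_k Omega_k * C_k using scalar multiplication and (+) of Prop. 2.1
    return (tuple(1 - math.prod((1 - A[0][j]) ** o for A, o in zip(tnns, omegas)) for j in range(4)),
            tuple(math.prod(A[1][j] ** o for A, o in zip(tnns, omegas)) for j in range(4)),
            tuple(math.prod(A[2][j] ** o for A, o in zip(tnns, omegas)) for j in range(4)))

def crisp(A: TNN) -> float:
    # Stand-in score only; replace with the removal-area value D_Neu of Definition 2.1.1
    T, I, F = A
    return (sum(T) / 4 + (1 - sum(I) / 4) + (1 - sum(F) / 4)) / 3

def rank(decision_matrices, base_matrices, attr_w, dm_w):
    """decision_matrices[r][i][j]: TNN rating of alternative i on attribute j by decision-maker r."""
    n_alt = len(decision_matrices[0])
    columns = [[larm(dm[i], mb[i], attr_w) for i in range(n_alt)]               # Step 1
               for dm, mb in zip(decision_matrices, base_matrices)]
    overall = [weighted_sum([col[i] for col in columns], dm_w) for i in range(n_alt)]  # Step 2
    scores = [crisp(A) for A in overall]                                        # Step 3
    return sorted(range(n_alt), key=lambda i: scores[i], reverse=True), scores  # Step 4
```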
6 Detection of most harmful virus by utilizing proposed MCGDM technique
Let us consider a realistic problem from the medical domain, arising due to the presence of different kinds of viruses in our environment. In the current era, people suffer from many diseases and face different sorts of symptoms at different times, and it is a burning issue to identify which virus is the most harmful for humans. People go to a hospital or nursing home and consult doctors for advice. The doctors try to identify the disease according to lab test reports and the symptoms on the patient's body. However, sometimes they are in a dilemma about the virus and its symptoms when these are closely related to each other. Thus, it is a problem in the uncertainty domain in which neutrosophic components are present. People in our society come to know about a virus and its effects through the opinions of doctors. Our problem is to collect data from different doctors (junior, adult, senior) related to viruses and symptoms, create decision matrices in the hesitant arena, and find out the most harmful virus in our environment. Thus, it becomes an MCGDM problem having three alternatives, three attributes and three types of decision-makers.
Let the alternatives be: U1 = Virus 1 (Ebola Virus), U2 = Virus 2 (Marburg Virus), U3 = Virus 3 (Corona Virus), and the corresponding attributes V1 = Symptom 1 (Vomiting), V2 = Symptom 2 (Sore Throat Problem), V3 = Symptom 3 (Cough and Red Eyes). Let us consider the decision-makers W1 = Junior Doctor, W2 = Adult Doctor, W3 = Senior Doctor having weight values D = {0.33, 0.37, 0.3}, and the weight vector corresponding to the attributes is taken as Δ = {0.32, 0.35, 0.33}. The three alternatives are to be evaluated under these three attributes, and the decision-makers give their preferences in terms of TNNs.
The evaluated information of the alternatives Ui, (i = 1, 2,⋯ , u) under the attribute Vj, (j = 1, 2,⋯ , v) are characterized in the following trapezoidal neutrosophic number decision matrices:



Furthermore, the logarithmic base matrices of corresponding decision matrices are characterized as

Now, we have used the proposed technique under TN environment as follows:
-
Step 1: Firstly, we use the Larm operator on each decision matrix DMr according to equation (1) and obtain the new column matrices \(C_{3\times 1}^{r} (r=1, 2, 3)\) as follows
$$ \mathbf{C_{3\times 1}^{1}=} \begin{array}{cc} U_{1}\\ U_{2}\\ U_{3} \end{array} \left( \begin{array}{ll} \langle(0.3918, 0.7694, 0.7743, 0.7893),(0.2910, 0.3385, 0.3476, 0.3871),(0.244, 0.3208, 0.1988, 0.3331) \rangle\\ \langle (0.3808, 0.5407, 0.5420, 0.6125),(0.5126, 0.6103, 0.6805, 0.7422),(0.0944, 0.3979, 0.4810, 0.5483)\rangle\\ \langle (0.6726, 0.7638, 0.7826, 0.7905),(0.3944, 0.5541, 0.5755 , 0.6033),(0.3667, 0.4647, 0.6541, 0.7051)\rangle \end{array}\right), $$$$\mathbf{C_{3\times 1}^{2}=} \begin{array}{cc} U_{1}\\ U_{2}\\ U_{3} \end{array} \left( \begin{array}{ll} \langle(0.1775, 0.5579, 0.7214, 0.7517),(0.4421, 0.4886, 0.5037, 1.000),(0.3250, 0.7206, 0.7281, 0.7392) \rangle\\ \langle (0.4970, 0.5352, 0.696, 0.6406),(0.4562, 0.5515, 0.5803, 0.6293),(0.4089, 0.4185, 0.4271, 0.7255)\rangle\\ \langle (0.2484, 0.7406, 0.7768, 0.7573),(0.4227, 0.4427, 0.4623, 0.5397),(0.4269, 0.4952, 0.6144, 0.7539 )\rangle \end{array}\right), $$$$\mathbf{C_{3\times 1}^{3}=} \begin{array}{cc} U_{1}\\ U_{2}\\ U_{3} \end{array} \left( \begin{array}{ll} \langle(0.5616, 0.7568, 0.8417, 0.8529),(0.2368, 0.3843, 0.6788, 0.7408),(0.5078, 0.6911, 0.7518, 0.8314) \rangle\\ \langle (0.6192, 0.6198, 0.6795, 0.7452),(0.3422, 0.6367, 0.6438, 0.7248),(0.6838, 0.6838, 0.8295, 0.8438)\rangle\\ \langle (0.2446, 0.3002, 0.395, 0.6889),(0.3986, 0.4046, 0.7998, 0.8819),(0.5960, 0.6107, 0.6146, 0.6787 )\rangle \end{array}\right). $$Again, if we utilize the operator Lgeo operator according to the (2) on every decision matrix DMr, we get new column matrices \(C_{3\times 1}^{r} (r=1, 2, 3)\) as follows:
$$\mathbf{\left( C_{3\times 1}^{1}\right)_{geo}=} \begin{array}{cc} U_{1}\\ U_{2}\\ U_{3} \end{array} \left( \begin{array}{ll} \langle(0.6082, 0.6306, 0.6357, 0.6407),(0.5090, 0.5215, 0.5524, 0.7129),(0.7560, 0.7792, 0.8012, 0.8669) \rangle\\ \langle (0.4192, 0.4513, 0.4580, 0.4875),(0.1874, 0.3897, 0.4195, 0.5780),(0.6056, 0.6210, 0.7190, 0.7517)\rangle\\ \langle (0.3274, 0.3362, 0.6174, 0.6395),(0.4056, 0.4459, 0.5245, 0.6967),(0.6333, 0.6353, 0.6459, 0.6949)\rangle \end{array}\right), $$$$\mathbf{\left( C_{3\times 1}^{2}\right)_{geo}=} \begin{array}{cc} U_{1}\\ U_{2}\\ U_{3} \end{array} \left( \begin{array}{ll} \langle(0.3225, 0.4421, 0.4786, 0.4830),(0.5079, 0.5114, 0.5963, 1.000),(0.2675, 0.2694, 0.3819, 0.4608) \rangle\\ \langle (0.5030, 0.5648, 0.6040, 0.6594),(0.5438, 0.5485, 0.5697, 0.5707),(0.5411, 0.5915, 0.7290, 0.7745)\rangle\\ \langle (0.4516, 0.4594, 0.5232, 0.5427),(0.5773, 0.5573, 0.6377, 0.6603),(0.1731, 0.3048, 0.3856, 0.4461)\rangle \end{array}\right), $$$$\mathbf{\left( C_{3\times 1}^{3}\right)_{geo}=} \begin{array}{cc} U_{1}\\ U_{2}\\ U_{3} \end{array} \left( \begin{array}{ll} \langle(0.4384, 0.4432, 0.4583, 0.4710),(.5632, 0.6157, 0.6212, 0.6592),(0.2922, 0.3089, 0.3482, 0.3686) \rangle\\ \langle (0.3808, 0.4842, 0.5205, 0.548),(0.4578, 0.5433, 0.5562, 0.5752),(0.3162, 0.3162, 0.3705, 0.3862)\rangle\\ \langle (0.7554, 0.7998, 0.8050, 0.8111),(0.4014, 0.5954, 0.6002, 0.6181),(0.4040, 0.4893, 0.5954, 0.6213)\rangle \end{array}\right). $$ -
Step 2: We now apply decision-maker’s weight maintaining the relation \(\displaystyle \sum \limits _{k=1}^{w} {\Omega }_{k} C_{u\times 1}^{k}\) (scalar multiplication and addition of TNNs) and we have overall attribute values \(\widetilde {A}_{s1}\) for the alternatives Us(s = 1, 2, 3) under the operator Larm as follows
$$ \mathbf{\left( DM\right)_{arm}} = \begin{array}{cc} U_{1}\\ U_{2}\\ U_{3} \end{array} \left( \begin{array}{ll} \langle (0.3836, 0.7019, 0.7575, 0.7615),(0.3193, 0.4179, 0.5298, 0.6054),(0.3380, 0.4476, 0.5765, 0.5886) \rangle\\ \langle (0.5044, 0.5680, 0.5910, 0.6677),(0.5063, 0.6009, 0.6350, 0.6995),(0.3069, 0.3727, 0.5138, 0.5494) \rangle\\ \langle (0.4278, 0.6474, 0.6789, 0.6819),(0.4059, 0.4902, 0.5858, 0.6171),(0.4731, 0.5164, 0.5533, 0.5951) \rangle \end{array}\right). $$On the other side, if we apply decision-makers weight under the operator Lgeo according to the relation \(\displaystyle \sum \limits _{k=1}^{w} \rho _{k} C_{u\times 1}^{k}\), we get overall attribute values \(\widetilde {A}_{s1}\) for the alternatives Us(s = 1, 2, 3) which is given as
$$ \mathbf{\left( DM\right)_{geo}} = \begin{array}{cc} U_{1}\\ U_{2}\\ U_{3} \end{array} \left( \begin{array}{ll} \langle (0.3744, 0.4202, 0.4539, 0.5389),(0.4633, 0.5766, 0.6512, 0.6632 ),(0.3374, 0.3808, 0.4830, 0.5119) \rangle\\ \langle (0.5138, 0.5512, 0.6269, 0.6395),(0.2051, 0.3952, 0.4612, 0.4615),(0.4459, 0.4931, 0.5589, 0.5948) \rangle\\ \langle (0.6565, 0.6947, 0.6998, 0.7335),(0.337, 0.3570, 0.3698 , 0.3709),(0.3425, 0.4061, 0.4238, 0.4567) \rangle \end{array}\right). $$ -
Step 3: The de-Neutrosophication values of \(\widetilde {A}_{{s1}}\), (s = 1, 2, 3) corresponding to the Larm operator are \(D_{Neu}(\widetilde {A}_{11})= 0.5208\), \(D_{Neu}(\widetilde {A}_{21}) = 0.4882\), \(D_{Neu}(\widetilde {A}_{31}) = 0.5837\). On the other hand, the de-Neutrosophication values of \(\widetilde {A}_{{s1}}\), (s = 1, 2, 3) corresponding to the operator Lgeo are \(D_{Neu}(\widetilde {A}_{11})= 0.5156\), \(D_{Neu}(\widetilde {A}_{21})=0.51\), \(D_{Neu}(\widetilde {A}_{31})= 0.5601\).
-
Step 4: The ranking order of the de-Neutrosophication values is \(D_{Neu}(\widetilde {A}_{31})> D_{Neu}(\widetilde {A}_{11})> D_{Neu}(\widetilde {A}_{21})\) for the operator Larm. Therefore, the ranking order of the alternatives is U3 > U1 > U2, and U3 is the best option. Again, under the operator Lgeo, the ranking order of the alternatives is U3 > U2 > U1; therefore, U3 is again the best option.
6.1 Sensitivity analysis
Sensitivity analysis is performed by exchanging the weights of the decision-makers while keeping the remaining terms unchanged. Here, we perform a sensitivity analysis under the Larm and Lgeo operators to capture the influence of the decision-makers' weights on the resulting matrices and the ranking. The sensitivity analysis results are shown in Tables 1 & 2 under the operators Larm and Lgeo respectively. In Figs. 3 and 4, we have represented the corresponding weight values of the different decision-makers and the ranking order of the alternatives respectively under the Larm operator. Also, in Figs. 5 and 6, we have presented the related weight values of the different decision-makers and the ranking order of the alternatives respectively under the Lgeo operator.
In Table 1, we consider different weight vectors of the decision-makers and find that U3 is the best option in four cases and U1 is the best option in one case under the operator Larm. Again, in Table 2, we consider the same weight vectors of the decision-makers as in Table 1 and find that U3 is the best option in all cases under the operator Lgeo.
6.2 Comparative analysis
To demonstrate the efficiency and validity of our proposed method, we have presented a comparison study of our method with the existing methods in Table 3.
From Table 3, we observe that the aggregation operator proposed by Ye [15] cannot be applied to our decision matrices, as the indeterminacy part is absent in that aggregation operator. Also, Liang et al. [20], Biswas et al. [23], Biswas et al. [34], Pramanik & Mallick [35], Liu & Zhang [36] and Wu et al. [37] work in the SVTNN environment, which differs from the general TNN [16] in its basic character, so we cannot apply these methods to our decision matrices to determine the best alternative. Thus, we have applied the operators TNNWAA [16], TNNWGA [16], ITNNWAA [17] and ITNNWGA [17] to our data set and obtained the results. Interestingly, we have found that the ranking orders under the different operators and under our method are exactly the same. On the other hand, we have already checked the stability of our obtained results through sensitivity analysis. These phenomena clearly show the efficiency and reliability of our proposed logarithmic operational law based MCGDM technique.
7 Conclusion
In this article, we have presented new logarithmic operational laws for TNNs, which are a productive enhancement of the existing operational laws. We have studied their mathematical properties such as boundedness, monotonicity, etc. Moreover, we have proposed the logarithmic trapezoidal neutrosophic weighted arithmetic aggregation operator Larm and the logarithmic trapezoidal neutrosophic weighted geometric aggregation operator Lgeo, and presented an MCGDM technique in the TN environment using these aggregation operators. A numerical problem has been taken up to demonstrate the proposed MCGDM method. Also, we have discussed the usefulness and utility of the proposed method through a sensitivity analysis. Finally, a comparison of our proposed technique with existing methods has been presented to justify its rationality and efficiency. From this article, we can conclude that our defined operational law and the corresponding MCGDM technique give a new direction for dealing with decision-making problems.
In future work, the defined logarithmic operational law can be extended to other uncertain environments to enrich decision-making procedures. Researchers can also apply these ideas of neutrosophic numbers in numerous flourishing research fields like mobile computing, pattern recognition, cloud computing, etc.
References
Zadeh LA (1965) Fuzzy sets. Inf Control 8(5):338–353
Atanassov K (1986) Intuitionistic fuzzy sets. Fuzzy Set Syst 20:87–96
Smarandache F (1999) A unifying field in logics. neutrosophy: neutrosophic probability, set and logic. American Research Press, Rehoboth
Chakraborty A, Mondal S, Ahmadian A, Senu N, Dey D, Alam S, Salahshour S (2019) The pentagonal fuzzy number: its different representations, properties, ranking, defuzzification and application in game problem. Symmetry 11(2):248–277. https://doi.org/10.3390/sym11020248
Chakraborty A, Maity S, Jain S, Mondal S, Alam S (2020) Hexagonal fuzzy number and its distinctive representation, ranking, defuzzification technique and application in production inventory management problem, granular computing. Springer, Berlin. https://doi.org/10.1007/s41066-020-00212-8
Maity S, Chakraborty A, De SK, Mondal S, Alam S (2020) A comprehensive study of a backlogging EOQ model with non-linear heptagonal dense fuzzy environment. Rairo Oper Res 54(1):267–286. https://doi.org/10.1051/ro/2018114
Wang H, Smarandache F, Zhang YQ, Sunderraman R (2005) Interval neutrosophic sets and logic: Theory and applications in computing, Hexis, Arizona
Wang H, Smarandache F, Zhang YQ, Sunderraman R (2010) Single valued neutrosophic sets. Multispace Multistructure 4:410–413
Chakraborty A, Mondal S, Ahmadian A (2018) Different forms of triangular neutrosophic numbers, de-neutrosophication techniques, and their applications. Symmetry 10(8):1–27. https://doi.org/10.3390/sym10080327
Chakraborty A, Mondal S, Alam S, Mahata A (2021) Different linear and non-linear form of trapezoidal neutrosophic numbers, de-neutrosophication techniques and its application in time-cost optimization technique, sequencing problem. Rairo Oper Res 55:S97–S118. https://doi.org/10.1051/ro/2019090
Chakraborty A, Mondal S, Alam S, Mahata A (2020) Cylindrical neutrosophic single-valued numberand its application in networking problem, multi criterion decision making problem and graph theory. CAAI Trans Intell Technol 5(2):68–77. https://doi.org/10.1049/trit.2019.0083
Chakraborty A, Mondal S, Alam S (2019) Disjunctive representation of triangular bipolar neutrosophic numbers, de-bipolarization technique and application in multi-criteria decision-making problems, vol 11
Liu F, Yuan XH (2007) Fuzzy number intuitionistic fuzzy set. Fuzzy Syst Math 21(1):88–91
Qin Q, Liang F, Li L, Wangchen Y, Yu GF (2017) A TODIM-based multi-criteria group decision making with triangular intuitionistic fuzzy numbers. Appl Soft Comput 55:93–107. https://doi.org/10.1016/j.asoc.2017.01.041
Ye J (2014) Prioritized aggregation operators of trapezoidal intuitionistic fuzzy sets and their application to multi criteria decision making. Neural Comput and Appl 25(6):1447–1454
Ye J (2015) Trapezoidal neutrosophic set and its application to multiple attribute decision-making. Neural Comput Appl 26:1157–1166. https://doi.org/10.1007/s00521-014-1787-6
Jana C, Pal M, Karaaslan F, Wang J-Q (2020) Trapezoidal neutrosophic aggregation operators and its application in multiple attribute decision making process. Scientica Iranica 37(3):1655–1673
Ye J (2017) Some weighted aggregation operators of trapezoidal neutrosophic numbers and their multiple attribute decision making method. Informatica 28:387–402
Deli I, Subas Y (2016) A ranking method of single valued neutrosophic numbers and its applications to multi-attribute decision making problems. Int J Mach Learn Cybern 8(4):1309–1322. https://doi.org/10.1007/s13042016-0505-3
Liang RX, Wang JQ, Zhang HY (2018) A multi-criteria decision making method based on single valued trapezoidal neutrosophic preference relation with complete weight information. Neural Comput Appl 30(11):3383–3398. https://doi.org/10.1007/s00521-017-2925-8
Biswas P, Pramanik S, Giri BC (2015) Cosine similarity measure based multi-attribute decision-making with trapezoidal fuzzy neutrosophic numbers. Neutrosophic Sets Syst 8:46–56. https://doi.org/10.5281/zenodo.571274
Pramanik S, Mallick R (2018) VIKOR based MAGDM strategy with trapezoidal neutrosophic number. Neutrosophic Sets Syst 22:118–130. https://doi.org/10.5281/zenodo.2160840
Biswas P, Pramanik S, Giri BC (2018) TOPSIS strategy for multiattribute decision making with trapezoidal numbers. Neutrosophic Sets Syst 19:29–39. https://doi.org/10.5281/zenodo.1235335
Sahin M, Ulucay V, Acioglu H (2018) Some weighted arithmetic operator and geometric operators with SVNSs and their application to multi-criteria decision making problem. In: New trends in neutrosophic theory and applications, Pons Editions, Brussels, vol 2, pp 85–104
Abdel-Basset M, Saleh M, Gamal A, Smarandache F (2019) An approach of TOPSIS technique for developing supplier selection with group decision making under type-2 neutrosophic number. Appl Soft Comput J 77:438–452
Chakraborty A, Broumi S, Singh PK (2019) Some properties of pentagonal neutrosophic numbers and its applications in transportation problem environment. Neutrosophic Sets Syst 28:200–215
Chakraborty A, Mondal S, Broumi S (2019) De-neutrosophication technique of pentagonal neutrosophic number and application in minimal spanning tree. Neutrosophic Sets Syst 29:1–18. https://doi.org/10.5281/zenodo.3514383
Chakraborty A (2020) A new score function of pentagonal neutrosophic number and its application in networking problem. Int J Neutrosophic Sci 1(1):35–46
Chakraborty A, Banik B, Mondal S, Alam S (2020) Arithmetic and geometric operators of pentagonal neutrosophic number and its application in mobile communication based MCGDM problem. Neutrosophic Sets Syst 32(1):1–20. https://doi.org/10.5281/zenodo.3723145
Haque TS, Chakraborty A, Mondal S, Alam S (2021) A new exponential operational law for trapezoidal neutrosophic number and pollution in megacities related MCGDM problem. J Ambient Intell Humaniz Comput. https://doi.org/10.1007/s12652-021-03223-8
Li Z, Wei F (2017) The logarithmic operational laws of intuitionistic fuzzy sets and intuitionistic fuzzy numbers. J Intell Fuzzy Syst 33:3241–3253
Garg H (2019) New logarithmic operational laws and their aggregation operators for Pythagorean fuzzy set and their applications. Int J Intell Syst 34:82–106
Garg H (2019) New logarithmic operational laws and their aggregation operators for Pythagorean fuzzy set and their applications. Int J Intell Syst 34:82–106
Biswas P, Pramanik S, Giri BC (2018) TOPSIS strategy for multi-attribute decision making with trapezoidal neutrosophic numbers. Neutrosophic Sets Syst 19:1–12
Pramanik S, Mallick R (2019) TODIM strategy formulti-attribute group decision making in trapezoidal neutrosophic number environment. Complex Intell Syste 5:379–389. https://doi.org/10.1007/s40747-019-0110-7
Liu P, Zhang X (2017) Some Maclaurin symmetric mean operators for single-valued trapezoidal neutrosophic numbers and their applications to group decision making. Int J Fuzzy Syst 20(1):45–61. https://doi.org/10.1007/s40815-017-0335-9
Wu X, Qian J, Peng J, Xu C (2018) A multi-criteria group decision-making method with possibility degree and power aggregation operators of single trapezoidal neutrosophic numbers. Symmetry 10(11):590–610. https://doi.org/10.3390/sym10110590
Acknowledgements
This article is funded by Council of Scientific & Industrial Research (CSIR), India (File no. - 08/003(0136)/2019-EMR-I).