1 Introduction

Professor Zadeh [1] introduced the notion of fuzzy set theory to capture the vagueness and uncertainty of realistic problems, which was extended into intuitionistic fuzzy set (IFS) theory by Professor Atanassov [2]. To capture the uncertainty, inconsistency and indeterminacy of data in real-life problems, Professor Smarandache [3] presented the notion of the neutrosophic set (NS) as an extension of IFS, which comprises a truth membership function (μ), an indeterminacy membership function (ι) and a falsity membership function (σ). Recently, researchers have introduced pentagonal [4], hexagonal [5] and heptagonal [6] fuzzy numbers and their applications in different fields. Wang et al. introduced the single-valued neutrosophic set (SVNS) [7] and the interval neutrosophic set (INS) [8], which are subclasses of NSs, and many other recent works [9,10,11,12] have improved and brought innovation to NS theory. Liu and Yuan [13] proposed the triangular intuitionistic fuzzy number (TIFN), which combines the triangular fuzzy number and the intuitionistic fuzzy number. Qin et al. [14] proposed a TODIM-based multi-criteria decision-making (MCDM) method for TIFNs. Ye [15] introduced the trapezoidal intuitionistic fuzzy number (TrIFN) and solved an MCDM problem in this environment. Ye [16] presented the novel idea of the trapezoidal neutrosophic number (TNN) by combining the concepts of the SVNS and the trapezoidal fuzzy number, and utilized it to solve an MCDM problem in the trapezoidal neutrosophic (TN) setting. It is to be noted that both trapezoidal fuzzy numbers and neutrosophic numbers are important and effective tools in the field of uncertainty; the concept of TNN can therefore be used fruitfully to capture impreciseness and indeterminacy in a rigorous way. In this direction, Jana et al. [17] have already defined interval trapezoidal neutrosophic numbers and applied them to solve an MCGDM problem.
The single-valued trapezoidal neutrosophic number (SVTNN) [18] is another extension of the SVNS. In an SVTNN, each component is presented in the form of a trapezoidal number that has a truth membership degree, an indeterminacy membership degree and a falsity membership degree. Deli and Subas [19] presented a ranking technique for TNNs and displayed a multi-attribute decision-making (MADM) procedure. Liang et al. [20] initiated score, accuracy and certainty functions of SVTNNs by using the center of gravity. Biswas et al. [21] defined a cosine similarity measure for trapezoidal fuzzy neutrosophic numbers and presented an MADM method based on it. Pramanik and Mallick [22] structured a VIKOR technique for multi-attribute group decision making (MAGDM) in the trapezoidal neutrosophic environment. Biswas et al. [23] gave the idea of a TOPSIS method for MADM in the TN environment, whereas Sahin et al. [24] presented some weighted arithmetic and geometric operators in the SVTN environment and gave their application to an MCDM problem. Abdel-Basset et al. [25] defined type 2 neutrosophic numbers (T2NNs) and presented a T2NN-TOPSIS technique to deal with a decision-making problem. Recently, Chakraborty et al. [26,27,28,29] initiated the geometrical concept of the pentagonal neutrosophic number and its applications in the operations research, networking and graph theory arenas. In this article, we introduce new logarithmic operational laws for TNNs, where the logarithmic base μ is a positive real number, and subsequently develop the logarithmic trapezoidal neutrosophic weighted arithmetic aggregation (Larm) operator and the logarithmic trapezoidal neutrosophic weighted geometric aggregation (Lgeo) operator, which are used to construct a new scheme for the MCGDM process.

1.1 Motivation

In the current decade, researchers in the neutrosophic arena have been mainly interested in MCDM problems that are operator-based. In the field of aggregation, a key activity is the design of new operational laws. The essential operational laws of TNNs, such as addition, multiplication and scalar multiplication, were characterized by Ye [16]. Recently, Haque et al. [30] introduced an exponential operational law in which the bases are crisp numbers and the exponents are TNNs. Moreover, the logarithmic operational law is a fundamental operational law in the field of aggregation. Li [31] presented a logarithmic operational law for IFNs and developed its corresponding aggregation operators. Garg [32] set forward a logarithmic operational law for SVNSs and applied it to an MADM problem. Garg [33] defined the logarithmic operational law for Pythagorean fuzzy numbers and developed a corresponding aggregation operator and an MCDM technique to solve real-life problems. From the literature survey, we could not find any logarithmic operational law for TNNs to date. To fill this research gap, in this article we define a logarithmic operational law for TNNs. Furthermore, we successfully adopt the proposed logarithmic operators to develop new aggregation formulas for the uncertain information provided by different decision makers in an MCGDM process. Finally, we suggest an MCGDM strategy with the help of our defined operational laws and the corresponding aggregation operators, namely Larm and Lgeo.

1.2 Novelties

Lots of work has already been established in the TN environment; in the meantime, researchers have built different formulations of TNNs and their applications in different fields. However, much still remains to be done in this arena. In this article, we attempt to incorporate and address the following points:

  1. i)

    To define a new logarithmic operational law (LOL) for TNNs, which is a useful supplement to the existing operational laws, and to analyze its algebraic properties.

  2. ii)

    To introduce new operators, namely the logarithmic trapezoidal neutrosophic weighted arithmetic aggregation (Larm) and logarithmic trapezoidal neutrosophic weighted geometric aggregation (Lgeo) operators.

  3. iii)

    To propose an MCGDM strategy in the TN environment.

  4. iv)

    To demonstrate the proposed method by solving a numerical problem based on a real-life situation.

  5. v)

    To perform a sensitivity analysis showing the utility and efficiency of the designed method.

1.3 Structure of the paper

The remainder of the article is organized into several sections. Section 2 presents some fundamental definitions related to SVNSs and TNNs. In Section 3, we introduce the new logarithmic operational law for TNNs and briefly discuss its algebraic properties. In Section 4, we develop two aggregation operators based on our defined logarithmic operational law. In Section 5, an MCGDM method is presented using our defined operational laws and the related aggregation operators. In Section 6, a numerical problem is taken to exhibit the applicability of the defined logarithmic operational law, and a sensitivity analysis is performed to show the utility of the designed method. Finally, we conclude in Section 7.

2 Mathematical preliminaries

Basic definitions and operations related to SVNSs and TNNs are presented as follows:

Definition 2.1

Let S be a universal set. Then

$$ \widetilde{N}= \left\lbrace \langle s, ~\mu(s), ~\iota(s), ~\sigma(s)\rangle ;~ s\in S \right\rbrace $$

is said to be a single-valued neutrosophic set (SVNS) [3] on S, where \( \mu : S \rightarrow [0,~1]\), \( \iota : S \rightarrow [0,~1]\) and \( \sigma : S \rightarrow [0,~1] \) with the condition 0 ≤ μ(s) + ι(s) + σ(s) ≤ 3. Here, μ(s), ι(s) and σ(s) are called the truth-membership, indeterminacy-membership and falsity-membership functions, respectively, of the element s in the set \(\widetilde{N}\). For convenience, we represent this SVNS as \(\widetilde {N}= \lbrace \langle \mu , ~\iota , ~\sigma \rangle \), where μ, ι, σ ∈ [0, 1], 0 ≤ μ + ι + σ ≤ 3} and call it a single-valued neutrosophic number (SVNN).

Definition 2.2

Let S be a universal set. Then trapezoidal neutrosophic set \(\widetilde {A}\) is defined by Ye [16] in the following form:

$$ \widetilde{A}= \left\lbrace \langle s, ~ T(s), ~ I(s), ~ F(s)\rangle ; ~s\in S\right\rbrace $$

where T(s) ⊂ [0, 1], I(s) ⊂ [0, 1] and F(s) ⊂ [0, 1] are three trapezoidal fuzzy numbers, with \( T(s)=\left ({\alpha }(s),~{\beta }(s), ~{\gamma }(s), ~{\mu }(s)\right ): S \rightarrow [0,~1]\), \( I(s)= \left ({\lambda }(s),~{\mu }(s), {\kappa }(s), ~{\iota }(s)\right ) : S \rightarrow [0,~1]\) and \( F(s)= \left ({\phi }(s),~{\rho }(s),~{\psi }(s), {\sigma }(s)\right ) : S \rightarrow [0,~1] \) under the condition 0 ≤ μ(s) + ι(s) + σ(s) ≤ 3 for all \(s \in S\). Here, T(s), I(s) and F(s) are called the truth-membership, indeterminacy-membership and falsity-membership functions, respectively, of the element s in the set \(\widetilde {A}\). For convenience, we represent the set as \({\widetilde A}= \left \lbrace \langle ({a},{b},{c},{d}),({k},{l},{m},{n}),({x},{y},{v},{w})\rangle \right . \): \(\left . ~0\leq {d} + {n} + {w}\leq 3 \right \rbrace \) and call it a trapezoidal neutrosophic number (TNN).
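For readers who wish to experiment with TNNs computationally, the triple-of-quadruples form above can be modeled directly. The class name and the validity check below are our own illustrative sketch, not part of the source:

```python
# A trapezoidal neutrosophic number (TNN) as three quadruples:
# truth (a, b, c, d), indeterminacy (l, m, n, p), falsity (x, y, v, w).
from typing import NamedTuple, Tuple

Quad = Tuple[float, float, float, float]

class TNN(NamedTuple):
    T: Quad  # truth-membership quadruple
    I: Quad  # indeterminacy-membership quadruple
    F: Quad  # falsity-membership quadruple

    def is_valid(self) -> bool:
        # every component lies in [0, 1] and d + p + w <= 3
        in_unit = all(0.0 <= u <= 1.0
                      for quad in (self.T, self.I, self.F) for u in quad)
        return in_unit and 0.0 <= self.T[3] + self.I[3] + self.F[3] <= 3.0

A = TNN((0.5, 0.6, 0.7, 0.8), (0.1, 0.2, 0.3, 0.4), (0.1, 0.2, 0.3, 0.4))
```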

Proposition 2.1

Let \(\widetilde {A}_{k}= \left \langle ({a}_{k},{b}_{k}, {c}_{k}, {d}_{k}),\right .({l}_{k}, {m}_{k}, {n}_{k},{p}_{k}),\)\(\left .({x}_{k}, {y}_{k}, {v}_{k}, {w}_{k} )\right \rangle \) (k = 1, 2) be any two TNNs. Then, we have the following operational rules [16]:

  1. i)

    \(\widetilde {A}_{1} \bigoplus \widetilde {A}_{2}=\left \langle \left ({a}_{1} + {a}_{2} - {a}_{1}{a}_{2}, {b}_{1} + {b}_{2} -{b}_{1}{b}_{2},{c}_{1} \right .\right .\left .+ {c}_{2} -{c}_{1}{c}_{2}, {d}_{1} + {d}_{2} -{d}_{1}{d}_{2}\right ),\left ({l}_{1}{l}_{2},{m}_{1}{m}_{2}, {n}_{1}{n}_{2}, {p}_{1}{p}_{2} \right ),\left .\left ({x}_{1}{x}_{2}, {y}_{1}{y}_{2}, {v}_{1}{v}_{2}, {w}_{1} {w}_{2} \right ) \right \rangle \)

  2. ii)

    \( \widetilde {A}_{1} \bigotimes \widetilde {A}_{2}=\left \langle \left ({a}_{1}{a}_{2}, {b}_{1}{b}_{2}, {c}_{1}{c}_{2},{d}_{1}{d}_{2}\right )\right .,\left ({l}_{1} + {l}_{2}- {l}_{1}{l}_{2},{m}_{1} + {m}_{2}-{m}_{1}{m}_{2}, {n}_{1} + {n}_{2} \right .\left .-{n}_{1}{n}_{2}, {p}_{1} + {p}_{2} -{p}_{1}{p}_{2} \right ),\left ({x}_{1} + {x}_{2} - {x}_{1}{x}_{2}, {y}_{1}+ {y}_{2} -{y}_{1}{y}_{2}, {v}_{1}+ {v}_{2} \right .\left .\left .- {v}_{1}{v}_{2}, {w}_{1} +{w}_{2} - {w}_{1}{w}_{2} \right ) \right \rangle \)

  3. iii)

    \( {\mu }\widetilde {A}_{1} = \left \langle \left (1-(1-{a}_{1})^{\mu }, 1-(1-{b}_{1})^{\mu }, 1-(1-{c}_{1})^{\mu }, 1-(1-{d}_{1})^{\mu }\right ),\left ({{l}_{1}}^{\mu }, {{m}_{1}}^{\mu }, {{n}_{1}}^{\mu }, {{p}_{1}}^{\mu }\right ), \left ({{x}_{1}}^{\mu }, {{y}_{1}}^{\mu }, {{v}_{1}}^{\mu }, {{w}_{1}}^{\mu }\right )\right \rangle \)

  4. iv)

    \( (\widetilde {A}_{1})^{\mu } = \left \langle \left ({{a}_{1}}^{\mu }, {{b}_{1}}^{\mu }, {{c}_{1}}^{\mu }, {{d}_{1}}^{\mu }\right ),\left (1-(1-{l}_{1})^{\mu }, 1-(1-{{m}_{1}})^{\mu }, 1-(1-{{n}_{1}})^{\mu }, 1-(1-{{p}_{1}})^{\mu }\right ),\left (1-(1-{{x}_{1}})^{\mu }, 1-(1-{{y}_{1}})^{\mu }, 1-(1-{{v}_{1}})^{\mu }, 1-(1-{{w}_{1}})^{\mu }\right )\right \rangle \)
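The four operational rules above act componentwise on the three quadruples; a minimal Python sketch (function names are ours, not from the source) makes this explicit. Note that rule iii) with μ = 2 agrees with adding a TNN to itself, and rule iv) with μ = 2 agrees with multiplying it by itself:

```python
# Ye's operational rules for TNNs, with a TNN written as a triple
# (truth, indeterminacy, falsity) of 4-tuples in [0, 1].

def _comp_sum(p, q):   # componentwise probabilistic sum: u + v - uv
    return tuple(u + v - u * v for u, v in zip(p, q))

def _comp_prod(p, q):  # componentwise product
    return tuple(u * v for u, v in zip(p, q))

def tnn_add(A, B):     # rule i):  truth via prob. sum, I and F via product
    return (_comp_sum(A[0], B[0]), _comp_prod(A[1], B[1]), _comp_prod(A[2], B[2]))

def tnn_mul(A, B):     # rule ii): truth via product, I and F via prob. sum
    return (_comp_prod(A[0], B[0]), _comp_sum(A[1], B[1]), _comp_sum(A[2], B[2]))

def tnn_scalar(mu, A): # rule iii): scalar multiplication by mu > 0
    return (tuple(1 - (1 - u) ** mu for u in A[0]),
            tuple(u ** mu for u in A[1]),
            tuple(u ** mu for u in A[2]))

def tnn_power(A, mu):  # rule iv): mu-th power
    return (tuple(u ** mu for u in A[0]),
            tuple(1 - (1 - u) ** mu for u in A[1]),
            tuple(1 - (1 - u) ** mu for u in A[2]))

A1 = ((0.4, 0.5, 0.6, 0.7), (0.1, 0.2, 0.3, 0.4), (0.2, 0.3, 0.4, 0.5))
A2 = ((0.3, 0.4, 0.5, 0.6), (0.2, 0.3, 0.4, 0.5), (0.1, 0.2, 0.3, 0.4))
```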

Definition 2.3

Let \(\widetilde {A}_{s}= \langle ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}),({l}_{s}, {m}_{s}, {n}_{s},{p}_{s}),({x}_{s}, {y}_{s}, {v}_{s}, {w}_{s} )\rangle \) (s = 1, 2,⋯ , p) be any collection of TNNs. Then the trapezoidal neutrosophic number weighted arithmetic averaging (TNNWAA) operator is defined in [16] as

$$ \begin{array}{@{}rcl@{}} &&TNNWAA(\widetilde A_{1},\widetilde A_{2},\cdots,\widetilde A_{p})\\ &= &\sum\limits^{p}_{s=1}\phi_{s}\widetilde A_{s}\\ &= &\left\langle[1-\prod\limits^{p}_{s=1}(1-{a}_{s})^{\phi_{s}}, 1-\prod\limits^{p}_{s=1}(1-{b}_{s})^{\phi_{s}},\right.\\ &&\left.\left.1-\prod\limits^{p}_{s=1}(1-{c}_{s})^{\phi_{s}}, 1-\prod\limits^{p}_{s=1}(1-{d}_{s})^{\phi_{s}}\right],\right.\\ &&\left.\left[\prod\limits^{p}_{s=1}({{l}_{s}})^{\phi_{s}},\prod\limits^{p}_{s=1}({{m}_{s}})^{\phi_{s}}, \prod\limits^{p}_{s=1}({{n}_{s}})^{\phi_{s}}, \prod\limits^{p}_{s=1}({{p}_{s}})^{\phi_{s}}\right],\right.\\ &&\left.\left[\prod\limits^{p}_{s=1}({{x}_{s}})^{\phi_{s}},\prod\limits^{p}_{s=1}({{y}_{s}})^{\phi_{s}}, \prod\limits^{p}_{s=1}({{v}_{s}})^{\phi_{s}},\prod\limits^{p}_{s=1}({{w}_{s}})^{\phi_{s}}\right]\right\rangle\\ \end{array} $$

where ϕs (s = 1, 2,⋯ , p) is the weight of \(\widetilde A_{s}~(s=1, 2,\cdots ,p)\) with ϕs ∈ [0, 1] and \(\displaystyle \sum \limits ^{p}_{s=1}\phi _ s=1\).
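The TNNWAA formula can be sketched as follows, writing a TNN as a (T, I, F) triple of quadruples (names are illustrative). A quick sanity check is idempotency: aggregating identical TNNs under any normalized weights returns the same TNN.

```python
import math

# TNNWAA of Definition 2.3: weighted arithmetic averaging of TNNs
# tnns[s] = (T, I, F), each a 4-tuple, with weights phis summing to 1.
def tnnwaa(tnns, phis):
    T = tuple(1 - math.prod((1 - A[0][j]) ** w for A, w in zip(tnns, phis))
              for j in range(4))
    I = tuple(math.prod(A[1][j] ** w for A, w in zip(tnns, phis)) for j in range(4))
    F = tuple(math.prod(A[2][j] ** w for A, w in zip(tnns, phis)) for j in range(4))
    return (T, I, F)

A = ((0.4, 0.5, 0.6, 0.7), (0.1, 0.2, 0.3, 0.4), (0.2, 0.3, 0.4, 0.5))
agg = tnnwaa([A, A, A], [0.2, 0.3, 0.5])  # idempotency: should return A itself
```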

Definition 2.4

Let \(\widetilde {A}_{s}= \left \langle ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}),\right .\left .({l}_{s}, {m}_{s}, {n}_{s},{p}_{s}),({x}_{s}, {y}_{s}, {v}_{s}, {w}_{s} )\right \rangle \), (s = 1, 2,⋯ , p) be collections of TNNs. Then the trapezoidal neutrosophic number weighted geometric averaging (TNNWGA) operator is defined in [16] as

$$ \begin{array}{@{}rcl@{}} &&TNNWGA(\widetilde A_{1},\widetilde A_{2},\cdots,\widetilde A_{p})\\ &=&\prod\limits^{p}_{s=1}(\widetilde A_{s})^{\phi_{s}} \\ &=&\left\langle\left[\prod\limits^{p}_{s=1}({a}_{s})^{\phi_{s}},~~\prod\limits^{p}_{s=1}({b}_{s})^{\phi_{s}}, \prod\limits^{p}_{s=1}({c}_{s})^{\phi_{s}},\prod\limits^{p}_{s=1}({d}_{s})^{\phi_{s}}\right],\right.\\ &&\left[1-\prod\limits^{p}_{s=1}(1-{l}_{s})^{\phi_{s}},~~1-\prod\limits^{p}_{s=1}(1-{m}_{s})^{\phi_{s}},\right.\\ &&\left.1-\prod\limits^{p}_{s=1}(1-{n}_{s})^{\phi_{s}}, 1-\prod\limits^{p}_{s=1}(1-{p}_{s})^{\phi_{s}}\right], \end{array} $$
$$ \begin{array}{@{}rcl@{}} &&\left[1-\prod\limits^{p}_{s=1}(1-{x}_{s})^{\phi_{s}}, 1-\prod\limits^{p}_{s=1}(1-{y}_{s})^{\phi_{s}},\right.\\ &&\left.\left.1-\prod\limits^{p}_{s=1}(1-{v}_{s})^{\phi_{s}}, 1-\prod\limits^{p}_{s=1}(1-{w}_{s})^{\phi_{s}}\right]\right\rangle \end{array} $$

where ϕs (s = 1, 2,⋯ , p) is the weight of \(\widetilde A_{s}~(s=1, 2,\cdots ,p)\) with ϕs ∈ [0, 1] and \(\displaystyle {\sum }^{p}_{s=1}\phi _{s}=1\).
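The TNNWGA formula admits the same kind of sketch, with the roles of the product and the probabilistic sum interchanged (names are ours); idempotency again serves as a sanity check.

```python
import math

# TNNWGA of Definition 2.4: weighted geometric averaging of TNNs
# tnns[s] = (T, I, F), each a 4-tuple, with weights phis summing to 1.
def tnnwga(tnns, phis):
    T = tuple(math.prod(A[0][j] ** w for A, w in zip(tnns, phis)) for j in range(4))
    I = tuple(1 - math.prod((1 - A[1][j]) ** w for A, w in zip(tnns, phis))
              for j in range(4))
    F = tuple(1 - math.prod((1 - A[2][j]) ** w for A, w in zip(tnns, phis))
              for j in range(4))
    return (T, I, F)

A = ((0.4, 0.5, 0.6, 0.7), (0.1, 0.2, 0.3, 0.4), (0.2, 0.3, 0.4, 0.5))
agg = tnnwga([A, A], [0.4, 0.6])  # idempotency: should return A itself
```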

2.1 Application of aggregation operators

Aggregation operators are mainly used in MCDM/MCGDM techniques to aggregate the input values of alternatives under different criteria. Suppose we want to evaluate an alternative under different criteria whose computational entities are in the form of TNNs. Then we need a technique to aggregate all the evaluation values into a single value in the form of a TNN. For this purpose, we use the aggregation operators TNNWAA and TNNWGA introduced by Ye [16]. Since the TN environment is another setting in the neutrosophic field, these aggregation operators have a crucial impact on MCDM/MCGDM techniques in it. Here, we present the following example to demonstrate the application of the above-mentioned aggregation operators:

Example 2.1

Suppose someone wants to buy a new mobile phone based on the criteria of camera quality, graphics and RAM services. Let the available alternatives be the mobile companies X1, X2 and X3, which are evaluated under the following criteria:

  1. 1)

    Y1 indicates the camera quality.

  2. 2)

    Y2 indicates the graphics quality services.

  3. 3)

    Y3 indicates the RAM quality services.

whose weight vector is (0.33, 0.32, 0.35). Figure 1 shows the schematic diagram of the application of aggregation operators.

Fig. 1 Application of aggregation operators

The input values of the decision making problem in TN environment are given in the following matrix

Now, if we use the TNNWAA operator on the above decision matrix, we get the evaluation values of the alternatives as follows:

$$ \begin{array}{l} X_{1}\\ X_{2}\\ X_{3} \end{array} \left( \begin{array}{l} \langle (0.3368, 0.6413, 0.7666, 0.8398),(0.3771, 0.5923, 0.7635, 0.8637),(0.3289, 0.4924, 0.6313, 0.7315) \rangle \\ \langle(0.5098, 0.6362, 0.7365, 0.8398),(0.5955, 0.6663, 0.7969, 0.9000),(0.1000, 0.3587, 0.5329, 0.6982) \rangle \\ \langle(0.7000, 0.7695, 0.8431, 0.9000),(0.2514, 0.3771, 0.5028, 0.6435),(0.2239, 0.3371, 0.5161, 0.7188) \rangle \end{array}\right) $$

Again, if we use the operator TNNWGA, we get

$$ \begin{array}{l} X_{1}\\ X_{2}\\ X_{3} \end{array} \left( \begin{array}{l} \langle (0.3318, 0.6222, 0.7188, 0.8307),(0.418, 0.6067, 0.7695, 0.8725),(0.3337, 0.5056, 0.6362, 0.7376) \rangle \\ \langle (0.4962, 0.6313, 0.7306, 0.8307),(0.6093, 0.6711, 0.8188, 0.9000),(0.1000, 0.3778, 0.5376, 0.7146) \rangle \\ \langle (0.7000, 0.7635, 0.8337, 0.9000),(0.2725, 0.4180, 0.5825, 0.7263),(0.2724, 0.3740, 0.6102, 0.7666)\rangle \end{array}\right) $$

From the above example, it is observed that after utilizing the aggregation operators, we get the evaluation values of the alternatives in aggregated form. Now, if we apply the score function [16], we observe that mobile company X3 is the best option under the three underlying criteria. Based on the above example, if we want to evaluate some alternatives under different criteria in the TN environment, we first apply an aggregation operator to reduce the system to a single aggregated decision column. Then, utilizing a suitable crispification technique, we obtain the associated crisp value of each alternative. Finally, the best alternative is the one with the highest crisp value among the finite alternatives.
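Ye's score function [16] for a TNN is commonly stated as s(Ã) = [8 + (a + b + c + d) − (l + m + n + p) − (x + y + v + w)]/12, with higher scores preferred. Assuming this form (a sketch, not quoted verbatim from the source), applying it to the TNNWAA results above does rank X3 first:

```python
# Score function for a TNN written as (T, I, F), each a 4-tuple.
def score(tnn):
    # s = (8 + sum(T) - sum(I) - sum(F)) / 12, higher is better
    T, I, F = tnn
    return (8 + sum(T) - sum(I) - sum(F)) / 12

# TNNWAA-aggregated values of the three alternatives from the example above
X1 = ((0.3368, 0.6413, 0.7666, 0.8398), (0.3771, 0.5923, 0.7635, 0.8637),
      (0.3289, 0.4924, 0.6313, 0.7315))
X2 = ((0.5098, 0.6362, 0.7365, 0.8398), (0.5955, 0.6663, 0.7969, 0.9000),
      (0.1000, 0.3587, 0.5329, 0.6982))
X3 = ((0.7000, 0.7695, 0.8431, 0.9000), (0.2514, 0.3771, 0.5028, 0.6435),
      (0.2239, 0.3371, 0.5161, 0.7188))

best = max((X1, X2, X3), key=score)  # X3, in agreement with the example
```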

2.2 De-Neutrosophication of a TNN

De-neutrosophication is the technique by which an appreciable crisp result is generated from a neutrosophic number. In the neutrosophic environment, researchers are highly devoted to converting a TNN into a crisp number through various methods and techniques. Here, we use the Removal Area Technique (RAT) to calculate the de-neutrosophication value of a TNN, which is defined as follows:

Definition 2.1.1

Let \(\widetilde {A}= \langle ({a},{b}, {c}, {d}),({l}, {m}, {n},{p}),({x}, {y}, {v}, {w} )\rangle \) be any TNN, then the de-Neutrosophication value of \(\widetilde {A}\) (utilizing Removal Area technique) is given by Chakraborty et al. [10] as

$$ D_{Neu}(\widetilde{A})=\frac{{a}+ {b}+{c}+{d}+{l}+{m}+{n}+{p}+{x}+{y}+{v}+{w}}{12}. $$
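The RAT value is simply the mean of all twelve components of the TNN; a one-line sketch (the function name is ours):

```python
# De-neutrosophication by the Removal Area Technique:
# mean of the twelve components of a TNN written as (T, I, F).
def d_neu(tnn):
    T, I, F = tnn
    return (sum(T) + sum(I) + sum(F)) / 12.0

A = ((0.1, 0.2, 0.3, 0.4), (0.1, 0.2, 0.3, 0.4), (0.1, 0.2, 0.3, 0.4))
# each quadruple sums to 1.0, so d_neu(A) = 3.0 / 12 = 0.25
```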

Definition 2.1.2

Let \(\widetilde {A}_{1}\) and \(\widetilde {A}_{2}\) be any two TNNs, then the ranking technique is defined as follows

  1. i)

    If \(D_{Neu}(\widetilde {A}_{1})>D_{Neu}(\widetilde {A}_{2})\), then \( \widetilde {A}_{1}>\widetilde {A}_{2}\)

  2. ii)

    If \(D_{Neu}(\widetilde {A}_{1})<D_{Neu}(\widetilde {A}_{2})\), then \( \widetilde {A}_{1}<\widetilde {A}_{2}\).

3 Logarithmic operational law for TNN

In this section, the logarithmic function on TNNs is defined and studied, where the base μ is considered a positive real number. Let \(\widetilde {A}\) be a TNN and μ > 0 a real number. Since, in the real field, \(\log _{\mu }0\) and \(\log _{1}x\) are undefined (x being a positive real number), we assume that \( \widetilde A \neq \left \langle [0, 0, 0, 0],[1, 1, 1, 1],[1, 1, 1, 1] \right \rangle \) and μ ≠ 1. We define the logarithm of a TNN as follows:

Definition 3.1

Let \(\widetilde {A}= \left \langle ({a},{b}, {c}, {d}),({l}, {m}, {n},{p}),({x}, {y}, {v}, {w} )\right \rangle \) be any TNN. Then, we define

$$ \log_{\mu} \widetilde{A} = \begin{cases} \left\langle \left[1-\log_{\mu}{a}, 1-\log_{\mu}{b}, 1-\log_{\mu}{c}, 1-\log_{\mu}{d}\right],\left[\log_{\mu}(1-{l}),\log_{\mu} (1-{m}),\log_{\mu} (1-{n}),\log_{\mu} (1-{p})\right],\right.\\ \left.\left[\log_{\mu} (1-{x}),\log_{\mu} (1-{y}),\log_{\mu} (1-{v}),\log_{\mu} (1-{w})\right]\right\rangle,\\ \text{when } 0 < \mu \leq \min \left( {a},{b}, {c}, {d}, 1-{l}, 1-{m}, 1-{n}, 1-{p}, 1-{x}, 1-{y}, 1-{v}, 1-{w}\right)< 1 ; \\ \left\langle \left[1-\log_{\frac{1}{\mu}}{a}, 1-\log_{\frac{1}{\mu}}{b}, 1-\log_{\frac{1}{\mu}}{c}, 1-\log_{\frac{1}{\mu}}{d}\right],\left[\log_{\frac{1}{\mu}}(1-{l}), \log_{\frac{1}{\mu}}(1-{m}),\log_{\frac{1}{\mu}}(1-{n}),\log_{\frac{1}{\mu}}(1-{p})\right]\right.\\ \left.\left[\log_{\frac{1}{\mu}}(1-{x}),\log_{\frac{1}{\mu}}(1-{y}),\log_{\frac{1}{\mu}}(1-{v}),\log_{\frac{1}{\mu}}(1-{w})\right]\right\rangle,\\ \text{when } 0<\frac{1}{\mu} \leq \min \left( {a},{b}, {c}, {d}, 1-{l}, 1-{m}, 1-{n}, 1-{p}, 1-{x}, 1-{y}, 1-{v}, 1-{w}\right)< 1 . \end{cases} $$
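The definition above can be sketched as a small function that selects the admissible base (μ itself, or 1/μ when μ > 1) and then applies the two component rules; the names and the error handling are ours. Both branches produce the same result when 1/μ falls in the first branch's range:

```python
import math

# log_mu of a TNN per the definition above; tnn = (T, I, F), each a 4-tuple.
def tnn_log(mu, tnn):
    T, I, F = tnn
    m = min(min(T), *(1 - u for u in I), *(1 - u for u in F))
    if 0 < mu <= m < 1:
        base = mu            # first branch of the definition
    elif mu > 1 and 0 < 1 / mu <= m < 1:
        base = 1 / mu        # second branch of the definition
    else:
        raise ValueError("mu outside the admissible range")
    lg = lambda u: math.log(u, base)
    return (tuple(1 - lg(u) for u in T),
            tuple(lg(1 - u) for u in I),
            tuple(lg(1 - u) for u in F))

A = ((0.5, 0.6, 0.7, 0.8), (0.1, 0.2, 0.3, 0.4), (0.1, 0.2, 0.3, 0.4))
L = tnn_log(0.4, A)   # here min(...) = 0.5, so mu = 0.4 is admissible
```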

Here, we shall discuss some elementary properties of \(\log _{\mu } \widetilde {A}\), which are as follows:

Theorem 3.1

Let \(\widetilde {A}= \left \langle ({a},{b}, {c}, {d}),({l}, {m}, {n},{p}),\right .\left .({x}, {y}, {v}, {w} )\right \rangle \) be a TNN. Then \(\log _{\mu } \widetilde {A}\) is a TNN.

Proof

Let \(\widetilde {A}= \left \langle ({a},{b}, {c}, {d}),({l}, {m}, {n},{p}),({x}, {y}, {v}, {w} )\right \rangle \) be a TNN. Then a, b, c, d, l, m, n, p, x, y, v, w ∈ [0, 1] with 0 ≤ d + p + w ≤ 3. □

Case 1

When 0 < μ ≤ min (a, b, c, d, 1 − l, 1 − m, 1 − n, 1 − p, 1 − x, 1 − y, 1 − v, 1 − w) < 1, then we have,

\( 0 \leq \log _{\mu }{a},\log _{\mu }{b},\log _{\mu }{c},\log _{\mu }{d},\log _{\mu }(1-{l}),\log _{\mu }(1-{m}),\log _{\mu }(1-{n}),\log _{\mu }(1-{p}),\log _{\mu }(1-{x}),\log _{\mu }(1-{y}),\log _{\mu }(1-{v}),\log _{\mu }(1-{w})\leq 1\)

Hence, \(0 \leq 1- \log _{\mu }{a}, 1- \log _{\mu }{b}, 1- \log _{\mu }{c}, 1- \log _{\mu }{d},\log _{\mu }(1-{l}),\log _{\mu }(1-{m}),\log _{\mu }(1-{n}),\log _{\mu }(1-{p}),\log _{\mu }(1-{x}),\log _{\mu }(1-{y}),\log _{\mu }(1-{v}),\log _{\mu }(1-{w})\leq 1\) and \(0\leq (1-\log _{\mu }{d})+\log _{\mu }(1-{p})+\log _{\mu }(1-{w})\leq 3\).

Thus, \( \log _{\mu }\widetilde {A}\) is a TNN.

Case 2

When \( 0<\frac {1}{\mu } \leq \) min (a, b, c, d, 1 − l, 1 − m, 1 − n, 1 − p, 1 − x, 1 − y, 1 − v, 1 − w) < 1, then proceeding in a similar way as in Case 1 above, we can prove that \(\log _{\mu }\widetilde {A}\) is a TNN.

Thus, we conclude that \(\log _{\mu }\widetilde {A}\) is a TNN.

Theorem 3.2

Let \(\widetilde {A}= \langle ({a},{b}, {c}, {d}),({l}, {m}, {n},{p}),\) (x, y, v, w)〉 be any TNN and 0 < μ ≤ min (a, b, c, d, 1 − l, 1 − m, 1 − n, 1 − p, 1 − x, 1 − y, 1 − v, 1 − w) < 1, then

  1. i)

    \(\mu ^{\log _{\mu }\widetilde {A}}= \widetilde {A}\)

  2. ii)

    \(\log _{\mu }\mu ^{\widetilde {A}}=\widetilde {A}\)

Proof

  1. i)

    Using Proposition 2.1 and Definition 3.1, we get

    $$ \begin{array}{@{}rcl@{}} \mu^{\log_{\mu}\widetilde{A}} &=& \left\langle \left[ \mu^{1-(1-\log_{\mu}{a})},\mu^{1-(1-\log_{\mu}{b})},\mu^{1-(1-\log_{\mu}{c})},\mu^{1-(1-\log_{\mu}{d})}\right],\left[1-\mu^{\log_{\mu}(1-{l})}, 1-\mu^{\log_{\mu}(1-{m})}, 1-\mu^{\log_{\mu}(1-{n})},\right.\right.\\ &&\left.\left. 1-\mu^{\log_{\mu}(1-{p})}\right],\left[1-\mu^{\log_{\mu}(1-{x})}, 1-\mu^{\log_{\mu}(1-{y})}, 1-\mu^{\log_{\mu}(1-{v})}, 1-\mu^{\log_{\mu}(1-{w})}\right]\right\rangle\\ &=&\left\langle \left[\mu^{\log_{\mu}{a}},\mu^{\log_{\mu}{b}},\mu^{\log_{\mu}{c}},\mu^{\log_{\mu}{d}}\right],\left[1-(1-{l}), 1-(1-{m}), 1-(1-{n}), 1-(1-{p})\right],\left[1-(1-{x}),\right.\right.\\ &&\left.\left.1-(1-{y}), 1-(1-{v}), 1-(1-{w})\right]\right\rangle\\ &=& \left\langle ({a},{b}, {c}, {d}),({l}, {m}, {n},{p}),({x}, {y}, {v}, {w} )\right\rangle\\ &=&\widetilde{A}. \end{array} $$
  2. ii)

    Again, utilizing the exponential operational law [30] and Definition 3.1, we get

    $$ \begin{array}{@{}rcl@{}} \log_{\mu}\mu^{\widetilde{A}} &=& \log_{\mu}\left\langle [\mu^{1-{a}},\mu^{1-{b}},\mu^{1-{c}},\mu^{1-{d}}],[1-\mu^{{l}}, 1-\mu^{{m}}, 1-\mu^{{n}}, 1-\mu^{{p}}],[1-\mu^{{x}}, 1-\mu^{{y}}, 1-\mu^{{v}}, 1-\mu^{{w}}]\right\rangle\\ &= &\left\langle \left[1-\log_{\mu}\mu^{1-{a}}, 1-\log_{\mu}\mu^{1-{b}}, 1-\log_{\mu}\mu^{1-{c}}, 1-\log_{\mu}\mu^{1-{d}}\right],\left[\log_{\mu}(1-(1-\mu^{{l}})),\log_{\mu}(1-(1-\mu^{{m}})),\right.\right.\\ &&\left.\log_{\mu}(1-(1-\mu^{{n}})),\log_{\mu}(1-(1-\mu^{{p}}))\right],\left[\log_{\mu}(1-(1-\mu^{{x}})),\log_{\mu}(1-(1-\mu^{{y}})),\log_{\mu}(1-(1-\mu^{{v}})),\right.\\ &&\left.\left.\log_{\mu}(1-(1-\mu^{{w}}))\right]\right\rangle\\ &=&\left\langle ({a},{b}, {c}, {d}),({l}, {m}, {n},{p}),({x}, {y}, {v}, {w} )\right\rangle\\ &=&\widetilde{A}. \end{array} $$
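Part i) of Theorem 3.2 can be checked numerically, using the exponential law of a TNN as expanded in the proof above (μ^Ã = ⟨[μ^{1−a},…],[1−μ^{l},…],[1−μ^{x},…]⟩). The helper names are ours, and the check is a numeric sketch, not a proof:

```python
import math

# Numeric check of Theorem 3.2(i): mu ** (log_mu A) recovers A.
def tnn_exp(mu, tnn):
    # exponential law as used in the proof above
    T, I, F = tnn
    return (tuple(mu ** (1 - u) for u in T),
            tuple(1 - mu ** u for u in I),
            tuple(1 - mu ** u for u in F))

def tnn_log(mu, tnn):
    # Definition 3.1, first branch (mu admissible for this A)
    T, I, F = tnn
    lg = lambda u: math.log(u, mu)
    return (tuple(1 - lg(u) for u in T),
            tuple(lg(1 - u) for u in I),
            tuple(lg(1 - u) for u in F))

A = ((0.5, 0.6, 0.7, 0.8), (0.1, 0.2, 0.3, 0.4), (0.1, 0.2, 0.3, 0.4))
mu = 0.4
back = tnn_exp(mu, tnn_log(mu, A))  # round trip should recover A
```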

Theorem 3.3

Let \(\widetilde {A}_{t}= \left \langle ({a}_{t},{b}_{t}, {c}_{t}, {d}_{t}),({l}_{t}, {m}_{t}, {n}_{t},{p}_{t}),\right .\) \(\left .({x}_{t}, {y}_{t}, {v}_{t}, {w}_{t} )\right \rangle \) (t = 1, 2) be any two TNNs and 0 < μ ≤ min (at, bt, ct, dt, 1 − lt, 1 − mt, 1 − nt, 1 − pt, 1 − xt, 1 − yt, 1 − vt, 1 − wt) < 1. Then

  1. i)

    \( \log _{\mu }{\widetilde {A}_{1}} \bigoplus \log _{\mu }\widetilde {A}_{2} = \log _{\mu }\widetilde {A}_{2}\bigoplus \log _{\mu }\widetilde {A}_{1}\);

  2. ii)

    \( \log _{\mu }{\widetilde {A}_{1}} \bigotimes \log _{\mu }\widetilde {A}_{2} = \log _{\mu }\widetilde {A}_{2}\bigotimes \log _{\mu }\widetilde {A}_{1}\).

Proof

The proof of the above theorem follows from Proposition 2.1 and Definition 3.1. □

Theorem 3.4

Let \(\widetilde {A}_{t}= \langle ({a}_{t},{b}_{t}, {c}_{t}, {d}_{t}),({l}_{t}, {m}_{t}, {n}_{t},{p}_{t}),\) (xt, yt, vt, wt)〉 (t = 1, 2, 3) be any three TNNs and 0 < μ ≤ min (at, bt, ct, dt, 1 − lt, 1 − mt, 1 − nt, 1 − pt, 1 − xt, 1 − yt, 1 − vt, 1 − wt) < 1. Then

  1. i)

    \( \log _{\mu }{\widetilde {A}_{1}} \bigoplus \log _{\mu }\widetilde {A}_{2} \bigoplus \log _{\mu }\widetilde {A}_{3} = \log _{\mu }\widetilde {A}_{3} \bigoplus \log _{\mu }\widetilde {A}_{2}\) \(\bigoplus \log _{\mu }\widetilde {A}_{1}\);

  2. ii)

    \( \log _{\mu }{\widetilde {A}_{1}} \bigotimes \log _{\mu }\widetilde {A}_{2} \bigotimes \log _{\mu }\widetilde {A}_{3} = \log _{\mu }\widetilde {A}_{3} \bigotimes \log _{\mu }\widetilde {A}_{2}\) \(\bigotimes \log _{\mu }\widetilde {A}_{1}\).

Proof

The proof of the above theorem follows from Proposition 2.1 and Definition 3.1. □

Theorem 3.5

Let \(\widetilde {A}_{t}= \left \langle ({a}_{t},{b}_{t}, {c}_{t}, {d}_{t}),~({l}_{t}, {m}_{t}, {n}_{t},{p}_{t})\right .\), \(\left .({x}_{t}, {y}_{t}, {v}_{t}, {w}_{t} )\right \rangle \) (t = 1, 2) be any two TNNs and 0 < μ ≤ min \(\left ({a}_{t},{b}_{t}, {c}_{t}, {d}_{t}, 1-{l}_{t}, 1-{m}_{t}, 1-{n}_{t}, 1\!-{p}_{t}, 1-{x}_{t}, 1-\right .\left .{y}_{t}, 1-{v}_{t}, 1-{w}_{t}\right )< 1\). Then

  1. i)

    \( k(\log _{\mu }\widetilde {A}_{1} \bigoplus \log _{\mu }\widetilde {A}_{2})= k \log _{\mu }\widetilde {A}_{1} \bigoplus k\log _{\mu }\widetilde {A}_{2}\);

  2. ii)

    (\( \log _{\mu }\widetilde {A}_{1} \bigotimes \log _{\mu }\widetilde {A}_{2})^{k}=(\log _{\mu }\widetilde {A}_{1})^{k} \bigotimes (\log _{\mu }\widetilde {A}_{2})^{k};\)

  3. iii)

    \( k_{1} \log _{\mu }\widetilde {A}_{1} \bigoplus k_{2}\log _{\mu }\widetilde {A}_{1}=(k_{1} + k_{2})\log _{\mu }\widetilde {A}_{1}\);

  4. iv)

    \((\log _{\mu }\widetilde {A}_{1})^{k_{1}}\bigotimes (\log _{\mu }\widetilde {A}_{1})^{k_{2}}= (\log _{\mu }\widetilde {A}_{1})^{k_{1} + k_{2}};\)

  5. v)

    \(((\log _{\mu }\widetilde {A}_{1})^{k_{1}})^{k_{2}}= (\log _{\mu }\widetilde {A}_{1})^{k_{1}k_{2}},\) where k, k1 and k2 are positive real numbers.

Proof

  1. i)

    We know,

    $$ \begin{array}{lll} &&\log_{\mu}\widetilde{A}_{1}\\ &=&\left\langle\left[1-\log_{\mu}{a}_{1}, 1-\log_{\mu}{b}_{1}, 1-\log_{\mu}{c}_{1}, 1-\log_{\mu}{d}_{1}\right],\left[\log_{\mu}(1-{l}_{1}),\log_{\mu}(1-{m}_{1}),\log_{\mu}(1-{n}_{1}),\log_{\mu}(1-{p}_{1})\right],\right.\\ &&\left.\left[\log_{\mu}(1-{x}_{1}),\log_{\mu}(1-{y}_{1}),\log_{\mu}(1-{v}_{1}),\log_{\mu}(1-{w}_{1})\right]\right\rangle. \end{array} $$
    $$ \begin{array}{lll} &&\log_{\mu}\widetilde{A}_{2}\\ &=&\left\langle\left[1-\log_{\mu}{a}_{2}, 1-\log_{\mu}{b}_{2}, 1-\log_{\mu}{c}_{2}, 1-\log_{\mu}{d}_{2}\right],\left[\log_{\mu}(1-{l}_{2}),\log_{\mu}(1-{m}_{2}),\log_{\mu}(1-{n}_{2}),\log_{\mu}(1-{p}_{2})\right],\right.\\ &&\left.\left[\log_{\mu}(1-{x}_{2}),\log_{\mu}(1-{y}_{2}),\log_{\mu}(1-{v}_{2}),\log_{\mu}(1-{w}_{2})\right]\right\rangle . \\ &\therefore&\log_{\mu}\widetilde{A}_{1}\bigoplus \log_{\mu} \widetilde{A}_{2} \\ &=& \left\langle \left[1- (\log_{\mu}{a}_{1})(\log_{\mu}{a}_{2}), 1- (\log_{\mu}{b}_{1})(\log_{\mu}{b}_{2}), 1- (\log_{\mu}{c}_{1})(\log_{\mu}{c}_{2}), 1- (\log_{\mu}{d}_{1})(\log_{\mu}{d}_{2})\right],\right.\\ &&[\log_{\mu}(1-{l}_{1})\log_{\mu}(1-{l}_{2}),\log_{\mu}(1-{m}_{1})\log_{\mu}(1-{m}_{2}),\log_{\mu}(1-{n}_{1})\log_{\mu}(1-{n}_{2}),\log_{\mu}(1-{p}_{1})\log_{\mu}(1-{p}_{2})],\\ &&\left.[\log_{\mu}(1-{x}_{1})\log_{\mu}(1-{x}_{2}),\log_{\mu}(1-{y}_{1})\log_{\mu}(1-{y}_{2}),\log_{\mu}(1-{v}_{1})\log_{\mu}(1-{v}_{2}),\log_{\mu}(1-{w}_{1})\log_{\mu}(1-{w}_{2})]\right\rangle. \end{array} $$

    Now for k > 0 we have,

    $$ \begin{array}{lll} && k(\log_{\mu} \widetilde{A}_{1}\bigoplus \log_{\mu} \widetilde{A}_{2}), \\ &=& \langle \left[1- ((\log_{\mu}{a}_{1})(\log_{\mu}{a}_{2}))^{k}, 1- ((\log_{\mu}{b}_{1})(\log_{\mu}{b}_{2}))^{k}, 1- ((\log_{\mu}{c}_{1})(\log_{\mu}{c}_{2}))^{k}, 1- ((\log_{\mu}{d}_{1})(\log_{\mu}{d}_{2}))^{k}\right],\\ &&\left[((\log_{\mu}(1-{l}_{1})\log_{\mu}(1-{l}_{2}))^{k},((\log_{\mu}(1-{m}_{1})\log_{\mu}(1-{m}_{2}))^{k},((\log_{\mu}(1-{n}_{1})\log_{\mu}(1-{n}_{2}))^{k},\right.\\ &&\left.((\log_{\mu}(1-{p}_{1})\log_{\mu}(1-{p}_{2}))^{k}\right],\left[(((\log_{\mu}(1-{x}_{1})\log_{\mu}(1-{x}_{2}))^{k},((\log_{\mu}(1-{y}_{1})\log_{\mu}(1-{y}_{2}))^{k},\right.\\ &&\left( (\log_{\mu}(1-{v}_{1})\log_{\mu}(1-{v}_{2}))^{k},((\log_{\mu}(1-{w}_{1})\log_{\mu}(1-{w}_{2}))^{k}\right] \rangle\\ &=&\left\langle[1-(\log_{\mu}{a}_{1})^{k}, 1-(\log_{\mu}{b}_{1})^{k}, 1-(\log_{\mu}{c}_{1})^{k}, 1-(\log_{\mu}{d}_{1})^{k}],[(\log_{\mu}(1-{l}_{1}))^{k},(\log_{\mu}(1-{m}_{1}))^{k},\right.\\ &&\left.\left.(\log_{\mu}(1-{n}_{1}))^{k},(\log_{\mu}(1-{p}_{1}))^{k}\right],\left[(\log_{\mu}(1-{x}_{1}))^{k},(\log_{\mu}(1-{y}_{1}))^{k},(\log_{\mu}(1-{v}_{1}))^{k},(\log_{\mu}(1-{w}_{1}))^{k}\right]\right\rangle\\ && \bigoplus \left\langle[1-(\log_{\mu}{a}_{2})^{k}, 1-(\log_{\mu}{b}_{2})^{k}, 1-(\log_{\mu}{c}_{2})^{k}, 1-(\log_{\mu}{d}_{2})^{k}],[(\log_{\mu}(1-{l}_{2}))^{k},(\log_{\mu}(1-{m}_{2}))^{k},\right.\\ &&\left.\left.(\log_{\mu}(1-{n}_{2}))^{k},(\log_{\mu}(1-{p}_{2}))^{k}\right],\left[(\log_{\mu}(1-{x}_{2}))^{k},(\log_{\mu}(1-{y}_{2}))^{k},(\log_{\mu}(1-{v}_{2}))^{k},(\log_{\mu}(1-{w}_{2}))^{k}\right]\right\rangle \\ &=&k\log_{\mu} \widetilde{A}_{1} \bigoplus k\log_{\mu} \widetilde{A}_{2}. \end{array} $$
  2. ii)

    This proof is similar to the previous one.

  3. iii)

    For any k1, k2 > 0, we have

    $$ \begin{array}{lll} &&k_{1}\log_{\mu}\widetilde{A}_{1} \bigoplus k_{2}\log_{\mu}\widetilde{A}_{1}\\ &=&\left\langle\left[1-(\log_{\mu}{a}_{1})^{k_{1}}, 1-(\log_{\mu}{b}_{1})^{k_{1}}, 1-(\log_{\mu}{c}_{1})^{k_{1}}, 1-(\log_{\mu}{d}_{1})^{k_{1}}\right],\left[(\log_{\mu}(1-{l}_{1}))^{k_{1}},(\log_{\mu}(1-{m}_{1}))^{k_{1}},\right.\right.\\ &&\left.\left.(\log_{\mu}(1-{n}_{1}))^{k_{1}},(\log_{\mu}(1-{p}_{1}))^{k_{1}}\right],\left[(\log_{\mu}(1-{x}_{1}))^{k_{1}},(\log_{\mu}(1-{y}_{1}))^{k_{1}},(\log_{\mu}(1-{v}_{1}))^{k_{1}},(\log_{\mu}(1-{w}_{1}))^{k_{1}}\right]\right\rangle \end{array} $$
    $$ \begin{array}{lll} && \bigoplus \left\langle\left[1-(\log_{\mu}{a}_{1})^{k_{2}}, 1-(\log_{\mu}{b}_{1})^{k_{2}}, 1-(\log_{\mu}{c}_{1})^{k_{2}}, 1-(\log_{\mu}{d}_{1})^{k_{2}}\right],\left[(\log_{\mu}(1-{l}_{1}))^{k_{2}},(\log_{\mu}(1-{m}_{1}))^{k_{2}},\right.\right.\\ &&\left.\left.(\log_{\mu}(1-{n}_{1}))^{k_{2}},(\log_{\mu}(1-{p}_{1}))^{k_{2}}\right],\left[(\log_{\mu}(1-{x}_{1}))^{k_{2}},(\log_{\mu}(1-{y}_{1}))^{k_{2}},(\log_{\mu}(1-{v}_{1}))^{k_{2}},(\log_{\mu}(1-{w}_{1}))^{k_{2}}\right]\right\rangle\\ &=&\left\langle\left[1-(\log_{\mu}{a}_{1})^{k_{1} + k_{2}}, 1-(\log_{\mu}{b}_{1})^{k_{1} + k_{2}}, 1-(\log_{\mu}{c}_{1})^{k_{1} + k_{2}}, 1-(\log_{\mu}{d}_{1})^{k_{1} + k_{2}}\right],\left[(\log_{\mu}(1-{l}_{1}))^{k_{1}+ k_{2}},\right.\right.\\ &&\left.(\log_{\mu}(1-{m}_{1}))^{k_{1}+ k_{2}},(\log_{\mu}(1-{n}_{1}))^{k_{1}+ k_{2}},(\log_{\mu}(1-{p}_{1}))^{k_{1}+ k_{2}}\right],\left[(\log_{\mu}(1-{x}_{1}))^{k_{1}+ k_{2}},(\log_{\mu}(1-{y}_{1}))^{k_{1}+ k_{2}},\right.\\ &&\left.\left.(\log_{\mu}(1-{v}_{1}))^{k_{1}+ k_{2}},(\log_{\mu}(1-{w}_{1}))^{k_{1}+ k_{2}}\right]\right\rangle\\ &=&(k_{1} + k_{2})\log_{\mu}\widetilde{A}_{1}. \end{array} $$
  4. iv)

    Again for any k1, k2 > 0, we get

    $$ \begin{array}{lll} &&(\log_{\mu}\widetilde{A}_{1})^{k_{1}} \bigotimes (\log_{\mu}\widetilde{A}_{1})^{k_{2}}\\ &=&\left\langle \left[(1-\log_{\mu}{a}_{1})^{k_{1}},(1-\log_{\mu}{b}_{1})^{k_{1}},(1-\log_{\mu}{c}_{1})^{k_{1}},(1-\log_{\mu}{d}_{1})^{k_{1}}\right],\left[1-(1-\log_{\mu}(1-{l}_{1}))^{k_{1}},\right.\right.\\ &&\left.1-(1-\log_{\mu}(1-{m}_{1}))^{k_{1}}, 1-(1-\log_{\mu}(1-{n}_{1}))^{k_{1}}, 1-(1-\log_{\mu}(1-{p}_{1}))^{k_{1}}\right],\left[1-(1-\log_{\mu}(1-{x}_{1}))^{k_{1}},\right.\\ &&\left.\left.1-(1-\log_{\mu}(1-{y}_{1}))^{k_{1}}, 1-(1-\log_{\mu}(1-{v}_{1}))^{k_{1}}, 1-(1-\log_{\mu}(1-{w}_{1}))^{k_{1}}\right]\right\rangle \bigotimes\left\langle \left[(1-\log_{\mu}{a}_{1})^{k_{2}},(1-\log_{\mu}{b}_{1})^{k_{2}},\right.\right.\\ &&\left.(1-\log_{\mu}{c}_{1})^{k_{2}},(1-\log_{\mu}{d}_{1})^{k_{2}}\right],\left[1-(1-\log_{\mu}(1-{l}_{1}))^{k_{2}}, 1-(1-\log_{\mu}(1-{m}_{1}))^{k_{2}}, 1-(1-\log_{\mu}(1-{n}_{1}))^{k_{2}},\right.\\ &&\left.1-(1-\log_{\mu}(1-{p}_{1}))^{k_{2}}\right],\left[1-(1-\log_{\mu}(1-{x}_{1}))^{k_{2}}, 1-(1-\log_{\mu}(1-{y}_{1}))^{k_{2}}, 1-(1-\log_{\mu}(1-{v}_{1}))^{k_{2}},\right.\\ &&\left.\left.1-(1-\log_{\mu}(1-{w}_{1}))^{k_{2}}\right]\right\rangle\\ &=&\left\langle \left[(1-\log_{\mu}{a}_{1})^{k_{1} +k_{2}},(1-\log_{\mu}{b}_{1})^{k_{1} +k_{2}},(1-\log_{\mu}{c}_{1})^{k_{1} +k_{2}},(1-\log_{\mu}{d}_{1})^{k_{1} +k_{2}}\right],\right.\\ &&\left[1-(1-\log_{\mu}(1-{l}_{1}))^{k_{1} +k_{2}}, 1-(1-\log_{\mu}(1-{m}_{1}))^{k_{1} +k_{2}}, 1-(1-\log_{\mu}(1-{n}_{1}))^{k_{1} +k_{2}},\right.\\ &&\left.1-(1-\log_{\mu}(1-{p}_{1}))^{k_{1} +k_{2}}\right],\left[1-(1-\log_{\mu}(1-{x}_{1}))^{k_{1} +k_{2}}, 1-(1-\log_{\mu}(1-{y}_{1}))^{k_{1} +k_{2}},\right.\\ &&\left.\left.1-(1-\log_{\mu}(1-{v}_{1}))^{k_{1} +k_{2}}, 1-(1-\log_{\mu}(1-{w}_{1}))^{k_{1} +k_{2}}\right]\right\rangle\\ &=&(\log_{\mu}\widetilde{A}_{1})^{k_{1}+ k_{2}} \end{array} $$
  5. v)

    The proof of this part is trivial and hence omitted.

4 Aggregation operators

In decision-making methods, two types of aggregation operators are generally used, namely the weighted arithmetic averaging operator and the weighted geometric averaging operator. Here, we propose two new aggregation operators for TNNs, namely Larm and Lgeo, which are defined as follows:

Definition 4.1

Let \(\widetilde {A}_{t}= \langle ({a}_{t},{b}_{t}, {c}_{t}, {d}_{t}),({l}_{t}, {m}_{t}, {n}_{t},{p}_{t}),\) (xt, yt, vt, wt)〉 (t = 1, 2,⋯ , k) be any collection of TNNs and 0 < μ ≤ min (at, bt, ct, dt, 1 − lt, 1 − mt, 1 − nt, 1 − pt, 1 − xt, 1 − yt, 1 − vt, 1 − wt) < 1. The logarithmic trapezoidal neutrosophic weighted arithmetic aggregation operator \(L_{arm}: {\Gamma }^{k}\rightarrow {\Gamma }\) is defined as

$$ \begin{array}{@{}rcl@{}} L_{arm}(\widetilde{A}_{1}, \widetilde{A}_{2},\cdots,\widetilde{A}_{k})&=&\phi_{1}\log_{\mu_{1}}\widetilde{A}_{1}\bigoplus \phi_{2}\log_{\mu_{2}}\widetilde{A}_{2}\bigoplus\\ &&\cdots\bigoplus\phi_{k}\log_{\mu_{k}}\widetilde{A}_{k} , \end{array} $$

where ω = (ϕ1, ϕ2,⋯ , ϕk)T is the weight vector with ϕt ≥ 0 and \(\displaystyle \sum \limits _{t=1}^{k}\phi _{t} =1\).

Note 4.1.1

For convenience, we denote \(L_{arm}(\widetilde{A}_{1}, \widetilde{A}_{2},\cdots,\widetilde{A}_{k}) =L_{arm}.\)

Theorem 4.1

Let \(\widetilde {A}_{s}= \langle ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}),({l}_{s}, {m}_{s}, {n}_{s},{p}_{s}),\) (xs, ys, vs, ws)〉 (s = 1, 2,⋯ , p) be any collection of TNNs. Then the aggregated value by using Larm operator is also TNN and is given by

$$ L_{arm} =\begin{cases} \left\langle \left[1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{a}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{b}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{c}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{d}_{s})^{\phi_{s}}\right],\right.\\ \left[\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{l}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{m}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{n}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{p}_{s}))^{\phi_{s}}\right],\\ \left.\left[\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{x}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{y}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{v}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{w}_{s}))^{\phi_{s}}\right]\right\rangle ;\\ 0< \mu_{s} \leq \min \left( {a}_{s},{b}_{s}, {c}_{s}, {d}_{s}, 1-{l}_{s}, 1-{m}_{s}, 1-{n}_{s}, 1-{p}_{s}, 1-{x}_{s}, 1-{y}_{s}, 1-{v}_{s}, 1-{w}_{s}\right)< 1 \\ \left\langle \left[1-\prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}{a}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}{b}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}{c}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}{d}_{s})^{\phi_{s}}\right],\right.\\ \left[\prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}(1-{l}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}(1-{m}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}(1-{n}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}(1-{p}_{s}))^{\phi_{s}}\right],\\ \left.\left[\prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}(1-{x}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}(1-{y}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}(1-{v}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}(1-{w}_{s}))^{\phi_{s}}\right]\right\rangle ;\\ 0< \frac{1}{\mu_{s}} \leq \min \left( 
{a}_{s},{b}_{s}, {c}_{s}, {d}_{s}, 1-{l}_{s}, 1-{m}_{s}, 1-{n}_{s}, 1-{p}_{s}, 1-{x}_{s}, 1-{y}_{s}, 1-{v}_{s}, 1-{w}_{s}\right)< 1 \end{cases} $$
(1)
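To make the first case of Eq. (1) concrete, the Larm aggregation can be sketched in a few lines of Python. This is an illustrative sketch rather than the authors' code; it assumes each TNN is encoded as three 4-tuples ((a,b,c,d), (l,m,n,p), (x,y,v,w)) and that each base satisfies 0 < μs ≤ min(components) < 1, so every logarithm lies in (0, 1].

```python
import math

def log_comp(x, mu):
    # log base mu of x; lies in (0, 1] whenever 0 < mu <= x < 1
    return math.log(x) / math.log(mu)

def l_arm(tnns, mus, phis):
    """Sketch of Eq. (1), first case: tnns is a list of
    ((a,b,c,d), (l,m,n,p), (x,y,v,w)) triples, mus the bases,
    phis the weights with sum(phis) == 1."""
    truth = [1.0] * 4
    indet = [1.0] * 4
    fals = [1.0] * 4
    for (T, I, F), mu, phi in zip(tnns, mus, phis):
        for i in range(4):
            truth[i] *= log_comp(T[i], mu) ** phi
            indet[i] *= log_comp(1 - I[i], mu) ** phi
            fals[i] *= log_comp(1 - F[i], mu) ** phi
    # truth parts are complemented, matching the 1 - prod(...) pattern
    return tuple(1 - t for t in truth), tuple(indet), tuple(fals)
```

With identical arguments, a common base and weights summing to one, the weighted products collapse to a single factor, which is exactly the idempotency property of Lemma 4.1.1.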

Proof

To prove Theorem 4.1, we use mathematical induction on s, where 0 < μs ≤ min (as, bs, cs, ds, 1 − ls, 1 − ms, 1 − ns, 1 − ps, 1 − xs, 1 − ys, 1 − vs, 1 − ws) < 1 (s = 1, 2,⋯ , p). When s = 2, we get

$$ \begin{array}{lll} &&L_{arm}(\widetilde{A}_{1}, \widetilde{A}_{2})\\ &=&\phi_{1}\log_{\mu_{1}}\widetilde{A}_{1}\bigoplus \phi_{2}\log_{\mu_{2}}\widetilde{A}_{2}\\ &=&\left\langle \left[1- (\log_{\mu_{1}}{a}_{1})^{\phi_{1}}, 1- (\log_{\mu_{1}}{b}_{1})^{\phi_{1}}, 1- (\log_{\mu_{1}}{c}_{1})^{\phi_{1}}, 1- (\log_{\mu_{1}}{d}_{1})^{\phi_{1}}\right],\left[(\log_{\mu_{1}}(1-{l}_{1}))^{\phi_{1}},(\log_{\mu_{1}}(1-{m}_{1}))^{\phi_{1}},\right.\right.\\ &&\left.\left.(\log_{\mu_{1}}(1-{n}_{1}))^{\phi_{1}},(\log_{\mu_{1}}(1-{p}_{1}))^{\phi_{1}}\right],\left[(\log_{\mu_{1}}(1-{x}_{1}))^{\phi_{1}},(\log_{\mu_{1}}(1-{y}_{1}))^{\phi_{1}},(\log_{\mu_{1}}(1-{v}_{1}))^{\phi_{1}},(\log_{\mu_{1}}(1-{w}_{1}))^{\phi_{1}}\right]\right\rangle\\ &&\bigoplus \left\langle \left[1- (\log_{\mu_{2}}{a}_{2})^{\phi_{2}}, 1- (\log_{\mu_{2}}{b}_{2})^{\phi_{2}}, 1- (\log_{\mu_{2}}{c}_{2})^{\phi_{2}}, 1- (\log_{\mu_{2}}{d}_{2})^{\phi_{2}}\right],\left[(\log_{\mu_{2}}(1-{l}_{2}))^{\phi_{2}},(\log_{\mu_{2}}(1-{m}_{2}))^{\phi_{2}},\right.\right.\\ &&\left.\left.(\log_{\mu_{2}}(1-{n}_{2}))^{\phi_{2}},(\log_{\mu_{2}}(1-{p}_{2}))^{\phi_{2}}\right],\left[(\log_{\mu_{2}}(1-{x}_{2}))^{\phi_{2}},(\log_{\mu_{2}}(1-{y}_{2}))^{\phi_{2}},(\log_{\mu_{2}}(1-{v}_{2}))^{\phi_{2}},(\log_{\mu_{2}}(1-{w}_{2}))^{\phi_{2}}\right]\right\rangle\\ &=&\left\langle \left[1- (\log_{\mu_{1}}{a}_{1})^{\phi_{1}}(\log_{\mu_{2}}{a}_{2})^{\phi_{2}}, 1- (\log_{\mu_{1}}{b}_{1})^{\phi_{1}}(\log_{\mu_{2}}{b}_{2})^{\phi_{2}}, 1{-} (\log_{\mu_{1}}{c}_{1})^{\phi_{1}}(\log_{\mu_{2}}{c}_{2})^{\phi_{2}}, 1{-} (\log_{\mu_{1}}{d}_{1})^{\phi_{1}}(\log_{\mu_{2}}{d}_{2})^{\phi_{2}}\right],\right.\\ &&\left[(\log_{\mu_{1}}(1-{l}_{1}))^{\phi_{1}}(\log_{\mu_{2}}(1-{l}_{2}))^{\phi_{2}},(\log_{\mu_{1}}(1-{m}_{1}))^{\phi_{1}}(\log_{\mu_{2}}(1-{m}_{2}))^{\phi_{2}},(\log_{\mu_{1}}(1-{n}_{1}))^{\phi_{1}}(\log_{\mu_{2}}(1-{n}_{2}))^{\phi_{2}},\right.\\ 
&&\left.(\log_{\mu_{1}}(1-{p}_{1}))^{\phi_{1}}(\log_{\mu_{2}}(1-{p}_{2}))^{\phi_{2}}\right],\left[(\log_{\mu_{1}}(1-{x}_{1}))^{\phi_{1}}(\log_{\mu_{2}}(1-{x}_{2}))^{\phi_{2}},(\log_{\mu_{1}}(1-{y}_{1}))^{\phi_{1}}(\log_{\mu_{2}}(1-{y}_{2}))^{\phi_{2}},\right.\\ &&\left.\left.(\log_{\mu_{1}}(1-{v}_{1}))^{\phi_{1}}(\log_{\mu_{2}}(1-{v}_{2}))^{\phi_{2}},(\log_{\mu_{1}}(1-{w}_{1}))^{\phi_{1}}(\log_{\mu_{2}}(1-{w}_{2}))^{\phi_{2}}\right]\right\rangle\\ &=&\left\langle \left[1-\prod\limits_{s=1}^{2}(\log_{\mu_{s}}{a}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{2}(\log_{\mu_{s}}{b}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{2}(\log_{\mu_{s}}{c}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{2}(\log_{\mu_{s}}{d}_{s})^{\phi_{s}}\right],\right.\\ && \left[\prod\limits_{s=1}^{2}(\log_{\mu_{s}}(1-{l}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{2}(\log_{\mu_{s}}(1-{m}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{2}(\log_{\mu_{s}}(1-{n}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{2}(\log_{\mu_{s}}(1-{p}_{s}))^{\phi_{s}}\right],\\ &&\left.\left[\prod\limits_{s=1}^{2}(\log_{\mu_{s}}(1-{x}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{2}(\log_{\mu_{s}}(1-{y}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{2}(\log_{\mu_{s}}(1-{v}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{2}(\log_{\mu_{s}}(1-{w}_{s}))^{\phi_{s}}\right]\right\rangle \end{array} $$

Thus, the Theorem is true for s= 2. Let us assume that the Theorem is true for s = p. Then

$$ \begin{array}{lll} &&L_{arm}(\widetilde{A}_{1}, \widetilde{A}_{2},\cdots,\widetilde{A}_{p})\\ &=&\left\langle [1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{a}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{b}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{c}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{d}_{s})^{\phi_{s}}\right], \left[\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{l}_{s}))^{\phi_{s}},\right.\\ &&\left.\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{m}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{n}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{p}_{s}))^{\phi_{s}}\right],\left[\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{x}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{y}_{s}))^{\phi_{s}},\right.\\ &&\left.\left.\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{v}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{w}_{s}))^{\phi_{s}}\right]\right\rangle \end{array} $$

Now,

$$ \begin{array}{lll} &&L_{arm}(\widetilde{A}_{1}, \widetilde{A}_{2},\cdots,\widetilde{A}_{p}, \widetilde{A}_{p+1})\\ &=&\left\langle \left[1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{a}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{b}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{c}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{d}_{s})^{\phi_{s}}\right], \left[\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{l}_{s}))^{\phi_{s}},\right.\right.\\ &&\left.\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{m}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{n}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{p}_{s}))^{\phi_{s}}\right],\left[\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{x}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{y}_{s}))^{\phi_{s}},\right.\\ &&\left.\left.\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{v}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{w}_{s}))^{\phi_{s}}\right]\right\rangle \bigoplus \phi_{p+1}\log_{\mu_{p+1}}\widetilde{A}_{p+1}\\ &=&\left\langle \left[1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{a}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{b}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{c}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{d}_{s})^{\phi_{s}}\right], \left[\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{l}_{s}))^{\phi_{s}},\right.\right.\\ &&\left.\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{m}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{n}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{p}_{s}))^{\phi_{s}}\right],\left[\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{x}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{y}_{s}))^{\phi_{s}},\right. \end{array} $$
$$ \begin{array}{lll} &&\left.\left.\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{v}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{w}_{s}))^{\phi_{s}}\right]\right\rangle \bigoplus \left\langle \left[\vphantom{\prod\limits_{s=1}^{p}}1- (\log_{\mu_{p+1}}{a}_{p+1})^{\phi_{p+1}}, 1- (\log_{\mu_{p+1}}{b}_{p+1})^{\phi_{p+1}},\right.\right.\\ &&\left.\vphantom{\prod\limits_{s=1}^{p}}1- (\log_{\mu_{p+1}}{c}_{p+1})^{\phi_{p+1}}, 1- (\log_{\mu_{p+1}}{d}_{p+1})^{\phi_{p+1}}\right],\left[\vphantom{\prod\limits_{s=1}^{p}}(\log_{\mu_{p+1}}(1-{l}_{p+1}))^{\phi_{p+1}},(\log_{\mu_{p+1}}(1-{m}_{p+1}))^{\phi_{p+1}},\right.\\ &&\left.\vphantom{\prod\limits_{s=1}^{p}}(\log_{\mu_{p+1}}(1-{n}_{p+1}))^{\phi_{p+1}},(\log_{\mu_{p+1}}(1-{p}_{p+1}))^{\phi_{p+1}}\right],\left[\vphantom{\prod\limits_{s=1}^{p}}(\log_{\mu_{p+1}}(1-{x}_{p+1}))^{\phi_{p+1}},(\log_{\mu_{p+1}}(1-{y}_{p+1}))^{\phi_{p+1}},\right.\\ &&\left.\left.(\log_{\mu_{p+1}}(1-{v}_{p+1}))^{\phi_{p+1}},(\log_{\mu_{p+1}}(1-{w}_{p+1}))^{\phi_{p+1}}\vphantom{\prod\limits_{s=1}^{p}}\right]\right\rangle\\ &=&\left\langle \left[1-\prod\limits_{s=1}^{p+1}(\log_{\mu_{s}}{a}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p+1}(\log_{\mu_{s}}{b}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p+1}(\log_{\mu_{s}}{c}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p+1}(\log_{\mu_{s}}{d}_{s})^{\phi_{s}}\right], \left[\prod\limits_{s=1}^{p+1}(\log_{\mu_{s}}(1-{l}_{s}))^{\phi_{s}},\right.\right.\\ &&\left.\prod\limits_{s=1}^{p+1}(\log_{\mu_{s}}(1-{m}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p+1}(\log_{\mu_{s}}(1-{n}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p+1}(\log_{\mu_{s}}(1-{p}_{s}))^{\phi_{s}}\right],\left[\prod\limits_{s=1}^{p+1}(\log_{\mu_{s}}(1-{x}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{p+1}(\log_{\mu_{s}}(1-{y}_{s}))^{\phi_{s}},\right.\\ &&\left.\prod\limits_{s=1}^{p+1}(\log_{\mu_{s}}(1-{v}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{p+1}(\log_{\mu_{s}}(1-{w}_{s}))^{\phi_{s}}]\right\rangle \end{array} $$

This shows that the theorem is valid for s = p + 1. Hence, by mathematical induction, the above theorem holds for all positive integers s.

Again, if \( 0< \frac {1}{\mu _{s}} \leq \) min \(\left ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}, 1-{l}_{s}, 1-{m}_{s},\right .\)\(\left . 1-{n}_{s}, 1-{p}_{s}, 1-{x}_{s}, 1-{y}_{s}, 1-{v}_{s}, 1-{w}_{s}\right )< 1\), then, proceeding in a similar manner as in the above case, we also get

$$ \begin{array}{lll} &&L_{arm}(\widetilde{A}_{1}, \widetilde{A}_{2},\cdots,\widetilde{A}_{p})\\ &=&\left\langle \left[1-\prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}{a}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}{b}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}{c}_{s})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}{d}_{s})^{\phi_{s}}\right],\right.\\ && \left[\prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}(1-{l}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}(1-{m}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}(1-{n}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}(1-{p}_{s}))^{\phi_{s}}\right],\\ &&\left. \left[\prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}(1-{x}_{s}))^{\phi_{s}},\prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}(1-{y}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}(1-{v}_{s}))^{\phi_{s}}, \prod\limits_{s=1}^{p}(\log_{\frac{1}{\mu_{s}}}(1-{w}_{s}))^{\phi_{s}}\right]\right\rangle. \end{array} $$

4.1 Properties of aggregation operator

In this subsection, the properties of the Larm operator are presented. Here, it is assumed that μ1 = μ2 = ⋯ = μp = μ (say) and 0 < μ ≤ min \(\left ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}, 1-{l}_{s}, 1-{m}_{s}, 1-{n}_{s}, 1-{p}_{s}, 1-{x}_{s},\right .\left .1-{y}_{s}, 1-{v}_{s}, 1-{w}_{s}\right )< 1\). Also, let ω = (ϕ1, ϕ2,⋯ , ϕp)T be the weight vector such that ϕs ≥ 0 and \(\displaystyle \sum \limits _{s=1}^{p}\phi _{s}=1.\)

Lemma 4.1.1 (Idempotency of the Larm operator)

If \( \widetilde {A}_{s}=\widetilde {A} \), ∀ s, where \(\widetilde {A}=\langle ({a},{b}, {c}, {d}),({l}, {m}, {n},{p}),({x}, {y}, {v}, {w} )\rangle \) then

$$ L_{arm}(\widetilde{A}_{1}, \widetilde{A}_{2},\cdots,\widetilde{A}_{p})= \log_{\mu}\widetilde{A} $$

Proof

Since \( \widetilde {A}_{s}=\widetilde {A} \), ∀ s, where \(\widetilde {A}=\langle ({a},{b}, {c}, {d}),\) (l, m, n, p),(x, y, v, w)〉 is a TNN, from Theorem 4.1 we get

$$ \begin{array}{lll} &&L_{arm}(\widetilde{A}_{1}, \widetilde{A}_{2},\cdots,\widetilde{A}_{p})\\ &=&\left\langle \left[1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{a})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{b})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{c})^{\phi_{s}}, 1-\prod\limits_{s=1}^{p}(\log_{\mu_{s}}{d})^{\phi_{s}}\right],\left[\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{l}))^{\phi_{s}},\right.\right.\\ &&\left.\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{m}))^{\phi_{s}}, \prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{n}))^{\phi_{s}},\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{p}))^{\phi_{s}}\right],\left[\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{x}))^{\phi_{s}},\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{y}))^{\phi_{s}},\right.\\ &&\left.\left.\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{v}))^{\phi_{s}},\prod\limits_{s=1}^{p}(\log_{\mu_{s}}(1-{w}))^{\phi_{s}}\right]\right\rangle \end{array} $$
$$ \begin{array}{lll} &=&\left\langle \left[1-(\log_{\mu}{a})^{\sum\phi_{s}}, 1-(\log_{\mu}{b})^{\sum\phi_{s}}, 1-(\log_{\mu}{c})^{\sum\phi_{s}}, 1-(\log_{\mu}{d})^{\sum\phi_{s}}\right],\left[(\log_{\mu}(1-{l}))^{\sum\phi_{s}},(\log_{\mu}(1-{m}))^{\sum\phi_{s}},\right.\right.\\ &&(\log_{\mu}(1-{n}))^{\sum\phi_{s}},(\log_{\mu}(1-{p}))^{\sum\phi_{s}}],[(\log_{\mu}(1-{x}))^{\sum\phi_{s}},(\log_{\mu}(1-{y}))^{\sum\phi_{s}},(\log_{\mu}(1-{v}))^{\sum\phi_{s}},\\ &&\left.\left.(\log_{\mu}(1-{w}))^{\sum\phi_{s}}\right]\right\rangle, ~~~\left[~\text{since}~\mu_{s}=\mu~ \text{and}~ \widetilde{A}_{s}=\widetilde{A} ~\forall~ s,~\text{and}~\sum\limits_{s=1}^{p}\phi_{s}=1\right] \\ &= &\log_{\mu}\widetilde{A}. \end{array} $$
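The collapse in the last step relies only on the weights summing to one. A short numerical check (illustrative, with an arbitrary choice of a and μ satisfying 0 < μ ≤ a < 1) confirms that the weighted product of identical logarithmic factors reduces to the single logarithm:

```python
import math

a, mu = 0.6, 0.5        # any values with 0 < mu <= a < 1
phis = [0.2, 0.3, 0.5]  # weights summing to 1

prod = 1.0
for phi in phis:
    prod *= (math.log(a) / math.log(mu)) ** phi

# since sum(phis) == 1, the product equals log_mu(a) itself
assert abs(prod - math.log(a) / math.log(mu)) < 1e-12
```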

Lemma 4.1.2 (Boundedness of the Larm operator)

Let \(\widetilde {A}_{s}= \left \langle \left ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}\right ),\left ({l}_{s}, {m}_{s}, {n}_{s},{p}_{s}\right ),\left ({x}_{s}, {y}_{s}, {v}_{s}, {w}_{s} \right )\right \rangle\) (s = 1, 2,⋯ , p) be any collection of TNNs and let \(\widetilde {A}_{min} = \left \langle \left [\min \limits_{s} {a}_{s},\min \limits_{s} {b}_{s}, \min \limits_{s} {c}_{s}, \min \limits_{s} {d}_{s}\right ],\left [\max \limits_{s} {l}_{s}, \max \limits_{s} {m}_{s}, \max \limits_{s} {n}_{s}, \max \limits_{s} {p}_{s}\right ],\left [\max \limits_{s} {x}_{s}, \max \limits_{s} {y}_{s}, \max \limits_{s} {v}_{s}, \max \limits_{s} {w}_{s}\right ]\right \rangle\), \(\widetilde {A}_{max}= \left \langle \left [\max \limits_{s} {a}_{s},\max \limits_{s} {b}_{s},\max \limits_{s} {c}_{s},\max \limits_{s} {d}_{s}\right ],\left [\min \limits_{s} {l}_{s}, \min \limits_{s} {m}_{s},\min \limits_{s} {n}_{s},\min \limits_{s} {p}_{s}\right ],\left [\min \limits_{s} {x}_{s},\min \limits_{s} {y}_{s},\min \limits_{s} {v}_{s},\min \limits_{s} {w}_{s}\right ]\right \rangle\), \(\widetilde {A^{-}}=L_{arm}(\widetilde {A}_{min}, \widetilde {A}_{min}, \cdots,\widetilde {A}_{min})\) and \(\widetilde {A^{+}}=L_{arm}(\widetilde {A}_{max}, \widetilde {A}_{max}, \cdots,\widetilde {A}_{max}).\)

Then we have

$$ \widetilde{A^{-}}\leq L_{arm}(\widetilde{A}_{1},\widetilde{A}_{2},\cdots,\widetilde{A}_{p})\leq \widetilde{A^{+}} $$

Proof

The proof of the Lemma follows from the Theorem 4.1 and the Lemma 4.1.1. □

Lemma 4.1.3 (Monotonicity of the Larm operator)

Let \(\widetilde {A}_{s}= \left \langle \left ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}\right ),\left ({l}_{s}, {m}_{s}, {n}_{s},{p}_{s}\right ),\left ({x}_{s}, {y}_{s}, {v}_{s}, {w}_{s} \right )\right \rangle \) and \({\widetilde A^{\prime }_{s}}= \left \langle \left ({a^{\prime }_{s}},{b^{\prime }_{s}}, {c^{\prime }_{s}}, {d^{\prime }_{s}}\right ),\left ({l^{\prime }_{s}}, {m^{\prime }_{s}}, {n^{\prime }_{s}},{p^{\prime }_{s}}\right ), \left ({x^{\prime }_{s}}, {y^{\prime }_{s}}, {v^{\prime }_{s}}, {w^{\prime }_{s}}\right )\right \rangle ,\) (s = 1, 2,⋯ , p) be two collections of TNNs. If \(\widetilde {A}_{s}\leq {\widetilde A^{\prime }_{s}}~~\forall ~s\), then

$$ L_{arm}(\widetilde{A}_{1},\widetilde{A}_{2},\cdots,\widetilde{A}_{p})\leq L_{arm}({\widetilde A^{\prime}_{1}},{\widetilde A^{\prime}_{2}},\cdots,{\widetilde A^{\prime}_{p}}) $$

Proof

The proof of the above lemma is similar to that of Lemma 4.1.2 and hence omitted. □

Definition 4.2

Let \(\widetilde {A}_{s}= \left \langle \left ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}\right ),\left ({l}_{s}, {m}_{s}, {n}_{s},{p}_{s}\right ),\right .\left .\left ({x}_{s}, {y}_{s}, {v}_{s}, {w}_{s} \right )\right \rangle ,\) (s = 1, 2,⋯ , p) be any collection of TNNs and 0 < μs ≤ min \(\left ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}, 1-{l}_{s}, 1-{m}_{s}, 1-{n}_{s}, 1-{p}_{s}, 1-{x}_{s},\right .\left .1-{y}_{s}, 1-{v}_{s}, 1-{w}_{s}\right )< 1.\) The logarithmic trapezoidal neutrosophic weighted geometric aggregation operator \(L_{geo}: {\Gamma }^{p}\rightarrow {\Gamma }\) is defined as

$$ \begin{array}{@{}rcl@{}} L_{geo}\left( \widetilde{A}_{1}, \widetilde{A}_{2},\cdots,\widetilde{A}_{p}\right) &=& \left( \log_{\mu_{1}}\widetilde{A}_{1}\right)^{\phi_{1}}\bigotimes \left( \log_{\mu_{2}}\widetilde{A}_{2}\right)^{\phi_{2}}\\ &&\bigotimes\cdots\bigotimes \left( \log_{\mu_{p}}\widetilde{A}_{p}\right)^{\phi_{p}} \end{array} $$

where ω = (ϕ1, ϕ2,⋯ , ϕp)T is the weight vector with ϕs ≥ 0 and \(\displaystyle \sum \limits _{s=1}^{p}\phi _{s} =1\).

Theorem 4.2

Let \(\widetilde {A}_{s}= \left \langle \left ({a}_{s},{b}_{s}, {c}_{s}, {d}_{s}\right ),\left ({l}_{s}, {m}_{s}, {n}_{s},{p}_{s}\right ),\right .\left .\left ({x}_{s}, {y}_{s}, {v}_{s}, {w}_{s} \right )\right \rangle \) (s = 1, 2,⋯ , p) be any collection of TNNs. Then the aggregated value by using Lgeo operator is also TNN and is given by

$$ L_{geo} =\begin{cases} \left\langle \left[{\prod}_{s=1}^{p}(1-\log_{\mu_{s}}{a}_{s})^{\phi_{s}},{\prod}_{s=1}^{p}(1-\log_{\mu_{s}}{b}_{s})^{\phi_{s}},{\prod}_{s=1}^{p}(1-\log_{\mu_{s}}{c}_{s})^{\phi_{s}},{\prod}_{s=1}^{p}(1-\log_{\mu_{s}}{d}_{s})^{\phi_{s}}\right],\right.\\ \left[1-{\prod}_{s=1}^{p}(1-\log_{\mu_{s}}(1-{l}_{s}))^{\phi_{s}}, 1-{\prod}_{s=1}^{p}(1-\log_{\mu_{s}}(1-{m}_{s}))^{\phi_{s}}, 1-{\prod}_{s=1}^{p}(1-\log_{\mu_{s}}(1-{n}_{s}))^{\phi_{s}},\right.\\ \left. 1-{\prod}_{s=1}^{p}(1-\log_{\mu_{s}}(1-{p}_{s}))^{\phi_{s}}\right], \left[1-{\prod}_{s=1}^{p}(1-\log_{\mu_{s}}(1-{x}_{s}))^{\phi_{s}}, 1-{\prod}_{s=1}^{p}(1-\log_{\mu_{s}}(1-{y}_{s}))^{\phi_{s}},\right.\\ \left.\left.1-{\prod}_{s=1}^{p}(1-\log_{\mu_{s}}(1-{v}_{s}))^{\phi_{s}}, 1-{\prod}_{s=1}^{p}(1-\log_{\mu_{s}}(1-{w}_{s}))^{\phi_{s}}\right]\right\rangle ;\\ 0< \mu_{s} \leq \min \left( {a}_{s},{b}_{s}, {c}_{s}, {d}_{s}, 1-{l}_{s}, 1-{m}_{s}, 1-{n}_{s}, 1-{p}_{s}, 1-{x}_{s}, 1-{y}_{s}, 1-{v}_{s}, 1-{w}_{s}\right)< 1 \\ \left\langle \left[{\prod}_{s=1}^{p}(1-\log_{\frac{1}{\mu_{s}}}{a}_{s})^{\phi_{s}},{\prod}_{s=1}^{p}(1-\log_{\frac{1}{\mu_{s}}}{b}_{s})^{\phi_{s}}, \prod\limits_{s=1}^{p}(1-\log_{\frac{1}{\mu_{s}}}{c}_{s})^{\phi_{s}},{\prod}_{s=1}^{p}(1-\log_{\frac{1}{\mu_{s}}}{d}_{s})^{\phi_{s}}\right],\right.\\ \left[1-{\prod}_{s=1}^{p}(1-\log_{\frac{1}{\mu_{s}}}(1-{l}_{s}))^{\phi_{s}}, 1-{\prod}_{s=1}^{p}(1-\log_{\frac{1}{\mu_{s}}}(1-{m}_{s}))^{\phi_{s}}, 1- {\prod}_{s=1}^{p}(1-\log_{\frac{1}{\mu_{s}}}(1-{n}_{s}))^{\phi_{s}},\right.\\ \left. 
1- {\prod}_{s=1}^{p}(1-\log_{\frac{1}{\mu_{s}}}(1-{p}_{s}))^{\phi_{s}}\right], \left[1-{\prod}_{s=1}^{p}(1-\log_{\frac{1}{\mu_{s}}}(1-{x}_{s}))^{\phi_{s}}, 1-{\prod}_{s=1}^{p}(1-\log_{\frac{1}{\mu_{s}}}(1-{y}_{s}))^{\phi_{s}},\right.\\ \left.\left.1- {\prod}_{s=1}^{p}(1-\log_{\frac{1}{\mu_{s}}}(1-{v}_{s}))^{\phi_{s}}, 1- {\prod}_{s=1}^{p}(1-\log_{\frac{1}{\mu_{s}}}(1-{w}_{s}))^{\phi_{s}}\right]\right\rangle ;\\ 0< \frac{1}{\mu_{s}} \leq \min \left( {a}_{s},{b}_{s}, {c}_{s}, {d}_{s}, 1-{l}_{s}, 1-{m}_{s}, 1-{n}_{s}, 1-{p}_{s}, 1-{x}_{s}, 1-{y}_{s}, 1-{v}_{s}, 1-{w}_{s}\right)< 1 \end{cases} $$
(2)
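The first case of Eq. (2) can be sketched analogously to the Larm case; the roles of the truth and indeterminacy/falsity parts are swapped, with products of (1 − log) terms on the truth side. Again, this is an illustrative sketch under the same encoding assumptions, not the authors' code.

```python
import math

def log_comp(x, mu):
    # log base mu of x; lies in (0, 1] whenever 0 < mu <= x < 1
    return math.log(x) / math.log(mu)

def l_geo(tnns, mus, phis):
    """Sketch of Eq. (2), first case: each TNN is
    ((a,b,c,d), (l,m,n,p), (x,y,v,w)) with 0 < mu_s <= min(components) < 1."""
    truth = [1.0] * 4
    indet = [1.0] * 4
    fals = [1.0] * 4
    for (T, I, F), mu, phi in zip(tnns, mus, phis):
        for i in range(4):
            truth[i] *= (1 - log_comp(T[i], mu)) ** phi
            indet[i] *= (1 - log_comp(1 - I[i], mu)) ** phi
            fals[i] *= (1 - log_comp(1 - F[i], mu)) ** phi
    # indeterminacy and falsity parts are complemented in Eq. (2)
    return tuple(truth), tuple(1 - t for t in indet), tuple(1 - t for t in fals)
```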

Proof

The proof of the theorem is exactly the same as that of Theorem 4.1 and hence omitted. □

Note 4.2.1

For convenience, we denote \(L_{geo}(\widetilde {A}_{1}, \widetilde {A}_{2},\cdots ,\) \({\widetilde {A}_{p}})=L_{geo}.\)

5 MCGDM technique based on Larm and Lgeo operators

MCGDM is a branch of operational research. In an MCGDM technique, a group of experts/decision-makers is involved in selecting the best alternative from a given set of feasible alternatives with respect to some given criteria. Here, we introduce an MCGDM technique by utilizing the operators Larm and Lgeo, scalar multiplication and addition of TNNs, and a defuzzification method. In this technique, we take the influence of the decision-makers' weights into account in the decision-making procedure. The MCGDM technique is formulated as follows:

Let U = {U1, U2,⋯ , Uu} be the set of ‘u’ different alternatives and V = {V1, V2,⋯ , Vv} be the set of ‘v’ different attributes with the associated weight vector ω = (ϕ1, ϕ2,⋯ , ϕv)T, where ϕt ≥ 0 and \(\displaystyle \sum \limits _{t=1}^{v}\phi _{t} =1\). Also, we take the set of decision-makers W = {W1, W2,⋯ , Ww}, whose weight values are assumed to be Ω = {Ω1,Ω2,⋯ ,Ωw}, where Ωk ≥ 0 (k = 1, 2,⋯ , w) and \(\displaystyle \sum \limits _{k=1}^{w}{\Omega }_{k}=1\). The weight values of the decision-makers are assigned according to their judgement ability, experience and domain knowledge. Based on the judgement of the decision-makers, we first construct the decision matrices associated with the different alternatives. The evaluated values of the alternatives on the attributes are given as

$$ \begin{array}{lll} \widetilde{A}^{r}_{ij} &=&\left\langle \left( {a}^{r}_{ij},{b}^{r}_{ij}, {c}^{r}_{ij}, {d}^{r}_{ij}\right),\left( {l}^{r}_{ij}, {m}^{r}_{ij}, {n}^{r}_{ij},{p}^{r}_{ij}\right),\right.\\&& \left.\left( {x}^{r}_{ij}, {y}^{r}_{ij}, {v}^{r}_{ij}, {w}^{r}_{ij} \right)\right\rangle,~i=1, 2,\cdots,u,\\&&j=1, 2,\cdots,v,~r=1, 2,\cdots,w. \end{array} $$

The associated decision matrix (DM) is characterized as follows:

where r = 1, 2,⋯ , w.

Let the logarithmic base indices for the TNNs be \(\mu ^{r}_{ij}\) (i = 1, 2,⋯ , u; j = 1, 2,⋯ , v), where \(0< \mu ^{r}_{ij} \leq \) min \(({a}^{r}_{ij}, {b}^{r}_{ij}, {c}^{r}_{ij}, {d}^{r}_{ij}, 1-{l}^{r}_{ij}, 1-{m}^{r}_{ij}, 1-{n}^{r}_{ij}, 1-{p}^{r}_{ij}, 1-{x}^{r}_{ij}, 1-{y}^{r}_{ij}, 1-{v}^{r}_{ij}, 1-{w}^{r}_{ij})< 1\); these are summarised in matrix form as follows:

where r = 1, 2,⋯ , w.

Now, our MCGDM technique under TN environment has been executed through the following steps:

  • Step 1: Firstly, we apply the Larm or Lgeo operator on every decision matrix DMr to get a new column matrix \(C_{u\times 1}^{r}\) as follows

    $$ \begin{array}{@{}rcl@{}} C_{u\times 1}^{r} &=& L_{arm} (\widetilde{A}_{1}, \widetilde{A}_{2},\cdots,\widetilde{A}_{v})\\ &=& \begin{array}{ll} U_{1}\\ U_{2}\\ \vdots\\ U_{u} \end{array} \left( \begin{array}{ll} \widetilde{A}_{11}^{r}&\\ \widetilde{A}_{21}^{r}&\\ {\vdots} &\\ \widetilde{A}_{u1}^{r}& \end{array}\right), \end{array} $$

    where the entries of the column matrix \(C_{u\times 1}^{r}\) are the aggregated evaluation values with respect to the different criteria (r = 1, 2, ⋯ , w).

  • Step 2: Here, we obtain the overall attribute values \(\widetilde {A}_{s1}\) corresponding to the alternatives Us (s = 1, 2,⋯ , u) by utilizing the decision-makers' weights Ωk according to the relation \(\displaystyle \sum \limits _{k=1}^{w} {\Omega }_{k} C_{u\times 1}^{k}\) (scalar multiplication and addition of TNNs), forming the final decision matrix (DM) as follows

    $$ \mathbf{DM} = \begin{array}{ll} U_{1}\\ U_{2}\\ \vdots\\ U_{u} \end{array} \left( \begin{array}{cc} \widetilde{A}_{11}\\ \widetilde{A}_{21}\\ {\vdots} \\ \widetilde{A}_{u1} \end{array}\right)~. $$
  • Step 3: We calculate \(D_{Neu}(\widetilde {A}_{s1})\) of the alternatives Us (s = 1, 2, ⋯ , u) by utilizing the de-neutrosophication technique according to Definition 2.1.1.

  • Step 4: After obtaining all the de-neutrosophication values of the corresponding alternatives, the alternatives are ranked according to Definition 2.1.2 and the best one is selected.
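Steps 2–4 rely on the scalar multiplication and addition laws for TNNs and on the de-neutrosophication of Definition 2.1.1, both of which are defined earlier in the paper and not reproduced in this section. The sketch below is illustrative only: it assumes the standard SVTNN operational laws for ⊕ and scalar multiplication, and uses a simple average-based score as a hypothetical stand-in for Definition 2.1.1.

```python
def tnn_add(A, B):
    # TNN addition (standard SVTNN law, assumed here): truth parts
    # combine probabilistically, indeterminacy/falsity multiply
    T = tuple(t1 + t2 - t1 * t2 for t1, t2 in zip(A[0], B[0]))
    I = tuple(i1 * i2 for i1, i2 in zip(A[1], B[1]))
    F = tuple(f1 * f2 for f1, f2 in zip(A[2], B[2]))
    return (T, I, F)

def tnn_scale(k, A):
    # scalar multiplication k * A (standard SVTNN law, assumed here)
    T = tuple(1 - (1 - t) ** k for t in A[0])
    I = tuple(i ** k for i in A[1])
    F = tuple(f ** k for f in A[2])
    return (T, I, F)

def combine(omegas, column):
    # Step 2: sum_k Omega_k * C^k for one alternative's aggregated values
    acc = tnn_scale(omegas[0], column[0])
    for w, A in zip(omegas[1:], column[1:]):
        acc = tnn_add(acc, tnn_scale(w, A))
    return acc

def de_neu(A):
    # hypothetical placeholder for Definition 2.1.1 (not given in this section):
    # average truth plus complemented average indeterminacy and falsity
    T, I, F = A
    return (sum(T) / 4 + (1 - sum(I) / 4) + (1 - sum(F) / 4)) / 3
```

A useful sanity check: under these laws, a convex combination (weights summing to 1) of identical TNNs returns the same TNN, mirroring the idempotency of Lemma 4.1.1.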

Remark 5.1

The steps of the MCGDM technique are shown pictorially in Fig. 2.

Fig. 2

Flowchart of our MCGDM technique

6 Detection of most harmful virus by utilizing proposed MCGDM technique

Let us consider a realistic problem from the medical domain arising from the presence of different kinds of viruses in our environment. In the current era, people all over the world suffer from many diseases and experience different sorts of symptoms at different times. It is a burning issue to identify which virus is the most harmful to humans in recent times. People go to a hospital or nursing home and consult doctors for advice. The doctors try to identify the fever according to laboratory test reports and the symptoms on the patient's body. However, sometimes they are in a dilemma about the virus and its symptoms when these are closely related to each other. Thus, it is a problem in the uncertainty domain in which neutrosophic components are present. People in our society come to know about a virus and its effects through the opinions of doctors. Now, our problem is to collect data from different doctors (junior, adult, senior) related to viruses and symptoms, create decision matrices in a hesitant arena, and find out the most harmful virus in our environment. Thus, it becomes an MCGDM problem having three alternatives, three attributes and three types of decision-makers.

Let the alternatives be U1 = Virus 1 (Ebola virus), U2 = Virus 2 (Marburg virus), U3 = Virus 3 (Corona virus) and the corresponding attributes be V1 = Symptom 1 (vomiting), V2 = Symptom 2 (sore throat), V3 = Symptom 3 (cough and red eyes). Let us consider the decision-makers W1 = junior doctor, W2 = adult doctor, W3 = senior doctor with weight values Ω = {0.33, 0.37, 0.3}, and let the weights corresponding to the attributes be ω = (0.32, 0.35, 0.33). The three alternatives are to be evaluated under these three attributes, with the decision-makers giving their preferences in terms of TNNs.

The evaluated information of the alternatives Ui, (i = 1, 2,⋯ , u) under the attribute Vj, (j = 1, 2,⋯ , v) are characterized in the following trapezoidal neutrosophic number decision matrices:

Furthermore, the logarithmic base matrices of corresponding decision matrices are characterized as

Now, we have used the proposed technique under TN environment as follows:

  • Step 1: Firstly, we use the Larm operator on each decision matrix DMr according to Eq. (1) to obtain new column matrices \(C_{3\times 1}^{r}\) (r = 1, 2, 3) as follows

    $$ \mathbf{C_{3\times 1}^{1}=} \begin{array}{cc} U_{1}\\ U_{2}\\ U_{3} \end{array} \left( \begin{array}{ll} \langle(0.3918, 0.7694, 0.7743, 0.7893),(0.2910, 0.3385, 0.3476, 0.3871),(0.244, 0.3208, 0.1988, 0.3331) \rangle\\ \langle (0.3808, 0.5407, 0.5420, 0.6125),(0.5126, 0.6103, 0.6805, 0.7422),(0.0944, 0.3979, 0.4810, 0.5483)\rangle\\ \langle (0.6726, 0.7638, 0.7826, 0.7905),(0.3944, 0.5541, 0.5755 , 0.6033),(0.3667, 0.4647, 0.6541, 0.7051)\rangle \end{array}\right), $$
    $$\mathbf{C_{3\times 1}^{2}=} \begin{array}{cc} U_{1}\\ U_{2}\\ U_{3} \end{array} \left( \begin{array}{ll} \langle(0.1775, 0.5579, 0.7214, 0.7517),(0.4421, 0.4886, 0.5037, 1.000),(0.3250, 0.7206, 0.7281, 0.7392) \rangle\\ \langle (0.4970, 0.5352, 0.696, 0.6406),(0.4562, 0.5515, 0.5803, 0.6293),(0.4089, 0.4185, 0.4271, 0.7255)\rangle\\ \langle (0.2484, 0.7406, 0.7768, 0.7573),(0.4227, 0.4427, 0.4623, 0.5397),(0.4269, 0.4952, 0.6144, 0.7539 )\rangle \end{array}\right), $$
    $$\mathbf{C_{3\times 1}^{3}=} \begin{array}{cc} U_{1}\\ U_{2}\\ U_{3} \end{array} \left( \begin{array}{ll} \langle(0.5616, 0.7568, 0.8417, 0.8529),(0.2368, 0.3843, 0.6788, 0.7408),(0.5078, 0.6911, 0.7518, 0.8314) \rangle\\ \langle (0.6192, 0.6198, 0.6795, 0.7452),(0.3422, 0.6367, 0.6438, 0.7248),(0.6838, 0.6838, 0.8295, 0.8438)\rangle\\ \langle (0.2446, 0.3002, 0.395, 0.6889),(0.3986, 0.4046, 0.7998, 0.8819),(0.5960, 0.6107, 0.6146, 0.6787 )\rangle \end{array}\right). $$

    Again, if we utilize the Lgeo operator according to Eq. (2) on every decision matrix DMr, we get new column matrices \(\left( C_{3\times 1}^{r}\right)_{geo}\) (r = 1, 2, 3) as follows:

    $$\mathbf{\left( C_{3\times 1}^{1}\right)_{geo}=} \begin{array}{cc} U_{1}\\ U_{2}\\ U_{3} \end{array} \left( \begin{array}{ll} \langle(0.6082, 0.6306, 0.6357, 0.6407),(0.5090, 0.5215, 0.5524, 0.7129),(0.7560, 0.7792, 0.8012, 0.8669) \rangle\\ \langle (0.4192, 0.4513, 0.4580, 0.4875),(0.1874, 0.3897, 0.4195, 0.5780),(0.6056, 0.6210, 0.7190, 0.7517)\rangle\\ \langle (0.3274, 0.3362, 0.6174, 0.6395),(0.4056, 0.4459, 0.5245, 0.6967),(0.6333, 0.6353, 0.6459, 0.6949)\rangle \end{array}\right), $$
    $$\mathbf{\left( C_{3\times 1}^{2}\right)_{geo}=} \begin{array}{cc} U_{1}\\ U_{2}\\ U_{3} \end{array} \left( \begin{array}{ll} \langle(0.3225, 0.4421, 0.4786, 0.4830),(0.5079, 0.5114, 0.5963, 1.000),(0.2675, 0.2694, 0.3819, 0.4608) \rangle\\ \langle (0.5030, 0.5648, 0.6040, 0.6594),(0.5438, 0.5485, 0.5697, 0.5707),(0.5411, 0.5915, 0.7290, 0.7745)\rangle\\ \langle (0.4516, 0.4594, 0.5232, 0.5427),(0.5773, 0.5573, 0.6377, 0.6603),(0.1731, 0.3048, 0.3856, 0.4461)\rangle \end{array}\right), $$
    $$\mathbf{\left( C_{3\times 1}^{3}\right)_{geo}=} \begin{array}{cc} U_{1}\\ U_{2}\\ U_{3} \end{array} \left( \begin{array}{ll} \langle(0.4384, 0.4432, 0.4583, 0.4710),(0.5632, 0.6157, 0.6212, 0.6592),(0.2922, 0.3089, 0.3482, 0.3686) \rangle\\ \langle (0.3808, 0.4842, 0.5205, 0.548),(0.4578, 0.5433, 0.5562, 0.5752),(0.3162, 0.3162, 0.3705, 0.3862)\rangle\\ \langle (0.7554, 0.7998, 0.8050, 0.8111),(0.4014, 0.5954, 0.6002, 0.6181),(0.4040, 0.4893, 0.5954, 0.6213)\rangle \end{array}\right). $$
  • Step 2: We now apply the decision-makers' weights via the relation \(\displaystyle \sum \limits _{k=1}^{w} {\Omega }_{k} C_{u\times 1}^{k}\) (scalar multiplication and addition of TNNs) to obtain the overall attribute values \(\widetilde {A}_{s1}\) for the alternatives Us (s = 1, 2, 3) under the operator Larm as follows

    $$ \mathbf{\left( DM\right)_{arm}} = \begin{array}{cc} U_{1}\\ U_{2}\\ U_{3} \end{array} \left( \begin{array}{ll} \langle (0.3836, 0.7019, 0.7575, 0.7615),(0.3193, 0.4179, 0.5298, 0.6054),(0.3380, 0.4476, 0.5765, 0.5886) \rangle\\ \langle (0.5044, 0.5680, 0.5910, 0.6677),(0.5063, 0.6009, 0.6350, 0.6995),(0.3069, 0.3727, 0.5138, 0.5494) \rangle\\ \langle (0.4278, 0.6474, 0.6789, 0.6819),(0.4059, 0.4902, 0.5858, 0.6171),(0.4731, 0.5164, 0.5533, 0.5951) \rangle \end{array}\right). $$

    On the other hand, if we apply the decision-makers’ weights under the operator Lgeo according to the relation \(\displaystyle \sum \limits _{k=1}^{w} {\Omega }_{k} C_{u\times 1}^{k}\), we obtain the overall attribute values \(\widetilde {A}_{s1}\) for the alternatives Us (s = 1, 2, 3) as follows:

    $$ \mathbf{\left( DM\right)_{geo}} = \begin{array}{cc} U_{1}\\ U_{2}\\ U_{3} \end{array} \left( \begin{array}{ll} \langle (0.3744, 0.4202, 0.4539, 0.5389),(0.4633, 0.5766, 0.6512, 0.6632),(0.3374, 0.3808, 0.4830, 0.5119) \rangle\\ \langle (0.5138, 0.5512, 0.6269, 0.6395),(0.2051, 0.3952, 0.4612, 0.4615),(0.4459, 0.4931, 0.5589, 0.5948) \rangle\\ \langle (0.6565, 0.6947, 0.6998, 0.7335),(0.3370, 0.3570, 0.3698, 0.3709),(0.3425, 0.4061, 0.4238, 0.4567) \rangle \end{array}\right). $$
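    The weighted fusion of the expert column matrices in Step 2 can be sketched in code. The snippet below is a minimal illustration that assumes the classical TNN addition and scalar-multiplication laws of Ye [16] (weighted truth \(1-\prod (1-t_k)^{\Omega _k}\), weighted indeterminacy and falsity \(\prod x_k^{\Omega _k}\), applied to each of the four trapezoidal points); the paper's logarithmic laws replace these componentwise formulas, and the equal weight vector used here is hypothetical, so the printed values need not reproduce the matrices above.

```python
from math import prod

def weighted_tnn(tnns, weights):
    """Aggregate TNNs <(t1..t4), (i1..i4), (f1..f4)> with weights summing to 1,
    using the classical TNN operational laws (an assumption for illustration)."""
    truth = tuple(1 - prod((1 - tnn[0][j]) ** w for tnn, w in zip(tnns, weights))
                  for j in range(4))
    indet = tuple(prod(tnn[1][j] ** w for tnn, w in zip(tnns, weights))
                  for j in range(4))
    fals = tuple(prod(tnn[2][j] ** w for tnn, w in zip(tnns, weights))
                 for j in range(4))
    return truth, indet, fals

# Row U1 of the three (C^r)_geo matrices, with a hypothetical equal weight vector
a1 = ((0.6082, 0.6306, 0.6357, 0.6407), (0.5090, 0.5215, 0.5524, 0.7129), (0.7560, 0.7792, 0.8012, 0.8669))
a2 = ((0.3225, 0.4421, 0.4786, 0.4830), (0.5079, 0.5114, 0.5963, 1.0000), (0.2675, 0.2694, 0.3819, 0.4608))
a3 = ((0.4384, 0.4432, 0.4583, 0.4710), (0.5632, 0.6157, 0.6212, 0.6592), (0.2922, 0.3089, 0.3482, 0.3686))
print(weighted_tnn([a1, a2, a3], [1 / 3, 1 / 3, 1 / 3]))
```

Note that the aggregation is idempotent: fusing identical TNNs with any normalized weight vector returns the same TNN, which is a quick sanity check on the implementation.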
  • Step 3: The de-neutrosophication values of \(\widetilde {A}_{{s1}}\) (s = 1, 2, 3) corresponding to the Larm operator are \(D_{Neu}(\widetilde {A}_{11})= 0.5208\), \(D_{Neu}(\widetilde {A}_{21}) = 0.4882\), \(D_{Neu}(\widetilde {A}_{31}) = 0.5837\). On the other hand, the de-neutrosophication values corresponding to the operator Lgeo are \(D_{Neu}(\widetilde {A}_{11})= 0.5156\), \(D_{Neu}(\widetilde {A}_{21})=0.5100\), \(D_{Neu}(\widetilde {A}_{31})= 0.5601\).

  • Step 4: For the operator Larm, the ranking order of the de-neutrosophication values is \(D_{Neu}(\widetilde {A}_{31})> D_{Neu}(\widetilde {A}_{11})> D_{Neu}(\widetilde {A}_{21})\), so the ranking order of the alternatives is U3 > U1 > U2 and U3 is the best option. Under the operator Lgeo, the ranking order of the alternatives is U3 > U2 > U1; again, U3 is the best option.

6.1 Sensitivity analysis

Sensitivity analysis is performed by exchanging the weights of the decision-makers while keeping the remaining terms unchanged. Here, we carry out the sensitivity analysis under the Larm and Lgeo operators to capture the influence of the decision-makers’ weights on the aggregated matrices and the resulting ranking. The results are shown in Tables 1 and 2 for the operators Larm and Lgeo, respectively. Figures 3 and 4 present the different decision-makers’ weight vectors and the corresponding ranking order of the alternatives under the Larm operator, while Figs. 5 and 6 present the same under the Lgeo operator.
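The sensitivity procedure described above can be sketched as a loop over candidate weight vectors. In the snippet below, `aggregate_and_score` is a hypothetical stand-in for the full Larm/Lgeo aggregation and de-neutrosophication chain of Steps 2–3; the toy scoring function in the usage example is illustrative only and does not reproduce the values in Tables 1 and 2.

```python
def sensitivity(weight_vectors, aggregate_and_score):
    """Re-run the scoring pipeline for each candidate weight vector and
    record the ranking it produces, keeping everything else unchanged."""
    results = {}
    for w in weight_vectors:
        scores = aggregate_and_score(w)  # {alternative: de-neutrosophication score}
        results[tuple(w)] = sorted(scores, key=scores.get, reverse=True)
    return results

# Toy stand-in: each alternative's score simply equals one expert's weight
toy_score = lambda w: {"U1": w[0], "U2": w[1], "U3": w[2]}
print(sensitivity([[0.5, 0.3, 0.2], [0.2, 0.3, 0.5]], toy_score))
```

A ranking that stays stable across several such weight vectors, as observed in Tables 1 and 2, indicates that the final recommendation is robust to the choice of decision-maker weights.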

Fig. 3 Different decision-makers’ weights under the operator Larm

Fig. 4 Ranking order of the alternatives under the operator Larm

Fig. 5 Different decision-makers’ weights under the operator Lgeo

Fig. 6 Ranking order of the alternatives under the operator Lgeo

Table 1 Sensitivity analysis under Larm operator
Table 2 Sensitivity analysis under Lgeo operator

In Table 1, we consider different weight vectors for the decision-makers and find that, under the operator Larm, U3 is the best option in four cases and U1 is the best option in one case. In Table 2, we use the same weight vectors as in Table 1 and find that U3 is the best option in all cases under the operator Lgeo.

6.2 Comparative analysis

To demonstrate the efficiency and validity of the proposed method, we present a comparative study with existing methods in Table 3.

Table 3 Comparison with the existing methods

From Table 3, we observe that the aggregation operator proposed by Ye [15] cannot be applied to our decision matrices because it has no indeterminacy component. Moreover, Liang et al. [20], Biswas et al. [23], Pranab et al. [34], Pramanik and Mallick [35], Liu and Zhang [36] and Wu et al. [37] work in the SVTNN environment, which differs in its basic character from the general TNN environment [16], so these methods also cannot be applied to our decision matrices to determine the best alternative. We have therefore applied the operators TNNWAA [16], TNNWGA [16], ITNNWAA [17] and ITNNWGA [17] to our data set. Interestingly, the ranking orders obtained under these operators and under our method are exactly the same. Moreover, the stability of the obtained results has already been verified through the sensitivity analysis. These observations clearly demonstrate the efficiency and reliability of the proposed logarithmic operational law based MCGDM technique.

7 Conclusion

In this article, we have presented new logarithmic operational laws for TNNs, which are a productive enhancement of the existing operational laws, and studied their mathematical properties such as boundedness and monotonicity. Moreover, we have proposed the logarithmic trapezoidal neutrosophic weighted arithmetic aggregation operator Larm and the logarithmic trapezoidal neutrosophic weighted geometric aggregation operator Lgeo, and used them to construct an MCGDM technique in the TN environment. A numerical problem has been solved to demonstrate the proposed MCGDM method, and its usefulness and utility have been examined through a sensitivity analysis. Finally, a comparative study with existing methods has been presented to justify the rationality and efficiency of the proposed technique. We conclude that the defined operational laws and the corresponding MCGDM technique give a new direction for dealing with decision-making problems.

In future work, the defined logarithmic operational laws can be extended to other uncertain environments to enrich decision-making procedures. Researchers can also apply these ideas of neutrosophic numbers in numerous flourishing research fields such as mobile computing, pattern recognition and cloud computing.