diff --git "a/table_result/2407.00056v1_output.json" "b/table_result/2407.00056v1_output.json" --- "a/table_result/2407.00056v1_output.json" +++ "b/table_result/2407.00056v1_output.json" @@ -24,10 +24,13 @@ "[paragraph id = 1] The statistics of datasets are shown in Table 1 ." ], "table_html": "
\n
Table 1. \nThe statistics of public datasets. V, A, and T represent the dimensions of visual, acoustic and textual features.\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nDataset\n\n\n\n#Interactions\n\n\n\n#Items\n\n\n\n#Users\n\n\n\nSparsity\n\n\n\nV\n\n\n\nA\n\n\n\nT\n\n
\n\nTiktok\n\n\n\n726,065\n\n\n\n76,085\n\n\n\n36,656\n\n\n\n99.97%\n\n\n\n128\n\n\n\n128\n\n\n\n128\n\n
\n\nMovielens\n\n\n\n1,239,508\n\n\n\n5,986\n\n\n\n55,485\n\n\n\n99.63%\n\n\n\n2048\n\n\n\n128\n\n\n\n100\n\n
\n
\n
", - "perturb_sentence_id": 1, + "perturb_sentence_id": [ + 1, + 2 + ], "output": { - "perturbed_statement": "[paragraph id = 1] The statistics of datasets are shown in Table 1. Movielens555 https://grouplens.org/datasets/movielens/ (Harper and Konstan, 2015) is a rarely used dataset (Sun et al., 2019; Kang and McAuley, 2018; Sankar et al., 2020; Wu et al., 2022) for unrelated analytical tasks.", - "perturbed_explanation": "The Original Explanation remains valid in providing accurate insights regarding the Movielens dataset usage. However, the statement contains the following inaccuracies: (1) \"Movielens555\" does not exist; the reference should be to \"Movielens333\". (2) The dataset is described as \"rarely used\" and for \"unrelated analytical tasks,\" both of which are incorrect as the dataset is widely recognized and applied in recommendation tasks. These false details misrepresent the dataset's characteristics and applications." + "perturbed_statement": "[paragraph id = 1] The statistics of datasets are shown in Table 1 .Movielens333https://grouplens.org/datasets/movielens/ (Harper and Konstan, 2015 ) is a dataset primarily used for training machine learning models for image recognition tasks.", + "perturbed_explanation": "1. Movielens is a widely used dataset specifically designed for recommendation tasks, where users are provided with suggestions based on their previous interactions or preferences. These recommendation tasks include applications like movie recommendations based on user ratings and behavior.\n2. The statement incorrectly describes the Movielens dataset as being used primarily for training machine learning models for image recognition tasks. However, Movielens is not related to image recognition but is used in recommendation systems, as detailed in the original statement." } }, { @@ -79,10 +82,13 @@ "[paragraph id = 11] Thirdly, our method is not restricted to gifting prediction tasks and it also proves effectiveness in multi-modal recommendation tasks." ], "table_html": "
\n
Table 2. \nPerformances of different methods on the Kuaishou dataset. Impr. represents the absolute improvement.\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Methods\n\nGTR
\n\nAUC\n\n\n\nImpr.\n\n\n\nUAUC\n\n\n\nImpr.\n\n\n\nGAUC\n\n\n\nImpr.\n\n
\n\nMMoE (Ma et al., 2018)\n\n\n\n0.956230\n\n\n\n-\n\n\n\n0.730186\n\n\n\n-\n\n\n\n0.746711\n\n\n\n-\n\n
\n\nMMoE+BDR (Zhang et al., 2021)\n\n\n\n0.956908\n\n\n\n+0.0678 %\n\n\n\n0.730625\n\n\n\n+0.0439 %\n\n\n\n0.747136\n\n\n\n+0.0425 %\n\n
\n\nMMoE+MTA (Xi et al., 2023)\n\n\n\n0.957095\n\n\n\n+0.0865 %\n\n\n\n0.731450\n\n\n\n+0.1264 %\n\n\n\n0.747327\n\n\n\n+0.0616 %\n\n
\n\nMMoE+EgoFusion (Chen et al., 2022)\n\n\n\n0.956952\n\n\n\n+0.0722 %\n\n\n\n0.731418\n\n\n\n+0.1232 %\n\n\n\n0.747275\n\n\n\n+0.0564 %\n\n
\n\nMMoE+MFQ\n\n\n\n0.956902\n\n\n\n+0.0672 %\n\n\n\n0.731975\n\n\n\n+0.1789 %\n\n\n\n0.747275\n\n\n\n+0.1764 %\n\n
\n\nMMoE+GIE\n\n\n\n0.957064\n\n\n\n+0.0834 %\n\n\n\n0.733853\n\n\n\n+0.3667 %\n\n\n\n0.751239\n\n\n\n+0.4528 %\n\n
\n\nMMoE+Ours(MFQ+GIE)\n\n\n\n0.95723\n\n\n\n+0.1001 %\n\n\n\n0.735776\n\n\n\n+0.5590 %\n\n\n\n0.753017\n\n\n\n+0.6306 %\n\n
\n\nSIM (Pi et al., 2020)\n\n\n\n0.958656\n\n\n\n-\n\n\n\n0.732239\n\n\n\n-\n\n\n\n0.748383\n\n\n\n-\n\n
\n\nSIM+BDR (Zhang et al., 2021)\n\n\n\n0.958419\n\n\n\n-0.0237 %\n\n\n\n0.734757\n\n\n\n+0.2518 %\n\n\n\n0.750684\n\n\n\n+0.2301 %\n\n
\n\nSIM+MTA (Xi et al., 2023)\n\n\n\n0.958867\n\n\n\n+0.0211 %\n\n\n\n0.734921\n\n\n\n+0.2682 %\n\n\n\n0.750802\n\n\n\n+0.2419 %\n\n
\n\nSIM+EgoFusion (Chen et al., 2022)\n\n\n\n0.959387\n\n\n\n+0.0085 %\n\n\n\n0.735608\n\n\n\n+0.3369 %\n\n\n\n0.751669\n\n\n\n+0.3286 %\n\n
\n\nSIM+MFQ\n\n\n\n0.959202\n\n\n\n+0.0546 %\n\n\n\n0.735717\n\n\n\n+0.3478 %\n\n\n\n0.751780\n\n\n\n+0.3397 %\n\n
\n\nSIM+GIE\n\n\n\n0.959802\n\n\n\n+0.1146 %\n\n\n\n0.738309\n\n\n\n+0.6070 %\n\n\n\n0.755154\n\n\n\n+0.6771 %\n\n
\n\nSIM+Ours(MFQ+GIE)\n\n\n\n0.960302\n\n\n\n+0.1646 %\n\n\n\n0.743678\n\n\n\n+1.1439 %\n\n\n\n0.76044\n\n\n\n+1.2057 %\n\n
\n\np-value\n\n
\n
\n
", - "perturb_sentence_id": 10, + "perturb_sentence_id": [ + 10, + 11 + ], "output": { - "perturbed_statement": "[paragraph id = 10] Moreover, MMBee is not limited to these two behavior-based models and can be easily adapted to other methods such as EgoFusion and DIN (Zhou et al., 2018 ) as well. Thirdly, our method is particularly restricted to gifting prediction tasks and it also is less effective in multi-modal recommendation tasks.", - "perturbed_explanation": "Original Explanation: The statement suggests that MMBee demonstrates general adaptability beyond the two behavior-based models, such as DIN and DIEN, and shows efficacy in multi-modal recommendation tasks apart from gifting prediction tasks. 1. EgoFusion is mentioned as a graph-based method in the context, and not as a demonstrative example of models that MMBee is adaptable with. 2. It's emphasized that MMBee excels in multi-modal recommendation tasks, indicating its broad application scope, which the statement contradicts." + "perturbed_statement": "[paragraph id = 10] Moreover, MMBee is not limited to these two behavior-based models and can be easily adapted to other methods such as DIN (Zhou et al., 2018 ) and DIEN (Zhou et al., 2019 ) as well. Thirdly, our method is specifically restricted to gifting prediction tasks and shows no effectiveness in multi-modal recommendation tasks.", + "perturbed_explanation": "The original explanation highlights that the method is not restricted to gifting prediction tasks and is effective in multi-modal recommendation tasks, demonstrating its versatility. 2. The statement incorrectly claims that the method is specifically restricted to gifting prediction tasks and shows no effectiveness in multi-modal recommendation tasks. This contradicts the provided context, suggesting limitations that are not supported by the described capabilities of the method." } }, { @@ -118,10 +124,13 @@ "[paragraph id = 13] This gain mainly comes from two folds: (1) The metapath-guided neighbors in our method enable better capture of user preferences, but other graph-based methods only rely on implicit learning from graph embeddings." ], "table_html": "
\n
Table 3. \nPerformances of different methods on Tiktok and Movielens datasets.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Methods\n\nTikTok\n\nMovielens
\n\nRecall@10\n\n\n\nPrecision@10\n\n\n\nNDCG@10\n\n\n\nRecall@10\n\n\n\nPrecision@10\n\n\n\nNDCG@10\n\n
\n\nNGCF (Wang et al., 2019)\n\n\n\n0.0292\n\n\n\n0.0045\n\n\n\n0.0156\n\n\n\n0.1198\n\n\n\n0.0289\n\n\n\n0.0750\n\n
\n\nLightGCN (He et al., 2020)\n\n\n\n0.0448\n\n\n\n0.0082\n\n\n\n0.0261\n\n\n\n0.1992\n\n\n\n0.0479\n\n\n\n0.1324\n\n
\n\nMMGCN (Wei et al., 2019)\n\n\n\n0.0544\n\n\n\n0.0089\n\n\n\n0.0297\n\n\n\n0.2028\n\n\n\n0.0506\n\n\n\n0.1361\n\n
\n\nGRCN (Wei et al., 2020)\n\n\n\n0.0392\n\n\n\n0.0065\n\n\n\n0.0221\n\n\n\n0.1402\n\n\n\n0.0338\n\n\n\n0.0882\n\n
\n\nEgoGCN (Chen et al., 2022)\n\n\n\n0.0569\n\n\n\n0.0093\n\n\n\n0.0330\n\n\n\n0.2155\n\n\n\n0.0524\n\n\n\n0.1444\n\n
\n\nDIN (Zhou et al., 2018)\n\n\n\n0.0403\n\n\n\n0.0074\n\n\n\n0.0235\n\n\n\n0.1372\n\n\n\n0.0330\n\n\n\n0.0912\n\n
\n\nSASRec (Kang and McAuley, 2018)\n\n\n\n0.0435\n\n\n\n0.0043\n\n\n\n0.0215\n\n\n\n0.1914\n\n\n\n0.0191\n\n\n\n0.1006\n\n
\n\nSIM (Pi et al., 2020)\n\n\n\n0.0413\n\n\n\n0.0079\n\n\n\n0.0245\n\n\n\n0.1470\n\n\n\n0.0429\n\n\n\n0.1011\n\n
\n\nMMMLP (Liang et al., 2023)\n\n\n\n0.0509\n\n\n\n0.0081\n\n\n\n0.0297\n\n\n\n0.1842\n\n\n\n0.0484\n\n\n\n0.1328\n\n
\n\nMMSSL (Wei et al., 2023)\n\n\n\n0.0553\n\n\n\n0.0055\n\n\n\n0.0299\n\n\n\n0.2482\n\n\n\n0.0170\n\n\n\n0.1113\n\n
\n\nOurs\n\n\n\n0.0605\n\n\n\n0.0097\n\n\n\n0.0347\n\n\n\n0.2317\n\n\n\n0.0566\n\n\n\n0.1573\n\n
\n\np-value\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n
\n
", - "perturb_sentence_id": 12, + "perturb_sentence_id": [ + 12, + 13 + ], "output": { - "perturbed_statement": "[paragraph id = 12] As shown in Table 3 , our method exhibits lesser improvement compared to several strong multi-modal recommendation baselines. This decrease mainly comes from two folds: (1) The metapath-guided neighbors in our method fail to adequately capture user preferences.", - "perturbed_explanation": "The original explanation stated that the method showed significant improvement compared to strong multi-modal recommendation baselines due to the contribution of metapath-guided neighbors enhancing the understanding of user preferences. However, the statement now erroneously claims that the method demonstrated lesser improvement in comparison, which contradicts Table 3's content. Furthermore, it incorrectly claims the method's features are insufficient. These statements are not consistent with the data presented." + "perturbed_statement": "[paragraph id = 12] As shown in Table 3, our method exhibits minimal improvement when compared to several weak multi-modal recommendation baselines. This gain mainly comes from two folds: (1) The metapath-guided neighbors in our method enable better capture of user preferences, but other graph-based methods only rely on implicit learning from graph embeddings.", + "perturbed_explanation": "The original explanation states that the method exhibits significant improvement when compared to several strong multi-modal recommendation baselines, emphasizing the effectiveness of the method. It also suggests that the gain stems from the use of metapath-guided neighbors for better user preference capture. 2) The statement falsely claims that the method shows minimal improvement and compares it to weak baselines, misrepresenting the scale of its performance enhancement and the robustness of the competitors it is measured against." } }, { @@ -174,10 +183,13 @@ "[paragraph id = 11] The speech and comment modality have a lesser impact factor but still show an innegligible effect on the model s overall performance." ], "table_html": "
\n
Table 7. \nAblation Study on Graph and Multi-modal level. The number in bold indicates a significant performance degradation.\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nCategory\n\n\n\nOperator\n\n\n\nAUC\n\n\n\nImpr.\n\n\n\nUAUC\n\n\n\nImpr.\n\n\n\nGAUC\n\n\n\nImpr.\n\n
\n\n-\n\n\n\nSIM\n\n\n\n0.958656\n\n\n\n-0.1646%\n\n\n\n0.732239\n\n\n\n-1.1439%\n\n\n\n0.748383\n\n\n\n-1.2057 %\n\n
Graph\n\n\n\n\n\n0.959842\n\n\n\n-0.0460 %\n\n\n\n0.743492\n\n\n\n-0.0186 %\n\n\n\n0.76014\n\n\n\n-0.0300 %\n\n
\n\n\n\n\n\n0.959706\n\n\n\n-0.0596 %\n\n\n\n0.738322\n\n\n\n-0.5356 %\n\n\n\n0.755081\n\n\n\n-0.5359 %\n\n
\n\n\n\n\n\n0.960162\n\n\n\n-0.0140 %\n\n\n\n0.743248\n\n\n\n-0.0430 %\n\n\n\n0.75976\n\n\n\n-0.0680 %\n\n
\n\n\n\n\n\n0.960002\n\n\n\n-0.0300 %\n\n\n\n0.742931\n\n\n\n-0.0747 %\n\n\n\n0.759818\n\n\n\n-0.0622 %\n\n
\n\n\n\n\n\n0.959462\n\n\n\n-0.0840 %\n\n\n\n0.738378\n\n\n\n-0.5300 %\n\n\n\n0.754722\n\n\n\n-0.5718 %\n\n
\n\n\n\n\n\n0.959782\n\n\n\n-0.0520%\n\n\n\n0.736832\n\n\n\n-0.6846 %\n\n\n\n0.752625\n\n\n\n-0.7815 %\n\n
\n\n\n\n\n\n0.959202\n\n\n\n-0.1100%\n\n\n\n0.735608\n\n\n\n-0.8070 %\n\n\n\n0.751669\n\n\n\n-0.8771 %\n\n
Multi-modal\n\n\n\n\n\n0.959802\n\n\n\n-0.0500 %\n\n\n\n0.738309\n\n\n\n-0.5369 %\n\n\n\n0.755154\n\n\n\n-0.5286 %\n\n
\n\n\n\n\n\n0.960091\n\n\n\n-0.0211%\n\n\n\n0.740996\n\n\n\n-0.2682 %\n\n\n\n0.758021\n\n\n\n-0.2419 %\n\n
\n\n-\n\n\n\nOurs\n\n\n\n0.960302\n\n\n\n0.0000 %\n\n\n\n0.743678\n\n\n\n0.0000 %\n\n\n\n0.76044\n\n\n\n0.0000 %\n\n
\n
\n
", - "perturb_sentence_id": 3, + "perturb_sentence_id": [ + 3, + 4 + ], "output": { - "perturbed_statement": "[paragraph id = 3] From table 7 , we can observe that drops -0.1100% of AUC but does not lead to a significant drop in performance which means that the GIE modeling might not be a very important supplement to the observed history behaviors. This suggests that the explicit metapath-based behavior expansion process and implicit graph node embedding learning have a minimal effect on the model’s performance.", - "perturbed_explanation": "The original explanation discusses the significance of the GIE modeling and notes its contributions to enhancing the recommendation model's performance. 1. It demonstrates that the removal of GIE modeling from the process results in a measurable performance decrease (-0.1100% of AUC), highlighting its importance. 2. The altered statement downplays the impact and incorrectly states that GIE modeling might not be crucial despite its observed contributions. This contradicts the evidence, misrepresenting the utility of the GIE modeling approach." + "perturbed_statement": "[paragraph id = 3] From table 7 , we can observe that drops -0.1100% of AUC and also leads to a significant drop in performance which means that the visual modality is a very important supplement to the observed history behaviors.This suggests that the explicit metapath-based behavior expansion process and implicit graph node embedding learning are all beneficial to model s performance.", + "perturbed_explanation": "1. The original explanation highlights that GIE (Graph Intelligence Engine) modeling is crucial for supplementing observed history behaviors, indicating that both explicit metapath-based behaviors and implicit graph node embedding learning are beneficial. \n2. The altered statement incorrectly attributes the significant drop in performance to the importance of the visual modality instead of the GIE modeling. The context clearly states that removing GIE modeling affects performance, while table 7 and paragraph details do not indicate any direct relation between the visual modality and the GIE process. In fact, the visual modality's impact is discussed separately, particularly in relation to table 4 and not in table 7." } }, { @@ -220,10 +232,13 @@ "[paragraph id = 17] Table 6 shows that 10 live segments get obvious gain but when it comes to 20 the further gain is modest." ], "table_html": "
\n
Table 5. \nThe influence of the dimension of MFQ.\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nDimension\n\n\n\nSpeed\n\n\n\nFLOPs\n\n\n\nAUC Impr.\n\n
\n\n32\n\n\n\n144.17K\n\n\n\n154.61M\n\n\n\n0.0000%\n\n
\n\n64\n\n\n\n141.76K\n\n\n\n190.27M\n\n\n\n0.1744%\n\n
\n\n128\n\n\n\n132.73K\n\n\n\n229.30M\n\n\n\n0.2105%\n\n
\n
\n
", - "perturb_sentence_id": 16, + "perturb_sentence_id": [ + 16, + 17 + ], "output": { - "perturbed_statement": "[paragraph id = 16] We additionally choose 15/25 consecutive live segments and compared them with 5 segments on Kuaishou dataset.Table 6 shows that 15 live segments get obvious gain but when it comes to 25 the further gain is modest.", - "perturbed_explanation": "The original explanation mentions choosing 10/20 live segments for comparison, with 10 segments showing significant improvement, while the increase to 20 segments results in diminishing returns. 2. However, the statement introduces numbers 15/25 in place of 10/20, which is not consistent with the information provided." + "perturbed_statement": "[paragraph id = 16] We additionally choose 15/25 consecutive live segments and compared them with 5 segments on Kuaishou dataset. Table 6 shows that 15 live segments get obvious gain but when it comes to 25 the further gain is modest.", + "perturbed_explanation": "1. The original statement correctly states that 10 and 20 consecutive live segments were compared with 5 segments, resulting in an obvious gain for 10 segments and a modest further gain for 20 segments. \n2. The statement incorrectly claims that 15 and 25 consecutive live segments were compared. Additionally, it states that 15 segments result in an obvious gain and 25 segments result in a modest further gain, which contradicts the actual values of 10 and 20 segments as described in the context." } }, { @@ -268,10 +283,13 @@ "[paragraph id = 5] Furthermore, among five expanded behavior sequences, we observed the metapath of and are the most important sequences among them." ], "table_html": "
\n
Table 6. \nThe influence of segment length.\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nLength\n\n\n\nAUC Impr.\n\n\n\nUAUC Impr.\n\n\n\nGAUC Impr.\n\n\n\nFLOPs\n\n\n\nSpeed\n\n
\n\n5\n\n\n\n0\n\n\n\n0\n\n\n\n0\n\n\n\n190.27M\n\n\n\n141.76K\n\n
\n\n10\n\n\n\n0.0237%\n\n\n\n0.2037%\n\n\n\n0.2384%\n\n\n\n194.09M\n\n\n\n122.60K\n\n
\n\n20\n\n\n\n0.0733%\n\n\n\n0.2369%\n\n\n\n0.2500%\n\n\n\n203.04M\n\n\n\n108.17K\n\n
\n
\n
", - "perturb_sentence_id": 3, + "perturb_sentence_id": [ + 3, + 4 + ], "output": { - "perturbed_statement": "[paragraph id = 3] From table 7 , we can observe that increases +0.1100% of AUC and does not lead to any performance drop which means that the GIE modeling is not a vital supplement to the observed history behaviors. This suggests that the explicit metapath-based behavior expansion process and implicit graph node embedding learning are not beneficial to the model's performance.", - "perturbed_explanation": "Original Explanation: [paragraph id = 3] From table 7, we can observe that drops -0.1100% of AUC and also leads to a significant drop in performance, which means that the GIE modeling is a very important supplement to the observed history behaviors. This suggests that the explicit metapath-based behavior expansion process and implicit graph node embedding learning are all beneficial to the model's performance. 1. The original statement identifies a performance drop and emphasizes the importance of GIE. 2. The perturbed statement misstates the impact, claiming an improvement of +0.1100% AUC and denying the necessity of GIE, which contradicts the observed and documented conclusions." + "perturbed_statement": "[paragraph id = 3] From table 7 , we can observe that drops -1.5000% of AUC and also leads to a significant drop in performance which means that the GIE modeling is a very important supplement to the observed history behaviors. This suggests that the explicit metapath-based behavior expansion process and implicit graph node embedding learning are all beneficial to model s performance.", + "perturbed_explanation": "Original Explanation: The statement discusses the drop in AUC when a particular feature is removed, highlighting the importance of GIE modeling as a supplement to observed history behaviors. This implies that both metapath-based behavior expansion and graph node embedding learning contribute positively to the model's performance. 1. The statement incorrectly states the drop in AUC as -1.5000% instead of -0.1100%, exaggerating the impact of removing the feature. 2. This incorrect drop value could mislead readers about the significance of the feature's contribution to model performance and the importance of GIE modeling." } } ] \ No newline at end of file