diff --git "a/table_result/2407.00082v1_output.json" "b/table_result/2407.00082v1_output.json" --- "a/table_result/2407.00082v1_output.json" +++ "b/table_result/2407.00082v1_output.json" @@ -98,10 +98,13 @@ "[paragraph id = 7] [ tabular=cccc, table head= Density H@10 M@10 , late after line= , late after last line= ]data/exp_hyperedge.txt\\csvlinetotablerow Non-Noise Add 10% Noise Add 20% Noise Add 50% Noise The validity of hypergraph wavelet filter In BISTRO, we design a novel hypergraph wavelet learning method." ], "table_html": "
Table 5. Results under different graph constructions. [Table rendered with \csvreader from data/exp_hyperedge.txt; header: Density, H@10, M@10.]

Figure 4. Results of denoising performance. [Figure placeholder: interaction graphs under Non-Noise, Add 10% Noise, Add 20% Noise, and Add 50% Noise for BISTRO, General Spectral GNN, and Random Walk.]
The validity of hypergraph wavelet filter   In BISTRO, we design a novel hypergraph wavelet learning method, in which a wavelet filter is deployed both for data denoising and for fine-grained job-preference feature extraction. As shown in Figure LABEL:fig:exp_filter_line, the curves show the results of three models with different filtering settings under different percentages of noise in the data. The figure also shows that our method, BISTRO, exhibits the smoothest decline in performance as the proportion of noise in the data increases.
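To ground the idea, the snippet below is a minimal sketch of a spectral wavelet filter bank on a toy graph: each scale s defines a band-pass kernel g(lambda) = lambda * exp(-s * lambda) applied in the Laplacian's spectrum. The kernel form, the scales, and the dense eigendecomposition are illustrative assumptions, not BISTRO's exact hypergraph construction.

```python
import numpy as np

def wavelet_filter_bank(L, X, scales=(0.5, 1.0, 2.0)):
    # Spectral decomposition of the (hyper)graph Laplacian.
    lam, U = np.linalg.eigh(L)
    views = []
    for s in scales:
        g = lam * np.exp(-s * lam)           # band-pass wavelet kernel g(lambda)
        views.append(U @ (g[:, None] * (U.T @ X)))
    return views                              # one filtered feature view per scale

# Toy usage: 4-node path graph with random 8-d node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
L = np.eye(4) - (A / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]
views = wavelet_filter_bank(L, np.random.rand(4, 8))
```

In practice the eigendecomposition is avoided via the Chebyshev approximation discussed in Appendix B; the dense form above is only for exposition.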

\n
\n
\n
\n

To vividly show the denoising capability of the proposed hypergraph wavelet filter, we randomly select a user who is active within a week, filter the 50 most recent interactions from three job categories, and construct an interaction graph. In this graph, each node represents a job the user has engaged with, interconnected by grey dotted lines, while the user's interaction sequence is depicted with grey edges. On this basis, we introduce noisy jobs (marked with orange crosses) and their corresponding interactions (denoted by orange edges and dotted lines) to mimic a user accidentally clicking on unrelated job types. Given that each model generates job preference representations for diverse jobs, we visualize the connections between the user and jobs, as well as the relationships among jobs themselves, as shown in Figure 4. We eliminate edges whose job-representation pairs have cosine similarity below a uniform threshold and remove links between isolated jobs and the user. Consequently, a graph with more orange lines indicates lower model performance. Notably, as the data noise level escalates, the comparative models demonstrate diminished noise-filtering effectiveness relative to our proposed approach. Specifically, the random-walk-based method significantly underperforms the spectral GCN method, primarily because spectral graph neural networks can filter out irrelevant interaction features. Furthermore, our approach employs a wavelet kernel to create a set of sub-filters, adeptly denoising by dynamically selecting appropriate filters for the user's evolving characteristics.
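The edge-pruning rule used in this visualization reduces to a few lines. The sketch below assumes precomputed job embeddings and an illustrative threshold value; the paper's uniform threshold is not specified.

```python
import numpy as np

def prune_low_similarity_edges(edges, emb, threshold=0.5):
    # Normalize embeddings so the dot product equals cosine similarity.
    unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    return [(i, j) for i, j in edges if float(unit[i] @ unit[j]) >= threshold]

# Toy usage: three job embeddings, two candidate edges; (0, 2) is dropped.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
kept = prune_low_similarity_edges([(0, 1), (0, 2)], emb)
```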

\n
\n
\n
\n
\n

\n5.5. Parametric Study (RQ4)\n

\n
\n

The size of user (job) groups   The sizes of the user and job groups are two hyperparameters that must be predefined. We therefore choose 500:1, 1000:1, and 2000:1 as the ratios of the total number of users to the number of user groups, and 100:1, 500:1, and 1000:1 as the ratios of the total number of jobs to the number of job groups, as shown in Figures LABEL:fig:para_user and LABEL:fig:para_job. Our model achieves its best performance at ratios of 1000:1 (users) and 500:1 (jobs).
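As a concrete illustration, the sketch below derives the number of groups from a population-to-group ratio and clusters semantic embeddings with K-Means; the embedding matrix, its dimensionality, and the random seed are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_into_groups(features, ratio):
    # Number of groups is the population size divided by the chosen ratio.
    n_groups = max(1, len(features) // ratio)
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit(features)
    return km.labels_, km.cluster_centers_

# Toy usage: 5000 synthetic user embeddings grouped at the 1000:1 ratio.
user_emb = np.random.rand(5000, 16)
labels, centers = cluster_into_groups(user_emb, ratio=1000)  # 5 user groups
```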

\n
\n
\n

The order of Chebyshev approximation   The order of the Chebyshev approximation greatly impacts the performance of hypergraph wavelet neural networks. To find the best order, we test our model under different orders, and the results are shown in Figure LABEL:fig:para_p. The performance of the model remains essentially constant once the order is greater than or equal to 3. Since the computational overhead of the model grows with the order, we choose the smallest sufficient order as the hyperparameter of our model.
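For reference, a truncated Chebyshev expansion lets the filter be applied with matrix products only, so the cost grows linearly with the order. The minimal sketch below assumes a normalized Laplacian with spectrum in [0, 2] and illustrative coefficients of length p + 1.

```python
import numpy as np

def chebyshev_filter(L, X, coeffs):
    # Shift the spectrum of a normalized Laplacian from [0, 2] to [-1, 1].
    L_hat = L - np.eye(L.shape[0])
    T_prev, T_curr = X, L_hat @ X             # T_0(L_hat) X and T_1(L_hat) X
    out = coeffs[0] * T_prev + coeffs[1] * T_curr
    for c in coeffs[2:]:                      # T_k = 2 L_hat T_{k-1} - T_{k-2}
        T_prev, T_curr = T_curr, 2.0 * (L_hat @ T_curr) - T_prev
        out = out + c * T_curr
    return out
```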

\n
\n
\n

The average length of a session   Session length is another hyperparameter that affects model performance. In Figure LABEL:fig:session_len_a, we can see that the average length of each session is 19-37, and such short behavioral sequences in job recommendation (where users take, on average, more than 80 interactions to find a suitable job) are easily disturbed by noisy interactions. We therefore further compare our proposed framework with the top-2 baselines under different session lengths, as illustrated in Figure LABEL:fig:session_len_b. When the session length is relatively short, noise has a large negative impact on the accuracy of all models; however, as the session length decreases, our framework remains more robust than the other two methods and better resists noise interference.

\n
\n
\n
\n
\n

\n6. Conclusion

\n
\n

This study introduces BISTRO, an innovative framework designed to address job preference drift and the data noise that accompanies it. The framework is structured around three modules: a coarse-grained semantic clustering module, a fine-grained job preference extraction module, and a personalized top- job recommendation module. Specifically, a hypergraph is constructed to handle the preference-drift issue, and a novel hypergraph wavelet learning method is proposed to filter the noise in interactions when extracting job preferences. The effectiveness and interpretability of BISTRO are validated through experiments in both offline and online environments. Looking ahead, we aim to continue refining BISTRO to enhance its applicability in broader contexts, particularly in scenarios characterized by anomalous data.

\n
\n
\n

References

\n
    \n
  • \n
  • \nBian et al. (2019)\n\nShuqing Bian, Wayne Xin Zhao, Yang Song, Tao Zhang, and Ji-Rong Wen. 2019.\n\n\nDomain adaptation for person-job fit with transferable deep global match network. In Proc. of EMNLP.\n\n\n\n\n
  • \n
  • \nBoyd (2001)\n\nJohn P Boyd. 2001.\n\n\nChebyshev and Fourier spectral methods.\n\n\nCourier Corporation.\n\n\n\n\n
  • \n
  • \nBruna et al. (2014)\n\nJoan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. 2014.\n\n\nSpectral Networks and Locally Connected Networks on Graphs. In Proc. of ICLR.\n\n\n\n\n
  • \n
  • \nBu et al. (2010)\n\nJiajun Bu, Shulong Tan, Chun Chen, Can Wang, Hao Wu, Lijun Zhang, and Xiaofei He. 2010.\n\n\nMusic recommendation by unified hypergraph: combining social media information and music content. In Proc. of ACM MM.\n\n\n\n\n
  • \n
  • \nChen et al. (2020)\n\nLei Chen, Le Wu, Richang Hong, Kun Zhang, and Meng Wang. 2020.\n\n\nRevisiting graph based collaborative filtering: A linear residual graph convolutional network approach. In Proc. of AAAI.\n\n\n\n\n
  • \n
  • \nCooper et al. (2014)\n\nColin Cooper, Sang Hyuk Lee, Tomasz Radzik, and Yiannis Siantos. 2014.\n\n\nRandom walks in recommender systems: exact computation and simulations. In Proc. of WWW.\n\n\n\n\n
  • \n
  • \nCremonesi et al. (2010)\n\nPaolo Cremonesi, Yehuda Koren, and Roberto Turrin. 2010.\n\n\nPerformance of recommender algorithms on top-n recommendation tasks. In Proceedings of the fourth ACM conference on Recommender systems.\n\n\n\n\n
  • \n
  • \nDai et al. (2021)\n\nEnyan Dai, Charu Aggarwal, and Suhang Wang. 2021.\n\n\nNrgnn: Learning a label noise resistant graph neural network on sparsely and noisily labeled graphs. In Proc. of KDD.\n\n\n\n\n
  • \n
  • \nDang et al. (2023)\n\nYizhou Dang, Enneng Yang, Guibing Guo, Linying Jiang, Xingwei Wang, Xiaoxiao Xu, Qinghui Sun, and Hong Liu. 2023.\n\n\nUniform Sequence Better: Time Interval Aware Data Augmentation for Sequential Recommendation. In Proc. of AAAI.\n\n\n\n\n
  • \n
  • \nHe et al. (2020)\n\nXiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. 2020.\n\n\nLightgcn: Simplifying and powering graph convolution network for recommendation. In Proc. of SIGIR.\n\n\n\n\n
  • \n
  • \nHidasi et al. (2016)\n\nBalázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. 2016.\n\n\nSession-based Recommendations with Recurrent Neural Networks. In Proc. of ICLR.\n\n\n\n\n
  • \n
  • \nHu et al. (2018)\n\nGuangneng Hu, Yu Zhang, and Qiang Yang. 2018.\n\n\nCoNet: Collaborative Cross Networks for Cross-Domain Recommendation. In Proc. of CIKM.\n\n\n\n\n
  • \n
  • \nHu et al. (2023)\n\nXiao Hu, Yuan Cheng, Zhi Zheng, Yue Wang, Xinxin Chi, and Hengshu Zhu. 2023.\n\n\nBOSS: A Bilateral Occupational-Suitability-Aware Recommender System for Online Recruitment. In Proc. of KDD.\n\n\n\n\n
  • \n
  • \nHuang (2021)\n\nChao Huang. 2021.\n\n\nRecent Advances in Heterogeneous Relation Learning for Recommendation. In Proc. of IJCAI.\n\n\n\n\n
  • \n
  • \nHuang et al. (2021)\n\nYuzhen Huang, Xiaohan Wei, Xing Wang, Jiyan Yang, Bor-Yiing Su, Shivam Bharuka, Dhruv Choudhary, Zewei Jiang, Hai Zheng, and Jack Langman. 2021.\n\n\nHierarchical training: Scaling deep recommendation models on large cpu clusters. In Proc. of KDD.\n\n\n\n\n
  • \n
  • \nInsights ([n. d.])\n\nFortune Business Insights. [n. d.].\n\n\nOnline Recruitment Technology Market: Forecast Report 2030.\n\n\n\n\n\n\n
  • \n
  • \nJian et al. (2024)\n\nLing Jian, Chongzhi Rao, and Xiao Gu. 2024.\n\n\nYour Profile Reveals Your Traits in Talent Market: An Enhanced Person-Job Fit Representation Learning.\n\n\n(2024).\n\n\n\n\n
  • \n
  • \nJiang et al. (2023)\n\nYangqin Jiang, Chao Huang, and Lianghao Huang. 2023.\n\n\nAdaptive graph contrastive learning for recommendation. In Proc. of KDD.\n\n\n\n\n
  • \n
  • \nKannout et al. (2023)\n\nEyad Kannout, Michal Grodzki, and Marek Grzegorowski. 2023.\n\n\nTowards addressing item cold-start problem in collaborative filtering by embedding agglomerative clustering and FP-growth into the recommendation system.\n\n\nComput. Sci. Inf. Syst. (2023).\n\n\n\n\n
  • \n
  • \nKoren et al. (2009)\n\nYehuda Koren, Robert Bell, and Chris Volinsky. 2009.\n\n\nMatrix factorization techniques for recommender systems.\n\n\nComputer (2009).\n\n\n\n\n
  • \n
  • \nLi et al. (2018)\n\nCheng-Te Li, Chia-Tai Hsu, and Man-Kwan Shan. 2018.\n\n\nA Cross-Domain Recommendation Mechanism for Cold-Start Users Based on Partial Least Squares Regression.\n\n\nACM Trans. Intell. Syst. Technol. (2018).\n\n\n\n\n
  • \n
  • \nLi and Li (2013)\n\nLei Li and Tao Li. 2013.\n\n\nNews recommendation via hypergraph learning: encapsulation of user behavior and news content. In Proc. of WSDM.\n\n\n\n\n
  • \n
  • \nLi et al. (2023)\n\nMuyang Li, Zijian Zhang, Xiangyu Zhao, Wanyu Wang, Minghao Zhao, Runze Wu, and Ruocheng Guo. 2023.\n\n\nAutomlp: Automated mlp for sequential recommendations. In Proceedings of the ACM Web Conference 2023.\n\n\n\n\n
  • \n
  • \nLi et al. (2022)\n\nXinhang Li, Zhaopeng Qiu, Xiangyu Zhao, Zihao Wang, Yong Zhang, Chunxiao Xing, and Xian Wu. 2022.\n\n\nGromov-wasserstein guided representation learning for cross-domain recommendation. In Proc. of CIKM.\n\n\n\n\n
  • \n
  • \nLiang et al. (2018)\n\nDawen Liang, Rahul G Krishnan, Matthew D Hoffman, and Tony Jebara. 2018.\n\n\nVariational autoencoders for collaborative filtering. In Proc. of WWW.\n\n\n\n\n
  • \n
  • \nLiao et al. (2021)\n\nShu-Hsien Liao, Retno Widowati, and Yu-Chieh Hsieh. 2021.\n\n\nInvestigating online social media users’ behaviors for social commerce recommendations.\n\n\nTechnology in Society (2021).\n\n\n\n\n
  • \n
  • \nLin et al. (2017)\n\nHao Lin, Hengshu Zhu, Yuan Zuo, Chen Zhu, Junjie Wu, and Hui Xiong. 2017.\n\n\nCollaborative company profiling: Insights from an employee’s perspective. In Proc. of AAAI.\n\n\n\n\n
  • \n
  • \nLin et al. (2022)\n\nZihan Lin, Changxin Tian, Yupeng Hou, and Wayne Xin Zhao. 2022.\n\n\nImproving graph collaborative filtering with neighborhood-enriched contrastive learning. In Proceedings of the ACM Web Conference 2022.\n\n\n\n\n
  • \n
  • \nLiu et al. (2022)\n\nBulou Liu, Bing Bai, Weibang Xie, Yiwen Guo, and Hao Chen. 2022.\n\n\nTask-optimized user clustering based on mobile app usage for cold-start recommendations. In Proc. of KDD.\n\n\n\n\n
  • \n
  • \nLiu et al. (2023b)\n\nQidong Liu, Fan Yan, Xiangyu Zhao, Zhaocheng Du, Huifeng Guo, Ruiming Tang, and Feng Tian. 2023b.\n\n\nDiffusion augmentation for sequential recommendation. In Proc. of CIKM.\n\n\n\n\n
  • \n
  • \nLiu et al. (2018)\n\nQiao Liu, Yifu Zeng, Refuoe Mokhosi, and Haibin Zhang. 2018.\n\n\nSTAMP: Short-Term Attention/Memory Priority Model for Session-based Recommendation. In Proc. of KDD.\n\n\n\n\n
  • \n
  • \nLiu et al. (2021)\n\nZhiwei Liu, Yongjun Chen, Jia Li, Philip S. Yu, Julian J. McAuley, and Caiming Xiong. 2021.\n\n\nContrastive Self-supervised Sequential Recommendation with Robust Augmentation.\n\n\nCoRR (2021).\n\n\n\n\n
  • \n
  • \nLiu et al. (2023a)\n\nZiru Liu, Jiejie Tian, Qingpeng Cai, Xiangyu Zhao, Jingtong Gao, Shuchang Liu, Dayou Chen, Tonghao He, Dong Zheng, Peng Jiang, et al. 2023a.\n\n\nMulti-task recommendations with reinforcement learning. In Proceedings of the ACM Web Conference 2023.\n\n\n\n\n
  • \n
  • \nNing and Karypis (2011)\n\nXia Ning and George Karypis. 2011.\n\n\nSlim: Sparse linear methods for top-n recommender systems. In Proc. of ICDM.\n\n\n\n\n
  • \n
  • \nPang et al. (2022)\n\nYitong Pang, Lingfei Wu, Qi Shen, Yiming Zhang, Zhihua Wei, Fangli Xu, Ethan Chang, Bo Long, and Jian Pei. 2022.\n\n\nHeterogeneous global graph neural networks for personalized session-based recommendation. In Proc. of WSDM.\n\n\n\n\n
  • \n
  • \nPaudel et al. (2016)\n\nBibek Paudel, Fabian Christoffel, Chris Newell, and Abraham Bernstein. 2016.\n\n\nUpdatable, accurate, diverse, and scalable recommendations for interactive applications.\n\n\nACM Transactions on Interactive Intelligent Systems (TiiS) (2016).\n\n\n\n\n
  • \n
  • \nQin et al. (2023)\n\nChuan Qin, Le Zhang, Rui Zha, Dazhong Shen, Qi Zhang, Ying Sun, Chen Zhu, Hengshu Zhu, and Hui Xiong. 2023.\n\n\nA comprehensive survey of artificial intelligence techniques for talent analytics.\n\n\narXiv preprint arXiv:2307.03195 (2023).\n\n\n\n\n
  • \n
  • \nReusens et al. (2018)\n\nMichael Reusens, Wilfried Lemahieu, Bart Baesens, and Luc Sels. 2018.\n\n\nEvaluating recommendation and search in the labor market.\n\n\nKnowledge-Based Systems (2018).\n\n\n\n\n
  • \n
  • \nShao et al. (2023)\n\nTaihua Shao, Chengyu Song, Jianming Zheng, Fei Cai, Honghui Chen, et al. 2023.\n\n\nExploring internal and external interactions for semi-structured multivariate attributes in job-resume matching.\n\n\nInternational Journal of Intelligent Systems (2023).\n\n\n\n\n
  • \n
  • \nSloan and Smith (1980)\n\nIan H Sloan and William E Smith. 1980.\n\n\nProduct integration with the Clenshaw-Curtis points: implementation and error estimates.\n\n\nNumer. Math. (1980).\n\n\n\n\n
  • \n
  • \nSteck (2019)\n\nHarald Steck. 2019.\n\n\nEmbarrassingly shallow autoencoders for sparse data. In Proc. of WWW.\n\n\n\n\n
  • \n
  • \nSun et al. (2019)\n\nFei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. 2019.\n\n\nBERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proc. of CIKM.\n\n\n\n\n
  • \n
  • \nTrefethen (2008)\n\nLloyd N Trefethen. 2008.\n\n\nIs Gauss quadrature better than Clenshaw–Curtis?\n\n\nSIAM review (2008).\n\n\n\n\n
  • \n
  • \nWang et al. (2022)\n\nChenyang Wang, Yuanqing Yu, Weizhi Ma, Min Zhang, Chong Chen, Yiqun Liu, and Shaoping Ma. 2022.\n\n\nTowards representation alignment and uniformity in collaborative filtering. In Proc. of KDD.\n\n\n\n\n
  • \n
  • \nWang et al. (2020b)\n\nChao Wang, Hengshu Zhu, Chen Zhu, Chuan Qin, and Hui Xiong. 2020b.\n\n\nSetrank: A setwise bayesian approach for collaborative ranking from implicit feedback. In Proc. of AAAI.\n\n\n\n\n
  • \n
  • \nWang et al. (2006)\n\nJun Wang, Arjen P De Vries, and Marcel JT Reinders. 2006.\n\n\nUnifying user-based and item-based collaborative filtering approaches by similarity fusion. In Proc. of SIGIR.\n\n\n\n\n
  • \n
  • \nWang et al. (2020a)\n\nJianling Wang, Kaize Ding, Liangjie Hong, Huan Liu, and James Caverlee. 2020a.\n\n\nNext-item recommendation with sequential hypergraphs. In Proc. of SIGIR.\n\n\n\n\n
  • \n
  • \nWang et al. (2019)\n\nXiang Wang, Xiangnan He, Meng Wang, Fuli Feng, and Tat-Seng Chua. 2019.\n\n\nNeural graph collaborative filtering. In Proc. of SIGIR.\n\n\n\n\n
  • \n
  • \nWang et al. (2023)\n\nYuhao Wang, Ha Tsz Lam, Yi Wong, Ziru Liu, Xiangyu Zhao, Yichao Wang, Bo Chen, Huifeng Guo, and Ruiming Tang. 2023.\n\n\nMulti-Task Deep Recommender Systems: A Survey.\n\n\nCoRR (2023).\n\n\n\n\n
  • \n
  • \nWu et al. (2019)\n\nFelix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. 2019.\n\n\nSimplifying graph convolutional networks. In Proc. of ICML.\n\n\n\n\n
  • \n
  • \nWu et al. (2021)\n\nJiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, and Xing Xie. 2021.\n\n\nSelf-supervised graph learning for recommendation. In Proc. of SIGIR.\n\n\n\n\n
  • \n
  • \nWu et al. (2024)\n\nLikang Wu, Zhaopeng Qiu, Zhi Zheng, Hengshu Zhu, and Enhong Chen. 2024.\n\n\nExploring large language model for graph data understanding in online job recommendations. In Proc. of AAAI.\n\n\n\n\n
  • \n
  • \nWu et al. (2016)\n\nYao Wu, Christopher DuBois, Alice X Zheng, and Martin Ester. 2016.\n\n\nCollaborative denoising auto-encoders for top-n recommender systems. In Proc. of WSDM.\n\n\n\n\n
  • \n
  • \nXia et al. (2021)\n\nXin Xia, Hongzhi Yin, Junliang Yu, Qinyong Wang, Lizhen Cui, and Xiangliang Zhang. 2021.\n\n\nSelf-supervised hypergraph convolutional networks for session-based recommendation. In Proc. of AAAI.\n\n\n\n\n
  • \n
  • \nXie et al. (2022)\n\nXu Xie, Fei Sun, Zhaoyang Liu, Shiwen Wu, Jinyang Gao, Jiandong Zhang, Bolin Ding, and Bin Cui. 2022.\n\n\nContrastive learning for sequential recommendation. In 2022 IEEE 38th international conference on data engineering (ICDE).\n\n\n\n\n
  • \n
  • \nYadalam et al. (2020)\n\nTanya V Yadalam, Vaishnavi M Gowda, Vanditha Shiva Kumar, Disha Girish, and M Namratha. 2020.\n\n\nCareer recommendation systems using content based filtering. In 2020 5th International Conference on Communication and Electronics Systems (ICCES).\n\n\n\n\n
  • \n
  • \nYang et al. (2017)\n\nShuo Yang, Mohammed Korayem, Khalifeh AlJadda, Trey Grainger, and Sriraam Natarajan. 2017.\n\n\nCombining content-based and collaborative filtering for job recommendation system: A cost-sensitive Statistical Relational Learning approach.\n\n\nKnowledge-Based Systems (2017).\n\n\n\n\n
  • \n
  • \nYang et al. (2022)\n\nYuhao Yang, Chao Huang, Lianghao Xia, and Chenliang Li. 2022.\n\n\nKnowledge graph contrastive learning for recommendation. In Proc. of SIGIR.\n\n\n\n\n
  • \n
  • \nYao et al. (2021)\n\nTiansheng Yao, Xinyang Yi, Derek Zhiyuan Cheng, Felix Yu, Ting Chen, Aditya Menon, Lichan Hong, Ed H Chi, Steve Tjoa, Jieqi Kang, et al. 2021.\n\n\nSelf-supervised learning for large-scale item recommendations. In Proc. of CIKM.\n\n\n\n\n
  • \n
  • \nZhang et al. (2023)\n\nChi Zhang, Rui Chen, Xiangyu Zhao, Qilong Han, and Li Li. 2023.\n\n\nDenoising and Prompt-Tuning for Multi-Behavior Recommendation. In Proceedings of the ACM Web Conference 2023, WWW 2023, Austin, TX, USA, 30 April 2023 - 4 May 2023.\n\n\n\n\n
  • \n
  • \nZhang et al. (2020)\n\nMengqi Zhang, Shu Wu, Meng Gao, Xin Jiang, Ke Xu, and Liang Wang. 2020.\n\n\nPersonalized graph neural networks with attention mechanism for session-aware recommendation.\n\n\nIEEE Transactions on Knowledge and Data Engineering (2020).\n\n\n\n\n
  • \n
  • \nZhao et al. (2016)\n\nXiangyu Zhao, Tong Xu, Qi Liu, and Hao Guo. 2016.\n\n\nExploring the choice under conflict for social event participation. In Proc. of DASFAA.\n\n\n\n\n
  • \n
  • \nZhao et al. (2018)\n\nXiangyu Zhao, Liang Zhang, Zhuoye Ding, Long Xia, Jiliang Tang, and Dawei Yin. 2018.\n\n\nRecommendations with negative feedback via pairwise deep reinforcement learning. In Proc. of KDD.\n\n\n\n\n
  • \n
  • \nZheng et al. (2021)\n\nJiawei Zheng, Qianli Ma, Hao Gu, and Zhenjing Zheng. 2021.\n\n\nMulti-view denoising graph auto-encoders on heterogeneous information networks for cold-start recommendation. In Proc. of KDD.\n\n\n\n\n
  • \n
  • \nZheng et al. (2024)\n\nZhi Zheng, Xiao Hu, Zhaopeng Qiu, Yuan Cheng, Shanshan Gao, Yang Song, Hengshu Zhu, and Hui Xiong. 2024.\n\n\nBilateral Multi-Behavior Modeling for Reciprocal Recommendation in Online Recruitment.\n\n\nIEEE Transactions on Knowledge and Data Engineering (2024).\n\n\n\n\n
  • \n
  • \nZheng et al. (2022)\n\nZhi Zheng, Zhaopeng Qiu, Tong Xu, Xian Wu, Xiangyu Zhao, Enhong Chen, and Hui Xiong. 2022.\n\n\nCBR: context bias aware recommendation for debiasing user modeling and click prediction. In Proceedings of the ACM Web Conference 2022.\n\n\n\n\n
  • \n
\n
\n
\n

\nAppendix A Notations

\n
\n

We summarize all notations used in this paper and list them in Table 6.

\n
\n
\n
Table 6. Notations in this paper. [Table rendered with \csvreader from data/notation.txt; columns: Notation, Description.]

\n
\n
\n
\n
\n

\nAppendix B Model Complexity

\n
\n

Our proposed framework, BISTRO, is efficient. We analyze its efficiency at two levels: theoretical [from to ] and application (less than per sample).

\n
\n
\n

Theoretical level

\n
\n
\n

Among all modules in BISTRO, the graph neural network (GNN) is the most time-consuming; indeed, graph-learning-based recommender systems face computational limitations in practice. To address this issue, we cluster users (jobs) based on the semantic information in their resumes (requirements) using the K-Means algorithm, and use a simple RNN to extract personalized preferences for each person within a user group. Extracting preference features at the level of user (job) groups reduces the computational overhead of the GNN. Since the eigendecomposition in GNNs is resource-intensive, we replace it with a Chebyshev polynomial approximation, whose coefficients can be computed via the Fast Fourier Transform; this reduces the computational complexity from exponential to . Therefore, our algorithm is efficient.
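To illustrate the FFT-friendly step, the sketch below computes truncated Chebyshev coefficients of a filter kernel from samples at Chebyshev nodes. The direct cosine sum shown here is exactly what a DCT/FFT accelerates; the kernel and order are illustrative assumptions, not BISTRO's exact routine.

```python
import numpy as np

def chebyshev_coefficients(g, order, lam_max=2.0):
    k = np.arange(order + 1)
    theta = np.pi * (k + 0.5) / (order + 1)          # Chebyshev nodes on [-1, 1]
    samples = g(lam_max / 2.0 * (np.cos(theta) + 1.0))  # map nodes to [0, lam_max]
    c = 2.0 / (order + 1) * np.cos(np.outer(k, theta)) @ samples
    c[0] /= 2.0                                       # standard halving of c_0
    return c

# Coefficients of a heat-style wavelet kernel, truncated at order 3.
coeffs = chebyshev_coefficients(lambda lam: lam * np.exp(-lam), order=3)
```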

\n
\n
\n

Application level

\n
\n
\n

In practice, the training of all models is performed offline. For example, we use a Spark cluster to compute the clustering center of each group and use an HGNN to learn the corresponding group representations. New or updated users and jobs are assigned to the nearest group based on semantic clustering. Only the RNN module operates online, inferring personalized user representations within groups. In the online experiment, the 99th-percentile response time of BISTRO is less than .
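The only per-request graph-side computation is a nearest-centroid lookup, sketched below under the assumption that the group centroids were computed offline (e.g., by K-Means on the Spark cluster); shapes and values are illustrative.

```python
import numpy as np

def assign_to_group(embedding, centroids):
    # Euclidean nearest-centroid lookup; runs online in O(groups * dim).
    return int(np.linalg.norm(centroids - embedding, axis=1).argmin())

# Toy usage with hypothetical offline centroids.
centroids = np.random.rand(5, 16)
group_id = assign_to_group(np.random.rand(16), centroids)
```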

\n
\n
\n

\nAppendix C Experiment Detail

\n
\n

\nC.1. Baselines Detail

\n
\n

The details of these baselines are as follows:

\n
\n
\n

BasicMF (Koren et al., 2009): A model that combines matrix factorization with a Multilayer Perceptron (MLP) for recommendations.

\n
\n
\n

ItemKNN (Wang et al., 2006): A recommender that utilizes item-based collaborative filtering.

\n
\n
\n

PureSVD (Cremonesi et al., 2010): An approach that applies Singular Value Decomposition for recommendation tasks.

\n
\n
\n

SLIM (Ning and Karypis, 2011): A recommendation method known as the Sparse Linear Method.

\n
\n
\n

DAE (Wu et al., 2016): Stands for Collaborative Denoising Auto-Encoder, used in recommendation systems.

\n
\n
\n

MultVAE (Liang et al., 2018): A model extending Variational Autoencoders to collaborative filtering for implicit feedback.

\n
\n
\n

EASE (Steck, 2019): A recommendation technique called Embarrassingly Shallow Autoencoders for Sparse Data.

\n
\n
\n

P3a (Cooper et al., 2014): A method that uses ordering rules from random walks on a user-item graph.

\n
\n
\n

RP3b (Paudel et al., 2016): A recommender that re-ranks items based on 3-hop random walk transition probabilities.

\n
\n
\n

NGCF (Wang et al., 2019): Employs graph embedding propagation layers to generate user/item representations.

\n
\n
\n

LightGCN (He et al., 2020): Utilizes neighborhood information in the user-item interaction graph.

\n
\n
\n

SLRec (Yao et al., 2021): A method using contrastive learning among node features.

\n
\n
\n

SGL (Wu et al., 2021): Enhances LightGCN with self-supervised contrastive learning.

\n
\n
\n

GCCF (Chen et al., 2020): A multi-layer graph convolutional network for recommendation.

\n
\n
\n

NCL (Lin et al., 2022): Enhances recommendation models with neighborhood-enriched contrastive learning.

\n
\n
\n

DirectAU (Wang et al., 2022): Focuses on the quality of representation based on alignment and uniformity.

\n
\n
\n

HG-GNN (Pang et al., 2022): Constructs a heterogeneous graph with both user nodes and item nodes and uses a graph neural network to learn node embeddings as latent representations of users or items.

\n
\n
\n

A-PGNN (Zhang et al., 2020): Uses a GNN to extract session representations for intra-session interactions and an attention mechanism to learn features between sessions.

\n
\n
\n

AdaGCL (Jiang et al., 2023): Combines a graph generator and a graph denoising model for contrastive views.

\n
\n
\n

MvDGAE (Zheng et al., 2021): Stands for Multi-view Denoising Graph AutoEncoders.

\n
\n
\n

STAMP (Liu et al., 2018): A model based on the attention mechanism to model user behavior sequence data.

\n
\n
\n

GRU4Rec (Hidasi et al., 2016): Utilizes Gated Recurrent Units for session-based recommendations.

\n
\n
\n

BERT4Rec (Sun et al., 2019): A model for sequence-based recommendation that handles long user behavior sequences.

\n
\n
\n

CL4Rec (Xie et al., 2022): Applies contrastive learning with sequence-level augmentations to sequential recommendation.

\n
\n
\n

CoScRec (Liu et al., 2021): An approach that enhances sequential recommendation through robust data augmentation and contrastive self-supervised learning.

\n
\n
\n

TiCoSeRec (Dang et al., 2023): A method based on CoSeRec, utilizing data augmentation algorithms to improve sequence recommendation.

\n
\n
\n
\n

\nC.2. The number of recommended jobs

\n
\n

The hyperparameter, , also has a critical impact on the experimental results. We set and to conduct experiments respectively. The experimental results are shown in Table 7, from which it can be seen that our model performs well under all settings.
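For clarity on the reported metrics, a per-user computation of H@k and M@k (reading M@10 as MRR@10, which is an assumption) can be sketched as follows.

```python
def hit_and_mrr_at_k(ranked_items, relevant, k=10):
    topk = ranked_items[:k]
    hit = 1.0 if any(item in relevant for item in topk) else 0.0
    mrr = 0.0
    for rank, item in enumerate(topk, start=1):
        if item in relevant:
            mrr = 1.0 / rank              # reciprocal rank of the first hit
            break
    return hit, mrr

# Toy usage: the ground-truth job 'j3' is ranked fourth -> (1.0, 0.25).
print(hit_and_mrr_at_k(['j9', 'j1', 'j7', 'j3', 'j5'], {'j3'}, k=10))
```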

\n
\n
\n
Table 7. Results under different settings of . [Table rendered with \csvreader from data/exp_para_k.txt.]

\n
\n
\n
\n
\n

\nC.3. Case Study

\n
\n
[Figure 5 schematic: users UID***175, UID***479, UID***013; jobs JID***872 and JID***994 (newly posted) and JID***265, JID***523; resume attributes (Age: aa; Skill: bb, cc / ee, ff; Degree: dd); job requirements (Skill Req: bb+cc or ee+ff; Degree Req: dd; Experience: hh; Location: gg); UID***175 revises the resume; edges labeled Expect, Click, and Rec.]
Figure 5. A real-life scenario for the job recommendation.
\n
\n
\n

Beyond its effectiveness in performance, BISTRO also boasts considerable interpretability. To demonstrate how the framework mitigates both the job preference drift and data noise problems, we present a real-life scenario illustrating the logic behind the suggestions made by BISTRO, as shown in Figure 5.

\n
\n
\n

In this figure, jobs ***872 and ***994 are positions newly posted in the online recruitment system, while ***265 and ***523 are positions with which a large number of users interact frequently. Among them, ***872 and ***265, as well as ***994 and ***523, have similar occupational demand descriptions. Also, user ***175 shared a similar resume with user ***479 before ***175 modified the resume; after the resume was changed, ***175 had content similar to user ***013. Recommendations in this scenario can be divided into three examples:

\n
\n
\n

Example 1 (Recommendation for a dynamically changing user)  Consider the user with ID ***175. BISTRO addresses this challenge by deploying content-based analysis: the framework uses the user's social network and the collected resume attributes to create a composite feature profile and identify users with similar tastes. It then recommends the job with ID ***523, favored by the like-minded user ***013, to this user.
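A minimal sketch of this like-minded-user lookup, assuming hypothetical composite profile vectors built from resume attributes and social features:

```python
import numpy as np

def most_similar_user(profile, other_profiles):
    # Cosine similarity between composite feature profiles.
    unit = lambda v: v / np.linalg.norm(v)
    sims = np.array([unit(profile) @ unit(p) for p in other_profiles])
    return int(sims.argmax())

# Toy usage: user ***175's profile matched against two candidates.
neighbor = most_similar_user(np.array([0.9, 0.1, 0.4]),
                             [np.array([0.8, 0.2, 0.5]),   # e.g., ***013
                              np.array([0.1, 0.9, 0.2])])  # e.g., ***479
```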

\n
\n
\n

Example 2 (Recommendation for a new job)  A newly posted job with ID ***872 lacks any user interaction data, complicating the generation of a meaningful representation for it. BISTRO overcomes this by incorporating auxiliary information such as skill requirements and working experience, and then uses the associated tags to locate similar content. By combining this approach with the user's expectations, BISTRO acquires a rich and informative embedding for the job, enabling it to recommend the job to users who have shown interest in comparable jobs.

\n
\n
\n

Example 3 (Recommending a new job to a dynamically changing user)  Combining the two situations illustrated above, BISTRO handles this compound challenge with its wavelet graph denoising filter and graph representation method. In this way, it can recommend the latest jobs with similar job content to users who have the same real-time needs as well as similar profile characteristics.

\n
\n
\n
Table 8. Results of different job recommender systems. [Table rendered with \csvreader from data/app_case.txt; column groups: Shenzhen, Shanghai, Beijing; metrics: H@10 and M@10 per city.]
\n
\n
\n

Bold indicates statistically significant improvements (i.e., two-sided t-test with p < 0.05) over the best baseline (underlined).

\n
\n
\n
\n

For all metrics: the higher, the better.

\n
\n
\n
\n

In addition, we compare the proposed framework, BISTRO, with multiple state-of-the-art job recommender systems, i.e., InEXIT (Shao et al., 2023), DGMN (Bian et al., 2019), and APJFMF (Jian et al., 2024). The results can be found in Table 8. Our framework achieves the best performance among all baselines, which verifies the effectiveness of our method.
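The significance test behind the bold markers can be reproduced as follows; the per-user score arrays are hypothetical placeholders, and SciPy's paired two-sided t-test is used.

```python
import numpy as np
from scipy import stats

# Hypothetical per-user H@10 scores for BISTRO and the best baseline.
bistro = np.array([0.62, 0.58, 0.71, 0.66, 0.60, 0.64])
baseline = np.array([0.55, 0.57, 0.64, 0.61, 0.59, 0.60])

t_stat, p_value = stats.ttest_rel(bistro, baseline)  # two-sided paired t-test
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, significant = {p_value < 0.05}")
```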

\n
\n
", - "perturb_sentence_id": 7, + "perturb_sentence_id": [ + 7, + 8 + ], "output": { - "perturbed_statement": "[paragraph id = 6] As shown in Table 5.4, the density of the graph is halved when we add all types of hyperedges and the experimental results have also not varied notably due to the optimization of the structure in the graph.", - "perturbed_explanation": "1. The statement outlines the effect of introducing all types of hyperedges on the graph's density and experimental outcomes. 2. It's stated that the graph density is reduced and no significant experimental improvement occurs; however, this directly contradicts the original passage, where it's evident the density doubles and improvements are noted." + "perturbed_statement": "[paragraph id = 7] [ tabular=cccc, table head= Density H@5 M@10 , late after line= , late after last line= ]data/exp_hyperedge.txt\\csvlinetotablerow Non-Noise Add 10% Noise Add 20% Noise Add 50% Noise The validity of hypergraph wavelet filter In BISTRO, we design a novel hypergraph wavelet learning method.In this learning method, a wavelet filter is deployed for data denoising as well as fine-grained job preference feature extraction.", + "perturbed_explanation": "The comparison results are shown in Table 5.4 and the experimental results are improved through optimizing the graph structure. As shown in Table 5.4, the graph density doubles with the addition of all hyperedges, which improves the experimental results. In the statement, replacing H@10 with H@5 introduces an inconsistency because there's no mention of H@5 in the context, and the change in density doubling as described would correspond with H@10, not H@5." } } ] \ No newline at end of file