Columns: sample_id (string, 55–57 chars) · image (width 34–1.22k px) · dataset_name (1 class) · parsing_results (string, 136–162k chars) · parsing_queries (10 classes) · caption (string, 4–1.62k chars) · caption_instructions (10 classes)
2210.07987v3_train__8006cbca-9955-4dd0-9b71-5b0e6721c0a5
arxiv_table_to_tex
\begin{table}[htbp]\caption{Time series modeling: root mean squared error (RMSE) and log-likelihood (LL) values at MAP estimates by GP, Besov and Q-EP prior models.}\centering\begin{tabular}{l|lll|lll}\toprule & \multicolumn{3}{c|}{root mean squared error (RMSE)} & \multicolumn{3}{|c}{log-likelihood (LL)} \\ \cmidrule{2-4} \cmidrule{5-7} Data Sets & GP & Besov & Q-EP & GP & Besov & Q-EP \\ \midrule simulation (jumps) & 1.2702 & 2.1603 & {\bf 1.1083} & -31.4582 & -89.8549 & -74.0590 \\ simulation (turnings) & 1.4270 & 2.4556 & {\bf 0.9987} & -39.8234 & -56.7874 & -87.3124 \\ \midrule Tesla stocks & 180.3769 & 136.8769 & {\bf 51.2236} & -488.6458 & -281.3796 & -39.4070 \\ Google stocks & 44.4236 & 39.4809 & {\bf 36.8686} & -386.1546 & -305.0058 & -265.9790 \\ \bottomrule\end{tabular}\label{tab:ts_rmse}\end{table}
Could you please provide me the LaTeX code to generate this table?
Time series modeling: root mean squared error (RMSE) and log-likelihood (LL) values at MAP estimates by GP, Besov and Q-EP prior models.
I need a caption for this LaTeX table, please.
2210.07987v3_train__e231f49d-0a40-471f-83ba-8e6266d8b35b
arxiv_table_to_tex
\begin{table*}[htbp]\caption{MAP estimates for CT of human head and torso by GP, Besov and Q-EP prior models: relative error, $\textrm{RLE}:=\Vert \hat u-u^\dagger\Vert/\Vert u^\dagger\Vert$ of MAP ($\hat u =u^*$), log-likelihood (LL), PSNR, SSIM and HaarPSI.}\centering\begin{tabular}{l|lll|lll}\toprule \multicolumn{4}{c|}{Head} & \multicolumn{3}{|c}{Torso} \\ \cmidrule{1-4} \cmidrule{5-7} & GP & Besov & Q-EP & GP & Besov & Q-EP \\ \midrule RLE & 0.2999 & 0.2241 & {\bf 0.2224} & 0.2611 & 0.2177 & {\bf 0.2153} \\ LL & -4.05e+5 & -1.12e+4 & -1.17e+4 & -3.30e+5 & -3.86e+3 & -4.37e+3 \\ PSNR & 24.2321 & 26.7633 & {\bf 26.8281} & 23.6450 & 25.2231 & {\bf 25.3190} \\ SSIM & 0.7010 & 0.7914 & {\bf 0.8096} & 0.5852 & {\bf 0.6983} & 0.6982 \\ HaarPSI & 0.0525 & {\bf 0.0593} & 0.0587 & 0.0666 & {\bf 0.0732} & 0.0719 \\ \bottomrule\end{tabular}\label{tab:CT_human}\end{table*}
Provide LaTeX code for this table.
MAP estimates for CT of human head and torso by GP, Besov and Q-EP prior models: relative error, $\textrm{RLE}:=\Vert \hat u-u^\dagger\Vert/\Vert u^\dagger\Vert$ of MAP ($\hat u =u^*$), log-likelihood (LL), PSNR, SSIM and HaarPSI.
I need a caption for this LaTeX table, please.
2312.04066v2_train__aa24127c-f14d-4d8f-b39a-e4857adfc45f
arxiv_table_to_tex
\begin{table*}\caption{Accuracy results on DomainNet dataset. ZS indicates zero-shot results of CLIP.}\label{tab:domainNet}\centering\resizebox{\textwidth}{!}{\begin{tabular}{c|ccccccc||c|ccccccc||c|ccccccc}\toprule MCD \cite{MCD} & clp & inf & pnt & qdr & rel & skt & Avg & CGDM \cite{CGDM} & clp & inf & pnt & qdr & rel & skt & Avg & MDD \cite{MDD} & clp & inf & pnt & qdr & rel & skt & Avg \\ \midrule clp & - & 15.4 & 25.5 & 3.3 & 44.6 & 31.2 & 24.0 & clp & - & 16.9 & 35.3 & 10.8 & 53.5 & 36.9 & 30.7 & clp & - & 20.5 & 40.7 & 6.2 & 52.5 & 42.1 & 32.4 \\inf & 24.1 & - & 24.0 & 1.6 & 35.2 & 19.7 & 20.9 & inf & 27.8 & - & 28.2 & 4.4 & 48.2 & 22.5 & 26.2 & inf & 33.0 & - & 33.8 & 2.6 & 46.2 & 24.5 & 28.0 \\pnt & 31.1 & 14.8 & - & 1.7 & 48.1 & 22.8 & 23.7 & pnt & 37.7 & 14.5 & - & 4.6 & 59.4 & 33.5 & 30.0 & pnt & 43.7 & 20.4 & - & 2.8 & 51.2 & 41.7 & 32.0 \\qdr & 8.5 & 2.1 & 4.6 & - & 7.9 & 7.1 & 6.0 & qdr & 14.9 & 1.5 & 6.2 & - & 10.9 & 10.2 & 8.7 & qdr & 18.4 & 3 & 8.1 & - & 12.9 & 11.8 & 10.8 \\rel & 39.4 & 17.8 & 41.2 & 1.5 & - & 25.2 & 25.0 & rel & 49.4 & 20.8 & 47.2 & 4.8 & - & 38.2 & 32.0 & rel & 52.8 & 21.6 & 47.8 & 4.2 & - & 41.2 & 33.5 \\skt & 37.3 & 12.6 & 27.2 & 4.1 & 34.5 & - & 23.1 & skt & 50.1 & 16.5 & 43.7 & 11.1 & 55.6 & - & 35.4 & skt & 54.3 & 17.5 & 43.1 & 5.7 & 54.2 & - & 35 \\Avg & 28.1 & 12.5 & 24.5 & 2.4 & 34.1 & 21.2 & \cellcolor[gray]{0.8} 20.5 & Avg & 36 & 14 & 32.1 & 7.1 & 45.5 & 28.3 & \cellcolor[gray]{0.8} 27.2 & Avg & 40.4 & 16.6 & 34.7 & 4.3 & 43.4 & 32.3 & \cellcolor[gray]{0.8} 28.6 \\ \midrule\midrule SCDA \cite{SCDA} & clp & inf & pnt & qdr & rel & skt & Avg & CDTrans \cite{CDTrans} & clp & inf & pnt & qdr & rel & skt & Avg & SSRT \cite{SSRT} & clp & inf & pnt & qdr & rel & skt & Avg \\ \midrule clp & - & 18.6 & 39.3 & 5.1 & 55 & 44.1 & 32.4 & clp & - & 29.4 & 57.2 & 26 & 72.6 & 58.1 & 48.7 & clp & - & 33.8 & 60.2 & 19.4 & 75.8 & 59.8 & 49.8 \\inf & 29.6 & - & 34 & 1.4 & 46.3 & 25.4 & 27.3 & inf & 57 & - & 54.4 & 12.8 & 69.5 & 
48.4 & 48.4 & inf & 55.5 & - & 54 & 9 & 68.2 & 44.7 & 46.3 \\pnt & 44.1 & 19 & - & 2.6 & 56.2 & 42 & 32.8 & pnt & 62.9 & 27.4 & - & 15.8 & 72.1 & 53.9 & 46.4 & pnt & 61.7 & 28.5 & - & 8.4 & 71.4 & 55.2 & 45 \\qdr & 30.0 & 4.9 & 15 & - & 25.4 & 19.8 & 19 & qdr & 44.6 & 8.9 & 29 & - & 42.6 & 28.5 & 30.7 & qdr & 42.5 & 8.8 & 24.2 & - & 37.6 & 33.6 & 29.3 \\rel & 54 & 22.5 & 51.9 & 2.3 & - & 42.5 & 34.6 & rel & 66.2 & 31 & 61.5 & 16.2 & - & 52.9 & 45.6 & rel & 69.9 & 37.1 & 66 & 10.1 & - & 58.9 & 48.4 \\skt & 55.6 & 18.5 & 44.7 & 6.4 & 53.2 & - & 35.7 & skt & 69 & 29.6 & 59 & 27.2 & 72.5 & - & 51.5 & skt & 70.6 & 32.8 & 62.2 & 21.7 & 73.2 & - & 52.1 \\Avg & 42.6 & 16.7 & 37 & 3.6 & 47.2 & 34.8 & \cellcolor[gray]{0.8} 30.3 & Avg & 59.9 & 25.3 & 52.2 & 19.6 & 65.9 & 48.4 & \cellcolor[gray]{0.8} 45.2 & Avg & 60 & 28.2 & 53.3 & 13.7 & 65.3 & 50.4 & \cellcolor[gray]{0.8} 45.2 \\ \midrule\midrule PMTrans \cite{PMTrans} & clp & inf & pnt & qdr & rel & skt & Avg & ZS \cite{CLIP} & clp & inf & pnt & qdr & rel & skt & Avg & SKD (Ours) & clp & inf & pnt & qdr & rel & skt & Avg \\ \midrule clp & - & 34.2 & 62.7 & 32.5 & 79.3 & 63.7 & 54.5 & clp & - & 48.3 & 68.5 & 14.0 & 84.6 & 65.8 & 56.3 & clp & - & 52.7 & 72.0 & 36.3 & 85.1 & 71.5 & 63.5 \\inf & 67.4 & - & 61.1 & 22.2 & 78 & 57.6 & 57.3 & inf & 73.1 & - & 68.5 & 14.0 & 84.6 & 65.8 & 61.2 & inf & 78.5 & - & 72.2 & 30.4 & 85.0 & 69.5 & 67.1 \\pnt & 69.7 & 33.5 & - & 23.9 & 79.8 & 61.2 & 53.6 & pnt & 73.1 & 48.3 & - & 14.0 & 84.6 & 65.8 & 57.2 & pnt & 78.7 & 49.8 & - & 29.6 & 84.5 & 69.7 & 62.4 \\qdr & 54.6 & 17.4 & 38.9 & - & 49.5 & 41 & 40.3 & qdr & 73.1 & 48.3 & 68.5 & - & 84.6 & 65.8 & 68.1 & qdr & 70.7 & 22.9 & 50.9 & - & 82.4 & 62.3 & 57.8 \\rel & 74.1 & 35.3 & 70 & 25.4 & - & 61.1 & 53.2 & rel & 73.1 & 48.3 & 68.5 & 14.0 & - & 65.8 & 54.0 & rel & 80.3 & 53.5 & 74.2 & 30.2 & - & 71.1 & 61.9 \\skt & 73.8 & 33.0 & 62.6 & 30.9 & 77.5 & - & 55.6 & skt & 73.1 & 48.3 & 68.5 & 14.0 & 84.6 & - & 57.7 & skt & 81.0 & 52.0 & 73.0 & 36.6 & 
84.6 & - & 65.4 \\Avg & 67.9 & 30.7 & 59.1 & 27 & 72.8 & 56.9 & \cellcolor[gray]{0.8} 52.4 & Avg & 73.1 & 48.3 & 68.5 & 14.0 & 84.6 & 65.8 & \cellcolor[gray]{0.8} 59.1 & Avg & 77.9 & 46.2 & 68.5 & 32.6 & 84.3 & 68.8 & \cellcolor[gray]{0.8} \textbf{63.0} \\\bottomrule\end{tabular}}\end{table*}
I need the LaTeX code to create this table, please.
Accuracy results on DomainNet dataset. ZS indicates zero-shot results of CLIP.
Could you generate a caption for this LaTeX table?
2312.04066v2_train__6b67ec37-234a-4972-a8c5-6bcec9f037b7
arxiv_table_to_tex
\begin{table*}\caption{Contributions of different parts of our algorithm. Aug means using strongly augmented images alongside the weakly augmented ones; GSDE uses only the prediction scores of the previous run, while GSDE+ adds zero-shot prediction scores according to eq.~\ref{eq:GSDE}.}\label{tab:losses}\centering\begin{tabular}{lllll|ll|ll}\toprule\multicolumn{5}{c|}{} & \multicolumn{2}{c|}{ResNet} & \multicolumn{2}{c}{ViT} \\ $L_{AD}$ & $L_{KD}$ & Aug & GSDE & GSDE+ & VisDA & OH & VisDA & OH \\ \midrule $\checkmark$ & & & & & 72.73 & 65.99 & 74.25 & 81.74 \\ & $\checkmark$ & & & & 87.65 & 69.06 & 90.28 & 83.27 \\ $\checkmark$ & $\checkmark$ & & & & 88.21 & 73.43 & 90.44 & 87.66 \\ $\checkmark$ & $\checkmark$ & $\checkmark$ & & & 88.62 & 75.48 & 90.37 & 87.97 \\ $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & & 88.90 & 78.72 & \textbf{91.22} & 89.08 \\ $\checkmark$ & $\checkmark$ & $\checkmark$ & & $\checkmark$ & \textbf{89.09} & \textbf{78.78} & 91.20 & \textbf{89.24} \\ \bottomrule\end{tabular}\end{table*}
Please create LaTeX code to produce this table.
Contributions of different parts of our algorithm. Aug means using strongly augmented images alongside the weakly augmented ones; GSDE uses only the prediction scores of the previous run, while GSDE+ adds zero-shot prediction scores according to eq.~\ref{eq:GSDE}.
Please write a caption that describes this LaTeX table.
2210.16898v1_train__8c72db1d-81f3-4e32-b471-6557d11e7acd
arxiv_table_to_tex
\begin{table*}[t]\centering\caption{Performance comparison of the proposed method against the SOTA approaches on the skin lesion segmentation task.} \label{tab:quantitative}\resizebox{\textwidth}{!}{\begin{tabular}{l||cccc||cccc||cccc}\hline\multirow{2}{*}{\textbf{Methods}} & \multicolumn{4}{c||}{\textit{ISIC 2017}} & \multicolumn{4}{c||}{\textit{ISIC 2018}} & \multicolumn{4}{c}{\textit{PH$^2$}} \\ \cline{2-5} \cline{6-9} \cline{10-13} & \textbf{DSC} & \textbf{SE} & \textbf{SP} & \textbf{ACC} & \textbf{DSC} & \textbf{SE} & \textbf{SP} & \textbf{ACC} & \textbf{DSC} & \textbf{SE} & \textbf{SP} & \textbf{ACC} \\ \hline U-Net \cite{ronneberger2015u} & 0.8159 & 0.8172 & 0.9680 & 0.9164 & 0.8545 & 0.8800 & 0.9697 & 0.9404 & 0.8936 & 0.9125 & 0.9588 & 0.9233 \\Att U-Net \cite{oktay2018attention} & 0.8082 & 0.7998 & 0.9776 & 0.9145 & 0.8566 & 0.8674 & \textbf{0.9863} & 0.9376 & 0.9003 & 0.9205 & 0.9640 & 0.9276 \\TransUNet \cite{chen2021transunet} & 0.8123 & 0.8263 & 0.9577 & 0.9207 & 0.8499 & 0.8578 & 0.9653 & 0.9452 & 0.8840 & 0.9063 & 0.9427 & 0.9200 \\MCGU-Net \cite{asadi2020multi} & 0.8927 & 0.8502 & 0.9855 & 0.9570 & 0.895 & 0.848 & 0.986 & 0.955 & 0.9263 & 0.8322 & 0.9714 & 0.9537 \\MedT \cite{valanarasu2021medical} & 0.8037 & 0.8064 & 0.9546 & 0.9090 & 0.8389 & 0.8252 & 0.9637 & 0.9358 & 0.9122 & 0.8472 & 0.9657 & 0.9416\\FAT-Net \cite{wu2022fat} & 0.8500 & 0.8392 & 0.9725 & 0.9326 & 0.8903 & \textbf{0.9100} & 0.9699 & 0.9578 & 0.9440 & \textbf{0.9441} & 0.9741 & 0.9703 \\TMU-Net \cite{reza2022contextual} & 0.9164 & 0.9128 & 0.9789 & 0.9660 & 0.9059 & 0.9038 & 0.9746 & 0.9603 & 0.9414 & 0.9395 & 0.9756 & 0.9647 \\Swin\,U-Net \cite{cao2021swin} & 0.9183 & 0.9142 & 0.9798 & \textbf{0.9701} & 0.8946 & 0.9056 & 0.9798 & 0.9645 & 0.9449 & 0.9410 & 0.9564 & 0.9678 \\TransNorm \cite{azad2022transnorm} & 0.8933 & 0.8532 & \textbf{0.9859} & 0.9582 & 0.8951 & 0.8750 & 0.9790 & 0.9580 & 0.9437 & 0.9438 & \textbf{0.9810} & \textbf{0.9723} \\\hline\textbf{Proposed Method} & \textbf{0.9240} & 
\textbf{0.9246} & 0.9794 & 0.9656 & \textbf{0.9105} & 0.9089 & 0.9807 & \textbf{0.9668} & \textbf{0.9504} & 0.9439 & 0.9576 & 0.9685 \\ \hline\end{tabular}}\end{table*}
Create LaTeX code to produce this table.
Performance comparison of the proposed method against the SOTA approaches on the skin lesion segmentation task.
Could you provide a caption for this LaTeX table?
2002.11612v2_train__5fa229e4-dacc-4455-9f91-5a2c2c383919
arxiv_table_to_tex
\begin{table}\caption{Biological significance for predicted pathways with the best methods, GCC-10 and PathLinker-RWR.}\begin{tabular}{p{7mm}|c|c|c|c|c|c|c|c|c|c|}\cline{2-11} & \multicolumn{10}{ |c| }{k} \\ \cline{1-11}\multicolumn{1}{ |c| }{Method} & 10 & 20 & 30 & 40 & 50 & 60 & 70 & 80 & 90 & 100 \\ \cline{1-11}\multicolumn{1}{ |c| }{GCC-10} & 1.00 & 1.00 & 0.94 & 0.83 & 0.84 & 0.85 & 0.88 & 0.88 & 0.84 & 0.81 \\\cline{1-11}\multicolumn{1}{ |p{1.6cm}| }{PathLinker-RWR} & 0.90 & 0.88 & 0.81 & 0.78 & 0.82 & 0.73 & 0.74 & 0.76 & 0.77 & 0.71\\\hline\end{tabular}\label{enrich}\end{table}
Create LaTeX code to produce this table.
Biological significance for predicted pathways with the best methods, GCC-10 and PathLinker-RWR.
Please write a caption that describes this LaTeX table.
2208.02402v2_train__a33a9102-70b9-4655-974f-aec9ad4f4e89
arxiv_table_to_tex
\begin{table}[htbp]\newcommand{\acell}{ \cellcolor{DimGray} }\newcommand{\bcell}{ \cellcolor{Gainsboro} }\newcommand{\eol}{ \arrayrulecolor{white} \specialrule{10pt}{0pt}{0pt} }\newcommand{\eoc}{\color{white}\vline width 1pt}\resizebox{\linewidth}{!}{\renewcommand*{\arraystretch}{0.0}\begin{tabular}{lc!\eoc c!\eoc c!\eoc c!\eoc c!\eoc c!\eoc c!\eoc c!\eoc c!\eoc c!\eoc c!\eoc c!\eoc}\toprule & & & \footnotesize 25\% & & & \footnotesize 50\% & & & \footnotesize 75\% & $\downarrow$ & & \footnotesize 100\% \\ & Moctezuma & was & the & son & of & the & emperor & Huitzilihuitl & and & \texttt{\_\_\_} & \texttt{\_\_\_} & \texttt{\_\_\_} \\\cmidrule{2-13} Right crop 0\% & & & & & & & & & \\ \eol Right crop 25\% & \acell & \acell & \acell \\ \eol Right crop 50\% & \acell & \acell & \acell & \acell & \acell & \acell \\ \eol Right crop 75\% & \acell & \acell & \acell & \acell & \acell & \acell & \acell & \acell & \acell \\ \eol Right crop 100\% & \acell & \acell & \acell & \acell & \acell & \acell & \acell & \acell & \acell & \bcell & \bcell & \bcell \\ \eol Left crop 0\% & \acell & \acell & \acell & \acell & \acell & \acell & \acell & \acell & \acell & \bcell & \bcell & \bcell\\ \eol Left crop 25\% & & & \acell & \acell & \acell & \acell & \acell & \acell & \acell & \bcell & \bcell & \bcell \\ \eol Left crop 50\% & & & & & & \acell & \acell & \acell & \acell & \bcell & \bcell & \bcell \\ \eol Left crop 75\% & & & & & & & & & \acell & \bcell & \bcell & \bcell \\ \eol Left crop 100\% & & & & & & & & & & & & \bcell \\\arrayrulecolor{black} \bottomrule\end{tabular}}\caption{Consecutive dark grey rectangles indicate the substring over which the sentence representation was computed and fused. The representation is not computed beyond the to-be-predicted token ($\downarrow$). Light grey words are cropped because they are not in the left context. Right crop 0\% and left crop 100\% correspond to no fusion, while right crop 100\% and left crop 0\% correspond to full fusion without any cropping.}\label{tab:sub_feeders}\end{table}
Provide LaTeX code for this table.
Consecutive dark grey rectangles indicate the substring over which the sentence representation was computed and fused. The representation is not computed beyond the to-be-predicted token ($\downarrow$). Light grey words are cropped because they are not in the left context. Right crop 0\% and left crop 100\% correspond to no fusion, while right crop 100\% and left crop 0\% correspond to full fusion without any cropping.
May I have a caption for this LaTeX table?
2104.13122v6_train__f2e9cc37-e727-47bd-9d7b-77b4db92301e
arxiv_table_to_tex
\begin{table}[h]\begin{tabular}{|c|c|}\hline Formulae to be defined & Intuitive meaning \\ \hline$\phitype{k}$ & $\anode_0$ is of type $k$ \\ \hline$\phifirst{k}$ & $\semnumber{}{\anode_0} = 0$ \\ \hline$\philast{k}$ & $\semnumber{}{\anode_0} = \tow(k+1,n)-1$ \\ \hline$\phiunique{k}$ & $\forall \anode, \anode' \ ((\anode_0 E \anode) \ \& \ (\anode_0 E \anode') \ \& \ \anode \neq \anode') \ \rightarrow $ \\ & $\semnumber{}{\anode} \neq \semnumber{}{\anode'}$ \\ \hline$\phipopulate{k}$ & $\forall \anode \ ((\anode_0 E \anode) \ \& \ \semnumber{}{\anode} < \tow(k,n)-1)\ \rightarrow$\\ & $\exists \ \anode' \ (\anode_0 E \anode') \ \wedge \ \semnumber{}{\anode'} = \semnumber{}{\anode} + 1$ \\ \hline$\eqk{k}{\bar{\anominal}}{\bar{\anominalbis}}$ & $\semnumber{}{\anode_d} = \semnumber{}{\anode_d'}$\\ \hline$\succk{k}{\bar{\anominal}}{\bar{\anominalbis}}$ & $\semnumber{}{\anode_d'} = 1 + \semnumber{}{\anode_d}$\\ \hline$\gk{k}{\bar{\anominal}}{\bar{\anominalbis}}$ & $\semnumber{}{\anode_d} < \semnumber{}{\anode_d'}$\\ \hline\end{tabular}\caption{Family of auxiliary formulae.}\label{table-auxiliary-formulae}\end{table}
Provide LaTeX code for this table.
Family of auxiliary formulae.
Can you suggest a caption for this LaTeX table?
2306.17181v4_train__eb938154-e6d6-4df0-9bf3-87ebdf510f3e
arxiv_table_to_tex
\begin{table*}[t]\centering\small\begin{tabular}{|l|l|l|}\hline\textbf{TESGAN (17-epoch, DailyDialog)} & \textbf{P-TESGAN (10-epoch, DailyDialog)} & \textbf{Random Noise} \\\hline I'm so glad you finally got on the train. & Hello, Mr. Smith. I'm Mary. & Anything I have called three weeks\\I just lost my job. & I just want to tell you the truth. & Is Is Is Is Is Is Is\\Yeah. You mean the network connection? & It's the end of the world. & Left and go to go to go to go\\What happened? & What do you want to do in this company? & Mr Moon, Mr Moon\ldots Mr Moon\ldots\\So you have to wait for a while. & He just broke up with Ann. & are you have finished 6 items?\\\hline\hline\multicolumn{3}{|l|}{\textbf{TESGAN (18-epoch, IMDb)}} \\\hline\multicolumn{3}{|l|}{This is probably one of the best of the best of the series.} \\\multicolumn{3}{|l|}{I was bored to think about how stupid this movie was.} \\\multicolumn{3}{|l|}{"The Deadly Loved One" is the story of a rebellious college basketball} \\\multicolumn{3}{|l|}{I have to say, this is the worst film I have ever seen.} \\\multicolumn{3}{|l|}{I was very excited to see it, anticipating Christmas eve.} \\\multicolumn{3}{|l|}{This movie was one of the best of the year for me.} \\\hline\end{tabular}\caption{Example of unconditionally synthesized sentences. P-TESGAN denotes the perturbed TESGAN.}\label{table3}\end{table*}
Create a LaTeX script to illustrate this table.
Example of unconditionally synthesized sentences. P-TESGAN denotes the perturbed TESGAN.
Could you generate a caption for this LaTeX table?
2306.17181v4_train__c16fc5d5-1fea-4865-9d1c-07af62221637
arxiv_table_to_tex
\begin{table*}[htb!]\centering\begin{tabular}{cc|ccccccc}\hline\multicolumn{2}{c|}{Activation} & FBD $\downarrow$ & MSJ2 $\uparrow$ & MSJ3 $\uparrow$ & MSJ4 $\uparrow$ & MSJ5 $\uparrow$ & DSR $\uparrow$ & LM$\ast$ $\downarrow$ \\\hline\multirow{3}{*}{TESGAN} & None & 47.261 & 0.108 & 0.060 & 0.032 & 0.016 & \textbf{0.982} & - \\ & Tanh & 9.780 & 0.110 & 0.057 & 0.030 & 0.015 & 0.871 & 5.402 \\ & Sigmoid & \textbf{2.899} & \textbf{0.150} & \textbf{0.079} & \textbf{0.042} & \textbf{0.021} & 0.967 & \textbf{4.236} \\\hline\multirow{3}{*}{P-TESGAN} & None & 54.002 & 0.111 & 0.059 & 0.030 & \textbf{0.015} & \textbf{0.958} & - \\ & Tanh & 20.158 & 0.118 & 0.061 & 0.031 & \textbf{0.015} & 0.937 & - \\ & Sigmoid & \textbf{2.274} & \textbf{0.131} & \textbf{0.066} & \textbf{0.032} & 0.014 & 0.841 & \textbf{3.642} \\\hline\end{tabular}\caption{Performance according to the activation functions of the generator. $\ast$ denotes a metric not considered when selecting the best model. - denotes that the confidence of the result is low because the quality of the synthesized sentence is poor.}\label{table4}\end{table*}
Provide LaTeX code for this table.
Performance according to the activation functions of the generator. $\ast$ denotes a metric not considered when selecting the best model. - denotes that the confidence of the result is low because the quality of the synthesized sentence is poor.
Please write a caption that describes this LaTeX table.
2306.17181v4_train__b421e229-1208-48b2-8cd3-efdd763f26b7
arxiv_table_to_tex
\begin{table*}[htb!]\centering\begin{tabular}{|l|l|}\hline\textbf{TESGAN with Tanh} & \textbf{P-TESGAN with Tanh}\\\hline You are a little & You ’ re a book?\\I ’ m sorry to see you off. You ’ Ve come & You are late.\\I ’ m sorry. & I ’ m doing ’ t “ all day ’ s\\You ’ d like a tour to see the dentist. & I don't know what time it is?\\You are late. & I ’ m sorry to hear this!\\\hline\hline\textbf{TESGAN without activation} & \textbf{P-TESGAN without activation}\\\hline I ’ d like to say it! I ’ d like to & I ’ s a big, that ’ s right.\\Yes, do you want to buy? & I like the back ones. They look like a shop.\\I ’ s right over there? & I ’, this ’, this ’! be real, \\What's the matter? & I have a problem with my English textbooks.\\I got a bite the food? & I ’ s faster, George. I ’ d like to go \\\hline\end{tabular}\caption{Synthesized sentences by tanh and non-use cases in Table~\ref{table4}. P-TESGAN denotes the perturbed TESGAN.}\label{table5}\end{table*}
Please generate the necessary LaTeX script to draw this table.
Synthesized sentences by tanh and non-use cases in Table~\ref{table4}. P-TESGAN denotes the perturbed TESGAN.
May I have a caption for this LaTeX table?
2003.13320v1_train__e4a883a7-32a8-45f9-a57f-86df19a390cd
arxiv_table_to_tex
\begin{table}[tp]\centering\caption{Split polar spectrum example for $N=16$} \label{split_polar_spectrum}\begin{tabular}{|c|c|c|}\hline index $i$ & weight $\mathbf{d}$ & $A_{16}^{(i)}(\mathbf{d})$ \\\hline 1 & \tabincell{c}{(1, 0), (1, 8), (7, 0), (7, 8)\\(0, 1), (8, 1), (0, 7), (8, 7)} & 8\\\hline 1 & \tabincell{c}{(3, 0), (3, 8), (5, 0), (5, 8)\\(0, 3), (8, 3), (0, 5), (8, 5)} & 56 \\\hline 1 & \tabincell{c}{(2, 1), (2, 7), (6, 1), (6, 7)\\(1, 2), (7, 2), (1, 6), (7, 6)} & 224\\\hline 1 & \tabincell{c}{(4, 1), (4, 7), (1, 4), (7, 4)} & 560 \\\hline 1 & \tabincell{c}{(3, 2), (3, 6), (5, 2), (5, 6)\\(2, 3), (6, 3), (2, 5), (6, 5)} & 1568 \\\hline 1 & \tabincell{c}{(4, 3), (4, 5), (3, 4), (5, 4)} & 3920 \\\hline 2 & \tabincell{c}{(2, 0), (2, 8), (6, 0), (6, 8)\\(0, 2), (8, 2), (0, 6), (8, 6)} & 16 \\\hline 2 & \tabincell{c}{(4, 0), (4, 8), (0, 4), (8, 4)} & 32 \\\hline 2 & \tabincell{c}{(1, 1), (1, 7), (7, 1), (7, 7)} & 32 \\\hline 2 & \tabincell{c}{(3, 1), (3, 7), (5, 1), (5, 7)\\(1, 3), (7, 3), (1, 5), (7, 5)} & 224 \\\hline 2 & \tabincell{c}{(2, 2), (2, 6), (6, 2), (6, 6)} & 384 \\\hline 2 & \tabincell{c}{(4, 2), (4, 6), (2, 4), (6, 4)} & 992 \\\hline 2 & \tabincell{c}{(3, 3), (3, 5), (5, 3), (5, 5)} & 1568 \\\hline 2 & \tabincell{c}{(4, 4)} & 2432 \\\hline 3 & \tabincell{c}{(2, 0), (2, 8), (6, 0), (6, 8)\\(0, 2), (8, 2), (0, 6), (8, 6)} & 8 \\\hline 3 & \tabincell{c}{(4, 0), (4, 8), (0, 4), (8, 4)} & 16 \\\hline 3 & \tabincell{c}{(1, 1), (1, 7), (7, 1), (7, 7)} & 16 \\\hline 3 & \tabincell{c}{(3, 1), (3, 7), (5, 1), (5, 7)\\(1, 3), (7, 3), (1, 5), (7, 5)} & 112 \\\hline 3 & \tabincell{c}{(2, 2), (2, 6), (6, 2), (6, 6)} & 192 \\\hline 3 & \tabincell{c}{(4, 2), (4, 6), (2, 4), (6, 4)} & 496 \\\hline 3 & \tabincell{c}{(3, 3), (3, 5), (5, 3), (5, 5)} & 784 \\\hline 3 & \tabincell{c}{(4, 4)} & 1216 \\\hline ... & ... & ... 
\\\hline 9 & (1, 1), (7, 7) & 8 \\\hline 9 & (3, 3), (5, 5) & 56 \\\hline 10 & (2, 2), (6, 6) & 16 \\\hline 10 & (4, 4) & 32 \\\hline 11 & (2, 2), (6, 6) & 8 \\\hline 11 & (4, 4) & 16 \\\hline 12 & (4, 4) & 16 \\\hline 13 & (2, 2), (6, 6) & 4 \\\hline 14 & (4, 4) & 4 \\\hline 15 & (4, 4) & 2 \\\hline 16 & (8, 8) & 1 \\\hline\end{tabular}\end{table}
Generate the LaTeX code to recreate this table.
Split polar spectrum example for $N=16$
Please give a caption for the LaTeX table.
2309.07330v1_train__b2813891-e7c9-424a-ad17-e48bf310aeb8
arxiv_table_to_tex
\begin{table*}[t]\centering\small\caption{Comparison of one vs.\ two-model segmentation approaches, in terms of IoU.}\begin{tabular}{|c|c|c|c|c|c|c|}\hline{\bf Approach} & Gallbladder & Liver & Cystic Duct & Cystic artery & Cystic plate & Instrument\\\hline{\bf Single model} & 0.8964 & \textbf{0.9244} & 0.4978 & 0.0 & 0.4229 & \textbf{0.8989} \\\hline{\bf Two-stream} & \textbf{0.9139} & 0.8913 & \textbf{0.6833} & \textbf{0.4484} & \textbf{0.5713} & 0.8433\\\hline\end{tabular}\label{tab:twostream}\end{table*}
Convert this table into LaTeX code.
Comparison of one vs.\ two-model segmentation approaches, in terms of IoU.
Please create a caption for this LaTeX table.
2309.07330v1_train__0e39b170-c789-4c1a-86c3-a9e5c1964826
arxiv_table_to_tex
\begin{table}\centering\caption{Comparison of Sobel loss based segmentation and the baseline method.}\begin{tabular}{|c|c|c|c|}\hline Model/Metric & mIoU & Acc. & Dice\\\hline Baseline & 0.7270 & \textbf{0.9372} & 0.8247 \\\hline Baseline+Sobel loss & \textbf{0.7454} & 0.9323 & \textbf{0.8427} \\\hline\end{tabular}\label{tab:segres}\end{table}
Create a LaTeX script to illustrate this table.
Comparison of Sobel loss based segmentation and the baseline method.
Please give a caption for the LaTeX table.
2309.07330v1_train__16486327-92f8-4d71-aa85-d6c004bf2455
arxiv_table_to_tex
\begin{table}\small\centering\caption{Results of CVS assessment compared to DeepCVS.}\scalebox{0.85}{\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\hline & \multicolumn{2}{|c|}{C1} & \multicolumn{2}{|c|}{C2} & \multicolumn{2}{|c|}{C3} & \multicolumn{2}{|c|}{CVS} \\\hline\multirow{4}{*}{DeepCVS} & Acc. & 0.72 & Acc. & 0.39 & Acc. & 0.54 & Acc. & 0.92 \\ & Bacc. & 0.48 & Bacc. & 0.49 & Bacc. & 0.53 & Bacc. & 0.49 \\ & PPV & 0.14 & PPV & 0.18 & PPV & 0.53 & PPV & NaN\footnotemark \\ & NPV & 0.75 & NPV & 0.80 & NPV & 0.54 & NPV & 0.93\\ \hline\multirow{4}{*}{Ours} & Acc. & \textbf{0.76} & Acc. & \textbf{0.79} & Acc. & \textbf{0.69} & Acc. & 0.92 \\ & Bacc. & \textbf{0.57} & Bacc. & \textbf{0.65} & Bacc. & \textbf{0.69} & Bacc. & \textbf{0.54}\\ & PPV & \textbf{0.49} & PPV & \textbf{0.43} & PPV & \textbf{0.72} & PPV & \textbf{0.23}\\ & NPV & \textbf{0.79} & NPV & \textbf{0.86} & NPV & \textbf{0.67} & NPV & \textbf{0.94}\\\hline\end{tabular}}\label{cvs1}\end{table}
Please create LaTeX code to produce this table.
Results of CVS assessment compared to DeepCVS.
Could you generate a caption for this LaTeX table?
2402.01649v1_train__8f20e0f8-abc6-4ad5-8481-f147de87f6f3
arxiv_table_to_tex
\begin{table}\small\addtolength{\tabcolsep}{-5pt}\caption{Article Data source}\label{database}\footnotesize\begin{tabular}{l|l|l|ll}\hline\textbf{ID} & \textbf{Database} & \textbf{Link} & \textbf{Number of Articles} & \\ \hline D1 & \cellcolor[HTML]{C0C0C0}IEEE Xplore & {http://ieeexplore.ieee.org/} & 16 & \\ \hline D2 & \cellcolor[HTML]{C0C0C0}Scopus & {https://www.scopus.com/home.uri/} & 18 & \\ \hline D3 & \cellcolor[HTML]{C0C0C0}Web of Science & {http://www.webofscience.com/} & 286 & \\ \hline D4 & \cellcolor[HTML]{C0C0C0}Springer Link & {http://link.springer.com/} & 171 & \\ \hline\end{tabular}\end{table}
Please create LaTeX code to produce this table.
Article Data source
Please give a caption for the LaTeX table.
2109.06859v1_train__99bcbefb-6af2-4bb0-94a8-4535fe2adf14
arxiv_table_to_tex
\begin{table}[h!]\begin{center}\footnotesize\begin{tabular}{l|c |c c c c}\hline & & Accuracy (\%) $\uparrow$ & NA (\%) $\uparrow$ & F1-open $\uparrow$ & AUROC $\uparrow$\\\cline{3-6} Method & Arch. & \multicolumn{4}{c}{1-shot}\\ \hline GaussE~\cite{liu2020few} + OpenMax~\cite{bendale2016towards}$^1$ & Res18 & $57.89\pm0.59$ & - & - & $0.589\pm0.006$\\GaussE~\cite{liu2020few} + Counterfactual~\cite{neal2018open}$^1$ & Res18 & $57.89\pm0.59$ & - & - & $0.522\pm0.006$\\GaussE~\cite{liu2020few}$^1$ & Res18 & $57.89\pm0.59$ & - & - & $0.587\pm0.006$\\PEELER~\cite{liu2020few}$^1$ & Res18 & $58.31\pm0.58$ & - & - & $0.617\pm0.006$ \\PEELER~\cite{liu2020few} + threshold & Res18 & $56.98\pm0.88$ & $44.77\pm0.70$ & $0.357\pm0.009$ & $0.626\pm0.008$\\PEELER~\cite{liu2020few} + Entropic Loss~\cite{dhamija2018reducing} & Res18 & $56.10\pm0.86$ & $48.36\pm0.69$ & $0.367\pm0.008$ & $0.631\pm0.008$\\PEELER~\cite{liu2020few} + Objectosphere~\cite{dhamija2018reducing} & Res18 & $52.63\pm0.86$ & $27.31\pm0.37$ & $0.060\pm0.007$ & $0.548\pm0.008$\\PEELER~\cite{liu2020few} + \MetaBCE~[ours] & Res18 & $56.98\pm0.88$ & $36.94\pm0.44$ & $0.252\pm0.007$ & $0.606\pm0.008$ \\PEELER~\cite{liu2020few} + \OCML~[ours] & Res18 & $56.98\pm0.88$ & $57.60\pm0.76$ & $0.380\pm0.006$ & $0.607\pm0.007$ \\FEAT~\cite{ye2020fewshot} + threshold & Res18 & $\bf{66.42\pm0.65}$ & $56.53\pm0.89$ & $\bf{0.442\pm0.005}$ & $\bf{0.686\pm0.008}$\\FEAT~\cite{ye2020fewshot} + \MetaBCE~[ours] & Res18 & $\bf{66.42\pm0.65}$ & $34.70\pm0.31$ & $0.232\pm0.006$ & $0.624\pm0.006$\\FEAT~\cite{ye2020fewshot} + \OCML~[ours] & Res18 & $\bf{66.42\pm0.65}$ & $\bf{59.79\pm0.54}$ & $\bf{0.440\pm0.005}$ & $0.623\pm0.006$\\ \hline PEELER~\cite{liu2020few} + threshold & Conv64 & $51.38\pm0.81$ & $42.48\pm0.75$ & $0.310\pm0.009$ & $0.568\pm0.007$\\PEELER~\cite{liu2020few} + Entropic Loss~\cite{dhamija2018reducing} & Conv64 & $51.11\pm0.82$ & $41.81\pm0.68$ & $0.299\pm0.008$ & $0.594\pm0.008$\\PEELER~\cite{liu2020few} + 
Objectosphere~\cite{dhamija2018reducing} & Conv64 & $49.01\pm0.85$ & $33.79\pm0.68$ & $0.204\pm0.009$ & $0.519\pm0.009$\\PEELER~\cite{liu2020few} + \MetaBCE~[ours] & Conv64 & $51.38\pm0.81$ & $36.11\pm0.41$ & $0.233\pm0.007$ & $0.600\pm0.008$ \\PEELER~\cite{liu2020few} + \OCML~[ours] & Conv64 & $51.38\pm0.81$ & $\bf{55.80\pm0.78}$ & $0.349\pm0.006$ & $0.613\pm0.008$ \\FEAT~\cite{ye2020fewshot} + threshold & Conv64 & $\bf{55.01\pm0.62}$ & $38.80\pm0.40$ & $0.279\pm0.005$ & $0.557\pm0.006$\\FEAT~\cite{ye2020fewshot} + \MetaBCE~[ours] & Conv64 & $\bf{55.01\pm0.62}$ & $33.64\pm0.30$ & $0.205\pm0.006$ & $0.616\pm0.006$\\FEAT~\cite{ye2020fewshot} + \OCML~[ours] & Conv64 & $\bf{55.01\pm0.62}$ & $\bf{55.81\pm0.53}$ & $\bf{0.373\pm0.005}$ & $\bf{0.626\pm0.006}$\\\hline & & \multicolumn{4}{c}{5-shot}\\ \hline GaussE~\cite{liu2020few} + OpenMax~\cite{bendale2016towards}$^1$ & Res18 & $75.31\pm0.76$ & - & - & $0.675\pm0.007$\\GaussE~\cite{liu2020few} + Counterfactual~\cite{neal2018open}$^1$ & Res18 & $75.31\pm0.76$ & - & - & $0.533\pm0.006$\\GaussE~\cite{liu2020few}$^1$ & Res18 & $75.31\pm0.76$ & - & - & $0.665\pm0.007$\\PEELER~\cite{liu2020few}$^1$ & Res18 & $75.08\pm0.72$ & - & - & $0.699\pm0.007$\\PEELER~\cite{liu2020few} + threshold & Res18 & $73.04\pm0.68$ & $52.78\pm0.83$ & $0.472\pm0.009$ & $0.677\pm0.008$\\PEELER~\cite{liu2020few} + Entropic Loss~\cite{dhamija2018reducing} & Res18 & $70.98\pm0.64$ & $59.26\pm0.69$ & $0.491\pm0.006$ & $0.688\pm0.007$\\PEELER~\cite{liu2020few} + Objectosphere~\cite{dhamija2018reducing} & Res18 & $66.27\pm0.77$ & $26.29\pm0.19$ & $0.034\pm0.005$ & $0.564\pm0.008$\\PEELER~\cite{liu2020few} + \MetaBCE~[ours] & Res18 & $73.04\pm0.68$ & $70.98\pm0.58$ & $0.495\pm0.004$ & $0.646\pm0.005$ \\PEELER~\cite{liu2020few} + \OCML~[ours] & Res18 & $73.04\pm0.68$ & $65.64\pm0.49$ & $0.498\pm0.004$ & $0.661\pm0.006$ \\FEAT~\cite{ye2020fewshot} + threshold & Res18 & $\bf{80.26\pm0.45}$ & $63.89\pm1.02$ & $0.537\pm0.003$ & 
$\bf{0.727\pm0.007}$\\FEAT~\cite{ye2020fewshot} + \MetaBCE~[ours] & Res18 & $\bf{80.26\pm0.45}$ & $\bf{73.32\pm0.49}$ & $\bf{0.553\pm0.003}$ & $0.667\pm0.005$\\FEAT~\cite{ye2020fewshot} + \OCML~[ours] & Res18 & $\bf{80.26\pm0.45}$ & $64.36\pm0.48$ & $0.544\pm0.004$ & $0.682\pm0.006$\\ \hline PEELER~\cite{liu2020few} + threshold & Conv64 & $66.92\pm0.66$ & $47.24\pm0.83$ & $0.399\pm0.009$ & $0.608\pm0.007$\\PEELER~\cite{liu2020few} + Entropic Loss~\cite{dhamija2018reducing} & Conv64 & $65.06\pm0.71$ & $47.77\pm0.80$ & $0.386\pm0.009$ & $0.625\pm0.008$\\PEELER~\cite{liu2020few} + Objectosphere~\cite{dhamija2018reducing} & Conv64 & $62.04\pm0.71$ & $35.10\pm0.73$ & $0.219\pm0.010$ & $0.526\pm0.009$\\PEELER~\cite{liu2020few} + \MetaBCE~[ours] & Conv64 & $66.92\pm0.66$ & $\bf{68.74\pm0.56}$ & $0.460\pm0.004$ & $0.645\pm0.005$ \\PEELER~\cite{liu2020few} + \OCML~[ours] & Conv64 & $66.92\pm0.66$ & $63.57\pm0.48$ & $0.466\pm0.004$ & $0.666\pm0.006$ \\FEAT~\cite{ye2020fewshot} + threshold & Conv64 & $\bf{70.70\pm0.50}$ & $45.85\pm0.46$ & $0.389\pm0.005$ & $0.595\pm0.006$\\FEAT~\cite{ye2020fewshot} + \MetaBCE~[ours] & Conv64 & $\bf{70.70\pm0.50}$ & $\bf{69.10\pm0.45}$ & $\bf{0.487\pm0.004}$ & $0.669\pm0.005$\\FEAT~\cite{ye2020fewshot} + \OCML~[ours] & Conv64 & $\bf{70.70\pm0.50}$ & $61.29\pm0.48$ & $\bf{0.490\pm0.004}$ & $\bf{0.680\pm0.006}$\\\hline\end{tabular}\end{center}\caption{Experimental results on miniImageNet dataset for few-shot $5$-way open-set classification with $5$ open-set categories. The best results are shown in {\bf bold}. $^1$ Results from Liu~\etal~\cite{liu2020few}.}\label{table:fsos_mini_val}\end{table}
Please generate the necessary LaTeX script to draw this table.
Experimental results on miniImageNet dataset for few-shot $5$-way open-set classification with $5$ open-set categories. The best results are shown in {\bf bold}. $^1$ Results from Liu~\etal~\cite{liu2020few}
Please provide an appropriate caption for this LaTeX table.
2109.06859v1_train__5f881ec5-e982-4026-b8b1-13a3e3173a20
arxiv_table_to_tex
\begin{table}[h!]\begin{center}\footnotesize\begin{tabular}{l|c | c c c}\hline& & Accuracy (\%) $\uparrow$ & F1-score $\uparrow$ & AUROC $\uparrow$\\\cline{3-5} Method & Arch. & \multicolumn{3}{c}{1-shot}\\ \hlineProto Net~\cite{snell2017prototypical} + DeepSVDD~\cite{pmlr-v80-ruff18a}& Conv64 & - & - & $0.603\pm0.007$\\Proto Net~\cite{snell2017prototypical} + DeepAnomaly~\cite{golan2018deep}& Conv64 & - & - & $0.655\pm0.011$\\Proto Net~\cite{snell2017prototypical} + SVDD~\cite{tax2004support}& Conv64 & $50.00\pm0.00$ & $0.000\pm0.000$ & $0.599\pm0.030$\\Proto Net~\cite{snell2017prototypical} + OCSVM~\cite{scholkopf2000support}& Conv64 & $50.00\pm0.00$ & $0.000\pm0.000$ & $0.633\pm0.015$\\Proto Net~\cite{snell2017prototypical} + Threshold& Conv64 & $50.05\pm0.04$ & $0.002\pm0.001$ & $0.697\pm0.005$\\CLEAR~\cite{kozerawski2018clear} & Conv64 & $50.24\pm0.36$ & $0.008\pm0.012$ & $0.620\pm0.120$\\Proto Net~\cite{snell2017prototypical} + \MetaBCE~[ours]& Conv64 & $54.72\pm1.29$ & $0.169\pm0.044$ & $0.774\pm0.009$ \\Proto Net~\cite{snell2017prototypical} + \OCML~[ours]& Conv64 & $\bf{72.14\pm0.56}$ & $\bf{0.701\pm0.016}$ & $\bf{0.801\pm0.006}$ \\\hline Upper-bound (supervised FEAT~\cite{ye2020fewshot})$^1$& Conv64 & $84.25\pm0.89$ & - & -\\\hline & & \multicolumn{3}{c}{5-shot}\\ \hlineProto Net~\cite{snell2017prototypical} + DeepSVDD~\cite{pmlr-v80-ruff18a}& Conv64 & - & - & $0.656\pm0.007$ \\Proto Net~\cite{snell2017prototypical} + DeepAnomaly~\cite{golan2018deep}& Conv64 & - & - & $0.762\pm0.010$\\Proto Net~\cite{snell2017prototypical} + SVDD~\cite{tax2004support}& Conv64 & $50.61\pm0.41$ & $0.036\pm0.021$ & $0.609\pm0.036$ \\Proto Net~\cite{snell2017prototypical} + OCSVM~\cite{scholkopf2000support}& Conv64 & $52.40\pm0.45$ & $0.130\pm0.019$ & $0.656\pm0.019$ \\Proto Net~\cite{snell2017prototypical} + Threshold& Conv64 & $64.92\pm1.29$ & $0.590\pm0.063$ & $0.725\pm0.003$ \\Proto Net~\cite{snell2017prototypical} + \MetaBCE~[ours]& Conv64 & 
$76.16\pm0.58$ & $\bf{0.783\pm0.007}$ & $0.839\pm0.006$ \\Proto Net~\cite{snell2017prototypical} + \OCML~[ours]& Conv64 & $\bf{78.74\pm1.07}$ & $0.774\pm0.019$ & $\bf{0.874\pm0.010}$\\\hline Upper-bound (supervised FEAT~\cite{ye2020fewshot})$^1$& Conv64 & $91.26\pm0.56$ & - & -\\\hline\end{tabular}\end{center}\caption{Experimental results on CUB-200-2011 dataset for few-shot one-class classification. The best results are shown in {\bf bold}. $^1$ Supervised two-class classification.}\label{table:fsoc_cub_val}\end{table}
I need the LaTeX code to create this table, please.
Experimental results on CUB-200-2011 dataset for few-shot one-class classification. The best results are shown in {\bf bold}. $^1$ Supervised two-class classification.
Please create a caption for this LaTeX table.
2109.06859v1_train__93b7bf71-e8b0-4189-b4d4-b791ce3a81e1
arxiv_table_to_tex
\begin{table}[h!]\begin{center}\footnotesize\begin{tabular}{l|c | c c c}\hline& & Accuracy (\%) $\uparrow$ & F1-score $\uparrow$ & AUROC $\uparrow$\\\cline{3-5} Method & Arch. & \multicolumn{3}{c}{1-shot}\\ \hlineProto Net~\cite{snell2017prototypical} + DeepSVDD~\cite{pmlr-v80-ruff18a}& Conv64 & - & - & $0.656\pm0.004$\\Proto Net~\cite{snell2017prototypical} + DeepAnomaly~\cite{golan2018deep}& Conv64 & - & - & $0.603\pm0.012$\\Proto Net~\cite{snell2017prototypical} + SVDD~\cite{tax2004support}& Conv64 & $50.00\pm0.00$ & $0.000\pm0.000$ & $0.669\pm0.003$\\Proto Net~\cite{snell2017prototypical} + OCSVM~\cite{scholkopf2000support}& Conv64 & $50.00\pm0.00$ & $0.000\pm0.000$ & $0.676\pm0.003$\\Proto Net~\cite{snell2017prototypical} + Threshold& Conv64 & $50.48\pm0.11$ & $0.017\pm0.003$ & $0.704\pm0.008$\\CLEAR~\cite{kozerawski2018clear} & Conv64 & $50.53\pm0.53$ & $0.552\pm0.006$ & $0.509\pm0.008$ \\Proto Net~\cite{snell2017prototypical} + \MetaBCE~[ours]& Conv64 & $55.87\pm0.38$ & $0.208\pm0.010$ & $0.755\pm0.009$\\Proto Net~\cite{snell2017prototypical} + \OCML~[ours]& Conv64 & $\bf{72.13\pm0.33}$ & $\bf{0.724\pm0.003}$ & $\bf{0.800\pm0.004}$\\\hline Upper-bound (supervised FEAT~\cite{ye2020fewshot})$^1$& ResNet12 & $88.54\pm0.81$ & - & -\\\hline & & \multicolumn{3}{c}{5-shot}\\ \hlineProto Net~\cite{snell2017prototypical} + DeepSVDD~\cite{pmlr-v80-ruff18a}& Conv64 & - & - & $0.678\pm0.003$ \\Proto Net~\cite{snell2017prototypical} + DeepAnomaly~\cite{golan2018deep}& Conv64 & - & - & $0.719\pm0.011$\\Proto Net~\cite{snell2017prototypical} + SVDD~\cite{tax2004support}& Conv64 & $50.52\pm0.04$ & $0.021\pm0.001$ & $0.724\pm0.003$ \\Proto Net~\cite{snell2017prototypical} + OCSVM~\cite{scholkopf2000support}& Conv64 & $52.37\pm0.09$ & $0.106\pm0.003$ & $0.712\pm0.003$ \\Proto Net~\cite{snell2017prototypical} + Threshold& Conv64 & $61.90\pm0.54$ & $0.426\pm0.011$ & $0.741\pm0.008$\\Proto Net~\cite{snell2017prototypical} + \MetaBCE~[ours]& Conv64 & 
$75.62\pm0.65$ & $0.774\pm0.005$ & $0.831\pm0.007$\\Proto Net~\cite{snell2017prototypical} + \OCML~[ours]& Conv64 & $\bf{78.89\pm0.67}$ & $\bf{0.803\pm0.006}$ & $\bf{0.878\pm0.007}$\\\hline Upper-bound (supervised FEAT~\cite{ye2020fewshot})$^1$& ResNet12 & $94.56\pm0.51$ & - & -\\\hline\end{tabular}\end{center}\caption{Experimental results on tieredImageNet dataset for few-shot one-class classification. The best results are shown in {\bf bold}. $^1$ Supervised two-class classification.}\label{table:fsoc_tier_val}\end{table}
Create LaTeX code to produce this table.
Experimental results on tieredImageNet dataset for few-shot one-class classification. The best results are shown in {\bf bold}. $^1$ Supervised two-class classification.
Please provide an appropriate caption for this LaTeX table.
2109.06859v1_train__003e019a-4ff8-479b-8f5d-2037e0a5d1cd
arxiv_table_to_tex
\begin{table}[h!]\begin{center}\footnotesize\begin{tabular}{l|c |c c c c}\hline& & Accuracy (\%) $\uparrow$ & NA (\%) $\uparrow$ & F1-open $\uparrow$ & AUROC $\uparrow$\\\cline{3-6} Method & Arch. & \multicolumn{4}{c}{1-shot}\\ \hlinePEELER~\cite{liu2020few} & Conv64 & $52.56\pm0.73$ & - & - & $0.549\pm0.003$\\PEELER~\cite{liu2020few} + threshold & Conv64 & $52.56\pm0.73$ & $26.51\pm1.48$ & $0.042\pm0.036$ & $0.605\pm0.007$\\Proto Nets~\cite{snell2017prototypical} + threshold & Conv64 & $41.72\pm1.03$ & $25.03\pm0.02$ & $0.001\pm0.001$ & $0.566\pm0.009$\\Proto Nets~\cite{snell2017prototypical} + \MetaBCE~[ours] & Conv64 & $41.72\pm1.03$ & $35.66\pm4.85$ & $0.229\pm0.060$ & $0.613\pm0.019$\\Proto Nets~\cite{snell2017prototypical} + \OCML~[ours] & Conv64 & $41.72\pm1.03$ & $48.61\pm5.72$ & $0.371\pm0.035$ & $\bf{0.654\pm0.010}$\\FEAT~\cite{ye2020fewshot} + threshold & Conv64 & $\bf{63.16\pm0.74}$ & $32.69\pm0.29$ & $0.199\pm0.005$ & $0.537\pm0.007$\\FEAT~\cite{ye2020fewshot} + \MetaBCE~[ours] & Conv64 & $\bf{63.16\pm0.74}$ & $32.70\pm0.26$ & $0.190\pm0.005$ & $0.648\pm0.006$\\FEAT~\cite{ye2020fewshot} + \OCML~[ours] & Conv64 & $\bf{63.16\pm0.74}$ & $\bf{60.21\pm0.80}$ & $\bf{0.414\pm0.005}$ & $0.556\pm0.005$\\\hline& & \multicolumn{4}{c}{5-shot}\\ \hlineProto Nets~\cite{snell2017prototypical} + OpenMax~\cite{bendale2016towards} & Conv64 & $70.97\pm0.99$ & $25.60\pm0.15$ & $0.02\pm0.005$ & $0.574\pm0.011$\\PEELER~\cite{liu2020few} & Conv64 & $71.65\pm0.79$ & - & - & $0.637\pm0.004$ \\PEELER~\cite{liu2020few} + threshold & Conv64 & $71.65\pm0.79$ & $42.79\pm6.17$ & $0.348\pm0.062$ & $0.665\pm0.010$\\Proto Nets~\cite{snell2017prototypical} + threshold & Conv64 & $70.97\pm0.99$ & $37.62\pm1.56$ & $0.298\pm0.022$ & $0.606\pm0.009$\\Proto Nets~\cite{snell2017prototypical} + \MetaBCE~[ours] & Conv64 & $70.97\pm0.99$ & $68.59\pm2.64$ & $0.509\pm0.009$ & $0.661\pm0.031$\\Proto Nets~\cite{snell2017prototypical} + \OCML~[ours] & Conv64 & $70.97\pm0.99$ & 
$47.70\pm9.15$ & $0.426\pm0.103$ & $\bf{0.736\pm0.007}$\\FEAT~\cite{ye2020fewshot} + threshold & Conv64 & $\bf{77.47\pm0.55}$ & $35.31\pm0.34$ & $0.261\pm0.005$ & $0.568\pm0.007$\\FEAT~\cite{ye2020fewshot} + \MetaBCE~[ours] & Conv64 & $\bf{77.47\pm0.55}$ & $\bf{72.04\pm0.43}$ & $\bf{0.541\pm0.004}$ & $0.713\pm0.005$\\FEAT~\cite{ye2020fewshot} + \OCML~[ours] & Conv64 & $\bf{77.47\pm0.55}$ & $66.95\pm0.60$ & $0.513\pm0.004$ & $0.573\pm0.005$\\\hline\end{tabular}\end{center}\caption{Experimental results on CUB-200-2011 dataset for few-shot $5$-way open-set classification with $5$ open-set categories. The best results are shown in {\bf bold}. }\label{table:fsos_cub_val}\end{table}
Please create LaTeX code to produce this table.
Experimental results on CUB-200-2011 dataset for few-shot $5$-way open-set classification with $5$ open-set categories. The best results are shown in {\bf bold}.
May I have a caption for this LaTeX table?
2109.06859v1_train__ccbf422f-7098-4270-815a-20ca70e45210
arxiv_table_to_tex
\begin{table}[h!]\begin{center}\footnotesize\begin{tabular}{l|c |c c c c}\hline& & Accuracy (\%) $\uparrow$ & NA (\%) $\uparrow$ & F1-open $\uparrow$ & AUROC $\uparrow$\\\cline{3-6} Method & Arch. & \multicolumn{4}{c}{1-shot}\\ \hlinePEELER~\cite{liu2020few} & Conv64 & $38.55\pm0.43$ & - & - & $0.546\pm0.004$\\Proto Nets~\cite{snell2017prototypical} + threshold & Conv64 & $43.77\pm0.47$ & $25.38\pm0.05$ & $0.011\pm0.001$ & $0.576\pm0.004$\\Proto Nets~\cite{snell2017prototypical} + \MetaBCE~[ours] & Conv64 & $43.77\pm0.47$ & $33.74\pm0.21$ & $0.339\pm0.004$ & $0.623\pm0.004$\\Proto Nets~\cite{snell2017prototypical} + \OCML~[ours] & Conv64 & $43.77\pm0.47$ & $56.06\pm0.47$ & $0.299\pm0.003$ & $0.659\pm0.004$\\FEAT~\cite{ye2020fewshot} + threshold & ResNet12 & $\bf{70.88\pm0.72}$ & $50.97\pm0.06$ & $0.463\pm0.007$ & $\bf{0.697\pm0.006}$\\FEAT~\cite{ye2020fewshot} + \MetaBCE~[ours] & ResNet12 & $\bf{70.88\pm0.72}$ & $39.26\pm0.36$ & $0.308\pm0.006$ & $0.592\pm0.006$\\FEAT~\cite{ye2020fewshot} + \OCML~[ours] & ResNet12 & $\bf{70.88\pm0.72}$ & $\bf{67.76\pm0.86}$ & $\bf{0.478\pm0.005}$ & $0.610\pm0.006$\\\hline& & \multicolumn{4}{c}{5-shot}\\ \hlineProto Nets~\cite{snell2017prototypical} + OpenMax~\cite{bendale2016towards} & Conv64 & $71.04\pm0.40$ & $25.73\pm0.04$ & $0.024\pm0.001$ & $0.585\pm0.003$\\PEELER~\cite{liu2020few} & Conv64 & $67.71\pm0.42$ & - & - & $0.635\pm0.004$\\Proto Nets~\cite{snell2017prototypical} + threshold & Conv64 & $71.04\pm0.40$ & $38.08\pm0.25$ & $0.302\pm0.004$ & $0.615\pm0.004$\\Proto Nets~\cite{snell2017prototypical} + \MetaBCE~[ours] & Conv64 & $71.04\pm0.40$ & $70.46\pm0.37$ & $0.495\pm0.003$ & $0.672\pm0.004$\\Proto Nets~\cite{snell2017prototypical} + \OCML~[ours] & Conv64 & $71.04\pm0.40$ & $72.87\pm0.36$ & $0.509\pm0.003$ & $0.727\pm0.004$\\FEAT~\cite{ye2020fewshot} + threshold & ResNet12 & $\bf{84.22\pm0.53}$ & $67.91\pm0.05$ & $0.545\pm0.005$ & $\bf{0.768\pm0.005}$\\FEAT~\cite{ye2020fewshot} + \MetaBCE~[ours] & 
ResNet12 & $\bf{84.22\pm0.53}$ & $76.71\pm0.73$ & $0.570\pm0.004$ & $0.636\pm0.005$\\FEAT~\cite{ye2020fewshot} + \OCML~[ours] & ResNet12 & $\bf{84.22\pm0.53}$ & $\bf{77.96\pm0.63}$ & $\bf{0.577\pm0.003}$ & $0.657\pm0.006$\\\hline\end{tabular}\end{center}\caption{Experimental results on tieredImageNet dataset for few-shot $5$-way open-set classification with $5$ open-set categories. The best results are shown in {\bf bold}. }\label{table:fsos_tier_val}\end{table}
Create a LaTeX script to illustrate this table.
Experimental results on tieredImageNet dataset for few-shot $5$-way open-set classification with $5$ open-set categories. The best results are shown in {\bf bold}.
May I have a caption for this LaTeX table?
2109.06859v1_train__7dddf328-bfe6-4f14-b713-181a02ba36d1
arxiv_table_to_tex
\begin{table}[h!]\begin{center}\begin{tabular}{|l|l|l|}\hline$g_{\theta}$ architecture name & Layer 1 & Layer 2\\ \hline1 layer & FC layer (1600, 1600) & -\\2 layers, middle dim 100 & FC layer (1600, 100) & FC layer (100, 1600)\\2 layers, middle dim 500 & FC layer (1600, 500) & FC layer (500, 1600)\\2 layers, middle dim 1000 & FC layer (1600, 1000) & FC layer (1000, 1600)\\\hline\end{tabular}\end{center}\caption{Tested architectures for the transfer learning module $g_{\theta}$ for \OCML~approach.}\label{table:ocml_architecture_table}\end{table}
Convert this table into LaTeX code.
Tested architectures for the transfer learning module $g_{\theta}$ for \OCML~approach.
Please give a caption for the LaTeX table.
2109.06859v1_train__040f4d38-1c33-496b-bdd1-17cfedd5aa1f
arxiv_table_to_tex
\begin{table}[h!]\begin{center}\footnotesize\begin{tabular}{l|c |c c c c}\hline& & Accuracy (\%) $\uparrow$ & NA (\%) $\uparrow$ & F1-open $\uparrow$ & AUROC $\uparrow$\\\cline{3-6} Method & Arch. & \multicolumn{4}{c}{1-shot}\\ \hlineProto Nets~\cite{snell2017prototypical} + \MetaBCE~[ours] & Conv64 & $41.72\pm1.03$ & $35.66\pm4.85$ & $0.229\pm0.060$ & $0.613\pm0.019$\\Proto Nets~\cite{snell2017prototypical} + \MetaBCE$_C$~[ours] & Conv64 & $41.72\pm1.03$ & $28.66\pm3.78$ & $0.080\pm0.077$ & $0.629\pm0.009$\\\hline& & \multicolumn{4}{c}{5-shot}\\ \hlineProto Nets~\cite{snell2017prototypical} + \MetaBCE~[ours] & Conv64 & $70.97\pm0.99$ & $68.59\pm2.64$ & $0.509\pm0.009$ & $0.661\pm0.031$\\Proto Nets~\cite{snell2017prototypical} + \MetaBCE$_C$~[ours] & Conv64 & $70.97\pm0.99$ & $37.13\pm1.53$ & $0.233\pm0.162$ & $0.684\pm0.011$\\\hline\end{tabular}\end{center}\caption{Experimental results on CUB-200-2011 dataset for few-shot $5$-way open-set classification with $5$ open-set categories.}\label{table:fsos_mbce_ablation}\end{table}
Create a LaTeX script to illustrate this table.
Experimental results on CUB-200-2011 dataset for few-shot $5$-way open-set classification with $5$ open-set categories.
Can you suggest a caption for this LaTeX table?
2312.02428v2_train__162c6f2d-cf21-48ff-a737-e30ebcc3979c
arxiv_table_to_tex
\begin{table*}[t]\centering\footnotesize\renewcommand{\arraystretch}{1.35} % control row height\setlength{\tabcolsep}{4.0mm} % control column spacing{{\begin{tabular}{l|p{80pt}|cc|cc|cc|cc}\toprule[1.5pt]\multirow{2}{*}{\textbf{\#}} & \multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{Image\textbf{$\rightarrow$}Text} & \multicolumn{2}{c|}{Sketch\textbf{$\rightarrow$}Text} & \multicolumn{2}{c|}{Art\textbf{$\rightarrow$}Text} & \multicolumn{2}{c}{Low-Res\textbf{$\rightarrow$}Text} \\\cmidrule(rl){3-4}\cmidrule(rl){5-6}\cmidrule(rl){7-8}\cmidrule(rl){9-10}& & {R@1} & {R@5} & {R@1} & {R@5} & {R@1} & {R@5} & {R@1} & {R@5} \\\noalign{\hrule height 1.5pt}1& CLIP$^{*}$ & 55.2 & 90.8 & 48.4 & 87.9 & 64.5 & 96.8 & 42.6 & 81.8 \\2& BLIP$^{*}$ & 71.4 & 94.9 & 55.5 & 87.0 & 81.0 & 98.8 & 49.2 & 81.8\\3& VPT & 52.2 & 91.7 & 45.2 & 87.7 & 52.7 & 94.6 & 44.3 & 84.6\\4& ImageBind & 73.5 & 96.5 & 56.1 & 88.4 & 82.7 & 99.0 & 42.4 & 73.8\\5& LanguageBind & 80.5 & 98.3 & 63.9 & 91.6 & 87.2 & 99.7 & 56.9 & 86.8\\\midrule\rowcolor{aliceblue!60} 6& FreestyleRet-CLIP & 71.6 & 98.0 & 66.7 & \textbf{96.7} & 74.4 & 99.1 & 64.1 & 94.8\\\rowcolor{aliceblue!60} 7& FreestyleRet-BLIP & \textbf{82.8} & \textbf{99.0} & \textbf{71.0} & 96.4 & \textbf{86.6} & \textbf{99.7} & \textbf{69.5} & \textbf{96.9}\\\bottomrule[1.5pt]\end{tabular}}}% \vspace{-1pt}\caption{\textbf{The Text-Retrieval performance of our FreestyleRet and baseline models.}}\label{tab:result_text}% \vspace{-1pt}\end{table*}
Create LaTeX code to produce this table.
\textbf{The Text-Retrieval performance of our FreestyleRet and baseline models.}
May I have a caption for this LaTeX table?
2312.02428v2_train__c534b938-3572-46de-9609-1d9829a21dec
arxiv_table_to_tex
\begin{table*}[h]\centering\footnotesize\renewcommand{\arraystretch}{1.35} % control row height\setlength{\tabcolsep}{4.0mm} % control column spacing{{\begin{tabular}{l|p{80pt}|cc|cc|cc|cc}\toprule[1.5pt]\multirow{2}{*}{\textbf{\#}} & \multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{Image\textbf{$\rightarrow$}Art} & \multicolumn{2}{c|}{Sketch\textbf{$\rightarrow$}Art} & \multicolumn{2}{c|}{Text\textbf{$\rightarrow$}Art} & \multicolumn{2}{c}{Low-Res\textbf{$\rightarrow$}Art} \\\cmidrule(rl){3-4}\cmidrule(rl){5-6}\cmidrule(rl){7-8}\cmidrule(rl){9-10}& & {R@1} & {R@5} & {R@1} & {R@5} & {R@1} & {R@5} & {R@1} & {R@5} \\\noalign{\hrule height 1.5pt}1& CLIP$^{*}$ & 63.0 & 94.7 & 61.2 & 92.7 & 75.5 & 98.2 & 51.9 & 87.9 \\2& BLIP$^{*}$ & 57.1 & 88.5 & 44.8 & 82.8 & 82.8 & 98.7 & 39.4 & 79.3\\3& VPT & 67.4 & 95.5 & 60.3 & 93.1 & 61.6 & 96.5 & 44.3 & 84.6\\4& ImageBind & 46.4 & 80.5 & 28.7 & 60.8 & 82.6 & 98.9 & 57.8 & 89.6\\5& LanguageBind & 65.8 & 93.2 & 41.1 & 77.7 & 86.7 & 99.2 & 34.8 & 72.0\\\midrule\rowcolor{aliceblue!60} 6& FreestyleRet-CLIP & 72.9 & \textbf{97.8} & \textbf{66.5} & \textbf{96.2} & 85.0 & 99.6 & \textbf{62.8} & \textbf{94.1}\\\rowcolor{aliceblue!60} 7& FreestyleRet-BLIP & \textbf{73.6} & 97.4 & 63.1 & 94.4 & \textbf{90.2} & \textbf{99.7} & 60.1 & 92.2\\\bottomrule[1.5pt]\end{tabular}}}% \vspace{-1pt}\caption{\textbf{The Art-Retrieval performance of our FreestyleRet and baseline models.}}\label{tab:result_art}% \vspace{-1pt}\end{table*}
Provide LaTeX code for this table.
\textbf{The Art-Retrieval performance of our FreestyleRet and baseline models.}
Please write a caption that describes this LaTeX table.
2312.02428v2_train__af17f515-2c25-4829-a5e8-6448ced6d85a
arxiv_table_to_tex
\begin{table*}[htb]\centering\footnotesize\renewcommand{\arraystretch}{1.35} % control row height\setlength{\tabcolsep}{4.0mm} % control column spacing{{\begin{tabular}{l|p{80pt}|cc|cc|cc|cc}\toprule[1.5pt]\multirow{2}{*}{\textbf{\#}} & \multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{Image\textbf{$\rightarrow$}Sketch} & \multicolumn{2}{c|}{Art\textbf{$\rightarrow$}Sketch} & \multicolumn{2}{c|}{Text\textbf{$\rightarrow$}Sketch} & \multicolumn{2}{c}{Low-Res\textbf{$\rightarrow$}Sketch} \\\cmidrule(rl){3-4}\cmidrule(rl){5-6}\cmidrule(rl){7-8}\cmidrule(rl){9-10}& & {R@1} & {R@5} & {R@1} & {R@5} & {R@1} & {R@5} & {R@1} & {R@5} \\\noalign{\hrule height 1.5pt}1& CLIP$^{*}$ & 70.5 & 96.1 & 60.5 & 92.9 & 55.0 & 90.8 & 60.4 & 90.9 \\2& BLIP$^{*}$ & 69.8 & 93.5 & 47.6 & 82.8 & 58.6 & 89.8 & 52.3 & 82.8\\3& VPT & 71.7 & 96.2 & 62.3 & 92.9 & 49.4 & 88.6 & 63.3 & 91.5\\4& ImageBind & 54.0 & 81.8 & 38.3 & 71.6 & 56.1 & 88.4 & 26.2 & 52.5\\5& LanguageBind & 74.6 & 96.1 & 57.5 & 87.0 & 65.7 & 94.0 & 54.5 & 83.8\\\midrule\rowcolor{aliceblue!60} 6& FreestyleRet-CLIP & 77.8 & \textbf{98.1} & 66.5 & \textbf{96.2} & 72.3 & 97.4 & 68.7 & \textbf{95.1}\\\rowcolor{aliceblue!60} 7& FreestyleRet-BLIP & \textbf{80.5} & 97.7 & \textbf{66.8} & 94.9 & \textbf{76.6} & \textbf{97.7} & \textbf{71.1} & 94.3\\\bottomrule[1.5pt]\end{tabular}}}% \vspace{-1pt}\caption{\textbf{The Sketch-Retrieval performance of our FreestyleRet and baseline models.}}\label{tab:result_sketch}% \vspace{-1pt}\end{table*}
I need the LaTeX code to create this table, please.
\textbf{The Sketch-Retrieval performance of our FreestyleRet and baseline models.}
Please give a caption for the LaTeX table.
2312.02428v2_train__dce1840f-5673-4130-832a-b7238f312557
arxiv_table_to_tex
\begin{table*}[htb]\centering\footnotesize\renewcommand{\arraystretch}{1.35} % control row height\setlength{\tabcolsep}{4.0mm} % control column spacing{{\begin{tabular}{l|p{80pt}|cc|cc|cc|cc}\toprule[1.5pt]\multirow{2}{*}{\textbf{\#}} & \multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{Image\textbf{$\rightarrow$}Low-Res} & \multicolumn{2}{c|}{Art\textbf{$\rightarrow$}Low-Res} & \multicolumn{2}{c|}{Text\textbf{$\rightarrow$}Low-Res} & \multicolumn{2}{c}{Sketch\textbf{$\rightarrow$}Low-Res} \\\cmidrule(rl){3-4}\cmidrule(rl){5-6}\cmidrule(rl){7-8}\cmidrule(rl){9-10}& & {R@1} & {R@5} & {R@1} & {R@5} & {R@1} & {R@5} & {R@1} & {R@5} \\\noalign{\hrule height 1.5pt}1& CLIP$^{*}$ & 79.3 & 97.2 & 53.0 & 89.2 & 46.0 & 82.3 & 59.5 & 92.4 \\2& BLIP$^{*}$ & \textbf{89.0} & 40.8 & 73.9 & 87.0 & 51.5 & 84.4 & 51.4 & 82.3\\3& VPT & 75.5 & 95.7 & 56.7 & 90.3 & 45.6 & 85.7 & 61.9 & 91.6\\4& ImageBind & 59.9 & 83.1 & 25.2 & 49.8 & 42.4 & 73.8 & 30.7 & 56.8\\5& LanguageBind & 81.0 & 97.6 & 47.3 & 81.2 & 58.5 & 87.9 & 55.5 & 85.6\\\midrule\rowcolor{aliceblue!60} 6& FreestyleRet-CLIP & 80.2 & 97.5 & 62.6 & \textbf{95.2} & 68.7 & 96.6 & 67.4 & \textbf{95.3}\\\rowcolor{aliceblue!60} 7& FreestyleRet-BLIP & 88.4 & \textbf{98.6} & \textbf{63.9} & 94.1 & \textbf{76.0} & \textbf{97.5} & \textbf{71.3} & 94.3\\\bottomrule[1.5pt]\end{tabular}}}% \vspace{-1pt}\caption{\textbf{The Low-Resolution Image Retrieval performance of our FreestyleRet and baseline models.}}\label{tab:result_Low_Res}% \vspace{-1pt}\end{table*}
Could you please provide me the LaTeX code to generate this table?
\textbf{The Low-Resolution Image Retrieval performance of our FreestyleRet and baseline models.}
Could you generate a caption for this LaTeX table?
2402.01939v1_train__8323aa04-11df-4236-8ad5-dd8c55252329
arxiv_table_to_tex
\begin{table}[!t]\centering\begin{tabular}{r|c|c}\toprule\multicolumn{3}{c}{\textbf{Ablations on Scottish Gaelic-English}} \\\midrule& \textbf{Ours} & \textbf{Naive} \\\midrule5K & \textbf{13.32} & 13.05 \\5K Number & 12.76 & \textbf{13.13} \\5K Length & 12.58 & 12.28 \\5K Align & 12.14 & 12.62 \\\bottomrule\end{tabular}\caption{Ablation result of the importance of different components of our method. If we don't use one of the components, the BLEU score drops significantly.}\label{tab:abl}\end{table}
I need the LaTeX code to create this table, please.
Ablation result of the importance of different components of our method. If we don't use one of the components, the BLEU score drops significantly.
Please create a caption for this LaTeX table.
2402.01939v1_train__037a258f-e344-472c-86f1-2205d6442d47
arxiv_table_to_tex
\begin{table*}[!t]\centering\small\begin{tabular}{r|c|ccc|c|ccc}\toprule& \multicolumn{4}{c|}{\textbf{Scottish Gaelic-English Ours}} & \multicolumn{4}{|c}{\textbf{Scottish Gaelic-English Naive}} \\%\cmidrule& & \multirow{1}{*}{\# ENG} & \multirow{1}{*}{\# GLA} & \multirow{1}{*}{\# seed} & & \multirow{1}{*}{\# ENG} & \multirow{1}{*}{\# GLA} & \multirow{1}{*}{\# seed}\\& BLEU & types & types & sentences & BLEU & types & types & sentences \\\midrule0K &9.08 &11077 &13826 &0 &9.08 &11077 &13836 &0 \\5K (one) &12.79 &11269 &14020 &5 &12.58 &12588 &15883 &5 \\5K &\textbf{13.32} &12763 &15847 &1511 &13.05 &12057 &15082 &1512 \\5K (half) &13.13 &12367 &15216 &1527 &12.57 &11811 &14740 &1508 \\5K (remove) &12.91 &12528 &15702 &1909 &\textbf{13.29} &12511 &15694 &1909 \\\bottomrule\end{tabular}\caption{Ablation about the number of new vocabulary introduced and the number of seed sentences used to create five thousand synthetic data. Using as little as five seed sentences boosts the 3.71 BLEU score.}\label{tab:abl_1}\end{table*}
Generate the LaTeX code to recreate this table.
Ablation about the number of new vocabulary introduced and the number of seed sentences used to create five thousand synthetic data. Using as little as five seed sentences boosts the 3.71 BLEU score.
Could you generate a caption for this LaTeX table?
1810.05726v3_train__32342ee0-a9b3-47b1-a9a9-af9dc10a0def
arxiv_table_to_tex
\begin{table}[h]\setlength\aboverulesep{0pt}\setlength\belowrulesep{0pt}\centering\caption{The average test classification accuracy, recall rate, precision and false positive rate across all five cross validated folds for both Inception-v3 and ResNet-50. The statistic from the best performing network is emboldened for each species. Equations for the computation of each metric are provided below. \label{DLPrecision}}\begin{tabular}{|l|c|c|c|c|c|c|}\toprule\multicolumn{1}{|c|}{\multirow{2}[4]{*}{Species}} & \multicolumn{2}{c|}{Top-1 accuracy (\%)} & \multicolumn{2}{c|}{Precision (\%)} & \multicolumn{2}{c|}{False positive rate (\%)} \\\cmidrule{2-7} & Inception-v3 & ResNet-50 & Inception-v3 & ResNet-50 & Inception-v3 & ResNet-50 \\\midrule\textit{Chinee Apple} & 85.3 & \textbf{88.5} & \textbf{92.7} & 91.0 & \textbf{0.48} & 0.61 \\\textit{Lantana} & 94.4 & \textbf{95.0} & 90.9 & \textbf{91.7} & 0.62 & \textbf{0.55} \\\textit{Parkinsonia} & 96.8 & \textbf{97.2} & 95.6 & \textbf{97.9} & 0.29 & \textbf{0.13} \\\textit{Parthenium} & 94.9 & \textbf{95.8} & 95.8 & \textbf{96.7} & 0.26 & \textbf{0.21} \\\textit{Prickly Acacia} & 92.8 & \textbf{95.5} & \textbf{93.4} & 93.0 & \textbf{0.43} & 0.46 \\\textit{Rubber Vine} & \textbf{93.1} & 92.5 & \textbf{99.2} & 99.1 & \textbf{0.05} & 0.05 \\\textit{Siam Weed} & \textbf{97.6} & 96.5 & 94.4 & \textbf{97.2} & 0.38 & \textbf{0.18} \\\textit{Snake Weed} & 88.0 & \textbf{88.8} & 86.9 & \textbf{90.9} & 0.82 & 0.55 \\\textit{Negatives} & 97.2 & \textbf{97.6} & 96.5 & \textbf{96.7} & 3.77 & 3.59 \\\midruleWeighted average & 95.1 & \textbf{95.7} & 95.1 & \textbf{95.7} & 2.16 & \textbf{2.04} \\\bottomrule\end{tabular}%\end{table}
Generate the LaTeX code to recreate this table.
The average test classification accuracy, recall rate, precision and false positive rate across all five cross validated folds for both Inception-v3 and ResNet-50. The statistic from the best performing network is emboldened for each species. Equations for the computation of each metric are provided below. \label{DLPrecision}
Can you suggest a caption for this LaTeX table?
1810.05726v3_train__83e1b5d8-4c1b-4fa4-87bd-1ddccaa05014
arxiv_table_to_tex
\begin{table*}[h]\setlength\aboverulesep{0pt}\setlength\belowrulesep{0pt}\centering\caption{The confusion matrix (\%) achieved by the ResNet-50 model on the test subsets for the five cross validated folds.\label{DLConfusion}}\begin{tabular}{|c|ccccccccc|}\toprule& \multicolumn{1}{p{3em}}{\textit{Chinee apple}} & \multicolumn{1}{p{3em}}{\textit{Lantana}} & \multicolumn{1}{p{4.5em}}{\textit{Parkinsonia}} & \multicolumn{1}{p{4.5em}}{\textit{Parthenium}} & \multicolumn{1}{p{3em}}{\textit{Prickly acacia}} & \multicolumn{1}{p{3em}}{\textit{Rubber vine}} & \multicolumn{1}{p{3em}}{\textit{Siam\newline{}weed}} & \multicolumn{1}{p{3em}}{\textit{Snake weed}} & \multicolumn{1}{p{4.5em}|}{\textit{Negatives}} \\\midrule\textit{Chinee apple} & \textbf{88.5} & 1.78 & 0.00 & 0.44 & 0.18 & 0.18 & 0.27 & 3.37 & 5.33 \\\textit{Lantana} & 0.56 & \textbf{95.0} & 0.00 & 0.00 & 0.00 & 0.09 & 0.28 & 0.94 & 3.10 \\\textit{Parkinsonia} & 0.10 & 0.00 & \textbf{97.2} & 0.10 & 1.26 & 0.00 & 0.00 & 0.00 & 1.36 \\\textit{Parthenium} & 0.10 & 0.20 & 0.10 & \textbf{95.8} & 0.88 & 0.10 & 0.00 & 0.29 & 2.54 \\\textit{Prickly acacia} & 0.00 & 0.00 & 0.56 & 0.66 & \textbf{95.5} & 0.00 & 0.00 & 0.09 & 3.20 \\\textit{Rubber vine} & 0.79 & 0.50 & 0.10 & 0.10 & 0.00 & \textbf{92.5} & 0.20 & 0.40 & 5.45 \\\textit{Siam weed} & 0.00 & 0.19 & 0.00 & 0.00 & 0.00 & 0.00 & \textbf{96.5} & 0.09 & 3.26 \\\textit{Snake weed} & 4.13 & 1.77 & 0.00 & 0.30 & 0.20 & 0.10 & 0.30 & \textbf{88.8} & 4.43 \\\textit{Negatives} & 0.46 & 0.48 & 0.14 & 0.20 & 0.55 & 0.03 & 0.21 & 0.37 & \textbf{97.6} \\\bottomrule\end{tabular}%\end{table*}
Provide LaTeX code for this table.
The confusion matrix (\%) achieved by the ResNet-50 model on the test subsets for the five cross validated folds.\label{DLConfusion}
May I have a caption for this LaTeX table?
2003.04628v1_train__eaf8f315-801c-41ef-b4aa-6b6b689e2280
arxiv_table_to_tex
\begin{table*}[ht!]\centering\resizebox{\textwidth}{!}{\begin{tabular}{r c@{\hspace*{2mm}}c c@{\hspace*{2mm}}c c@{\hspace*{2mm}}c | c@{\hspace*{2mm}}c c@{\hspace*{2mm}}c c@{\hspace*{2mm}}c | c@{\hspace*{2mm}}c c@{\hspace*{2mm}}c c@{\hspace*{2mm}}c}~ &\multicolumn{6}{c}{\textit{Scientific articles}} &\multicolumn{6}{c}{\textit{Paper abstracts}} &\multicolumn{6}{c}{\textit{News articles}}\\\cmidrule(lr){2-7} \cmidrule(lr){8-13} \cmidrule(lr){14-19}~ &\multicolumn{2}{c}{\textbf{PubMed}} &\multicolumn{2}{c}{\textbf{ACM}} &\multicolumn{2}{c}{\textbf{SemEval}} &\multicolumn{2}{c}{\textbf{Inspec}} &\multicolumn{2}{c}{\textbf{WWW}} &\multicolumn{2}{c}{\textbf{KP20k}} &\multicolumn{2}{c}{\textbf{DUC-2001}} &\multicolumn{2}{c}{\textbf{KPCrowd}} &\multicolumn{2}{c}{\textbf{KPTimes}} \\\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7}\cmidrule(lr){8-9} \cmidrule(lr){10-11} \cmidrule(lr){12-13}\cmidrule(lr){14-15} \cmidrule(lr){16-17} \cmidrule(lr){18-19}\\[-1.5em]\textbf{Model} &\small{$\text{F}@10$} & \small{MAP} & \small{$\text{F}@10$} & \small{MAP} & \small{$\text{F}@10$} & \small{MAP} &\small{$\text{F}@10$} & \small{MAP} & \small{$\text{F}@10$} & \small{MAP} & \small{$\text{F}@10$} & \small{MAP} &\small{$\text{F}@10$} & \small{MAP} & \small{$\text{F}@10$} & \small{MAP} & \small{$\text{F}@10$} & \small{MAP} \\%[-.2em] % originally -.2em but for testings sake\midruleFirstPhrases &15.4 & 14.7 & 13.6 & 13.5 & 13.8 & 10.5 &29.3 & 27.9 & 10.2 & \phantom{0}9.8 & 13.5 & 12.6 &24.6 & 22.3 & 17.1 & 16.5 & \phantom{0}9.2 & \phantom{0}8.4 \\TextRank &\phantom{0}1.8 & \phantom{0}1.8 & \phantom{0}2.5 & \phantom{0}2.4 & \phantom{0}3.5 & \phantom{0}2.3 &35.8 & 31.4 & \phantom{0}8.4 & \phantom{0}5.6 & 10.2 & \phantom{0}7.4 &21.5 & 19.4 & \phantom{0}7.1 & \phantom{0}9.5 & \phantom{0}2.7 & \phantom{0}2.5 \\TF$\times$IDF &16.7 & 16.9 & 12.1 & 11.4 & 17.7 & 12.7 &\textbf{36.5} & \textbf{34.4} & \phantom{0}9.3 & 10.1 & 11.6 & 12.3 &23.3 & 21.6 & 16.9 & 15.8 & 
\phantom{0}9.6 & \phantom{0}9.4 \\\midrulePositionRank &\phantom{0}4.9 & \phantom{0}4.6 & \phantom{0}5.7 & \phantom{0}4.9 & \phantom{0}6.8 & \phantom{0}4.1 &34.2 & 32.2 & 11.6\da & \phantom{0}8.4 & 14.1\da & 11.2 &28.6\da & \textbf{28.0}\da & 13.4 & 12.7 & \phantom{0}8.5 & \phantom{0}6.6 \\MPRank &15.8 & 15.0 & 11.6 & 11.0 & 14.3 & 10.6 &30.5 & 29.0 & 10.8\da & 10.4 & 13.6\da & 13.3\da &25.6 & 24.9\da & \textbf{18.2} & \textbf{17.0} & 11.2\da & 10.1\da \\EmbedRank &\phantom{0}3.7 & \phantom{0}3.2 & \phantom{0}2.1 & \phantom{0}2.1 & \phantom{0}2.5 & \phantom{0}2.0 &35.6 & 32.5 & 10.7\da & \phantom{0}7.7 & 12.4 & 10.0 &\textbf{29.5}\da & 27.5\da & 12.4 & 12.4 & \phantom{0}4.0 & \phantom{0}3.3 \\\midruleKea &18.6\da & 18.6\da & 14.2\da & 13.3 & 19.5\da & \textbf{14.7}\da &34.5 & 33.2 & 11.0\da & 10.9\da & 14.0\da & 13.8\da &26.5\da & 24.5\da & 17.3 & 16.7 & 11.0\da & 10.8\da \\CopyRNN &\textbf{24.2}\da & \textbf{25.4}\da & \textbf{24.4}\da & \textbf{26.3}\da & \textbf{20.3}\da & 13.8 &28.2 & 26.4 & \textbf{22.2}\da & \textbf{24.9}\da & \textbf{25.4}\da & \textbf{28.7}\da &% Trained on KP20k% 12.7 & \phantom{0}9.7 & 15.5 & 11.1 & 11.0 & 10.6 \\% Trained on KPTimes10.5 & \phantom{0}7.2 & \phantom{0}8.4 & \phantom{0}4.2 & \textbf{39.3}\da & \textbf{50.9}\da \\CorrRNN &20.8\da & 19.4\da & 21.1\da & 20.5\da & 19.4 & 10.9 &27.9 & 23.6 & 19.9\da & 20.3\da & 21.8\da & 22.7 &% Trained on KP20k%17.0 & 11.5 & 11.5 & \phantom{0}5.7 & \phantom{0}9.7 & \phantom{0}8.0 \\% Trained on KPtimes10.5 & \phantom{0}6.5 & \phantom{0}7.8 & \phantom{0}3.2 & 20.5\da & 20.3\da \\%\midrule%Kea (KP20k) &%\sign{18.9} & \sign{18.5} &%\best[s]{13.7} & \best[s]{13.1} &%\sign{19.1} & \sign{14.1} \\%CopyRNN (extr.) &%n/a$^*$ & n/a$^*$ &%\best{30.6} & \best{28.1} &%\best{24.2} & \best{25.4} &%\best{24.8} & \best{26.6} &%n/a$^*$ & n/a$^*$ &%\best{21.0} & \best{14.2} \\%CopyRNN (extr.) 
&%\best[s]{24.6} & \best[s]{26.2} &%\best[s]{23.0} & \best[s]{24.3} &%\best[s]{21.7} & \best{14.7} \\\bottomrule\end{tabular}}\caption{Performance of keyphrase extraction models.$^\dagger$~indicates significance over the baselines.}\label{tab:results}\end{table*}
I need the LaTeX code to create this table, please.
Performance of keyphrase extraction models.$^\dagger$~indicates significance over the baselines.
Could you generate a caption for this LaTeX table?
2103.13837v2_train__a3928166-7ccd-4bcb-b3c5-b71159c0c3b6
arxiv_table_to_tex
\begin{table}\centering\setlength{\tabcolsep}{8pt}\begin{tabular}{| l | l | l | l | l |}\hline$N$ & $\Rs$ & Steps & Sweeps/step & Samples \\\hline30 & $10^4$ & 3018 & 1600 & 300 \\\hline60 & $3\times10^4$ & 6056 & 800 & 110 \\\hline\end{tabular}\caption{Simulation parameters for vibrational entropy measurements using the shell method. The number of steps was different for $N=30$ and $N=60$ in order to keep the rate of shell contraction constant.}\label{table:vibrational}\end{table}
Could you please provide me the LaTeX code to generate this table?
Simulation parameters for vibrational entropy measurements using the shell method. The number of steps was different for $N=30$ and $N=60$ in order to keep the rate of shell contraction constant.
I need a caption for this LaTeX table, please.
2103.13837v2_train__0f98c1b1-95a2-42ef-be50-4c5a228b757b
arxiv_table_to_tex
\begin{table}\centering\setlength{\tabcolsep}{8pt}\begin{tabular}{| l | l | l | l | l | }\hline$\Svib$ & $\mu$ & $\sigma$ & skewness & kurtosis \\\hlineshell & -2.65 & 0.11 & -0.90 & 1.29 \\\hlineRTI & -2.65 & 0.09 & -0.25 & 2.76 \\\hline\end{tabular}\caption{Statistics of the probability distribution functions of the vibrational entropy measured by the shell and RTI methods for $N=30$, $\phi=0.60$.}\label{table:distributions}\end{table}
Generate the LaTeX code to recreate this table.
Statistics of the probability distribution functions of the vibrational entropy measured by the shell and RTI methods for $N=30$, $\phi=0.60$.
Could you provide a caption for this LaTeX table?
2312.06348v2_train__ceabffd5-b1ff-4654-b80e-82258a3c5b22
arxiv_table_to_tex
\begin{table*}[htp]\centering\begin{tabular}{c|c|c|c|c}\topruleTask & Hopper & HalfCheetah & Ant & Walker2d \\\midruleExpert &3402 & 4463 & 4228 & 6717 \\BC & 2376.22 $\pm$ 754.80 & 2781.85 $\pm$ 1143.11 & 2942.24 $\pm$ 344.62 & 1091.46 $\pm$ 187.60 \\Gail & 2915.67 $\pm$ 328.12 & 4383.44 $\pm$ 62.08 & 4530.43 $\pm$ 133.30 & 1942.81 $\pm$ 957.68 \\Valuedice & 2455.25 $\pm$ 658.43 & 4734.59 $\pm$ 229.67 & 3781.56 $\pm$ 571.15 & 4606.50 $\pm$ 927.80 \\CFIL & 3234.61 $\pm$ 266.62 & 4976.73 $\pm$ 60.41 & 4222.07 $\pm$ 201.32 & 5039.77 $\pm$ 307.17 \\\midrule\textbf{DiffAIL}& \textbf{3382.03 $\pm$ 142.86} & \textbf{5362.25 $\pm$ 96.92} & \textbf{5142.60 $\pm$ 90.05} & \textbf{6292.28 $\pm$ 97.65} \\\bottomrule\end{tabular}\caption{Learned policy best performance during training for different sota imitation learning algorithms with 1 trajectory over 5 seeds in the standard state-action setting.}\label{table_1_traj}\end{table*}
Generate the LaTeX code to recreate this table.
Learned policy best performance during training for different sota imitation learning algorithms with 1 trajectory over 5 seeds in the standard state-action setting.
Please create a caption for this LaTeX table.
2312.06348v2_train__49d90938-c650-42e7-9c03-c2b7db7a9e61
arxiv_table_to_tex
\begin{table}[t]\centering\begin{tabular}{c|c|c}\topruleEnv & Gail & DiffAIL \\\midruleHopper & 83.3\% $\pm$ 1.2\% & \textbf{92.8\% $\pm$ 2.5\%} \\HalfCheetah & 83.8\% $\pm$ 0.9\% & \textbf{94.5\% $\pm$ 0.6\%} \\Walker2d & 90.5\% $\pm$ 0.8\% & \textbf{96.1\% $\pm$ 0.6\%} \\Ant & 81.7\% $\pm$ 1.6\% & \textbf{98.8\% $\pm$ 0.1\%} \\\bottomrule\end{tabular}\caption{A comparison of the ability to distinguish expert demonstrations out of data for Gail and DiffAIL with four trajectories over five seeds. It suggests that DiffAIL has a significant improvement over Gail's discriminator.}\label{table_percent}\end{table}
Please generate the necessary LaTeX script to draw this table.
A comparison of the ability to distinguish expert demonstrations out of data for Gail and DiffAIL with four trajectories over five seeds. It suggests that DiffAIL has a significant improvement over Gail's discriminator.
May I have a caption for this LaTeX table?
2312.06348v2_train__23487d7f-68b8-41f2-a151-8e2c61881d1b
arxiv_table_to_tex
\begin{table*}[htp]\setcounter{table}{2}\centering\begin{tabular}{c|c|c|c|c}\topruleTasks & HalfCheetah & Hopper & Walker2d & Ant \\\midruleOptimizer & Adam & Adam & Adam & Adam\\Batch Size & $256$ & $256$ & $256$ & $256$ \\diffusion hidden layer & $128$ & $128$ & $128$ & $128$ \\diffusion timesteps & $10$ & $10$ & $10$ & $10$ \\noise schedule & linear & linear & linear & linear \\Diffusion learning rate & $3 \times 10^{-4}$ & $3 \times 10^{-4}$ & $3 \times 10^{-4}$ & $3 \times 10^{-4}$ \\use gradient penalty & $\textbf{True}$ & $\textbf{False}$ & $\textbf{True}$ & $\textbf{True}$ \\gradient penalty weight & $\textbf{0.1}$ & $\textbf{NAN}$ & $\textbf{0.001}$ & $\textbf{0.1}$ \\Discount factor $\gamma$ &$0.99$ & $0.99$ & $0.99$ & $0.99$ \\$Q$ learning rate & $3 \times 10^{-3}$ & $3 \times 10^{-3}$ & $3 \times 10^{-3}$ & $3 \times 10^{-3}$\\$\pi$ learning rate & $3 \times 10^{-3}$ & $3 \times 10^{-3}$ & $3 \times 10^{-3}$ & $3 \times 10^{-3}$ \\\bottomrule\end{tabular}\caption{Optimal hyperparameters of DiffAIL in both state-action and state-only settings. DiffAIL parameters change only on the gradient penalty term, indicating that it is very applicable.}\label{table_optimal_hyperparameters}\end{table*}
Write the LaTeX code for this table.
Optimal hyperparameters of DiffAIL in both state-action and state-only settings. DiffAIL parameters change only on the gradient penalty term, indicating that it is very applicable.
Could you generate a caption for this LaTeX table?
2312.06348v2_train__c1a1aadd-a2bf-4843-b2ea-67ca22997fdf
arxiv_table_to_tex
\begin{table*}[htp]\centering\begin{tabular}{c|c|c|c|c|c}\toprulediffusion steps & Gail & DiffAIL$(2)$ & DiffAIL$(5)$ & DiffAIL$(10)$ & DiffAIL$(20)$ \\\midruleHalfCheetah $(timesteps=50)$ & 3.1h & 5.5h & 5.9h & 6.2h & 8.0h \\Ant $(timesteps=50)$ & 3.2h & 5.7h & 6.1h & 6.4h & 8.2h\\Hopper $(timesteps=75)$ &4.5h & 6.5h & 6.9h & 7.5h & 9.3h \\Walker2d $(timesteps=75)$ &4.8h & 8.4h & 8.9h & 9.5h & 11.4h \\\bottomrule\end{tabular}\caption{A comparison of running time for different diffusion steps with DiffAIL on $4$ Mujoco Tasks, with a unit equal to $1$ hour. The first line denotes the algorithms, where the numbers in parentheses represent the diffusion steps. The first column represents the Mujoco task, where the number in parentheses represents the training steps in units of $10,000$.}\label{table_time}\end{table*}
Please generate the necessary LaTeX script to draw this table.
A comparison of running time for different diffusion steps with DiffAIL on $4$ Mujoco Tasks, with a unit equal to $1$ hour. The first line denotes the algorithms, where the numbers in parentheses represent the diffusion steps. The first column represents the Mujoco task, where the number in parentheses represents the training steps in units of $10,000$.
Could you provide a caption for this LaTeX table?
2209.14831v1_train__5569f298-ca5b-4633-a78a-52428d08bb3d
arxiv_table_to_tex
\begin{table}[t]\caption{Key space of encryption methods}\label{table:key space}\centering\begin{tabular}{|c|cc|}\hlineEncryption Method & Key space & Remark\\\hlineSHF\cite{maungmaung_kiya_2021} & $(c \times M \times M)!$ & $c =$ 3 \\CP & $c!$ & $c >$ 3 \\\hline\end{tabular}\end{table}
Could you please provide me the LaTeX code to generate this table?
Key space of encryption methods
Please write a caption that describes this LaTeX table.
2209.14831v1_train__5a21ac8f-c716-468d-b43f-337497c9ff54
arxiv_table_to_tex
\begin{table}[bt]\caption{Detection accuracy (mAP) of proposed models (PASCAL VOC dataset)}\label{table:result}\centering\begin{tabular}{|c|ccc|}\hlineSelected feature map & Correct ($K$) & No-enc & Incorrect ($K'$) \\\hlineModel-1& 0.7244 & 0.1363 & 0.0421 \\Model-2& 0.7611 & 0.0091 & 0.0180 \\Model-3& 0.7475 & 0.0091 & 0.0078 \\Model-4& 0.7611 & 0.0023 & 0.0043 \\Model-5& 0.7587 & 0.1672 & 0.1624\\Model-6& 0.7617 & 0.1732 & 0.1672\\Model-7& 0.7695 & 0.1768 & 0.1750 \\Model-8& 0.7677 & 0.3529 & 0.3415\\Model-9& 0.7705 & 0.5767 & 0.5678 \\Model-10& 0.7705 & 0.7177 & 0.7027 \\Model-11& 0.7512 & 0.7314 & 0.7252 \\\hlineBaseline& \multicolumn{3}{c|}{0.7690}\\\hline\end{tabular}\end{table}
Could you please provide me the LaTeX code to generate this table?
Detection accuracy (mAP) of proposed models (PASCAL VOC dataset)
May I have a caption for this LaTeX table?
2209.14831v1_train__91dc5c53-525c-49bb-a1dd-f7367306fe8a
arxiv_table_to_tex
\begin{table}[bt]\caption{Machine spec used for evaluating executing time}\label{table:machine info}\centering\begin{tabular}{|c|c|}\hlineProcessor & Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz\\OS & Ubuntu 18.04.6 LTS\\GPU & Quadro RTX 6000\\Memory (CPU) & 32GB\\Memory (GPU) & 24GB\\\hline\end{tabular}\end{table}
Please generate the necessary LaTeX script to draw this table.
Machine spec used for evaluating executing time
Could you provide a caption for this LaTeX table?
2209.14831v1_train__a8c116f2-534d-4b2d-bbb4-ea2159575806
arxiv_table_to_tex
\begin{table}[bt]\caption{Learning convergence speed with and without CP (PASCAL VOC dataset)}\label{table:loss value}\centering\begin{tabular}{|c|cc|}\hline&\multicolumn{2}{c|}{Loss value} \\\cline{2-3}No. of iterations & Baseline & Model-5 \\\hline20000 & 3.5232 & 3.2191 \\40000& 2.9450 & 3.1328 \\80000& 2.3215 & 2.2475 \\120000& 1.8859 & 2.0459 \\\hline\end{tabular}\end{table}
Create LaTeX code to produce this table.
Learning convergence speed with and without CP (PASCAL VOC dataset)
Could you provide a caption for this LaTeX table?
2204.02710v1_train__df739213-2c9d-4928-ad0a-5d9f92dcc5b4
arxiv_table_to_tex
\begin{table}\centering\small\caption{The Diversified-Relevance scores for ColBERT and Mix-and-Match in our human study.}\begin{tabular}{|l|l|l|}\hline& \textbf{ColBERT } & \textbf{Mix and Match~} \\\hline\multicolumn{1}{|c|}{Diversified-Relevance (DR)} & \multicolumn{1}{c|}{0.25} & \multicolumn{1}{c|}{{\bf 0.35}} \\\hline\end{tabular}\label{tab:DR-scores}\end{table}
Create a LaTeX script to illustrate this table.
The Diversified-Relevance scores for ColBERT and Mix-and-Match in our human study.
May I have a caption for this LaTeX table?
2312.00414v2_train__fb503ac5-5326-440d-a17d-a50e13a00c27
arxiv_table_to_tex
\begin{table*}[t]\centering\caption{Comparison of models. DL-DKD uses CLIP-B/32$^\ast$ only in training phase to transfer VLM knowledge into visual and textual backbone.}\scalebox{0.5}{\begin{tabular}{c|c|c|cc|c|c}\hline& Model name & Resolution & Vision backbone & Text backbone & Parameters (M) & Image-level GFLOPs \\ \hline& MS-SL & 224 & I3D (+ResNet152) & RoBERTa & 433.3 & 40.0 \\& GMMFormer & 224 & I3D (+ResNet152) & RoBERTa & 441.3 & 40.0 \\& DL-DKD & 224 & I3D (+ResNet152, CLIP-B/32$^\ast$) & RoBERTa$^\dagger$ (+CLIP-B/32$^\ast$) & 439.0 & 40.0 (+8.8$^\ast$) \\\multirow{-4}{*}{lightweight SOTA} & MS-SL (B32) & 224 & \multicolumn{2}{c|}{CLIP-B/32} & 156.3 & 8.8 \\ \hline& MS-SL (L14) & 224 & \multicolumn{2}{c|}{CLIP-L/14} & 432.6 & 162.0 \\\multirow{-2}{*}{heavyweight SOTA} & MS-SL (L14-336) & 336 & \multicolumn{2}{c|}{CLIP-L/14-336} & 432.9 & 381.9 \\ \hline& QASIR-B32 & 224 & \multicolumn{2}{c|}{CLIP-B/32} & 151.3 & 8.8 \\& QASIR-L14 & 224 & \multicolumn{2}{c|}{CLIP-L/14} & 427.6 & 162.0 \\\multirow{-3}{*}{Zero-shot} & QASIR-L14-336 & 336 & \multicolumn{2}{c|}{CLIP-L/14-336} & 427.9 & 381.9 \\ \hline& QASIR-B32 & 224 & \multicolumn{2}{c|}{CLIP-B/32} & 154.7 & 8.8 \\& QASIR-L14 & 224 & \multicolumn{2}{c|}{CLIP-L/14} & 431.0 & 162.0 \\\multirow{-3}{*}{Fine-tuned} & QASIR-L14-336 & 336 & \multicolumn{2}{c|}{CLIP-L/14-336} & 431.3 & 381.9 \\ \hline\end{tabular}}\label{tab:model_comparison}\end{table*}
Please create LaTeX code to produce this table.
Comparison of models. DL-DKD uses CLIP-B/32$^\ast$ only in training phase to transfer VLM knowledge into visual and textual backbone.
Please provide an appropriate caption for this LaTeX table.
2312.00414v2_train__d63093c6-0831-4185-ba57-938d1f424a8b
arxiv_table_to_tex
\begin{table*}[t]\begin{table*}[t]\centering\caption{Fine-tuning QASIR performance on benchmark datasets.}\scalebox{0.57}{\begin{tabular}{lcccccccccccccccccc}\hline\rowcolor[HTML]{EFEFEF}\multicolumn{1}{l|}{\cellcolor[HTML]{EFEFEF}} & \multicolumn{6}{c|}{\cellcolor[HTML]{EFEFEF}ActivityNet Captions} & \multicolumn{6}{c|}{\cellcolor[HTML]{EFEFEF}TVR} & \multicolumn{6}{c}{\cellcolor[HTML]{EFEFEF}Charades-STA} \\ \hline\multicolumn{1}{c|}{} & GFLOPs & R@1 & R@5 & R@10 & R@100 & \multicolumn{1}{c|}{sumR} & GFLOPs & R@1 & R@5 & R@10 & R@100 & \multicolumn{1}{c|}{sumR} & GFLOPs & R@1 & R@5 & R@10 & R@100 & sumR \\ \hline\multicolumn{1}{c|}{MS-SL} & 3.4$\times$10$^3$ & 7.1 & 22.5 & 34.7 & 75.8 & \multicolumn{1}{c|}{140.1} & 4.6$\times$10$^4$ & 13.5 & 32.1 & 43.4 & 83.4 & \multicolumn{1}{c|}{172.4} & 1.3$\times$10$^3$ & 1.8 & 7.1 & 11.8 & 47.7 & 68.4 \\\multicolumn{1}{c|}{GMMFormer} & 3.4$\times$10$^3$ & 8.3 & 24.9 & 36.7 & 76.1 & \multicolumn{1}{c|}{146.0} & 4.6$\times$10$^4$ & 13.9 & 33.3 & 44.5 & 84.9 & \multicolumn{1}{c|}{176.6} & 1.3$\times$10$^3$ & 2.1 & 7.8 & 12.5 & 50.6 & 72.9 \\\multicolumn{1}{c|}{DL-DKD} & 3.4$\times$10$^3$ & 8.0 & 25.0 & 37.5 & 77.1 & \multicolumn{1}{c|}{147.6} & 4.6$\times$10$^4$ & 14.4 & 34.9 & 45.8 & 84.9 & \multicolumn{1}{c|}{179.9} & - & - & - & - & - & - \\\multicolumn{1}{c|}{MS-SL (B32)} & 5.4$\times$10$^2$ & 11.7 & 31.2 & 43.5 & 81.7 & \multicolumn{1}{c|}{168.1} & 2.0$\times$10$^3$ & 17.5 & 39.5 & 51.3 & 88.3 & \multicolumn{1}{c|}{196.6} & 2.8$\times$10$^2$ & 1.2 & 4.4 & 7.6 & 41.5 & 54.7 \\\multicolumn{1}{c|}{MS-SL (L14)} & 9.8$\times$10$^3$ & 14.8 & 36.6 & 50.3 & 84.8 & \multicolumn{1}{c|}{186.5} & 3.7$\times$10$^4$ & 22.0 & 46.2 & 58.0 & 91.1 & \multicolumn{1}{c|}{217.3} & 5.0$\times$10$^3$ & 1.7 & 7.1 & 12.0 & 54.4 & 75.2 \\\multicolumn{1}{c|}{MS-SL (L14-336)} & 2.3$\times$10$^4$ & 14.2 & 36.9 & 50.4 & 85.2 & \multicolumn{1}{c|}{186.7} & 8.7$\times$10$^4$ & 24.7 & 49.3 & 60.6 & 92.0 & \multicolumn{1}{c|}{226.6} & 1.1$\times$10$^4$ & 
2.3 & 8.6 & 14.1 & 57.6 & 82.6 \\ \hline\rowcolor[HTML]{EFEFEF}\multicolumn{19}{l}{\cellcolor[HTML]{EFEFEF}QASIR-B32} \\\multicolumn{1}{c|}{$1\times1$} & 5.4$\times$10$^2$ & \textbf{14.1} & \textbf{32.9} & \textbf{44.5} & 79.9 & \multicolumn{1}{c|}{\textbf{171.4}} & 2.0$\times$10$^3$ & \textbf{19.0} & \textbf{39.9} & \textbf{50.4} & \textbf{87.2} & \multicolumn{1}{c|}{\textbf{196.5}} & 2.8$\times$10$^2$ & \textbf{1.9} & \textbf{5.8} & \textbf{10.1} & \textbf{40.0} & \textbf{57.8} \\\multicolumn{1}{c|}{$2\times2$} & 1.4$\times$10$^2$ & 13.2 & 31.7 & 43.1 & \textbf{80.0} & \multicolumn{1}{c|}{167.9} & 5.1$\times$10$^2$ & 15.8 & 34.7 & 45.5 & 84.5 & \multicolumn{1}{c|}{180.4} & 7.7$\times$10$^1$ & 1.3 & 4.9 & 8.6 & 35.5 & 50.3 \\\multicolumn{1}{c|}{$3\times3$} & 6.9$\times$10$^1$ & 10.7 & 26.8 & 36.9 & 75.0 & \multicolumn{1}{c|}{149.3} & 2.3$\times$10$^2$ & 11.6 & 28.5 & 38.5 & 80.0 & \multicolumn{1}{c|}{158.7} & 4.0$\times$10$^1$ & 0.9 & 4.1 & 6.7 & 28.3 & 40.0 \\\multicolumn{1}{c|}{$4\times4$} & 4.3$\times$10$^1$ & 8.4 & 22.5 & 32.4 & 69.7 & \multicolumn{1}{c|}{133.0} & 1.3$\times$10$^2$ & 8.6 & 22.2 & 31.5 & 74.5 & \multicolumn{1}{c|}{136.8} & 2.7$\times$10$^1$ & 0.7 & 2.9 & 4.8 & 24.6 & 32.9 \\\multicolumn{1}{c|}{$5\times5$} & 3.1$\times$10$^1$ & 6.2 & 17.7 & 25.8 & 62.1 & \multicolumn{1}{c|}{111.9} & 9.0$\times$10$^1$ & 6.3 & 17.7 & 25.6 & 68.8 & \multicolumn{1}{c|}{118.4} & 2.1$\times$10$^1$ & 0.4 & 2.0 & 3.4 & 19.8 & 25.6 \\\multicolumn{1}{c|}{$6\times6$} & 2.5$\times$10$^1$ & 4.8 & 14.0 & 20.8 & 55.2 & \multicolumn{1}{c|}{94.7} & 6.6$\times$10$^1$ & 4.8 & 13.7 & 21.0 & 63.3 & \multicolumn{1}{c|}{102.8} & 1.5$\times$10$^1$ & 0.5 & 1.8 & 3.1 & 18.3 & 23.7 \\\rowcolor[HTML]{EFEFEF}\multicolumn{19}{l}{\cellcolor[HTML]{EFEFEF}QASIR-L14} \\\multicolumn{1}{c|}{$1\times1$} & 9.8$\times$10$^3$ & \textbf{18.9} & \textbf{41.0} & \textbf{53.2} & \textbf{84.8} & \multicolumn{1}{c|}{\textbf{197.9}} & 3.7$\times$10$^4$ & \textbf{23.6} & \textbf{47.0} & \textbf{56.5} & 
\textbf{89.0} & \multicolumn{1}{c|}{\textbf{216.1}} & 5.0$\times$10$^3$ & \textbf{3.0} & \textbf{11.2} & \textbf{17.4} & \textbf{55.7} & \textbf{87.2} \\\multicolumn{1}{c|}{$2\times2$} & 2.5$\times$10$^3$ & 18.2 & 39.6 & 52.2 & \textbf{84.8} & \multicolumn{1}{c|}{194.9} & 9.2$\times$10$^3$ & 23.0 & 45.4 & 56.3 & 88.9 & \multicolumn{1}{c|}{213.6} & 1.3$\times$10$^3$ & 2.7 & 9.3 & 14.9 & 49.1 & 76.0 \\\multicolumn{1}{c|}{$3\times3$} & 1.1$\times$10$^3$ & 16.5 & 37.5 & 49.9 & 83.2 & \multicolumn{1}{c|}{187.1} & 4.1$\times$10$^3$ & 21.0 & 42.5 & 54.0 & 88.4 & \multicolumn{1}{c|}{205.8} & 6.4$\times$10$^2$ & 2.0 & 7.9 & 12.2 & 43.3 & 65.3 \\\multicolumn{1}{c|}{$4\times4$} & 7.0$\times$10$^2$ & 14.1 & 33.8 & 45.6 & 80.4 & \multicolumn{1}{c|}{174.0} & 2.3$\times$10$^3$ & 16.3 & 35.7 & 46.4 & 84.6 & \multicolumn{1}{c|}{182.9} & 4.0$\times$10$^2$ & 1.8 & 6.3 & 9.6 & 36.8 & 54.4 \\\multicolumn{1}{c|}{$5\times5$} & 4.8$\times$10$^2$ & 12.5 & 30.3 & 41.8 & 78.1 & \multicolumn{1}{c|}{162.8} & 1.5$\times$10$^3$ & 13.5 & 31.7 & 42.0 & 81.6 & \multicolumn{1}{c|}{168.9} & 3.0$\times$10$^2$ & 1.3 & 4.2 & 6.9 & 31.1 & 43.4 \\\multicolumn{1}{c|}{$6\times6$} & 3.7$\times$10$^2$ & 10.7 & 27.2 & 37.8 & 75.2 & \multicolumn{1}{c|}{150.9} & 1.1$\times$10$^3$ & 12.0 & 27.8 & 37.7 & 78.9 & \multicolumn{1}{c|}{156.4} & 1.9$\times$10$^2$ & 1.2 & 3.7 & 6.2 & 27.2 & 38.3 \\\rowcolor[HTML]{EFEFEF}\multicolumn{19}{l}{\cellcolor[HTML]{EFEFEF}QASIR-L14-336} \\\multicolumn{1}{c|}{$1\times1$} & 2.3$\times$10$^4$ & \textbf{19.7} & 41.4 & \textbf{53.9} & 85.1 & \multicolumn{1}{c|}{\textbf{200.1}} & 8.7$\times$10$^4$ & \textbf{26.9} & \textbf{50.6} & 60.7 & \textbf{91.4} & \multicolumn{1}{c|}{229.6} & 1.1$\times$10$^4$ & 3.1 & \textbf{10.8} & \textbf{17.3} & \textbf{57.0} & \textbf{88.2} \\\multicolumn{1}{c|}{$2\times2$} & 5.9$\times$10$^3$ & 19.3 & \textbf{41.5} & 53.8 & \textbf{85.3} & \multicolumn{1}{c|}{200.0} & 2.1$\times$10$^4$ & 26.7 & \textbf{50.6} & \textbf{61.4} & \textbf{91.4} & 
\multicolumn{1}{c|}{\textbf{230.1}} & 3.1$\times$10$^3$ & \textbf{3.3} & 10.6 & 17.0 & 55.2 & 86.0 \\\multicolumn{1}{c|}{$3\times3$} & 2.7$\times$10$^3$ & 18.7 & 40.6 & 52.7 & 84.4 & \multicolumn{1}{c|}{196.4} & 9.8$\times$10$^3$ & 24.2 & 47.2 & 58.3 & 89.9 & \multicolumn{1}{c|}{219.6} & 1.4$\times$10$^3$ & 2.6 & 9.0 & 15.2 & 51.3 & 78.1 \\\multicolumn{1}{c|}{$4\times4$} & 1.6$\times$10$^3$ & 16.8 & 38.1 & 50.0 & 83.4 & \multicolumn{1}{c|}{188.2} & 5.6$\times$10$^3$ & 21.6 & 43.3 & 54.3 & 88.7 & \multicolumn{1}{c|}{208.0} & 9.3$\times$10$^2$ & 2.3 & 7.1 & 11.9 & 44.3 & 65.6 \\\multicolumn{1}{c|}{$5\times5$} & 1.1$\times$10$^3$ & 15.3 & 35.6 & 47.4 & 82.0 & \multicolumn{1}{c|}{180.3} & 3.6$\times$10$^3$ & 19.0 & 39.4 & 49.7 & 86.0 & \multicolumn{1}{c|}{194.1} & 7.0$\times$10$^2$ & 2.0 & 6.5 & 10.6 & 39.7 & 58.9 \\\multicolumn{1}{c|}{$6\times6$} & 8.4$\times$10$^2$ & 13.6 & 33.0 & 44.9 & 80.0 & \multicolumn{1}{c|}{171.5} & 2.6$\times$10$^3$ & 15.9 & 35.0 & 45.2 & 83.5 & \multicolumn{1}{c|}{179.5} & 4.4$\times$10$^2$ & 1.4 & 5.1 & 8.4 & 35.4 & 50.3 \\\hline\end{tabular}}\label{tab:fine_tuned_overall}\end{table*}
I need the LaTeX code to create this table, please.
Fine-tuning QASIR performance on benchmark datasets.
I need a caption for this LaTeX table, please.
2312.00414v2_train__6721cc01-9f5a-4105-922f-14feab149784
arxiv_table_to_tex
\begin{table}[t]\begin{table}[t]\centering\caption{Comparison of super images with other reduce-then-encode methods: sparse sampling \cite{lei2021cvpr}, MGSampler \cite{Zhi_2021_ICCV}, and PMISampler \cite{Xian_2024_WACV}.}\scalebox{0.54}{\begin{tabular}{ccccccccccccccccccc}\hline\rowcolor[HTML]{EFEFEF}\multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}} & \multicolumn{6}{c|}{\cellcolor[HTML]{EFEFEF}ActivityNet Captions} & \multicolumn{6}{c|}{\cellcolor[HTML]{EFEFEF}TVR} & \multicolumn{6}{c}{\cellcolor[HTML]{EFEFEF}Charades-STA} \\ \hline\multicolumn{1}{c|}{} & GFLOPs & R@1 & R@5 & R@10 & R@100 & \multicolumn{1}{c|}{sumR} & GFLOPs & R@1 & R@5 & R@10 & R@100 & \multicolumn{1}{c|}{sumR} & GFLOPs & R@1 & R@5 & R@10 & R@100 & sumR \\ \hline\rowcolor[HTML]{EFEFEF}\multicolumn{19}{l}{\cellcolor[HTML]{EFEFEF}2$\times$2 setting (average \#Frames: 15.5/57.7/8.1 on ActivityNet Captions/TVR/Charades-STA)} \\\multicolumn{1}{c|}{Sparse sampling} & & 18.0 & 39.4 & 51.6 & 83.7 & \multicolumn{1}{c|}{192.7} & & \textbf{27.1} & 49.6 & 60.1 & 91.3 & \multicolumn{1}{c|}{228.2} & & \textbf{3.3} & \textbf{11.0} & 16.2 & 54.0 & 84.5 \\\multicolumn{1}{c|}{MGSampler} & & 18.1 & 39.5 & 51.6 & 83.5 & \multicolumn{1}{c|}{192.7} & & 26.7 & 49.9 & 60.4 & 90.9 & \multicolumn{1}{c|}{227.9} & & 3.0 & 9.7 & 15.2 & 54.5 & 82.3 \\\multicolumn{1}{c|}{PMISampler} & & 17.9 & 39.3 & 51.5 & 83.7 & \multicolumn{1}{c|}{192.6} & & 26.9 & \textbf{50.6} & 60.7 & 91.1 & \multicolumn{1}{c|}{229.4} & & 3.0 & 10.1 & 16.0 & 53.3 & 82.4 \\\multicolumn{1}{c|}{Super images} & \multirow{-4}{*}{5.9$\times$10$^3$} & \textbf{19.3} & \textbf{41.5} & \textbf{53.8} & \textbf{85.3} & \multicolumn{1}{c|}{\textbf{200.0}} & \multirow{-4}{*}{2.1$\times$10$^4$} & 26.7 & \textbf{50.6} & \textbf{61.4} & \textbf{91.4} & \multicolumn{1}{c|}{\textbf{230.1}} & \multirow{-4}{*}{3.1$\times$10$^3$} & \textbf{3.3} & 10.6 & \textbf{17.0} & \textbf{55.2} & \textbf{86.0} \\\rowcolor[HTML]{EFEFEF}\multicolumn{19}{l}{\cellcolor[HTML]{EFEFEF}3$\times$3 
setting (average \#Frames: 7.1/23.9/3.9 on ActivityNet Captions/TVR/Charades-STA)} \\\multicolumn{1}{c|}{Sparse sampling} & & 15.8 & 35.4 & 47.2 & 80.1 & \multicolumn{1}{c|}{178.5} & & \textbf{25.3} & \textbf{47.4} & 57.3 & 89.4 & \multicolumn{1}{c|}{219.4} & & 2.2 & 7.8 & 12.7 & 45.9 & 68.6 \\\multicolumn{1}{c|}{MGSampler} & & 15.7 & 35.7 & 47.3 & 80.4 & \multicolumn{1}{c|}{179.2} & & 23.7 & 45.7 & 56.2 & 88.1 & \multicolumn{1}{c|}{213.7} & & 2.3 & 7.7 & 12.5 & 46.9 & 69.4 \\\multicolumn{1}{c|}{PMISampler} & & 15.9 & 35.7 & 47.5 & 80.7 & \multicolumn{1}{c|}{179.8} & & 23.8 & 45.2 & 55.5 & 88.0 & \multicolumn{1}{c|}{212.6} & & 2.4 & 7.6 & 12.8 & 46.7 & 69.6 \\\multicolumn{1}{c|}{Super images} & \multirow{-4}{*}{2.7$\times$10$^3$} & \textbf{18.7} & \textbf{40.6} & \textbf{52.7} & \textbf{84.4} & \multicolumn{1}{c|}{\textbf{196.4}} & \multirow{-4}{*}{9.8$\times$10$^3$} & 24.2 & 47.2 & \textbf{58.3} & \textbf{89.9} & \multicolumn{1}{c|}{\textbf{219.6}} & \multirow{-4}{*}{1.4$\times$10$^3$} & \textbf{2.6} & \textbf{9.0} & \textbf{15.2} & \textbf{51.3} & \textbf{78.1} \\\rowcolor[HTML]{EFEFEF}\multicolumn{19}{l}{\cellcolor[HTML]{EFEFEF}4$\times$4 setting (average \#Frames: 4.2/14.8/2.4 on ActivityNet Captions/TVR/Charades-STA)} \\\multicolumn{1}{c|}{Sparse sampling} & & 13.2 & 30.9 & 41.8 & 74.8 & \multicolumn{1}{c|}{160.7} & & \textbf{22.8} & 43.2 & 53.5 & 86.7 & \multicolumn{1}{c|}{206.1} & & 1.8 & 6.3 & 10.9 & 40.5 & 59.6 \\\multicolumn{1}{c|}{MGSampler} & & 10.4 & 31.9 & 43.7 & 77.1 & \multicolumn{1}{c|}{166.3} & & 21.8 & 42.7 & 52.6 & 85.8 & \multicolumn{1}{c|}{202.8} & & 1.8 & 6.9 & 11.3 & 39.1 & 59.1 \\\multicolumn{1}{c|}{PMISampler} & & 13.7 & 32.0 & 43.3 & 77.0 & \multicolumn{1}{c|}{166.0} & & 22.4 & 43.1 & 53.3 & 86.7 & \multicolumn{1}{c|}{205.6} & & 1.8 & 6.6 & 10.8 & 39.4 & 58.6 \\\multicolumn{1}{c|}{Super images} & \multirow{-4}{*}{1.6$\times$10$^3$} & \textbf{16.8} & \textbf{38.1} & \textbf{50.0} & \textbf{83.4} & \multicolumn{1}{c|}{\textbf{188.2}} & 
\multirow{-4}{*}{5.6$\times$10$^3$} & 21.6 & \textbf{43.3} & \textbf{54.3} & \textbf{88.7} & \multicolumn{1}{c|}{\textbf{208.0}} & \multirow{-4}{*}{9.3$\times$10$^2$} & \textbf{2.3} & \textbf{7.1} & \textbf{11.9} & \textbf{44.3} & \textbf{65.6} \\\rowcolor[HTML]{EFEFEF}\multicolumn{19}{l}{\cellcolor[HTML]{EFEFEF}5$\times$5 setting (average \#Frames: 2.9/9.6/1.8 on ActivityNet Captions/TVR/Charades-STA)} \\\multicolumn{1}{c|}{Sparse sampling} & & 11.3 & 26.7 & 36.3 & 66.4 & \multicolumn{1}{c|}{140.7} & & \textbf{19.4} & 38.7 & 48.7 & 83.1 & \multicolumn{1}{c|}{189.9} & & 1.6 & \textbf{6.6} & 10.0 & 35.1 & 53.2 \\\multicolumn{1}{c|}{MGSampler} & & 12.2 & 28.7 & 38.8 & 70.7 & \multicolumn{1}{c|}{150.3} & & 19.1 & 38.6 & 48.7 & 83.0 & \multicolumn{1}{c|}{189.4} & & 1.9 & 6.5 & 10.0 & 34.3 & 52.6 \\\multicolumn{1}{c|}{PMISampler} & & 12.0 & 28.6 & 39.0 & 70.6 & \multicolumn{1}{c|}{150.3} & & 19.3 & 38.2 & 48.0 & 83.0 & \multicolumn{1}{c|}{188.5} & & 1.8 & 6.3 & 10.2 & 34.6 & 52.8 \\\multicolumn{1}{c|}{Super images} & \multirow{-4}{*}{1.1$\times$10$^3$} & \textbf{15.3} & \textbf{35.6} & \textbf{47.4} & \textbf{82.0} & \multicolumn{1}{c|}{\textbf{180.3}} & \multirow{-4}{*}{3.6$\times$10$^3$} & 19.0 & \textbf{39.4} & \textbf{49.7} & \textbf{86.0} & \multicolumn{1}{c|}{\textbf{194.1}} & \multirow{-4}{*}{7.0$\times$10$^2$} & \textbf{2.0} & 6.5 & \textbf{10.6} & \textbf{39.7} & \textbf{58.9} \\\rowcolor[HTML]{EFEFEF}\multicolumn{19}{l}{\cellcolor[HTML]{EFEFEF}6$\times$6 setting (average \#Frames: 2.1/6.8/1.1 on ActivityNet Captions/TVR/Charades-STA)} \\\multicolumn{1}{c|}{Sparse sampling} & & 9.6 & 22.7 & 30.7 & 57.1 & \multicolumn{1}{c|}{120.2} & & 16.2 & 33.6 & 42.8 & 79.6 & \multicolumn{1}{c|}{172.2} & & 1.8 & 5.9 & 10.1 & \textbf{34.7} & 52.6 \\\multicolumn{1}{c|}{MGSampler} & & 10.8 & 25.4 & 34.7 & 63.2 & \multicolumn{1}{c|}{134.1} & & \textbf{17.1} & 33.7 & 42.8 & 78.6 & \multicolumn{1}{c|}{172.3} & & 1.8 & \textbf{6.6} & \textbf{10.2} & 33.9 & 52.5 
\\\multicolumn{1}{c|}{PMISampler} & & 10.3 & 25.1 & 34.2 & 62.9 & \multicolumn{1}{c|}{132.6} & & 16.7 & 33.9 & 42.8 & 78.4 & \multicolumn{1}{c|}{171.8} & & \textbf{1.9} & 6.3 & 9.9 & \textbf{34.7} & \textbf{52.9} \\\multicolumn{1}{c|}{Super images} & \multirow{-4}{*}{8.4$\times$10$^2$} & \textbf{13.6} & \textbf{33.0} & \textbf{44.9} & \textbf{80.0} & \multicolumn{1}{c|}{\textbf{171.5}} & \multirow{-4}{*}{2.6$\times$10$^3$} & 15.9 & \textbf{35.0} & \textbf{45.2} & \textbf{83.5} & \multicolumn{1}{c|}{\textbf{179.5}} & \multirow{-4}{*}{4.4$\times$10$^2$} & 1.4 & 5.1 & 8.4 & 35.4 & 50.3 \\ \hline\end{tabular}}\label{fig:performance_change_other_sampling}\end{table}
Write the LaTeX code for this table.
Comparison of super images with other reduce-then-encode methods: sparse sampling \cite{lei2021cvpr}, MGSampler \cite{Zhi_2021_ICCV}, and PMISampler \cite{Xian_2024_WACV}.
Please write a caption that describes this LaTeX table.
2312.00414v2_train__64b26da4-b62d-478d-92e3-5edde19572ae
arxiv_table_to_tex
\begin{table}[t]\centering\caption{Zero-shot retrieval performance on each TV show. We use the $2\times2$ QASIR-L14-336 as our base model.}\begin{tabular}{ccccccc}\hline\rowcolor[HTML]{EFEFEF}& \multicolumn{6}{c}{\cellcolor[HTML]{EFEFEF}TVR} \\ \hlineTV show title & \#Videos & R@1 & R@5 & R@10 & R@100 & sumR \\ \hlineThe Big Bang Theory & 2,155 & 22.6 & 42.6 & 53.1 & 84.4 & 202.7 \\How I Met Your Mother & 745 & 24.8 & 43.4 & 54.6 & 87.2 & 210.1 \\Friends & 2,800 & 19.5 & 36.8 & 46.4 & 81.0 & 183.8 \\Grey's Anatomy & 520 & 15.9 & 32.1 & 40.2 & 78.1 & 166.3 \\House & 2,310 & 11.9 & 24.3 & 31.6 & 66.0 & 133.9 \\Castle & 2,365 & 12.4 & 26.6 & 33.8 & 68.7 & 141.5 \\ \hlineTotal & 10,835 & 17.2 & 33.3 & 42.1 & 76.1 & 168.7 \\ \hline\end{tabular}\label{tab:tvshow_results}\end{table}
Please generate the necessary LaTeX script to draw this table.
Zero-shot retrieval performance on each TV show. We use the $2\times2$ QASIR-L14-336 as our base model.
Please give a caption for the LaTeX table.
2401.06144v2_train__a10068fc-5361-4c00-8557-65d4c294dca4
arxiv_table_to_tex
\begin{table}[ht]\begin{center}\begin{tabular}{ |p{0.2\linewidth}|c c c|c c| }\hline& 32x32 & 64x64 & 96x96 & 128x128 & 160x160 \\\hline\hlinePre-trained Model & 1.47 & 2.92 & 4.96 & 7.85 & 14.8 \\\hlineVanilla fine-tune ($w_R = 1.0$) & 20.92 & 12.94 & 14.96 & 32.21 & 27.1 \\\hlineMix w/ $w_R = 0.2$ & 1.41 & \textbf{2.44} & 4.66 & 7.69 & 12.4 \\\hlineFrozen spatial conv & 1.46 & 2.85 & 4.76 & 7.62 & 11.9 \\\hlineFrozen spatial conv + $w_R = 0.2$ & \textbf{1.36} & 2.46 & \textbf{4.54} & \textbf{7.06} & \textbf{11.3} \\\hline\end{tabular}\end{center}\caption{FIDs on FFHQ across resolutions for different fine-tuning schemes. Our best result combines mixed resolution fine-tuning and parameter freezing.}\label{table:finetuned-fids}\end{table}
Please create LaTeX code to produce this table.
FIDs on FFHQ across resolutions for different fine-tuning schemes. Our best result combines mixed resolution fine-tuning and parameter freezing.
Please provide an appropriate caption for this LaTeX table.
2203.10965v4_train__d4013423-5df4-4233-8379-b6d7f72f403f
arxiv_table_to_tex
\begin{table}[h]\begin{table}[h]\vspace{-2mm}\centering\caption{Comparison of all variants of \toolname with a triplet architecture and the baseline approach Post2Vec.}\vspace{-2mm}\label{tab:all_results}\begin{tabular}{c|ccccc}\hline\multirow{2}{*}{Model Name} & \multicolumn{5}{c}{Precision@k} \\ \cline{2-6}& \multicolumn{1}{c|}{P@1} & \multicolumn{1}{c|}{P@2} & \multicolumn{1}{c|}{P@3} & \multicolumn{1}{c|}{P@4} & P@5 \\ \hline\textbf{CodeBERT$_{ALL}$} & \multicolumn{1}{c|}{\textbf{0.848}} & \multicolumn{1}{c|}{\textbf{0.701}} & \multicolumn{1}{c|}{\textbf{0.579}} & \multicolumn{1}{c|}{\textbf{0.486}} & \textbf{0.415} \\ \hlineBERT$_{ALL}$ & \multicolumn{1}{c|}{0.845} & \multicolumn{1}{c|}{0.696} & \multicolumn{1}{c|}{0.575} & \multicolumn{1}{c|}{0.482} & 0.413 \\ \hlineRoBERTa$_{ALL}$ & \multicolumn{1}{c|}{0.843} & \multicolumn{1}{c|}{0.694} & \multicolumn{1}{c|}{0.571} & \multicolumn{1}{c|}{0.478} & 0.409 \\ \hlineBERTOverflow$_{ALL}$ & \multicolumn{1}{c|}{0.725} & \multicolumn{1}{c|}{0.592} & \multicolumn{1}{c|}{0.489} & \multicolumn{1}{c|}{0.412} & 0.354 \\ \hlineALBERT$_{ALL}$ & \multicolumn{1}{c|}{0.748} & \multicolumn{1}{c|}{0.586} & \multicolumn{1}{c|}{0.469} & \multicolumn{1}{c|}{0.386} & 0.327 \\ \hlinePost2Vec & \multicolumn{1}{c|}{0.786} & \multicolumn{1}{c|}{0.628} & \multicolumn{1}{c|}{0.507} & \multicolumn{1}{c|}{0.421} & 0.359 \\ \hline \hline\multirow{2}{*}{Model Name} & \multicolumn{5}{c}{Recall@k} \\ \cline{2-6}& \multicolumn{1}{c|}{R@1} & \multicolumn{1}{c|}{R@2} & \multicolumn{1}{c|}{R@3} & \multicolumn{1}{c|}{R@4} & R@5 \\ \hline\textbf{CodeBERT$_{ALL}$} & \multicolumn{1}{c|}{\textbf{0.848}} & \multicolumn{1}{c|}{\textbf{0.756}} & \multicolumn{1}{c|}{\textbf{0.724}} & \multicolumn{1}{c|}{\textbf{0.733}} & \textbf{0.757} \\ \hlineBERT$_{ALL}$ & \multicolumn{1}{c|}{0.845} & \multicolumn{1}{c|}{0.750} & \multicolumn{1}{c|}{0.719} & \multicolumn{1}{c|}{0.728} & 0.752 \\ \hlineRoBERTa$_{ALL}$ & \multicolumn{1}{c|}{0.843} & 
\multicolumn{1}{c|}{0.747} & \multicolumn{1}{c|}{0.714} & \multicolumn{1}{c|}{0.722} & 0.746 \\ \hlineBERTOverflow$_{ALL}$ & \multicolumn{1}{c|}{0.725} & \multicolumn{1}{c|}{0.635} & \multicolumn{1}{c|}{0.607} & \multicolumn{1}{c|}{0.619} & 0.644 \\ \hlineALBERT$_{ALL}$ & \multicolumn{1}{c|}{0.748} & \multicolumn{1}{c|}{0.630} & \multicolumn{1}{c|}{0.588} & \multicolumn{1}{c|}{0.588} & 0.605 \\ \hlinePost2Vec & \multicolumn{1}{c|}{0.786} & \multicolumn{1}{c|}{0.678} & \multicolumn{1}{c|}{0.636} & \multicolumn{1}{c|}{0.639} & 0.659 \\ \hline \hline\multirow{2}{*}{Model Name} & \multicolumn{5}{c}{F1-score@k} \\ \cline{2-6}& \multicolumn{1}{c|}{F@1} & \multicolumn{1}{c|}{F@2} & \multicolumn{1}{c|}{F@3} & \multicolumn{1}{c|}{F@4} & F@5 \\ \hline\textbf{CodeBERT$_{ALL}$} & \multicolumn{1}{c|}{\textbf{0.848}} & \multicolumn{1}{c|}{\textbf{0.719}} & \multicolumn{1}{c|}{\textbf{0.625}} & \multicolumn{1}{c|}{\textbf{0.561}} & \textbf{0.513} \\ \hlineBERT$_{ALL}$ & \multicolumn{1}{c|}{0.845} & \multicolumn{1}{c|}{0.714} & \multicolumn{1}{c|}{0.621} & \multicolumn{1}{c|}{0.557} & 0.510 \\ \hlineRoBERTa$_{ALL}$ & \multicolumn{1}{c|}{0.843} & \multicolumn{1}{c|}{0.711} & \multicolumn{1}{c|}{0.617} & \multicolumn{1}{c|}{0.553} & 0.505 \\ \hlineBERTOverflow$_{ALL}$ & \multicolumn{1}{c|}{0.725} & \multicolumn{1}{c|}{0.606} & \multicolumn{1}{c|}{0.527} & \multicolumn{1}{c|}{0.475} & 0.427 \\ \hlineALBERT$_{ALL}$ & \multicolumn{1}{c|}{0.748} & \multicolumn{1}{c|}{0.600} & \multicolumn{1}{c|}{0.506} & \multicolumn{1}{c|}{0.447} & 0.406 \\ \hlinePost2Vec & \multicolumn{1}{c|}{0.786} & \multicolumn{1}{c|}{0.646} & \multicolumn{1}{c|}{0.549} & \multicolumn{1}{c|}{0.488} & 0.445 \\ \hline\end{tabular}\vspace{-4mm}\end{table}
Could you please provide me the LaTeX code to generate this table?
Comparison of all variants of \toolname with a triplet architecture and the baseline approach Post2Vec.
I need a caption for this LaTeX table, please.
2210.11532v1_train__149078bb-8d39-4915-baae-7a773e0d4ad5
arxiv_table_to_tex
\begin{table*}[!th]\centering\caption{List of 10 stocks randomly selected [source \url{it.finance.yahoo.com}].}\label{tab:vol_feats}\begin{tabular}{|l|l|l|}\hline\multicolumn{1}{|c|}{\textbf{Ticker}} & \multicolumn{1}{c|}{\textbf{Company}} & \multicolumn{1}{c|}{\textbf{Market}} \\ \hlineCSGKF & \multicolumn{1}{c|}{Credit Suisse Group AG} & \multicolumn{1}{c|}{Other OTC} \\ \hlineEOG & \multicolumn{1}{c|}{EOG Resources, Inc.} & \multicolumn{1}{c|}{NYSE} \\ \hlineMETA & \multicolumn{1}{c|}{Meta Platforms, Inc. } & \multicolumn{1}{c|}{Nasdaq GS} \\ \hlineNKE & \multicolumn{1}{c|}{NIKE, Inc.} & \multicolumn{1}{c|}{NYSE} \\ \hlineDIS & \multicolumn{1}{c|}{The Walt Disney Company} & \multicolumn{1}{c|}{NYSE} \\ \hlinePG & \multicolumn{1}{c|}{The Procter \& Gamble Company} & \multicolumn{1}{c|}{NYSE} \\ \hlineQQQ & \multicolumn{1}{c|}{Invesco QQQ Trust} & \multicolumn{1}{c|}{Nasdaq GM} \\ \hlineIBM & \multicolumn{1}{c|}{International Business Machines Corporation} & \multicolumn{1}{c|}{NYSE} \\ \hlineANF & \multicolumn{1}{c|}{Abercrombie \& Fitch Co.} & \multicolumn{1}{c|}{NYSE} \\ \hlineCS & \multicolumn{1}{c|}{Credit Suisse Group AG} & \multicolumn{1}{c|}{NYSE} \\ \hline\end{tabular}\end{table*}
Convert this table into LaTeX code.
List of 10 stocks randomly selected [source \url{it.finance.yahoo.com}].
Could you provide a caption for this LaTeX table?
2210.11532v1_train__30137258-4cfd-4703-8f15-b29080ecbe3e
arxiv_table_to_tex
\begin{table}[!ht]\centering\caption{ADF test stationarity with AIC optimization.}\label{tab:adf-test}\begin{tabular}{l|c|c|c|c|ccc|}\cline{2-8}& \textbf{Test Statistic} & \textbf{p-value} & \textbf{Lags} & \textbf{Observations} & \multicolumn{3}{c|}{\textbf{Critical Value}} \\ \cline{6-8}& & & & & \multicolumn{1}{c|}{\textbf{1\%}} & \multicolumn{1}{c|}{\textbf{5\%}} & \textbf{10\%} \\ \hline\multicolumn{1}{|l|}{\textit{ANF}} & -2.302 & 0.171 & 5 & 2529 & \multicolumn{1}{c|}{-3.432} & \multicolumn{1}{c|}{-2.863} & -2.567 \\ \hline\multicolumn{1}{|l|}{\textit{EOG}} & -2.422 & 0.135 & 5 & 2529 & \multicolumn{1}{c|}{-3.433} & \multicolumn{1}{c|}{-2.862} & -2.567 \\ \hline\end{tabular}\end{table}
Create LaTeX code to produce this table.
ADF test stationarity with AIC optimization.
May I have a caption for this LaTeX table?
2101.05806v1_train__bd3cc7a9-dd56-4fdc-b01b-36c769372a76
arxiv_table_to_tex
\begin{table*}[ht]\centering\begin{tabu}{|l|c c c c|}\tabucline[1pt]{-}\hspace{1.7cm} \textbf{Features} & \textbf{B@4} & \textbf{R} & \textbf{M} & \textbf{C} \\\tabucline[1.2pt]{-}ResNet-152 & 0.473 & 0.677 & 0.329 & 77.15 \\Inception-v4 & 0.499 & 0.679 & 0.327 & 81.96 \\\hline I3D & 0.437 & 0.678 & 0.314 & 69.24 \\R(2+1)D & 0.504 & 0.700 & 0.336 & 82.41 \\\hline Faster-RCNN & 0.247 & 0.525 & 0.209 & 60.76 \\\hline I3D + Faster-RCNN & 0.453 & 0.675 & 0.318 & 69.62 \\I3D + ResNet-152 & 0.478 & 0.699 & 0.337 & 81.86 \\\hline Inception-v4 + Faster-RCNN & 0.492 & 0.688 & 0.334 & 83.36 \\Inception-v4 + R(2+1)D & 0.510 & 0.714 & \textbf{0.345} & 88.88 \\Inception-v4 + I3D & 0.501 & \textbf{0.715} & 0.337 & 89.58 \\\hline Inception-v4 + I3D + Faster-RCNN & 0.465 & 0.678 & 0.325 & 81.14 \\Inception-v4 + R(2+1)D + Faster-RCNN & \textbf{0.511} & \textbf{0.715} & 0.344 & \textbf{90.93} \\\tabucline[1pt]{-}\end{tabu}\vspace{0.1cm}\caption{Comparison of captioning performance using different feature extractors benchmarked using WAFTM + WordPiece on the MSVD dataset.}\label{tab:comparing_features}\end{table*}
Write the LaTeX code for this table.
Comparison of captioning performance using different feature extractors benchmarked using WAFTM + WordPiece on the MSVD dataset.
Please create a caption for this LaTeX table.
2101.05806v1_train__05f401eb-c129-48c9-b0bc-6984aee7ba0f
arxiv_table_to_tex
\begin{table}[!ht]\centering\begin{tabu}{|l|c|}\tabucline[1pt]{-}\hspace{0.8cm} \textbf{Architecture} & \textbf{M} \\ \tabucline[1pt]{-}\footnotesize Krishna et. al \cite{krishna2017dense} & \small 0.0888 \\ \hline\footnotesize Iashin et. al \cite{iashin2020better} & \small 0.0956 \\ \hline\footnotesize WAFTM + WordPiece + SCST & \small 0.0906 \\ \tabucline[1pt]{-}\end{tabu}\vspace{0.1cm}\caption{Results on ActivityNet Captions Dataset using ground-truth proposals and where each model is trained from scratch.}\label{tab:activitynet}\end{table}
Create LaTeX code to produce this table.
Results on ActivityNet Captions Dataset using ground-truth proposals and where each model is trained from scratch.
May I have a caption for this LaTeX table?
2305.14427v2_train__4bee43bc-0268-485a-b187-1566a68c6889
arxiv_table_to_tex
\begin{table}[!t]\begin{center}\begin{tabular}{|c|c|c|}\hline$n$ & $\langle \gamma^{(n)}_N(T)\rangle/T $ & $\langle s^{(n)}_N(T)\rangle/T$ \\\hline\hline0 & 0.0091 & 0.0434 \\1 & 0.0051 & 0.0086 \\2 & -0.0022 & -0.0165\\\hline\end{tabular}\caption{Coefficients in the momentum averaged rates at $T= 10^6$ GeV.}\label{tab:rates}\end{center}\end{table}
Create a LaTeX script to illustrate this table.
Coefficients in the momentum averaged rates at $T= 10^6$ GeV.
Could you generate a caption for this LaTeX table?
2305.14427v2_train__fe9ba078-2ea6-41f1-8082-7dcdaf018622
arxiv_table_to_tex
\begin{table}[!t]\begin{center}\begin{tabular}{| c | c c c c c c c c |}\hline Regime & $M/\rm{GeV}$ & $\log_{10}(y_{e})$ & $\log_{10}(y_{\mu})$ & $\log_{10}(y_{\tau})$ & $\log_{10}(y'_{\alpha})$ & $\Delta \varphi_e$ & $\Delta \varphi_\mu$ & $\Delta \varphi_\tau$ \\\hline\hline1 (wHC) & 1 & $-7.2$ & $-6$ & $-5.7$ & $-7.5$ & $\pi/2$ & $\pi/3$ & $\pi/4$ \\2 (sHC) & 100 & $-7.2$ & $-6$ & $-5.8$ & $-8$ & $\pi/2$ & $\pi/3$ & $\pi/4$ \\\hline\end{tabular}\caption{Model parameters for the comparison between the analytical and numerical solutions shown in Fig.~\ref{fig:ana_vs_num} for the regimes as labelled in Fig.~\ref{fig:regimes} and defined in Eqs.~\eqref{eq:define_regime_4}-\eqref{eq:define_regime_3}.}\label{tab:ana_vs_num_initial_condition}\end{center}\end{table}
Please generate the necessary LaTeX script to draw this table.
Model parameters for the comparison between the analytical and numerical solutions shown in Fig.~\ref{fig:ana_vs_num} for the regimes as labelled in Fig.~\ref{fig:regimes} and defined in Eqs.~\eqref{eq:define_regime_4}-\eqref{eq:define_regime_3}.
Could you provide a caption for this LaTeX table?
2305.14427v2_train__4445bfc9-a940-4729-b31f-37de0acb9a21
arxiv_table_to_tex
\begin{table}[!t]\begin{center}\begin{tabular}{| c | c c c c | c |}\hline& $M^{\rm true}/\rm{GeV}$ & $(U_e^2)_{\rm true}$ & $(U_\mu^2)_{\rm true}$ & $(U_\tau^2)_{\rm true}$ & $\delta^{\rm true}/\rm{rad}$ \\\hline\hline NH & $31.60$ & $2.843\times 10^{-12}$ & $ 1.087 \times 10^{-11} $ & $1.234 \times 10^{-11} $ & $ 5.396 $\\IH & $20.731$ & $3.291\times 10^{-11}$ & $ 4.823 \times 10^{-12} $ & $3.465 \times 10^{-12} $ & $5.402 $ \\\hline\end{tabular}\caption{Measurable neutrino parameters for two benchmark points that reproduce the baryon asymmetry.}\label{tab:potential_measure}\end{center}\end{table}
I need the LaTeX code to create this table, please.
Measurable neutrino parameters for two benchmark points that reproduce the baryon asymmetry.
Could you provide a caption for this LaTeX table?
2105.12838v1_train__dab1d207-774e-485b-ad0d-c582f32283a5
arxiv_table_to_tex
\begin{table}[ht!]\caption{The simulation-related parameters}\centering\scalebox{0.95}{\begin{tabular}{|c|c|}\hline\textbf{Parameter List} & \textbf{Values} \\ \hline{Total transmitted power at the WPT} & 10~dB\\ \hline{Total active transmit antenna at the WPT} & 64\\ \hline The distance of information receiver & $\rm d_{\rm info}=20$ m \\ \hline The distance of energy harvester & $\rm d_{\rm energy}=1$ m \\ \hline Antenna spacing & $\lambda/2$ \\ \hline Operation frequency & $2$ GHz \\ \hline Array antenna gain & $15$ dBi \\ \hline Path loss model & $128.1 + 37.6\log_{10}(d)$, d[km]\\ \hline Angular offset's standard deviation & $2^{\circ}$ \\ \hline Log-normal shadowing's standard deviation & $8$ dB \\ \hline Subcarrier bandwidth & $15$ kHz \\ \hline{Obstacle cover ratio} & {$0-0.9$} \\ \hline{Obstacle radius} & [0.3, 0.6]\\ \hline{Obstacle height} & [5, 25]\\ \hline\end{tabular}}\label{tab:SimulationParameters}\end{table}
I need the LaTeX code to create this table, please.
The simulation-related parameters
Can you give me a caption for this LaTeX table?
2012.14034v2_train__85e2a723-8d2f-447e-b1e2-e6bacfbdcf3b
arxiv_table_to_tex
\begin{table}[!htb]\begin{center}\begin{tabular}{ |c|c|c|c| }\hline Observable & BP-I & BP-II & BP-III \\\hline\hline Lightest Higgs mass & 125.0 & 125.2 & 125.6 \\\hline2nd Higgs mass & 338.2 & 322.1 & 370.4\\\hline3rd Higgs mass & 462.2 & 484.0 & 483.6 \\\hline Lightest Pseudoscalar Higgs mass & 259.0 & 256.8 & 261.7\\\hline2nd Pseudoscalar Higgs mass & 446.9 & 470.2 & 468.1 \\\hline\hline Lightest Sneutrino mass & 219.4 & 219.9 & 228.4 \\\hline Lightest CP-odd Sneutrino mass & 220.0 & 219.8 & 229.2 \\\hline\hline Lightest Chargino mass & 185.8 & 177.6 & 205.1 \\\hline Lightest Neutralino mass & 177.3 & 168.2 & 196.0 \\\hline Next-to Lightest Neutralino mass & 200.3 & 193.2 & 218.6 \\\hline\hline BR($h_3 \to \tilde{\nu_1} \tilde{\nu_1}$) (in \%) & 5.3 & 4.9 & 4.9 \\\hline BR($A_3 \to \tilde{\nu_1} \tilde{\nu_1}^\prime$) (in \%) & 1.2 & 1.3 & 1.6 \\\hline BR($\tilde{\nu_1} \to \ell \tilde{\chi^{\pm}_1}$) (in \%) & 48.2 & 48.6 & 45.3 \\\hline BR($\tilde{\nu_1} \to \nu \tilde{\chi^{0}_1}$) (in \%)& 51.8 & 51.4 & 54.7 \\\hline$\Gamma(\tilde{\nu_1})$ (GeV) & $1.6 \times 10^{-13}$ & $8.5 \times 10^{-13}$ & $9 \times 10^{-14}$ \\\hline\end{tabular}\caption{Details of the BPs (all the masses are in GeV). The leptonic BRs include electrons, muons and taus.}\label{tab:benchmark}\end{center}\end{table}
Please generate the necessary LaTeX script to draw this table.
Details of the BPs (all the masses are in GeV). The leptonic BRs include electrons, muons and taus.
Please give a caption for the LaTeX table.
2012.06125v1_train__4a7be948-b40b-4c2a-9c42-23715c458bae
arxiv_table_to_tex
\begin{table*}[!ht]\small\begin{center}\setlength\tabcolsep{4pt}\begin{tabular}{|c|l|ccccc|}\hline\multicolumn{2}{|c|}{Ablation} & Well lit & Shadows & Mixed colors & Overexposure & Low light\\ \hline\multirow{3}{*}{Loss} & No Stereo Loss & 12.80 & 12.78 & 12.78 & 12.82 & 12.81 \\& No NIR Photometric Loss & 12.64 & 12.66 & 12.64 & 12.69 & 12.75 \\& No Photometric Loss & 12.77 & 12.77 & 12.81 & 12.79 & 12.77 \\ \hline Reflectance Model & No Specular Component & 12.44 & 12.43 & 12.44 & 12.51 & 12.47 \\ \hline\multirow{2}{*}{Inputs} & No RGB Input & 12.54 & 12.54 & 12.54 & 12.54 & 12.54 \\& No NIR Input & 13.13 & 15.19 & 16.43 & 19.82 & 19.39 \\ \hline\multicolumn{2}{|c|}{Ours (Full Method)} & \textbf{12.08} & \textbf{12.06} & \textbf{12.06} & \textbf{12.14} & \textbf{12.10} \\ \hline\end{tabular}\end{center}\vspace{-10pt}\caption{Mean absolute angular error in degrees of normal maps computed with modified versions of our full network. Results are reported for the five lighting conditions described in Section~\ref{section:Evaluation}.}\label{table:ablation}\end{table*}
Please create LaTeX code to produce this table.
Mean absolute angular error in degrees of normal maps computed with modified versions of our full network. Results are reported for the five lighting conditions described in Section~\ref{section:Evaluation}.
Can you suggest a caption for this LaTeX table?
2012.06125v1_train__8e623a71-972e-4239-a6c6-257ed6a2f7f6
arxiv_table_to_tex
\begin{table*}[!h]\small\begin{center}\setlength\tabcolsep{4pt}\begin{tabular}{|c|ccccc|}\hline Method & Well lit & Shadows & Mixed colors & Overexposure & Low light \\ \hline SfSNet \cite{Sengupta2018SFSNet} & 14.10 & 18.32 & - & - & -\\ \hline Nestmeyer et al.\cite{Nestmeyer2020FaceRelighting} & 14.82 & 17.52 & 15.87 & 21.85 & 25.56 \\ \hline Ours & \textbf{12.29} & \textbf{12.27} & \textbf{12.28} & \textbf{12.30} & \textbf{12.33} \\ \hline\end{tabular}\end{center}\vspace{-10pt}\caption{Mean absolute angular errors in degrees for three normal estimation methods and five different lighting scenarios. Results for SfSNet~\cite{Sengupta2018SFSNet} are omitted in some cases where they fail to generate plausible normals.}\label{table:baselines}\end{table*}
Please create LaTeX code to produce this table.
Mean absolute angular errors in degrees for three normal estimation methods and five different lighting scenarios. Results for SfSNet~\cite{Sengupta2018SFSNet} are omitted in some cases where they fail to generate plausible normals.
Can you suggest a caption for this LaTeX table?
2202.10679v2_train__ecb4fc3d-1466-4c25-a014-2068e5b8a3fe
arxiv_table_to_tex
\begin{table*}\centering\caption{Some studies of PIGL.}\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}\begin{tabular}{|p{0.4\columnwidth}|l|l|p{0.7\columnwidth}|}\hline Type & Objective & Reference & Approach\\\hline\multirow{3}{*}{\tabincell{l}{Physics-based Graph \\Data Processing} }&Sparse data enhancement& \cite{li2021physics} & Utilizing physical sparsity property of data to locate unlabelled data. \\\cline{2-4}&Filling missing data&\cite{seo2019differentiable}& Incorporating differentiable physics equations with graph learning.\\\cline{2-4}&Noisy value removal& \cite{salehi2021physgnn} & Incorporating the physical characteristics with GNNs to capture the noisy values.\\\hline\multirow{2}{*}{\tabincell{l}{Graph Representation\\ with Physical Properties}}&Precise graph visualisation&\cite{haleem2019evaluating}& Combining force-directed graph layout algorithm and deep learning algorithms.\\\cline{2-4}&Graph data structure preserving& \cite{sun2020graph} & Embedding the principles of force interaction into graph learning models. \\\hline\multirow{3}{*}{\tabincell{l}{Physics-driven Learning \\Models } } &Data dependency reduction& \cite{shah2018airsim} & Generating simulated data to pre-train the driving algorithm.\\\cline{2-4}&Reasonable parameter initialization& \cite{jia2021physics} & Generating simulated physical variables to pre-train recurrent graph network models. \\\cline{2-4}&Accurate and explicable learning&\cite{daw2020physics}&Physics-driven loss function of LSTM model.\\\hline\end{tabular}\label{table}\end{table*}
Convert this table into LaTeX code.
Some studies of PIGL.
Please write a caption that describes this LaTeX table.
2201.00614v2_train__82ea7ed9-68fb-4a55-9165-57d987edf578
arxiv_table_to_tex
\begin{table}[!t]\small\begin{tabular}{ccccc}\hline\multicolumn{1}{l|}{Classes} & \multicolumn{1}{c|}{Train-500} & \multicolumn{1}{c|}{Train-1000} & \multicolumn{1}{c|}{Train-1500} & Test \\ \hline\multicolumn{5}{c}{\usadata} \\ \hline\multicolumn{1}{l|}{Pro-Dem} & \multicolumn{1}{c|}{320} & \multicolumn{1}{c|}{654} & \multicolumn{1}{c|}{981} & 1543 \\\multicolumn{1}{l|}{Anti-Dem} & \multicolumn{1}{c|}{9} & \multicolumn{1}{c|}{22} & \multicolumn{1}{c|}{32} & 46 \\\multicolumn{1}{l|}{Pro-Rep} & \multicolumn{1}{c|}{133} & \multicolumn{1}{c|}{253} & \multicolumn{1}{c|}{381} & 576 \\\multicolumn{1}{l|}{Anti-Rep} & \multicolumn{1}{c|}{13} & \multicolumn{1}{c|}{17} & \multicolumn{1}{c|}{29} & 46 \\\multicolumn{1}{l|}{Other} & \multicolumn{1}{c|}{25} & \multicolumn{1}{c|}{54} & \multicolumn{1}{c|}{77} & 111 \\ \hline\multicolumn{1}{l|}{Total} & \multicolumn{1}{c|}{500} & \multicolumn{1}{c|}{1000} & \multicolumn{1}{c|}{1500} & 2322 \\ \hline\multicolumn{5}{c}{\indiadata} \\ \hline\multicolumn{1}{l|}{Pro-BJP} & \multicolumn{1}{c|}{67} & \multicolumn{1}{c|}{149} & \multicolumn{1}{c|}{208} & 360 \\\multicolumn{1}{l|}{Anti-BJP} & \multicolumn{1}{c|}{136} & \multicolumn{1}{c|}{275} & \multicolumn{1}{c|}{425} & 680 \\\multicolumn{1}{l|}{Pro-INC} & \multicolumn{1}{c|}{24} & \multicolumn{1}{c|}{60} & \multicolumn{1}{c|}{83} & 158 \\\multicolumn{1}{l|}{Anti-INC} & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{3} & \multicolumn{1}{c|}{6} & 15 \\\multicolumn{1}{l|}{Pro-AAP} & \multicolumn{1}{c|}{35} & \multicolumn{1}{c|}{61} & \multicolumn{1}{c|}{99} & 142 \\\multicolumn{1}{l|}{Anti-AAP} & \multicolumn{1}{c|}{52} & \multicolumn{1}{c|}{101} & \multicolumn{1}{c|}{163} & 210 \\\multicolumn{1}{l|}{Other} & \multicolumn{1}{c|}{184} & \multicolumn{1}{c|}{351} & \multicolumn{1}{c|}{516} & 1120 \\ \hline\multicolumn{1}{l|}{Total} & \multicolumn{1}{c|}{500} & \multicolumn{1}{c|}{1000} & \multicolumn{1}{c|}{1500} & 2685\\\hline\end{tabular}\caption{Class-wise sample distribution in different training and testing splits of the annotated datasets.}\label{tab:dataset_class_distr}\vspace{-5mm}\end{table}
Create LaTeX code to produce this table.
Class-wise sample distribution in different training and testing splits of the annotated datasets.
May I have a caption for this LaTeX table?
2201.00614v2_train__91ea8fad-9f96-4973-b7a9-048f657691f0
arxiv_table_to_tex
\begin{table}[!t]\small\centering\begin{tabular}{l|c|c|c|c|c|c}\hline\multirow{2}{*}{Model} & \multicolumn{3}{c|}{\usadata} & \multicolumn{3}{c}{\indiadata} \\ \cline{2-7}& \begin{tabular}[c]{@{}c@{}}$|\mathcal{D}_s|$\\ =0.5K\end{tabular} & \begin{tabular}[c]{@{}c@{}}$|\mathcal{D}_s|$\\ =1K\end{tabular} & \begin{tabular}[c]{@{}c@{}}$|\mathcal{D}_s|$\\ =1.5K\end{tabular} & \begin{tabular}[c]{@{}c@{}}$|\mathcal{D}_s|$\\ =0.5K\end{tabular} & \begin{tabular}[c]{@{}c@{}}$|\mathcal{D}_s|$\\ =1K\end{tabular} & \begin{tabular}[c]{@{}c@{}}$|\mathcal{D}_s|$\\ =1.5K\end{tabular} \\\hline SiamNet & $0.39$ & $0.43$ & $0.42$ & $0.12$ & $0.14$ & $0.13$\\BICE & $0.27$ & $0.30$ & $0.33$ & $0.16$ & $0.17$ & $0.23$\\TAN & $0.38$ & $0.46$ & $0.45$ & $0.14$ & $0.14$ & $0.17$ \\SVM & $0.37$ & $0.37$ & $0.45$ & $0.13$ & $0.13$ & $0.16$\\BERT & $0.39$ & $0.50$ & $0.51$ & $0.17$ & $0.17$ & $0.21$\\ConvNet & $ 0.37 $ & $ 0.43 $ & $ 0.45 $ & $ 0.35 $ & $ 0.40 $ & $ 0.41 $\\BLSTM & $ 0.35 $ & $ 0.43 $ & $ 0.44 $ & $ 0.31 $ & $ 0.39 $ & $ 0.38 $\\\hline LS-SVM & $0.39$ & $0.42$ & $0.44$ & $0.18$ & $0.19$ & $0.18$\\ST-ConvNet & $0.13$ & $0.15$ & $0.16$ & $0.10$ & $0.11$ & $0.11$\\ST-BLSTM & $0.13$ & $0.16$ & $0.19$ & $0.09$ & $0.12$ & $0.11$\\UST & $0.35$ & $0.42$ & $0.41$ & $0.12$ & $0.16$ & $0.16$\\GCN-ConvNet & $0.41$ & $0.45$ & $0.47$ & $0.33$ & $0.35$ & $0.40$\\GCN-BLSTM & $0.39$ & $0.42$ & $0.46$ & $0.36$ & $0.41$ & $0.42$\\\hline\name/Net.($\mathcal{C}_1$) & $ 0.32 $ & $ 0.41 $ & $ 0.42 $ & $ 0.10 $ & $ 0.12 $ & $ 0.15 $\\\name/Net.($\mathcal{C}_2$) & $ 0.36 $ & $ 0.46 $ & $ 0.46 $ & $ 0.28 $ & $ 0.31 $ & $ 0.37 $ \\\name/Cont.($\mathcal{C}_1$) & $ 0.41 $ & $ 0.47 $ & $ 0.49 $ & $ 0.36 $ & $ 0.41 $ & $ 0.43 $\\\name/Cont.($\mathcal{C}_2$) & $ 0.47 $ & $ 0.51 $ & $ 0.53 $ & $ 0.38 $ & $ 0.44 $ & $ 0.45 $ \\\hline\name ($\mathcal{C}_1$) & $\mathbf{0.46}$ & $\mathbf{0.47}$ & $\mathbf{0.49}$ & $\mathbf{0.37}$ & $\mathbf{0.42}$ & $\mathbf{0.45}$\\\name ($\mathcal{C}_2$) & $\mathbf{0.49}$ & $\mathbf{0.53}$ & $\mathbf{0.55}$ & $\mathbf{0.42}$ & $\mathbf{0.45}$ & $\mathbf{0.47}$\\\hline\end{tabular}\caption{F1 scores of all models with different sizes of labeled training data on \usadata\ and \indiadata.}\vspace{-5mm}\label{tab:overall_result}\end{table}
Please create LaTeX code to produce this table.
F1 scores of all models with different sizes of labeled training data on \usadata\ and \indiadata.
Please provide an appropriate caption for this LaTeX table.
2201.00614v2_train__db2c08f6-874e-476b-a6de-25ea40fd9b89
arxiv_table_to_tex
\begin{table}[!t]\small\centering\begin{tabular}{c|c|c|c|c}\hline\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}\% unlabeled\\ data used\end{tabular}} & \multicolumn{2}{c|}{\usadata} & \multicolumn{2}{c}{\indiadata} \\ \cline{2-5}& m-F1 $\mathcal{C}_1$ & m-F1 $\mathcal{C}_2$ & m-F1 $\mathcal{C}_1$ & m-F1 $\mathcal{C}_2$ \\ \hline$100$ & 0.49 & 0.55 & 0.45 & 0.47 \\$80$ & 0.49 & 0.54 & 0.44 & 0.46 \\$50$ & 0.46 & 0.51 & 0.42 & 0.43 \\$30$ & 0.45 & 0.48 & 0.42 & 0.41 \\$10$ & 0.45 & 0.45 & 0.41 & 0.39 \\ \hline\end{tabular}\caption{Macro-F1 scores of \name\ with different amount of unlabeled data used in semi-supervised phase. The size of labeled dataset used is $1500$.}\label{tab:vary_data_size}\vspace{-4mm}\end{table}
Write the LaTeX code for this table.
Macro-F1 scores of \name\ with different amount of unlabeled data used in semi-supervised phase. The size of labeled dataset used is $1500$.
Could you provide a caption for this LaTeX table?
2104.00178v3_train__9fb03713-b89b-4654-93a4-6667dadeb9eb
arxiv_table_to_tex
\begin{table}[]\renewcommand*{\arraystretch}{1.3}\centering\ttfamily\footnotesize\caption{Details of Nyx Dataset Used in Experiments}\vspace{-4mm}\input{Table/dataset}\label{tab:DataDetail}\end{table}
Create LaTeX code to produce this table.
Details of Nyx Dataset Used in Experiments
Please create a caption for this LaTeX table.
2304.11410v1_train__5a2e3e5a-07d7-4246-bb53-7b2677bf5d25
arxiv_table_to_tex
\begin{table}[t]\centering\scalebox{0.95}{\begin{tabular}{|l|c|c|c|}\hline\textbf{Predictor} & \textbf{Estimate} & \textbf{Std-Error} & \textbf{z-value} \\ \hline Intercept & \textbf{0.018} & 0.008 & 1.98 \\$1^{st}$ constituent's deplen & {-0.004} & 0.012 & -0.35 \\$2^{nd}$ constituent's deplen & \textbf{0.088} & 0.014 & 6.20 \\$3^{rd}$ constituent's deplen & \textbf{-0.147} & 0.015 & -10.12 \\$4^{th}$ constituent's deplen & \textbf{-0.529} & 0.016 & -32.90 \\$5^{th}$ constituent's deplen & \textbf{-2.720} & 0.023 & -118.03 \\\hline\end{tabular}}\caption{Regression model containing dependency lengths (deplen) of preverbal constituents as predictors (87143 data points); significant predictors denoted in bold with p $<$ 0.001}\label{tab:deplen-reg-only5}\end{table}
Convert this table into LaTeX code.
Regression model containing dependency lengths (deplen) of preverbal constituents as predictors (87143 data points); significant predictors denoted in bold with p $<$ 0.001
Please write a caption that describes this LaTeX table.
2304.11410v1_train__d22a8011-318a-4be0-ae8c-28ddaff35aa3
arxiv_table_to_tex
\begin{table}[t]\centering\scalebox{0.95}{\begin{tabular}{|l|c|c|c|}\hline\textbf{Predictor} & \textbf{Estimate} & \textbf{Std-Error} & \textbf{z-value} \\ \hline Intercept & -0.003 & 0.008 & -0.41 \\$1^{st}$ constituent's length & \textbf{-0.083} & 0.009 & -8.44 \\$3^{rd}$ constituent's length & \textbf{0.058} & 0.010 & 5.72 \\$4^{th}$ constituent's length & \textbf{-0.148} & 0.009 & -15.26 \\$5^{th}$ constituent's length & \textbf{-1.549} & 0.016 & -97.82 \\\hline\end{tabular}}\caption{Regression model containing preverbal constituent lengths as predictors (87143 data points); significant predictors denoted in bold with p $<$ 0.001; constituent length = word counts in the constituent}\label{tab:constlen-reg-only5}\end{table}
Generate the LaTeX code to recreate this table.
Regression model containing preverbal constituent lengths as predictors (87143 data points); significant predictors denoted in bold with p $<$ 0.001; constituent length = word counts in the constituent
I need a caption for this LaTeX table, please.
2304.11410v1_train__ce7eadf2-53a8-4e37-bd7e-2c485496d962
arxiv_table_to_tex
\begin{table}[t]\centering\scalebox{0.95}{\begin{tabular}{l|c}\hline Predictor(s) & Accuracy \\ \hline total dependency length & 62.69 \\2nd-last preverbal constituent's deplen & 68.48*** \\last preverbal constituent's deplen & 72.70*** \\\hline last + 2nd last preverbal constituent's deplen & 77.17*** \\\hline\end{tabular}}\caption{Prediction accuracy of distinct models with dependency length as predictor on full dataset (184818 data points; deplen = dependency length; McNemar's two-tailed significance test compared to previous row: *** $p<0.001$)}\label{tab:pred-results-dl}\end{table}
Create a LaTeX script to illustrate this table.
Prediction accuracy of distinct models with dependency length as predictor on full dataset (184818 data points; deplen = dependency length; McNemar's two-tailed significance test compared to previous row: *** $p<0.001$)
Can you give me a caption for this LaTeX table?
2304.11410v1_train__8f939ba4-5eba-455d-990e-baaea11a417a
arxiv_table_to_tex
\begin{table}[t]\centering\scalebox{0.95}{\begin{tabular}{l|c}\hline Predictor(s) & Accuracy \\ \hline 2nd-last preverbal constituent length & 54.35 \\last preverbal constituent length & 69.62*** \\\hline last + 2nd last preverbal constituent length & 70.28*** \\\hline\end{tabular}}\caption{Prediction accuracy of distinct models with constituent length as predictor (184818 data points; McNemar's two-tailed test compared to previous row: *** $p<0.001$)}\label{tab:pred-results-cl}\end{table}
Create a LaTeX script to illustrate this table.
Prediction accuracy of distinct models with constituent length as predictor (184818 data points; McNemar's two-tailed test compared to previous row: *** $p<0.001$)
May I have a caption for this LaTeX table?
2112.03518v2_train__d1daa31d-1a93-428d-b80b-396595254a8a
arxiv_table_to_tex
\begin{table}[h]\centering\label{t2}\renewcommand{\arraystretch}{1.1}\resizebox{0.8\columnwidth}{!}{\begin{tabular}{c|cc|cc}\noalign{\smallskip}\noalign{\smallskip}\hline\hline\multirow{2}{*}{} & \multicolumn{2}{c|}{$\delta\ge$0.4} & \multicolumn{2}{c}{$\delta\ge$0.6} \\\cline{2-5}& Improvement & Success & Improvement & Success \\\hline JT-VAE & 0.84\textpm1.45 & 84\% & 0.21\textpm0.71 & 46.4\% \\GCPN & 2.49\textpm1.30 & 100\% & 0.79\textpm0.63 & 100\% \\MMPA & 3.29\textpm1.12 & - & 1.65\textpm1.44 & - \\DEFactor & 3.41\textpm1.67 & 85.9\% & 1.55\textpm1.19 & 72.6\% \\VJTNN & 3.55\textpm1.67 & - & 2.33\textpm1.17 & - \\GA-DNN & 5.93\textpm1.41 & 100\% & 3.44\textpm1.09 & 99.8\% \\\hline\textbf{Constrained GA(Ours)} & \textbf{5.53\textpm1.29} & \textbf{100\%} & \textbf{3.67\textpm1.29} & \textbf{100\%} \\\hline\hline\end{tabular}}\\\caption{Comparison on constrained improvement of penalized $Log P$ of specific molecules}\end{table}
Create LaTeX code to produce this table.
Comparison on constrained improvement of penalized $Log P$ of specific molecules
Please provide an appropriate caption for this LaTeX table.
2209.01604v2_train__2e9135b6-020e-4d8e-aa29-ae8db6211aef
arxiv_table_to_tex
\begin{table}[h]\centering\small\begin{tabular}{cc|rrrrrr}\hline\multicolumn{1}{c|}{Dataset} & \multicolumn{1}{c|}{Pretraining Method} & \multicolumn{1}{c}{B-1} & \multicolumn{1}{c}{B-2} & \multicolumn{1}{c}{B-3} & \multicolumn{1}{c}{B-4} & \multicolumn{1}{c}{M} & \multicolumn{1}{c}{R-L} \\ \hline\multicolumn{1}{c|}{\multirow{7}{*}{IU X-ray}} & Scratch & 0.404 & 0.249 & 0.172 & 0.126 & 0.161 & 0.320 \\\multicolumn{1}{c|}{} & AE & 0.417 & 0.273 & 0.195 & 0.148 & \textbf{0.176} & 0.361 \\\multicolumn{1}{c|}{} & Imagenet & 0.420 & 0.256 & 0.175 & 0.127 & 0.167 & 0.326 \\\multicolumn{1}{c|}{} & MLC & 0.423 & 0.270 & 0.190 & 0.141 & \textbf{0.176} & 0.359 \\\cline{2-8}\multicolumn{1}{c|}{} & MoCo & 0.426 & 0.259 & 0.175 & 0.126 & 0.168 & 0.325 \\\multicolumn{1}{c|}{} & SimCLR & \textbf{0.456} & \textbf{0.289} & \textbf{0.202} & \textbf{0.150} & \textbf{0.176} & \textbf{0.362}\\\multicolumn{1}{c|}{} & SimCLR (w/ Lung Seg)& 0.448 & 0.282 & 0.195 & 0.141 & \textbf{0.176} & 0.340 \\\hline\multicolumn{1}{c|}{\multirow{6}{*}{MIMIC-CXR}} & Scratch & 0.211 & 0.132 & 0.090 & 0.064 & 0.105 & 0.250 \\\multicolumn{1}{c|}{} & AE & 0.316 & 0.186 & 0.119 & 0.081 & 0.119 & 0.243 \\\multicolumn{1}{c|}{} & Imagenet & 0.270 & 0.166 & 0.109 & 0.076 & 0.114 & 0.255 \\\cline{2-8}\multicolumn{1}{c|}{} & MoCo & 0.334 & 0.203 & 0.133 & 0.091 & 0.129 & 0.261 \\\multicolumn{1}{c|}{} & SimCLR & \textbf{0.343} & \textbf{0.208} & \textbf{0.137} & \textbf{0.095} & \textbf{0.132} & 0.260 \\\multicolumn{1}{c|}{} & SimCLR (w/ Lung Seg)& 0.325 & 0.198 & 0.130 & 0.090 & 0.129 & \textbf{0.265}\\\hline\end{tabular}\caption{Quantitative comparison among different encoders when using Transformer as decoder. B-n, M, and R-L are short for BLEU-n, METEOR, and ROUGE-L, respectively.}\label{Table:Transformer_dif_encoder}\end{table}
I need the LaTeX code to create this table, please.
Quantitative comparison among different encoders when using Transformer as decoder. B-n, M, and R-L are short for BLEU-n, METEOR, and ROUGE-L, respectively.
I need a caption for this LaTeX table, please.
2209.01604v2_train__2ba4c7b2-64b6-4290-b82f-fcceeac107a9
arxiv_table_to_tex
\begin{table}[!h]\centering\small\begin{tabular}{cc|rrrrrr}\hline\multicolumn{1}{c|}{Dataset} & \multicolumn{1}{c|}{Pretraining Method} & \multicolumn{1}{c}{B-1} & \multicolumn{1}{c}{B-2} & \multicolumn{1}{c}{B-3} & \multicolumn{1}{c}{B-4} & \multicolumn{1}{c}{M} & \multicolumn{1}{c}{R-L} \\ \hline\multicolumn{1}{c|}{\multirow{6}{*}{IU X-ray}} & Scratch & 0.299 & 0.185 & 0.132 & 0.100 & 0.134 & 0.324 \\\multicolumn{1}{c|}{} & AE & 0.389 & 0.233 & 0.157 & 0.111 & 0.154 & 0.317 \\\multicolumn{1}{c|}{} & Imagenet & 0.397 & 0.246 & 0.170 & 0.126 & 0.160 & \textbf{0.329} \\\multicolumn{1}{c|}{} & MLC & 0.410 & 0.250 & 0.170 & 0.123 & 0.164 & 0.325 \\\cline{2-8}\multicolumn{1}{c|}{} & MoCo & 0.425 & 0.260 & 0.177 & 0.128 & 0.169 & 0.326 \\\multicolumn{1}{c|}{} & SimCLR & 0.426 & 0.265 & 0.181 & 0.131 & 0.168 & 0.326 \\\multicolumn{1}{c|}{} & SimCLR (w/ Lung Seg)& \textbf{0.431} & \textbf{0.268} & \textbf{0.185} & \textbf{0.137} & \textbf{0.172} & \textbf{0.332} \\\hline\multicolumn{1}{c|}{\multirow{5}{*}{MIMC-CXR}} & Scratch & 0.312 & 0.193 & 0.130 & 0.092 & 0.128 & 0.272 \\\multicolumn{1}{c|}{} & AE & 0.282 & 0.174 & 0.117 & 0.083 & 0.119 & 0.264 \\\multicolumn{1}{c|}{} & Imagenet & 0.309 & 0.193 & 0.131 & 0.094 & 0.130 & 0.276 \\\cline{2-8}\multicolumn{1}{c|}{} & MoCo & 0.309 & 0.191 & 0.128 & 0.091 & 0.127 & 0.270 \\\multicolumn{1}{c|}{} & SimCLR & \textbf{0.330} & \textbf{0.206} & \textbf{0.139} & \textbf{0.099} & \textbf{0.134} & \textbf{0.278} \\\multicolumn{1}{c|}{} & SimCLR (w/ Lung Seg)& 0.325 & 0.203 & 0.138 & 0.095 & \textbf{0.134} & \textbf{0.278} \\\hline\end{tabular}\caption{Quantitative comparison among different encoders when using LSTM as decoder. B-n, M, and R-L are short for BLEU-n, METEOR, and ROUGE-L, respectively.}\label{Table:LSTM_dif_encoder}\end{table}
Generate the LaTeX code to recreate this table.
Quantitative comparison among different encoders when using LSTM as decoder. B-n, M, and R-L are short for BLEU-n, METEOR, and ROUGE-L, respectively.
Can you give me a caption for this LaTeX table?
2209.01604v2_train__5e0589da-540d-4429-9027-df54180f56d2
arxiv_table_to_tex
\begin{table}[!h]\centering\small\begin{tabular}{cc|rrrrrr}\hline\multicolumn{1}{c|}{Dataset} & \multicolumn{1}{c|}{Pretraining Method} & \multicolumn{1}{c}{B-1} & \multicolumn{1}{c}{B-2} & \multicolumn{1}{c}{B-3} & \multicolumn{1}{c}{B-4} & \multicolumn{1}{c}{M} & \multicolumn{1}{c}{R-L} \\ \hline\multicolumn{1}{c|}{\multirow{7}{*}{IU X-ray}} & Scratch & 0.351 & 0.221 & 0.156 & 0.117 & 0.153 & 0.333 \\\multicolumn{1}{c|}{} & AE & 0.389 & 0.241 & 0.166 & 0.121 & 0.161 & 0.332 \\\multicolumn{1}{c|}{} & Imagenet & 0.417 & 0.254 & 0.173 & 0.124 & 0.165 & 0.328 \\\multicolumn{1}{c|}{} & MLC & 0.412 & \textbf{0.263} & \textbf{0.189} & \textbf{0.145} & \textbf{0.169} & \textbf{0.341} \\\cline{2-8}\multicolumn{1}{c|}{} & MoCo & \textbf{0.424} & 0.259 & 0.174 & 0.124 & \textbf{0.169} & 0.331 \\\multicolumn{1}{c|}{} & SimCLR & 0.423 & 0.259 & 0.175 & 0.126 & 0.166 & 0.327 \\\multicolumn{1}{c|}{} & SimCLR (w/ Lung Seg)& 0.415 & 0.255 & 0.174 & 0.127 & 0.165 & 0.331 \\\hline\multicolumn{1}{c|}{\multirow{6}{*}{MIMC-CXR}} & Scratch & 0.309 & 0.193 & 0.130 & 0.093 & 0.128 & 0.274 \\\multicolumn{1}{c|}{} & AE & 0.299 & 0.184 & 0.124 & 0.088 & 0.123 & 0.270 \\\multicolumn{1}{c|}{} & Imagenet & 0.313 & 0.196 & 0.132 & 0.095 & 0.129 & 0.275 \\\cline{2-8}\multicolumn{1}{c|}{} & MoCo & 0.309 & 0.190 & 0.127 & 0.090 & 0.126 & 0.269 \\\multicolumn{1}{c|}{} & SimCLR & 0.327 & 0.204 & 0.137 & 0.098 & 0.134 & 0.278 \\\multicolumn{1}{c|}{} & SimCLR (w/ Lung Seg)& \textbf{0.335} & \textbf{0.209} & \textbf{0.140} & \textbf{0.100} & \textbf{0.136} & \textbf{0.279} \\\hline\end{tabular}\caption{Quantitative comparison among different encoders when using GRU as decoder. B-n, M, and R-L are short for BLEU-n, METEOR, and ROUGE-L, respectively.}\label{Table:GRU_dif_encoder}\end{table}
Generate the LaTeX code to recreate this table.
Quantitative comparison among different encoders when using GRU as decoder. B-n, M, and R-L are short for BLEU-n, METEOR, and ROUGE-L, respectively.
Please give a caption for the LaTeX table.
2105.04024v3_train__40021ae1-db12-45a0-8f69-e5aa31a75b6a
arxiv_table_to_tex
\begin{table}[!ht]\footnotesize\centering\renewcommand{\arraystretch}{1.5}\begin{tabular}{c | c c c c c| c c}\hline& Neighbors & Entropy Weight & Batch Size & Dropout & Epochs & Accuracy AG news & Accuracy DBPedia \\ \hlineDocSCAN & 5 & 2 & 128 & 0.1 & 5 & 83.2 \textpm 3.8 & 85.8 \textpm 3.5 \\ \hline\multirow{3}{*}{(A)} & 2 & & & & & 77.5 \textpm 6.7 & 83.1 \textpm 5.1 \\& 3 & & & & & 78.4 \textpm 5.5 & 85.3 \textpm 3.0 \\& 10 & & & & & 82.4 \textpm 5.6 & 86.1 \textpm 3.5 \\ \hline\multirow{2}{*}{(B)} & & 1 & & & & 75.8 \textpm 5.3 & 80.3 \textpm 2.8 \\& & 4 & & & & 80.4 \textpm 3.5 & 86.7 \textpm 2.8 \\ \hline\multirow{2}{*}{(C)} & & & 64 & & & 82.4 \textpm 5.0 & 87.5 \textpm 4.2 \\& & & 256 & & & 81.3 \textpm 4.3 & 84.6 \textpm 4.1 \\ \hline\multirow{2}{*}{(D)} & & & & 0 & & 81.9 \textpm 3.7 & 86.1 \textpm 4.4 \\& & & & 0.33 & & 80.3 \textpm 3.9 & 86.8 \textpm 2.6 \\ \hline\multirow{2}{*}{(E)} & & & & & 3 & 79.4 \textpm 5.3 & 84.2 \textpm 4.4 \\& & & & & 10 & 81.5 \textpm 3.4 & 84.7 \textpm 3.7 \\ \hlinek-means & & & & & & 66.2 \textpm 8.2 & 77.1 \textpm 4.9 \\ \hline\end{tabular}\caption{Ablation Studies for DocSCAN Hyper-parameters (results reported on the AG news and DBPedia training set, cell values give the mean over 10 runs with 95\% confidence interval).}\label{tab:ablation_experiments}\end{table}
Provide LaTeX code for this table.
Ablation Studies for DocSCAN Hyper-parameters (results reported on the AG news and DBPedia training set, cell values give the mean over 10 runs with 95\% confidence interval).
Could you generate a caption for this LaTeX table?
2101.04017v5_train__2ae30d33-bc50-40e2-80ef-f050069d48d3
arxiv_table_to_tex
\begin{table*}\caption{Reclassification results using Shaver lexicon, compared to the original NRC lexicon}\vspace{-0.2cm}\label{shaver}\footnotesize\centering\begin{tabular}{l|r r|r r|r r}\toprule\rule[-1ex]{0pt}{2.2ex} & \multicolumn{2}{p{2.2cm}|}{\textbf{ArsMeteo\newline(9171 artworks)}} & \multicolumn{2}{p{2.2cm}|}{\textbf{RaiPlay\newline(4612 media items)}} & \multicolumn{2}{p{2.2cm}}{\textbf{WikiArt Emotions\newline(4105 artworks)}} \\\midrule\rule[-1ex]{0pt}{2.2ex} \textbf{Emotion} & \textbf{NRC} & \textbf{Shaver} & \textbf{NRC} & \textbf{Shaver} & \textbf{NRC} & \textbf{Shaver} \\\midrule{\textbf{awe (fear\_surprise)}} & $63$ & $117$ & $132$ & $7$ & $508$ & $0$ \\\midrule{\textbf{delight (joy\_surprise)}} & $52$ & $71$ & $117$ & $2$ & $1672$ & $0$ \\\midrule{\textbf{delight (surprise\_joy)}} & $52$ & $57$ & $117$ & $88$ & $1672$ & $0$ \\\midrule{\textbf{despair (fear\_sadness)}} & $19$ & $49$ & $20$ & $132$ & $0$ & $508$ \\\midrule{\textbf{disapproval (sadness\_surprise)}} & $58$ & $83$ & $64$ & $2$ & $404$ & $0$ \\\midrule{\textbf{envy (anger\_sadness)}} & $25$ & $30$ & $21$ & $49$ & $0$ & $508$ \\\midrule{\textbf{guilt (fear\_joy)}} & $22$ & $44$ & $73$ & $0$ & $1618$ & $0$ \\\midrule{\textbf{guilt (joy\_fear)}} & $24$ & $45$ & $73$ & $117$ & $1618$ & $1672$ \\\midrule{\textbf{outrage (anger\_surprise)}} & $41$ & $49$ & $46$ & $20$ & $508$ & $0$ \\\midrule{\textbf{pride (anger\_joy)}} & $20$ & $27$ & $72$ & $64$ & $1618$ & $404$ \\\midrule{\textbf{TOTAL}} & $324$ & $572$ & $618$ & $481$ & $7946$ & $3092$ \\\bottomrule\end{tabular}\vspace{7pt}\end{table*}
Please create LaTeX code to produce this table.
Reclassification results using Shaver lexicon, compared to the original NRC lexicon
Could you generate a caption for this LaTeX table?
2306.04127v1_train__2f85ddb8-f132-4cc3-89d9-29d3e0fb0490
arxiv_table_to_tex
\begin{table}[ht!]\begin{center}\begin{tabular}{|c|c|} \hline{\bf Scalar Sector} & \\ \hline$\delta m_\phi$, $\delta Z_\phi$ ($\phi=h_1,h_2,A$) & OS \\ \hline$\delta \alpha$ & OS-pinched \\& $p_*$-pinched \\ \hline$\delta v_S$ & OSproc1 (OS $h_1 \to AA$)\\& OSproc2 (OS $h_2 \to AA$) \\& ZEMproc1 (ZEM $h_1 \to AA$) \\& ZEMproc2 (ZEM $h_2 \to AA$) \\ \hline \hline{\bf Gauge Sector} & \\ \hline$\delta m_V$, $\delta Z_V$ ($V=W^\pm,Z$) & OS \\ \hline$\delta Z_e$ & $G_\mu$ scheme \\ \hline\hline{\bf Fermion Sector} & \\ \hline$\delta m_f$, $\delta Z_f$ & OS \\ \hline\end{tabular}\caption{Summary of the renormalization schemes applied in the EW corrections to the non-loop-induced on-shell decay widths of the CxSM Higgs boson decays. For details, see text and Refs.~\cite{Krause:2016oke,Azevedo:2021ylf,Egle:2022wmq}. \label{tab:renscheme}}\end{center}\end{table}
Generate the LaTeX code to recreate this table.
Summary of the renormalization schemes applied in the EW corrections to the non-loop-induced on-shell decay widths of the CxSM Higgs boson decays. For details, see text and Refs.~\cite{Krause:2016oke,Azevedo:2021ylf,Egle:2022wmq}.
Please write a caption that describes this LaTeX table.
1801.07729v2_train__f9860984-59fc-47a1-955e-d58cbdab7aef
arxiv_table_to_tex
\begin{table}[tbh]\caption{The effect of adding two reduced dimension layers on the representation (subspace dimensionality and variance)}\label{T:SI3}\small\scalebox{0.7}{\begin{tabular}{|l|l|c|c|c|c|c|c|}\hline{Model} & {Training Strategy} & \multicolumn{3}{c|}{Original Architecture} & \multicolumn{3}{c|}{Adding two dimensionality reduction layers} \\\cline{3-8} & &number of nodes &subspace dim $^1$ &retained variance $^2$ & number of nodes &subspace dim$^1$ &retained variance$^2$ \\\hline{AlexNet}& Pre-trained & {4096} & {201} & {21.71} & {512}& {9}& {59.64}\\& \& Finetuned &&&&&& \\\hlineAlexNet&From Scratch&4096&397&35.62&512&10&62.78\\\hline{VGGNet}& Pre-trained &{4096} &{55} &{49.52} &{512}&{7}&{66.87}\\& \& Finetuned &&&&&& \\\hlineVGGNet&From Scratch&4096&36&51.16&512&7&72.52\\\hline{ResNet}& Pre-trained &{2048}$^3$ &{491} &{17.53} &{512}&{6}&{73.71}\\& \& Finetuned &&&&&& \\\hline\multicolumn{8}{|l|}{$^1$ Subspace dim: Number of principle components cumulatively retaining 95\% of variance.} \\\multicolumn{8}{|l|}{$^2$ Retained variance: Percentage of variance retained by the first two principle components. } \\\multicolumn{8}{|l|}{$^3$ ResNet does not have FC layers. This is the number of the nodes in the last pooling layer. } \\ \hline\end{tabular}}\end{table}
Could you please provide me the LaTeX code to generate this table?
The effect of adding two reduced dimension layers on the representation (subspace dimensionality and variance)
Could you provide a caption for this LaTeX table?
1801.07729v2_train__c23eb56a-5767-4542-97db-706213288444
arxiv_table_to_tex
\begin{table}[tbh]\caption{Analysis of the modes of variations of the activations on 62K paintings from the Wikiart collection in three models and their correlation with time (Pearson Correlation Coefficients). The top two time-correlated modes are highlighted in bold.}\label{T:SI4}\scalebox{0.77}{\small\begin{tabular}{ |l|c|c||c|c|c|c|c|c|c|c|c|c| }\hline& Subspace Dim $^1$ & Retained Variance $^2$ & \multicolumn{10}{|c|}{Correlation with time } \\&&& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hlineAlexNet+2 & 10 & 51.13 & 0.12 & {\bf 0.66} & 0.24 & {\bf -0.43} & -0.23 & 0.04 & 0.08 & -0.05 & -0.03 & 0.00 \\\hlineVGGNet+2 & 9 & 43 & -0.15 & -0.14 & {\bf -0.46} & -0.07 & {\bf -0.59} & 0.28 & 0.02 & 0.07 & 0.02 & \\\hlineResNet+2 & 7 & 66.80 & -0.03 & {\bf -0.72} & 0.20 & 0.22 & {\bf 0.30} & -0.01 & 0.06 & & & \\\hline\multicolumn{13}{|l|}{$^1$ Subspace Dim: Number of principle components retaining 95\% of the variance} \\\multicolumn{13}{|l|}{$^2$ Retained Variance: Variance retained by first two dimensions }\\ \hline\end{tabular}}\end{table}
Generate the LaTeX code to recreate this table.
Analysis of the modes of variations of the activations on 62K paintings from the Wikiart collection in three models and their correlation with time (Pearson Correlation Coefficients). The top two time-correlated modes are highlighted in bold.
Could you generate a caption for this LaTeX table?
2206.13365v1_train__ac8c042a-c0be-481d-a2a9-c890d0226393
arxiv_table_to_tex
\begin{table*}[tb]\centering\setlength{\tabcolsep}{3pt}\caption{Comparison of the performance of the COVID-19 detection models using different signal representations on breathing and speech modalities in terms of the AUC(\%).}\vspace{-2mm}\begin{tabular}{l|cccccc||cccccc}\hline\multirow{2}{*}{Representation Methods} & \multicolumn{6}{c||}{Breathing} & \multicolumn{6}{c}{Speech} \\\cline{2-13}& \multicolumn{1}{l}{Fold1} & \multicolumn{1}{l}{Fold2} & \multicolumn{1}{l}{Fold3} & \multicolumn{1}{l}{Fold4} & \multicolumn{1}{l}{Fold5} & \multicolumn{1}{l||}{Avg.}& \multicolumn{1}{l}{Fold1} & \multicolumn{1}{l}{Fold2} & \multicolumn{1}{l}{Fold3} & \multicolumn{1}{l}{Fold4} & \multicolumn{1}{l}{Fold5} & \multicolumn{1}{l}{Avg.}\\ \hlineMel-Spectrogram~\cite{sharma2021second} & 73.8 & 76.4 & 75.6 & 77.1 & 84.2 & 77.4 &74.8 & 85.1 & 79.1 & 78.6 & 80.4 & 79.6 \\SincNet~\cite{ravanelli2018interpretable} & 72.4 & 69.6 & \textbf{81.9} & 76.4 & 77.1 & 75.4& 75.0 & 84.6 & 77.4 & 76.7 & \textbf{85.2} & 79.7 \\LEAF~\cite{zeghidour2021leaf} & 71.5 & 62.9 & 66.1 & 66.6 & 73.6 & 68.2 & 60.5 & 59.1 & 63.8 & 57.4 & 62.0 & 60.6 \\\hdashline[1pt/1pt]CosGauss-relev & 75.4 & 76.0 & 79.7 & 78.1 & 82.8 & 78.4& 76.5 & \textbf{83.6} & 83.3 & \textbf{79.6} & 84.5 & \textbf{81.5}\\\hdashline[1pt/1pt]CosGauss-relev-pretr & 76.1 & 75.9 & 77.7 & 76.9 & 83.2 & 78.0 & 77.1 & 79.7 & \textbf{84.7} & 77.0 & 81.3 & 80.0\\CosGauss-relev-pretr-fine& \textbf{81.2} & 77.0 & 80.9 & 78.4 & \textbf{84.9} & \textbf{80.5} &78.3 & 81.5 & 83.5 & 78.0 & 81.5 & 80.6\\\hdashline[1pt/1pt]CosGauss-SSL-pretr & 77.3 & 73.9 & 79.6 & 79.3 & 81.6 & 78.3 & 78.7 
& 80.5 & 81.9 & 75.8 & 81.8 & 79.7\\CosGauss-SSL-pretr-fine & 78.5 & \textbf{77.9} & 77.9 & \textbf{80.2 } & 81.3 & 79.2 & \textbf{80.7} & 81.7 & 82.7 & 76.8 & 78.3 & 80.0\\\hline\end{tabular}\label{tab:results}\end{table*}
Could you please provide me the LaTeX code to generate this table?
Comparison of the performance of the COVID-19 detection models using different signal representations on breathing and speech modalities in terms of the AUC(\%).
Could you provide a caption for this LaTeX table?
2306.16466v2_train__729297a2-d27c-449f-af1a-aa35c7f526b6
arxiv_table_to_tex
\begin{table}\caption{For different $t_1$, mean-field parameters $\chi_1$ and $\chi_2$ along upper phase boundary of CSL in Fig.\ref{spinmodeldiag}.}\label{tab:chi1chi2}\centering\begin{tabular}{|c|c|c||c|c|c|}\hline$t_1$ & $\chi_1$ & $\chi_2$ &$t_1$ & $\chi_1$ & $\chi_2$\\\hline$0.05$& $0.3917e^{i\frac{\pi}{4}}$& $2.7796$& $0.7$& $0.4534e^{i\frac{\pi}{4}}$& $0.1958$\\\hline$0.1$& $0.3933e^{i\frac{\pi}{4}}$& $1.3887$&$0.711$& $0.4562e^{i\frac{\pi}{4}}$& $0.1866$ \\\hline$0.2$& $0.3986e^{i\frac{\pi}{4}}$& $0.6913$&$0.75$& $0.4465e^{i\frac{\pi}{4}}$& $0.2154$ \\\hline$0.3$& $0.4071e^{i\frac{\pi}{4}}$& $0.4546$&$0.8$& $0.4384e^{i\frac{\pi}{4}}$& $0.2345$ \\\hline$0.4$& $0.42e^{i\frac{\pi}{4}}$& $0.3266$&$0.85$& $0.4321e^{i\frac{\pi}{4}}$& $0.2473$ \\\hline$0.5$& $0.4416e^{i\frac{\pi}{4}}$& $0.2306$&$0.9$& $0.4266e^{i\frac{\pi}{4}}$& $0.2574$ \\\hline$0.6$& $0.4503e^{i\frac{\pi}{4}}$& $0.2052$&$0.95$& $0.4215e^{i\frac{\pi}{4}}$& $0.2661$ \\\hline$0.65$& $0.4493e^{i\frac{\pi}{4}}$& $0.2078$&$1$& $0.4163e^{i\frac{\pi}{4}}$& $0.274$ \\\hline\end{tabular}\end{table}
Create a LaTeX script to illustrate this table.
For different $t_1$, mean-field parameters $\chi_1$ and $\chi_2$ along upper phase boundary of CSL in Fig.\ref{spinmodeldiag}.
Can you give me a caption for this LaTeX table?
2211.08939v3_train__2d64e176-6c1e-4f6d-8688-57934b088e9a
arxiv_table_to_tex
\begin{table}[]\centering\caption{Results for the Wave equation.}\label{tab:wave}\begin{tabular}{|c|c|c|c|c|}\hlineModel & PINN & XPINNv2 & APINN-X & APINN-M\\ \hlineRel. $L_2$ & 1.900E-3$\pm$3.375E-4 & 1.378E-3$\pm$2.424E-4 & {1.492E-3$\pm$7.041E-4} & \textbf{1.299E-3$\pm$2.941E-4} \\ \hline\end{tabular}\end{table}
Create LaTeX code to produce this table.
Results for the Wave equation.
Please create a caption for this LaTeX table.
2211.08939v3_train__82ef9209-e97d-4254-99bf-74f8a0a6a3cd
arxiv_table_to_tex
\begin{table}[]\centering\caption{Results for the Helmholtz equation.}\label{tab:helmholtz}\begin{tabular}{|c|c|c|c|c|}\hlineModel & PINN & XPINNv1 & XPINNv2 & -\\ \hlineRel. $L_2$ & 2.438E-3$\pm$5.196E-4 & 5.222E-2$\pm$4.001E-3 & 1.297E-3$\pm$1.786E-4 & - \\ \hlineModel & APINN-X-F & APINN-X & APINN-M-F & APINN-M \\ \hlineRel. $L_2$ & 1.554E-3$\pm$3.203E-4 & \textbf{1.275E-3$\pm$4.710E-4} & 1.911E-3$\pm$3.850E-4 & 1.477E-3$\pm$5.679E-4\\ \hline\end{tabular}\end{table}
I need the LaTeX code to create this table, please.
Results for the Helmholtz equation.
Please give a caption for the LaTeX table.
2211.08939v3_train__0808672e-64d6-424e-afca-a910f327ab5f
arxiv_table_to_tex
\begin{table}[]\centering\caption{Results for the Klein-Gordon equation.}\label{tab:KG}\begin{tabular}{|c|c|c|c|c|}\hlineModel & PINN & XPINNv1 & XPINNv2 & -\\ \hlineRel. $L_2$ & 3.565E-3$\pm$9.412E-4 & 5.980E-1$\pm$7.601E-2 & 3.700E-3$\pm$2.741E-4 & - \\ \hlineModel & APINN-X-F & APINN-X & APINN-M-F & APINN-M\\ \hlineRel. $L_2$ & 3.195E-3$\pm$8.112E-4 & {3.030E-3$\pm$1.474E-3} & 3.197E-2$\pm$6.253E-3 & \textbf{2.846E-3$\pm$8.568E-4}\\ \hline\end{tabular}\end{table}
Convert this table into LaTeX code.
Results for the Klein-Gordon equation.
Please create a caption for this LaTeX table.
2211.08939v3_train__3d744c3a-0fab-4184-a31e-8b5fa89892ce
arxiv_table_to_tex
\begin{table}[]\centering\caption{Results for the Poisson's equation.}\label{tab:poisson}\begin{tabular}{|c|c|c|c|}\hlineModel & PINN & XPINNv1 & XPINNv2\\ \hlineRel. $L_2$ & 4.436E-2$\pm$ 2.721E-2 & 4.022E-1$\pm$ 1.005E-3 & 8.578E-2$\pm$ 9.856E-3 \\ \hlineModel & APINN-F & APINN & APINN-O\\ \hlineRel. $L_2$& 4.263E-2 $\pm$ 1.189E-2 & \textbf{3.978E-2 $\pm$ 2.061E-2} & \textcolor{red}{2.749E-2 $\pm$ 1.432E-2}\\\hline\end{tabular}\end{table}
Provide LaTeX code for this table.
Results for the Poisson's equation.
Please provide an appropriate caption for this LaTeX table.
2211.08939v3_train__abcb2e52-54c4-4c35-af50-f69f8548bf35
arxiv_table_to_tex
\begin{table}[]\centering\caption{Relative $L_2$ error for the function $u$ in the Boussinesq-Burger equation.}\label{tab:BB_u}\begin{tabular}{|c|c|c|c|c|}\hlineModel & PINN & XPINN & XPINN4 & / \\ \hlineRel. $L_2$ & 1.470E-02$\pm$6.297E-03 & 1.456E-02$\pm$6.391E-03&3.254E-02$\pm$1.025E-02& / \\ \hlineModel & APINN-M & APINN-X & APINN4-M & APINN4-X \\ \hlineRel. $L_2$ & \bf1.091E-02$\pm$4.588E-03 & 1.388E-02$\pm$4.310E-03 & 1.328E-02$\pm$8.099E-03& 2.559E-02$\pm$6.554E-03\\ \hline\end{tabular}\end{table}
Write the LaTeX code for this table.
Relative $L_2$ error for the function $u$ in the Boussinesq-Burger equation.
May I have a caption for this LaTeX table?
2211.08939v3_train__15a3b2d1-994a-4517-b523-0df9bbd69c54
arxiv_table_to_tex
\begin{table}[]\centering\caption{Relative $L_2$ error for the function $v$ in the Boussinesq-Burger equation.}\label{tab:BB_v}\begin{tabular}{|c|c|c|c|c|}\hlineModel & PINN & XPINN & XPINN4 & / \\ \hlineRel. $L_2$ & 1.106E-01$\pm$4.498E-02 & 9.786E-02$\pm$3.485E-02& 2.706E-01$\pm$9.078E-02& / \\ \hlineModel & APINN-M & APINN-X & APINN4-M & APINN4-X \\ \hlineRel. $L_2$ & \bf8.185E-02$\pm$2.973E-02 &9.623E-02$\pm$2.446E-02 &9.616E-02$\pm$5.397E-02& 1.676E-01$\pm$4.946E-02\\ \hline\end{tabular}\end{table}
Convert this table into LaTeX code.
Relative $L_2$ error for the function $v$ in the Boussinesq-Burger equation.
Can you suggest a caption for this LaTeX table?
2310.03542v3_train__0828b61b-9718-457f-8693-6373f3942d47
arxiv_table_to_tex
\begin{table}[t]\begin{tabular}[t]{|l|l|l|}\hline\multicolumn{3}{|c|}{$(\beta=1.9, \kappa=1.0)$} \\\hline$N_s$ & \#configurations & \#eigenvalues \\ \hline16 & 3000 & 33 \\ \hline20 & 1500 & 63 \\ \hline\end{tabular}\mbox{}\begin{tabular}[t]{|l|l|l|}\hline\multicolumn{3}{|c|}{$(\beta=2.1, \kappa=1.0)$ and $(\beta=2.6, \kappa=0.3)$} \\\hline$N_s$ & \#configurations & \#eigenvalues \\ \hline20 & 8970 & 63 \\ \hline24 & 8000 & 110 \\ \hline28 & 3000 & 174 \\ \hline32 & 1150 & 260 \\ \hline\end{tabular}\caption{Configuration statistics and number of (non-degenerate) eigenvalues used to study the volume scaling of the localization properties of staggered eigenmodes in the confined phase (top table) and in the deconfined and Higgs phases (bottom table).}\label{table:2}\end{table}
Convert this table into LaTeX code.
Configuration statistics and number of (non-degenerate) eigenvalues used to study the volume scaling of the localization properties of staggered eigenmodes in the confined phase (top table) and in the deconfined and Higgs phases (bottom table).
May I have a caption for this LaTeX table?
2310.03542v3_train__6984c70c-3128-4525-b78a-03fa04b3280f
arxiv_table_to_tex
\begin{table}[b]\centering\begin{tabular}{c|c}\hline\hline$\kappa_{\mathrm{loc}}$ & 0.7303(56)\\$a$ & 0.24915(13)\\$b$ & 0.1185(53)\\$c$ & 0.1874(52)\\$d$ & 26.4(2.0) \\\hline$\chi^2/\mathrm{dof}$& 2.05\end{tabular}\caption{Parameters of a best fit of the $\kappa$ dependence of the mobility edge in the deconfined and Higgs phases at $\beta=2.6$, with the fitting function in Eq.~\eqref{eq:sigmo}. Only statistical errors are reported.}\label{tab:kc}\end{table}
Convert this table into LaTeX code.
Parameters of a best fit of the $\kappa$ dependence of the mobility edge in the deconfined and Higgs phases at $\beta=2.6$, with the fitting function in Eq.~\eqref{eq:sigmo}. Only statistical errors are reported.
Can you give me a caption for this LaTeX table?
2209.01217v3_train__94bfb8b4-552a-46ac-8c76-0620fd989ea7
arxiv_table_to_tex
\begin{table}[tb]\caption{performance of TabularNCD on the unknown classes.}\begin{center}\setlength{\tabcolsep}{3pt}\begin{tabular}{l l c c c c}\hlineDataset & Method & BACC & ACC & NMI & ARI \\\hline\multirow{4}{*}{MNIST} & Baseline & 57.7$\pm$4.7 & 57.6$\pm$4.5 & 0.37$\pm$0.2 & 0.31$\pm$0.3 \\& Spect. clust & - & - & - & - \\& \textit{k}-means & 60.1$\pm$0.0 & 61.1$\pm$0.0 & 0.48$\pm$0.0 & 0.38$\pm$0.0 \\& TabularNCD & \textbf{91.5$\pm$4.1} & \textbf{91.4$\pm$4.2} & \textbf{0.82$\pm$0.06} & \textbf{0.81$\pm$0.04} \\\hline\multirow{4}{*}{Forest} & Baseline & 55.6$\pm$2.0 & 68.5$\pm$1.4 & 0.27$\pm$0.02 & 0.15$\pm$0.01 \\& Spect. clust & 32.1$\pm$1.4 & 85.8$\pm$4.0 & 0.01$\pm$0.01 & 0.09$\pm$0.01 \\& \textit{k}-means & 32.9$\pm$0.0 & 62.0$\pm$0.0 & 0.04$\pm$0.00 & 0.05$\pm$0.00 \\& TabularNCD & \textbf{66.8$\pm$0.6} & \textbf{92.2$\pm$0.2} & \textbf{0.37$\pm$0.09} & \textbf{0.56$\pm$0.09} \\\hline\multirow{4}{*}{Letter} & Baseline & 55.7$\pm$3.6 & 55.9$\pm$3.6 & 0.49$\pm$0.04 & 0.33$\pm$0.04 \\& Spect. clust & 45.3$\pm$4.0 & 45.3$\pm$4.0 & 0.48$\pm$0.03 & 0.18$\pm$0.03 \\& \textit{k}-means & 50.2$\pm$0.6 & 49.9$\pm$0.6 & 0.40$\pm$0.01 & 0.28$\pm$0.01 \\& TabularNCD & \textbf{71.8$\pm$4.5} & \textbf{71.8$\pm$4.5} & \textbf{0.60$\pm$0.04} & \textbf{0.54$\pm$0.04} \\\hline\multirow{4}{*}{Human} & Baseline & 80.0$\pm$0.5 & 78.0$\pm$0.6 & 0.64$\pm$0.01 & 0.62$\pm$0.01 \\& Spect. clust & 70.2$\pm$0.0 & 69.4$\pm$0.0 & 0.72$\pm$0.00 & 0.60$\pm$0.00 \\& \textit{k}-means & 75.3$\pm$0.0 & 77.0$\pm$0.0 & 0.62$\pm$0.00 & 0.59$\pm$0.00 \\& TabularNCD & \textbf{98.9$\pm$0.2} & \textbf{99.0$\pm$0.2} & \textbf{0.95$\pm$0.01} & \textbf{0.97$\pm$0.01} \\\hline\multirow{4}{*}{Satimage} & Baseline & 53.8$\pm$3.4 & 53.9$\pm$4.2 & 0.25$\pm$0.03 & 0.22$\pm$0.03 \\& Spect. 
clust & 82.2$\pm$0.1 & 77.8$\pm$0.1 & 0.51$\pm$0.00 & 0.46$\pm$0.00 \\& \textit{k}-means & 73.7$\pm$0.3 & 69.2$\pm$0.2 & 0.30$\pm$0.00 & 0.28$\pm$0.00 \\& TabularNCD & \textbf{90.8$\pm$4.0} & \textbf{91.4$\pm$5.0} & \textbf{0.71$\pm$0.11} & \textbf{0.79$\pm$0.07} \\\hline\multirow{4}{*}{Pendigits} & Baseline & 72.8$\pm$5.5 & 72.8$\pm$5.4 & 0.62$\pm$0.06 & 0.54$\pm$0.07 \\& Spect. clust & {84.0$\pm$0.0} & {84.0$\pm$0.0} & \textbf{0.78$\pm$0.00} & 0.67$\pm$0.00 \\& \textit{k}-means & 82.5$\pm$0.0 & 82.5$\pm$0.0 & 0.72$\pm$0.00 & 0.63$\pm$0.00 \\& TabularNCD & \textbf{85.5$\pm$0.7} & \textbf{85.6$\pm$0.8} & 0.76$\pm$0.02 & \textbf{0.71$\pm$0.02} \\\hline\multirow{4}{*}{Census} & Baseline & 53.0$\pm$3.5 & \textbf{55.0$\pm$6.5} & 0.49$\pm$0.02 & 0.30$\pm$0.03 \\& Spect. clust & 23.6$\pm$3.3 & 51.3$\pm$5.5 & 0.24$\pm$0.11 & 0.18$\pm$0.09 \\& \textit{k}-means & 38.5$\pm$2.6 & 49.8$\pm$3.6 & 0.41$\pm$0.05 & 0.28$\pm$0.03 \\& TabularNCD & \textbf{61.9$\pm$0.6} & 50.1$\pm$0.9 & 0.48$\pm$0.01 & 0.30$\pm$0.00 \\\hline\end{tabular}\label{table:tabularncd_results}\end{center}\footnotesize{The standard deviation is computed over 10 executions. The 2 unsupervised clustering methods (Spect. clust and $k$-means) are only fitted to the test instances belonging to the unknown classes.}\end{table}
Please create LaTeX code to produce this table.
performance of TabularNCD on the unknown classes.
Please provide an appropriate caption for this LaTeX table.
2209.01217v3_train__7aaed380-002e-40f3-ba4b-582946aa3fbd
arxiv_table_to_tex
\begin{table}[!hbt]\caption{Ablation study of the proposed method.}\begin{center}\begin{tabular}{ l | c | c | c | c }\hlineMethod / Metrics & BACC & ACC & NMI & ARI \\\hlineTabularNCD & 90.8$\pm$4.0 & 91.4$\pm$5.0 & 0.71$\pm$0.11 & 0.79$\pm$0.07 \\\hline- w/o SSL & 88.4$\pm$5.3 & 88.6$\pm$7.0 & 0.67$\pm$0.15 & 0.67$\pm$0.10 \\- w/o CE & 72.0$\pm$6.1 & 69.5$\pm$6.0 & 0.44$\pm$0.12 & 0.49$\pm$0.08 \\- w/o BCE & 33.3$\pm$0.0 & 51.7$\pm$0.0 & 0.00$\pm$0.00 & 0.00$\pm$0.00 \\- w/o MSE & 66.7$\pm$5.7 & 63.9$\pm$4.4 & 0.44$\pm$0.02 & 0.37$\pm$0.02 \\ \hline\end{tabular}\label{table:ablation_study}\end{center}\footnotesize{\textbf{SSL:} Self-Supervised Learning, \textbf{CE:} Cross Entropy loss of the classification network, \textbf{BCE:} Binary Cross Entropy loss of the clustering network, \textbf{MSE:} Mean Squared Error consistency loss. The dataset is Satimage \cite{uci}.}\end{table}
Generate the LaTeX code to recreate this table.
Ablation study of the proposed method.
I need a caption for this LaTeX table, please.
2209.01217v3_train__62e53aed-07e4-4798-827e-4ebcf6df3871
arxiv_table_to_tex
\begin{table}[!h]\caption{Clustering performance on new classes of $k$-means on different data representations.}\begin{center}\begin{tabular}{l | c c c c}\hlineDataset & \multicolumn{4}{c}{MNIST} \\\hlineMethod / Metrics & BACC & ACC & NMI & ARI \\\hline(1) $k$-means & 61.0 & 61.1 & 0.476 & 0.378 \\(2) $k$-means w/ SSL & 63.8 & 63.9 & 0.501 & 0.414 \\(3) $k$-means w/ joint & 84.9 & 85.4 & 0.748 & 0.728 \\\hline\end{tabular}\label{table:k_means_different_latent}\end{center}\footnotesize{(1) corresponds to the input data, (2) to the latent space after Self-Supervised Learning on all the available data (Sec. \ref{sec:ssl}), and (3) after the joint training of the proposed method (Sec. \ref{sec:joint})}\end{table}
Generate the LaTeX code to recreate this table.
Clustering performance on new classes of $k$-means on different data representations.
Please provide an appropriate caption for this LaTeX table.
2307.06524v1_train__c059d00b-9e90-43ed-b75c-e2c7e364fdf3
arxiv_table_to_tex
\begin{table*}[ht]\centering\label{tab:sample-dialogue}\begin{tabular}{l|l|l}\multicolumn{1}{c|}{\textbf{Hyperparameter}} & \multicolumn{1}{c}{\textbf{Sweep Range}} & \textbf{Best Value} \\\topruleLearning Rate & 1 $\times 10^{-4}, 1 \times 10^{-5}, 1 \times 10^{-3}, 6 \times 10^{-4}$ & $6 \times 10^{-4}$ \\\hlineBatch Size & 32, 64, 128 & 32 \\\hlineEarly Stopping (Min Delta) & 0, 0.001, 0.005, 0.1, 0.5 & 0 \\\hlineEarly Stopping (Patience) & 0, 1, 2, 3, 4 & 4 \\\hlineGradient Clip Norm & 1.0 & 1.0 \\\hlinePrecision & 16, 32 & 32 \\\hlineContext Window Size & 1, 2, 3, 4 & 3 \\\hline\end{tabular}\caption{Hyperparameter values for all experiments.\label{tab:hyperparams}}\end{table*}
Create a LaTeX script to illustrate this table.
Hyperparameter values for all experiments.\label{tab:hyperparams}
I need a caption for this LaTeX table, please.
2111.03237v3_train__da32d273-d5d8-48fc-a1dc-3ba72b36c935
arxiv_table_to_tex
\begin{table}[htbp]\centering\begin{tabular}{| c| c c c c c c c c |}\hline$\delta$ & $1.5$ & $2$ & $2.5$ & $3$ & $3.5$ & $4$& $4.5$& $5$ \\[0.5ex]\hline$\beta=0$ & 0.27 & 0.20 & 0.16 & 0.12 & 0.10 & 0.08 &0.07 & 0.06 \\\hline$\beta=5$ & 0.45 & 0.38 & 0.33 & 0.29 & 0.26 & 0.23 & 0.20 & 0.18 \\\hline$\beta=10$ & 0.62 & 0.58 & 0.55 & 0.52 & 0.50 & 0.48 & 0.45 & 0.44 \\\hline\end{tabular} \\[0.5ex]\caption{MSE of GLM-EP for the 1-bit CS problem. $n=10^5$. The MSE is averaged over 100 independent runs. The number of iterations is 20.}\label{Tab:1-bit}\end{table}
Please create LaTeX code to produce this table.
MSE of GLM-EP for the 1-bit CS problem. $n=10^5$. The MSE is averaged over 100 independent runs. The number of iterations is 20.
Could you generate a caption for this LaTeX table?
2111.03237v3_train__e1a365a4-3ec9-4b01-98b4-41cd2d274e6e
arxiv_table_to_tex
\begin{table}[htbp]\centering\begin{tabular}{| c | c c c c c |}\hlineSNR & 30dB& 35dB & 40dB & 45dB & 50dB \\[0.2ex]\hlineMSE &1.28e-01 & 5.92e-02 & 2.18e-02 & 6.94e-03 & 2.14e-03\\\hline\end{tabular} \\[0.5ex]\caption{MSE of GLM-EP for noisy phase retrieval. $\delta=1.1$. $n=10^5$. $\beta=10$. The MSE is averaged over 100 independent runs. The number of iterations is 10.} \label{Tab:PR_noisy}\end{table}
Create a LaTeX script to illustrate this table.
MSE of GLM-EP for noisy phase retrieval. $\delta=1.1$. $n=10^5$. $\beta=10$. The MSE is averaged over 100 independent runs. The number of iterations is 10.
Can you give me a caption for this LaTeX table?
2203.13503v1_train__07c4e820-515d-4fae-bb54-0edbb5b2ca25
arxiv_table_to_tex
\begin{table*}[h]\vspace{-5pt}\tiny\centering\begin{tabular}{c|ccccc|ccccc|ccccc}\toprule\multirow{2}{*}{Criteria} & \multicolumn{5}{c|}{SL}& \multicolumn{5}{c|}{SSMI}&\multicolumn{5}{c}{PSNR}\\& BE & LGM & DEGM & DEGM-2& CN-DPM*& BE & LGM & DEGM & DEGM-2& CN-DPM*& BE & LGM & DEGM & DEGM-2& CN-DPM*\\\midruleMNIST &26.3&685.3&22.3 &22.3&21.9&0.88&0.19&0.90 &0.90&0.90&21.0&7.0&21.8 &21.8&21.8\\SVHN &47.0&941.7&30.1 &29.0&39.3&0.58&0.20&0.66 &0.67&0.61&13.7&5.0&15.5 &15.7&14.3\\Fashion&43.8&663.4&37.7 &27.4&36.6&0.68&0.15&0.72 &0.79&0.73&18.4&3.7&19.0 &20.6&19.2\\IFashion &45.9&1148.4&35.6 &27.4&38.4&0.72&0.11&0.76 &0.81&0.76&18.2&5.0&19.4 &20.6&19.1\\RMNIST &27.9&704.2&20.2 &22.1&25.3&0.87&0.20&0.91 &0.90&0.89&16.5&7.0&22.2 &21.8&21.2\\Cifar10 &994.4&1241.1&615.3 &608.1&892.1&0.29&0.23&0.49 &0.50&0.34&16.5&15.4&18.9 &18.9&17.0\\Average &197.5&897.4&126.9 &122.7&175.6&0.67&0.18&0.74 &0.76&0.70&18.1&7.2&19.5 &19.9&18.8\\\bottomrule\end{tabular}\caption{The performance of various models under the MSFIRC learning setting.}\label{Unsupervised1}\end{table*}
Please create LaTeX code to produce this table.
The performance of various models under the MSFIRC learning setting.
Please create a caption for this LaTeX table.
2203.13503v1_train__a246b93f-4c7f-4579-be51-b4101df5d275
arxiv_table_to_tex
\begin{table*}[h]\vspace{-5pt}\tiny\centering\begin{tabular}{c|ccccc|ccccc|ccccc}\toprule\multirow{2}{*}{Criteria} & \multicolumn{5}{c|}{SL}& \multicolumn{5}{c|}{SSMI}&\multicolumn{5}{c}{PSNR}\\& BE & LGM & DEGM & DEGM-2& CN-DPM*& BE & LGM & DEGM & DEGM-2& CN-DPM*& BE & LGM & DEGM & DEGM-2& CN-DPM*\\\midruleCelebA &213.9&535.6&229.2&217.0&215.4&0.69&0.48&0.66&0.69&0.69&23.5&19.3&23.2&23.4&23.5\\CACD &414.9&814.3&368.3&281.95&347.3&0.57&0.47&0.62&0.68&0.63&20.6&17.33&21.2&22.4&21.4\\3D-Chair &649.1&2705.9&324.0&291.46&513.8&0.73&0.42&0.84&0.86&0.79&19.0&13.54&22.4&23.1&20.5\\Omniglot &875.1&5958.9&225.6&195.7&343.2&0.73&0.22&0.92&0.93&0.89&17.9&9.2&24.0&24.6&22.1\\Sub-ImageNet &758.4&683.1&689.6&652.8&769.1&0.37&0.42&0.41&0.43&0.37&18.5&18.9&19.0&19.2&18.5\\Car &745.1&583.7&588.8&565.9&709.8&0.39&0.48&0.47&0.49&0.42&18.0&19.0&19.0&19.2&18.2\\Zappos &451.1&431.2&263.4&275.8&280.7&0.68&0.60&0.75&0.74&0.73&20.0&20.2&22.4&22.3&22.1\\CUB &492.0&330.2&461.3&569.6&638.6&0.35&0.48&0.45&0.43&0.35&19.0&20.9&19.3&18.6&18.0\\Average &575.0&1505.4&393.8&381.3&477.2&0.60&0.45&0.64&0.66&0.61&19.6&17.3&21.3&21.6&20.5\\\bottomrule\end{tabular}\caption{The performance of various models under the CCCOSCZC learning setting.}\label{Unsupervised2}\end{table*}
I need the LaTeX code to create this table, please.
The performance of various models under the CCCOSCZC learning setting.
Please write a caption that describes this LaTeX table.
2203.13503v1_train__cd26e0e8-a883-4da0-b806-ef8388345c98
arxiv_table_to_tex
\begin{table}[ht]\centering\begin{tabular}{l|cccccc}\hlineModel & LGM &BE & DEGM& DEGM-2&CN-DPM* &LIMix \\\hlineN &${1.9 \times 10^9}$ &${3.9 \times 10^9}$&${3.2 \times 10^8}$ &${1.3 \times 10^9}$&${9.4 \times 10^9}$&${9.4 \times 10^9}$ \\\hline\end{tabular}\caption{The number of parameters of various models under CCCOSCZC learning setting.}\label{Unsupervised_2}\end{table}
Could you please provide me the LaTeX code to generate this table?
The number of parameters of various models under CCCOSCZC learning setting.
Please provide an appropriate caption for this LaTeX table.
2311.10091v1_train__532b93ad-b829-45d6-bbc4-041e300f30e5
arxiv_table_to_tex
\begin{table}[t!]\setlength{\tabcolsep}{2pt}\centering\small\caption{Quantitative results on the \MipNerfDataset~ data set. We report the PSNR, LPIPS and SSIM results for each object and compare them to baselines. Our method achieves a performance comparable to the baselines while being significantly faster during inference\revAdded{~(see \tabref{perf_comparison})}.\revAdded{In our comparison, we exclude the two scenes with license issues: Flowers, Treehill.}}\label{tab:outdoor_scenes}\vspace{-1em}\resizebox{1\columnwidth}{!}{\begin{tabular}{cc|lll|lll}\toprule& & \multicolumn{3}{c}{Outdoor scenes} & \multicolumn{3}{c}{Indoor scenes} \\& & PSNR~$\uparrow$ & SSIM~$\uparrow$ & LPIPS~$\downarrow$ & PSNR~$\uparrow$ & SSIM~$\uparrow$ & LPIPS~$\downarrow$ \\\midrule\multirow{4}{*}{\rotatebox[origin=c]{90}{\emph{offline}}} & NeRF~\cite{mildenhall2020nerf} & 22.20 & 0.485 & 0.501 & 26.84 & 0.790 & 0.370 \\& Mip-NeRF~\cite{barron2021mipnerf} & 22.02 & 0.505 & 0.484 & 26.98 & 0.798 & 0.361 \\& Mip-NeRF 360~\cite{barron2022mipnerf360} & 25.92~\tikzcircle[gold,fill=gold]{2pt} & 0.747~\tikzcircle[gold,fill=gold]{2pt} & 0.244~\tikzcircle[gold,fill=gold]{2pt} & 31.72~\tikzcircle[gold,fill=gold]{2pt} & 0.917~\tikzcircle[gold,fill=gold]{2pt} & 0.179~\tikzcircle[gold,fill=gold]{2pt} \\& Ours (full ray) & 24.30~\tikzcircle[silver,fill=silver]{2pt} & 0.703~\tikzcircle[silver,fill=silver]{2pt} & 0.316~\tikzcircle[silver,fill=silver]{2pt} & 29.04 & 0.900~\tikzcircle[silver,fill=silver]{2pt} & 0.239~\tikzcircle[silver,fill=silver]{2pt} \\\midrule\midrule\multirow{4}{*}{\rotatebox[origin=c]{90}{\emph{interactive}}}& I-NGP~\cite{mueller2022ingp} & 23.90~\tikzcircle[bronze,fill=bronze]{2pt} & 0.648~\tikzcircle[bronze,fill=bronze]{2pt} & 0.369 & 29.47~\tikzcircle[silver,fill=silver]{2pt} & 0.877~\tikzcircle[bronze,fill=bronze]{2pt} & 0.273~\tikzcircle[bronze,fill=bronze]{2pt} \\& MobileNeRF~\cite{chen2022mobilenerf} & 22.90 & 0.524 & 0.463 & 25.74 & 0.757 & 0.453 \\& 
BakedSDF~\cite{yariv2023bakedsdf} & 23.40 & 0.577 & 0.351~\tikzcircle[bronze,fill=bronze]{2pt} & 27.20 & 0.845 & 0.300 \\& Ours & 23.17 & 0.606 & 0.389 & 29.19~\tikzcircle[bronze,fill=bronze]{2pt} & 0.872 & 0.285 \\\bottomrule\end{tabular}}\end{table}
Please create LaTeX code to produce this table.
Quantitative results on the \MipNerfDataset~ data set. We report the PSNR, LPIPS and SSIM results for each object and compare them to baselines. Our method achieves a performance comparable to the baselines while being significantly faster during inference\revAdded{~(see \tabref{perf_comparison})}.\revAdded{In our comparison, we exclude the two scenes with license issues: Flowers, Treehill.}
Please create a caption for this LaTeX table.
2311.10091v1_train__d52d8d03-64a6-4897-ae6f-0b53a378c553
arxiv_table_to_tex
\begin{table*}[!thbp]
\centering
\small
\caption{\revAdded{Per-scene quantitative LPIPS comparison on \ShellyDataset{} data set.}}
\label{tab:shelly_details_lpips}
\vspace{-1em}
\begin{tabular}{c|c|c|c|c|c|c|c}
\toprule
LPIPS~$\downarrow$ & \tabincell{c}{NeRF\\~\cite{mildenhall2020nerf}} & \tabincell{c}{MipNeRF\\\cite{barron2021mipnerf}} & \tabincell{c}{NeuS\\\cite{wang2021neus}} & \tabincell{c}{I-NGP\\\cite{mueller2022ingp}} & \tabincell{c}{MobileNeRF\\\cite{chen2022mobilenerf}} & \tabincell{c}{Ours\\(full ray)} & {Ours} \\
\midrule
Fernvase & 0.093 & 0.088 & 0.094 & 0.068 & 0.074 & 0.065 & 0.046 \\
Pug & 0.198 & 0.190 & 0.209 & 0.156 & 0.167 & 0.132 & 0.093 \\
Woolly & 0.241 & 0.217 & 0.232 & 0.181 & 0.163 & 0.139 & 0.089 \\
Horse & 0.071 & 0.064 & 0.067 & 0.049 & 0.057 & 0.036 & 0.029 \\
Khady & 0.246 & 0.239 & 0.251 & 0.215 & 0.218 & 0.185 & 0.160 \\
Kitten & 0.094 & 0.092 & 0.097 & 0.079 & 0.094 & 0.066 & 0.056 \\
\midrule
Average & 0.157 & 0.148 & 0.158 & 0.125 & 0.129 & 0.104 & 0.079 \\
\bottomrule
\end{tabular}
\end{table*}
Provide LaTeX code for this table.
\revAdded{Per-scene quantitative LPIPS comparison on \ShellyDataset{} data set.}
Can you give me a caption for this LaTeX table?
2311.10091v1_train__185c076a-ec1d-477f-8070-7d752efe67d4
arxiv_table_to_tex
\begin{table*}[!thbp]
\centering
\small
\caption{\revAdded{Per-scene quantitative SSIM comparison on \ShellyDataset{} dataset.}}
\label{tab:shelly_details_ssim}
\vspace{-1em}
\begin{tabular}{c|c|c|c|c|c|c|c}
\toprule
SSIM~$\uparrow$ & \tabincell{c}{NeRF\\~\cite{mildenhall2020nerf}} & \tabincell{c}{MipNeRF\\\cite{barron2021mipnerf}} & \tabincell{c}{NeuS\\\cite{wang2021neus}} & \tabincell{c}{I-NGP\\\cite{mueller2022ingp}} & \tabincell{c}{MobileNeRF\\\cite{chen2022mobilenerf}} & \tabincell{c}{Ours\\(full ray)} & {Ours} \\
\midrule
Fernvase & 0.937 & 0.940 & 0.932 & 0.955 & 0.944 & 0.964 & 0.976 \\
Pug & 0.863 & 0.868 & 0.865 & 0.896 & 0.885 & 0.910 & 0.947 \\
Woolly & 0.805 & 0.822 & 0.803 & 0.876 & 0.891 & 0.896 & 0.950 \\
Horse & 0.975 & 0.980 & 0.973 & 0.985 & 0.980 & 0.988 & 0.992 \\
Khady & 0.831 & 0.835 & 0.833 & 0.852 & 0.823 & 0.862 & 0.881 \\
Kitten & 0.949 & 0.954 & 0.949 & 0.967 & 0.942 & 0.969 & 0.979 \\
\midrule
Average & 0.893 & 0.899 & 0.893 & 0.922 & 0.911 & 0.932 & 0.954 \\
\bottomrule
\end{tabular}
\end{table*}
Provide LaTeX code for this table.
\revAdded{Per-scene quantitative SSIM comparison on \ShellyDataset{} dataset.}
Please create a caption for this LaTeX table.