import streamlit as st

st.set_page_config(page_title="Home", page_icon="🏠")
# UI strings for both supported locales; string contents stay flush-left
# so Markdown rendering is not affected by Python indentation.
translations = {
    'en': {
        'title': 'Vision Papers 📚',
        'introduction':
            """
This app contains all of my paper posts on [X](https://x.com/mervenoyann) for your convenience!
Start browsing papers in the left tab 🔖
This app was made by an amazing human being called [Loïck Bourdois](https://x.com/BdsLoick), so please show it some love and like the Space if you find it useful 💖
""",
        'extra_content':
            """
Beyond this pack of paper summaries, if you'd like to dig deeper into vision language models, you can check out some of the other resources I've been working on 👩🔬:
* This [collection](https://hf.co/collections/merve/vision-language-models-papers-66264531f7152ac0ec80ceca) of papers (listing models that are not summarized in this Space but may still be of interest) 📄
* Tasks these models can handle, such as [Document Question Answering](https://huggingface.co/tasks/document-question-answering), [Image-Text-to-Text](https://huggingface.co/tasks/image-text-to-text) or [Visual Question Answering](https://huggingface.co/tasks/visual-question-answering)
* Blog posts on [ConvNets](https://merveenoyan.medium.com/complete-guide-on-deep-learning-architectures-chapter-1-on-convnets-1d3e8086978d), [Autoencoders](https://merveenoyan.medium.com/complete-guide-on-deep-learning-architectures-part-2-autoencoders-293351bbe027), [explaining vision language models](https://huggingface.co/blog/vlms), [fine-tuning them with TRL](https://huggingface.co/blog/dpo_vlm), and announcements of models such as [PaliGemma](https://huggingface.co/blog/paligemma) ✍️
* A GitHub repository containing various notebooks for taking full advantage of these models (optimization, quantization, distillation, fine-tuning, etc.): [smol-vision](https://github.com/merveenoyan/smol-vision) ⭐
* A 12-minute summary video on YouTube 🎥
"""
    },
    'fr': {
        'title': 'Papiers de vision 📚',
        'introduction':
            """
Cette appli contient tous les résumés de papiers que j'ai publiés sur [X](https://x.com/mervenoyann) afin de vous faciliter la tâche !
Vous avez juste à parcourir l'onglet de gauche 🔖
Cette application a été créée par un être humain extraordinaire, [Loïck Bourdois](https://x.com/BdsLoick), alors s'il vous plaît montrez-lui un peu d'amour et aimez le Space si vous le trouvez utile 💖
""",
        'extra_content':
            """
Au-delà de ce pack de résumés de papiers, si vous souhaitez creuser le sujet des modèles de langage/vision, vous pouvez consulter d'autres ressources sur lesquelles j'ai travaillé 👩🔬:
* Cette [collection](https://hf.co/collections/merve/vision-language-models-papers-66264531f7152ac0ec80ceca) de papiers sur le sujet (listant des modèles non résumés dans ce Space qui pourraient tout de même vous intéresser) 📄
* Les tâches pouvant être traitées par ces modèles comme le [Document Question Answering](https://huggingface.co/tasks/document-question-answering), l'[Image-Text-to-Text](https://huggingface.co/tasks/image-text-to-text) ou encore le [Visual Question Answering](https://huggingface.co/tasks/visual-question-answering)
* Des articles de blog portant sur [les ConvNets](https://merveenoyan.medium.com/complete-guide-on-deep-learning-architectures-chapter-1-on-convnets-1d3e8086978d), [les auto-encodeurs](https://merveenoyan.medium.com/complete-guide-on-deep-learning-architectures-part-2-autoencoders-293351bbe027), [l'explication des modèles de langage/vision](https://huggingface.co/blog/vlms), leur [finetuning avec TRL](https://huggingface.co/blog/dpo_vlm) ou encore l'annonce de modèles comme [PaliGemma](https://huggingface.co/blog/paligemma) ✍️
* Un répertoire GitHub contenant divers notebooks pour tirer le meilleur parti de ces modèles (optimisations, quantization, distillation, finetuning, etc.) : [smol-vision](https://github.com/merveenoyan/smol-vision) ⭐
* Une vidéo YouTube de synthèse en 12 minutes 🎥
"""
    }
}
def language_selector():
    """Render a flag-based language selectbox and return the chosen locale code."""
    languages = {'EN': '🇬🇧', 'FR': '🇫🇷'}
    selected_lang = st.selectbox(
        'Language',
        options=list(languages.keys()),
        format_func=lambda x: languages[x],
        key='lang_selector',
        label_visibility='collapsed',  # keep the label for accessibility, hide it visually
    )
    return 'en' if selected_lang == 'EN' else 'fr'
left_column, right_column = st.columns([5, 1])

# Language selector in the right column
with right_column:
    lang = language_selector()

# Title in the left column
with left_column:
    st.title(translations[lang]['title'])

# Main app content
# st.image("Turkish_girl_from_back_sitting_at_a_desk_writing_view_on_an_old_castle_in_a_window_wehre_a_cat_lying_ghibli_anime_like_hd.jpg", use_column_width=True)
st.markdown(""" """)
st.write(translations[lang]['introduction'])
st.markdown(""" """)
st.write(translations[lang]['extra_content'])
st.video("https://www.youtube.com/watch?v=IoGaGfU1CIg", format="video/mp4")