wb

whitebill

AI & ML interests

None yet

Recent Activity

updated a collection 12 days ago
my1
liked a model 15 days ago
ruslandev/llama-3-8b-gpt-4o
updated a collection 15 days ago
my1

Organizations

whitebill's activity

Reacted to averoo's post with 👍 26 days ago
Hello, researchers! I've tried to make reading HF Daily Papers easier and built a tool that writes reviews with LLMs like Claude 3.5 and GPT-4o, and sometimes FLUX.

📚 Classification by topics
📅 Sorting by publication date and HF addition date
🔄 Syncing every 2 hours
💻 Hosted on GitHub
🌍 English, Russian, and Chinese
📈 Top by week/month (in progress)

👉 https://hfday.ru

Let me know what you think of it.
updated a collection about 1 month ago
updated a collection about 2 months ago
Reacted to singhsidhukuldeep's post with 👍 about 2 months ago
Researchers have developed a novel approach called Logic-of-Thought (LoT) that significantly enhances the logical reasoning capabilities of large language models (LLMs).

Here is how Logic-of-Thought (LoT) is implemented, step by step:

-- 1. Logic Extraction

1. Use Large Language Models (LLMs) to identify sentences containing conditional reasoning relationships from the input context.
2. Generate a collection of sentences with logical relationships.
3. Use LLMs to extract the set of propositional symbols and logical expressions from the collection.
4. Identify propositions with similar meanings and represent them using identical propositional symbols.
5. Analyze the logical relationships between propositions based on their natural language descriptions.
6. Add negation (¬) for propositions that express opposite meanings.
7. Use implication (→) to connect propositional symbols when a conditional relationship exists (an illustrative sketch of the resulting representation follows this list).
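
A minimal Python sketch (not from the paper) of what this extraction phase might produce for a toy two-sentence context; the propositions and symbols below are hypothetical:

# Toy context: "If it rains, the ground gets wet. If the ground is wet, the match is cancelled."
# Propositional symbols mapped to their natural-language descriptions (hypothetical).
propositions = {
    "A": "it rains",
    "B": "the ground is wet",
    "C": "the match is cancelled",
}
# Extracted logical expressions: each pair (p, q) represents the implication p → q.
expressions = {("A", "B"), ("B", "C")}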

-- 2. Logic Extension

1. Apply logical reasoning laws to the collection of logical expressions from the Logic Extraction phase.
2. Use a Python program to implement logical deduction and expand the expressions (a minimal sketch follows this list).
3. Apply logical laws such as Double Negation, Contraposition, and Transitivity to derive new logical expressions.
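
As one possible illustration (not the authors' code), the sketch below expands a set of implications with Contraposition, Transitivity, and Double Negation until no new expressions appear; it reuses the hypothetical pair representation from the extraction sketch above:

def negate(p):
    # Double Negation: negating "¬A" collapses back to "A".
    return p[1:] if p.startswith("¬") else "¬" + p

def extend(expressions):
    expressions = set(expressions)
    changed = True
    while changed:
        new = set()
        for (a, b) in expressions:
            new.add((negate(b), negate(a)))  # Contraposition: a → b gives ¬b → ¬a
            for (c, d) in expressions:
                if b == c:
                    new.add((a, d))          # Transitivity: a → b and b → c give a → c
        changed = not new <= expressions
        expressions |= new
    return expressions

print(extend({("A", "B"), ("B", "C")}))  # derives, among others, ("A", "C") and ("¬C", "¬A")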

-- 3. Logic Translation

1. Use LLMs to translate the newly generated logical expressions into natural language descriptions (a simplified stand-in sketch follows this list).
2. Combine the natural language descriptions of propositional symbols according to the extended logical expressions.
3. Incorporate the translated logical information as a new part of the original input prompt.
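
The post delegates this translation to an LLM; as a simplified, purely illustrative stand-in, a template-based version over the hypothetical propositions above could look like this:

def describe(symbol, propositions):
    # Render a (possibly negated) propositional symbol in natural language.
    if symbol.startswith("¬"):
        return "it is not the case that " + propositions[symbol[1:]]
    return propositions[symbol]

def translate(expression, propositions):
    # Turn a (premise, conclusion) implication into an "If ..., then ..." sentence.
    premise, conclusion = expression
    return f"If {describe(premise, propositions)}, then {describe(conclusion, propositions)}."

propositions = {"A": "it rains", "C": "the match is cancelled"}
print(translate(("¬C", "¬A"), propositions))
# -> "If it is not the case that the match is cancelled, then it is not the case that it rains."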

-- 4. Integration with Existing Prompting Methods

1. Combine the LoT-generated logical information with the original prompt.
2. Use this enhanced prompt with existing prompting methods like Chain-of-Thought (CoT), Self-Consistency (SC), or Tree-of-Thoughts (ToT).
3. Feed the augmented prompt to the LLM to generate the final answer (see the prompt-assembly sketch below).
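
A minimal sketch of what the final augmented prompt could look like; the question, the injected logical information, and the Chain-of-Thought trigger phrase are all illustrative placeholders:

question = "It rained today. Did the match take place?"

# Natural-language renderings of the LoT-derived expressions (hypothetical).
lot_information = (
    "If it rains, the ground is wet. If the ground is wet, the match is cancelled. "
    "If the match is not cancelled, then it did not rain."
)

augmented_prompt = (
    question
    + "\n\nAdditional logical information: " + lot_information
    + "\n\nLet's think step by step."  # Chain-of-Thought style instruction
)
print(augmented_prompt)  # this string would then be sent to the LLM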

What do you think about LoT?
updated a collection about 2 months ago
Reacted to MohamedRashad's post with 👍 2 months ago
Reacted to Wauplin's post with 🚀 2 months ago
🚀 Exciting News! 🚀

We've just released huggingface_hub v0.25.0 and it's packed with powerful new features and improvements!

✨ Top Highlights:

• 📁 Upload large folders with ease using huggingface-cli upload-large-folder. Designed for your massive models and datasets. Much recommended if you struggle to upload your Llama 70B fine-tuned model 🤡
• 🔎 Search API: new search filters (gated status, inference status) and fetch trending score.
• ⚡ InferenceClient: major improvements simplifying chat completions and handling async tasks better (a short usage sketch follows this list).
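
A quick, non-authoritative illustration of the Python side of these features; the repo id, local path, and model name below are placeholders, and exact parameters may differ slightly, so check the release notes for the authoritative API:

from huggingface_hub import HfApi, InferenceClient

# Upload a huge local folder in resumable chunks (placeholder repo id and path).
api = HfApi()
api.upload_large_folder(
    repo_id="username/my-model",
    repo_type="model",
    folder_path="./checkpoints",
)

# Simplified chat completion via InferenceClient (placeholder model name).
client = InferenceClient("meta-llama/Meta-Llama-3-8B-Instruct")
response = client.chat_completion(
    messages=[{"role": "user", "content": "Summarize this release in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)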

We've also introduced tons of bug fixes and quality-of-life improvements - thanks to the awesome contributions from our community! 💪

💡 Check out the release notes: Wauplin/huggingface_hub#8

Want to try it out? Install the release with:

pip install huggingface_hub==0.25.0

Reacted to aaditya's post with 👍 3 months ago
Last Week in Medical AI: Top Research Papers/Models
๐Ÿ…(September 1 - September 7, 2024)

Medical LLM & Other Models:
- CancerLLM: Large Language Model in Cancer Domain
- MedUnA: Vision-Language Models for Medical Image
- Foundation Model for Robotic Endoscopic Surgery
- Med-MoE: MoE for Medical Vision-Language Models
- CanvOI: Foundation Model for Oncology
- UniUSNet: Ultrasound Disease Prediction
- DHIN: Decentralized Health Intelligence Network

Medical Benchmarks and Evaluations:
- TrialBench: Clinical Trial Datasets & Benchmark
- LLMs for Medical Q&A Evaluation
- MedFuzz: Exploring Robustness of Medical LLMs
- MedS-Bench: Evaluating LLMs in Clinical Tasks
- DiversityMedQA: Assessing LLM Bias in Diagnosis
- LLM Performance in Gastroenterology

LLM Digital Twins:
- Digital Twins for Rare Gynecological Tumors
- DT-GPT: Digital Twins for Patient Health Forecasting

Medical LLM Applications:
- HIPPO: Explainable AI for Pathology
- LLMs vs Humans in CBT Therapy
- ASD-Chat: LLMs for Autistic Children
- LLMs for Mental Health
- LLMs for Postoperative Risk Prediction

Frameworks and Methodologies:
- Rx Strategist: LLM-based Prescription Verification
- Medical Confidence Elicitation
- Guardrails for Medical LLMs

Check the full thread: https://x.com/OpenlifesciAI/status/1832476252260712788

Thank you for your continued support and love for this series! Stay up-to-date with weekly updates on Medical LLMs, datasets, and top research papers by following @aaditya 🤗