---
title: ArtyLLaMA
emoji: πŸ¦™πŸŽ¨
colorFrom: indigo
colorTo: pink
sdk: static
pinned: false
---
# ArtyLLaMA: Empowering AI Creativity in the Open Source Community πŸ¦™πŸŽ¨

ArtyLLaMA is an experimental chat interface for Open Source Large Language Models, leveraging the power of Ollama, OpenAI, and Anthropic. It features dynamic content generation and display through an "Artifacts-like" system, making AI-assisted creativity more accessible and interactive.

## Project Description

ArtyLLaMA is not a model itself, but a framework that allows users to interact with various language models. It provides a user-friendly interface for generating creative content, code, and visualizations using state-of-the-art language models.

### Key Features:

- πŸ¦™ **Multi-Provider Integration**: Seamless support for Ollama, OpenAI, and Anthropic models
- 🎨 **Dynamic Artifact Generation**: Create and display content artifacts during chat interactions (a minimal sketch follows this list)
- πŸ–₯️ **Real-time HTML Preview**: Instantly visualize HTML artifacts with interactive canvas
- πŸ”„ **Multi-Model Support**: Choose from multiple language models across providers
- πŸ“± **Responsive Design**: Mobile-friendly interface built with Tailwind CSS
- πŸŒ™ **Dark Mode**: Easy on the eyes with a default dark theme
- πŸš€ **Local Inference**: Run models locally for privacy and customization
- πŸ–‹οΈ **Code Syntax Highlighting**: Enhanced readability for various programming languages
- 🎭 **SVG Rendering Support**: Display AI-created vector graphics
- 🌐 **3D Visualization**: Utilize Three.js for 3D visualizations and simulations
- πŸ” **User Authentication**: JWT-based system for user registration and login
- πŸ“š **Personalized Chat History**: Store and retrieve messages based on user ID
- πŸ” **Semantic Search**: Cross-model semantic search capabilities in chat history
- πŸ”€ **Dynamic Embedding Collections**: Support for multiple embedding models with automatic collection creation

## Intended Use

ArtyLLaMA is designed for developers, researchers, and creative professionals who want to:
- Explore the capabilities of various language models
- Generate and iterate on creative content, including code, designs, and written text
- Prototype AI-assisted applications and workflows
- Experiment with local and cloud-based AI inference

## Limitations

- Local setup requires installation of Ollama for certain features
- Performance depends on the user's hardware capabilities or chosen cloud provider
- Does not include built-in content moderation (users should implement their own safeguards)

## Ethical Considerations

Users of ArtyLLaMA should be aware of:
- Potential biases present in the underlying language models
- The need for responsible use and content generation
- Privacy implications of using AI-generated content and storing chat history

## Technical Specifications

- **Frontend**: React-based with Tailwind CSS
- **Backend**: Node.js with Express.js (a minimal wiring sketch follows this list)
- **Required Libraries**: React, Express.js, Tailwind CSS, Three.js, and others (see package.json)
- **Supported Model Formats**: Those supported by Ollama, OpenAI, and Anthropic
- **Hardware Requirements**: Varies based on the chosen model and deployment method
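
As a rough illustration of how this frontend/backend split can fit together, the sketch below shows a minimal Express route that forwards a chat request to a locally running Ollama instance (default port 11434) and returns the assistant's reply to the React client. The route path, request shape, and port are assumptions for illustration only; this is not ArtyLLaMA's actual server code.

```typescript
// Minimal sketch of a chat proxy route -- illustrative only, not ArtyLLaMA's
// actual backend. Assumes Node 18+ (global fetch) and a local Ollama instance.
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical endpoint: the frontend posts { model, messages } and gets a reply.
app.post("/api/chat", async (req, res) => {
  const { model, messages } = req.body;
  try {
    const upstream = await fetch("http://localhost:11434/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, messages, stream: false }),
    });
    const data = await upstream.json();
    // The React client can scan this reply for artifacts before rendering it.
    res.json({ reply: data.message?.content ?? "" });
  } catch {
    res.status(502).json({ error: "Upstream model request failed" });
  }
});

app.listen(3001, () => console.log("Example API listening on :3001"));
```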

## Getting Started

1. Clone the repository: `git clone https://github.com/kroonen/ArtyLLaMA.git`
2. Install dependencies: `npm install`
3. Set up environment variables (see README for details on API keys; an illustrative example follows this list)
4. Run the application: `npm run dev`
5. Access the interface at `http://localhost:3000`
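
For step 3, the environment file typically provides API keys for any cloud providers you want to enable (Ollama needs no key for local inference). The variable names below are common placeholders, not necessarily the exact names ArtyLLaMA expects; the README remains the authoritative reference.

```
# Illustrative .env sketch -- placeholder names, not confirmed ArtyLLaMA settings.
OPENAI_API_KEY=sk-...        # only needed for OpenAI models
ANTHROPIC_API_KEY=sk-ant-... # only needed for Anthropic models
```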

For more detailed instructions, including Docker setup, visit our [GitHub repository](https://github.com/ArtyLLaMA/ArtyLLaMA).

## License

ArtyLLaMA is distributed under the ArtyLLaMA Research Project License. This license allows free use for non-commercial, academic, and research purposes with attribution. Commercial use requires explicit written permission. See the [LICENSE](https://github.com/kroonen/ArtyLLaMA/blob/main/LICENSE) file for full details.

## Citation

If you use ArtyLLaMA in your research or projects, please cite it as follows:

```bibtex
@software{artyllama2024,
  author = {Robin Kroonen},
  title = {ArtyLLaMA: Empowering AI Creativity in the Open Source Community},
  year = {2024},
  url = {https://github.com/kroonen}
}
```

## Contact

For questions, feedback, or collaborations, please reach out to:
- GitHub: [https://github.com/kroonen](https://github.com/kroonen)
- Email: robin@kroonen.ai
- Twitter: [@rob_x_ai](https://x.com/rob_x_ai)

We welcome contributions and feedback from the community, subject to the terms of our license!