Python scraper based on AI
Updated Jun 11, 2024 - Python
Unify Efficient Fine-Tuning of 100+ LLMs
Chat locally using leading open models built by the community, optimized and accelerated by NVIDIA's enterprise-ready inference runtime
Foundation model benchmarking tool. Run any model on Amazon SageMaker and benchmark its performance across instance types and serving stack options.
🤖 The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. Drop-in replacement for OpenAI that runs on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers, and many more model architectures. Generates text, audio, video, and images, with voice-cloning capabilities.
V.I.S.O.R., my in-development assistant
On-device LLM Inference Powered by X-Bit Quantization
LlamaIndex is a data framework for your LLM applications
Your AI second brain. Get answers to your questions, whether they are online or in your own notes. Use online AI models (e.g. GPT-4) or private, local LLMs (e.g. Llama 3). Self-host locally or use our cloud instance. Access from Obsidian, Emacs, the desktop app, the web, or WhatsApp.
A simple chatbot made with Ollama and the Vercel AI SDK.
Fully configurable RAG pipeline for Bengali-language RAG applications. Supports both local and Hugging Face models; built with LangChain.
Devon: An open-source pair programmer