llama3
Here are 329 public repositories matching this topic...
Chat locally using leading open models built by the community, optimized and accelerated by NVIDIA's enterprise-ready inference runtime
-
Updated
Jun 11, 2024 - Python
Detailed description given in the README
-
Updated
May 19, 2024 - Python
Run a "super-human" AI from the year 2124 in your terminal with only three lines of code, today!
-
Updated
May 21, 2024 - Python
A copy of groq cloud's demo app, with Llama-3-70B
-
Updated
May 27, 2024 - Python
This project is intended for users who want to extract and analyze information from multiple PDF files.
-
Updated
May 28, 2024 - Python
⚗️ Llama 3 8B Instruct model repository, trained by Meta and managed with DVC
-
Updated
Jun 5, 2024 - Python
How to create a local RAG (Retrieval-Augmented Generation) pipeline that processes your PDF file(s) and lets you chat with them using Ollama and LangChain!
-
Updated
May 11, 2024 - Jupyter Notebook
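The entry above describes a local RAG pipeline built with Ollama and LangChain. As a rough, dependency-free sketch of the retrieval step only (word-overlap scoring is a simplified stand-in for the embedding search that LangChain and Ollama would normally provide; all function names here are illustrative, not from the repository):

```python
# Minimal sketch of the retrieval step in a RAG pipeline.
# In a real pipeline, chunks come from a PDF loader and scoring
# uses vector embeddings; word overlap is a simplified stand-in.

def chunk_text(text, chunk_size=40, overlap=10):
    """Split text into overlapping word-based chunks."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for i in range(0, max(len(words) - overlap, 1), step):
        chunks.append(" ".join(words[i:i + chunk_size]))
    return chunks

def retrieve(query, chunks, k=2):
    """Return the k chunks sharing the most words with the query."""
    q = set(query.lower().split())
    ranked = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return ranked[:k]
```

The retrieved chunks would then be inserted into the prompt sent to the local model, which is the "augmented generation" half of RAG.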
Considering how to analyse book collections, Large Language Model style
-
Updated
May 29, 2024 - Python
-
Updated
May 15, 2024 - Java
Adapted BERTopic pipeline for Topic Modeling the arXiv dataset
-
Updated
Jun 8, 2024 - Python
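BERTopic derives topic keywords with a class-based TF-IDF (c-TF-IDF) over clusters of documents. A hedged, dependency-free sketch of that keyword-extraction idea (the real pipeline first embeds and clusters the documents; here cluster assignments are given by hand, and the scoring formula is a simplification of BERTopic's):

```python
import math
from collections import Counter

def ctfidf_keywords(clusters, top_n=3):
    """For each cluster (a list of documents), rank words by a
    class-based TF-IDF: frequency within the cluster, scaled
    down for words that appear in many clusters."""
    # Treat each cluster as one "class document".
    class_words = [Counter(" ".join(docs).lower().split())
                   for docs in clusters]
    n_clusters = len(clusters)
    # Class frequency: how many clusters contain each word.
    df = Counter()
    for wc in class_words:
        df.update(wc.keys())
    keywords = []
    for wc in class_words:
        total = sum(wc.values())
        scored = {w: (c / total) * math.log(1 + n_clusters / df[w])
                  for w, c in wc.items()}
        keywords.append(sorted(scored, key=scored.get, reverse=True)[:top_n])
    return keywords
```

For the arXiv use case above, the clusters would come from clustering abstract embeddings, and the top-ranked words become each topic's label.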
An attempt to run Ollama on Kubernetes
-
Updated
Jun 9, 2024
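Running Ollama on Kubernetes typically amounts to deploying the official `ollama/ollama` container image and exposing its API port (11434). A minimal deployment sketch using kubectl (resource names are illustrative, and a real setup would add a persistent volume so downloaded models survive pod restarts):

```shell
# Deploy the official Ollama image.
kubectl create deployment ollama --image=ollama/ollama

# Expose Ollama's API port inside the cluster.
kubectl expose deployment ollama --port=11434 --target-port=11434

# Pull a model inside the running pod.
kubectl exec deploy/ollama -- ollama pull llama3
```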
Telegram bot powered by Llama 2 and Llama 3 AI
-
Updated
Jun 12, 2024 - Java