Fast Rust bundler for JavaScript with a Rollup-compatible API.

🚧 Work in Progress: Rolldown is currently in active development and not yet usable in production.

Rolldown

Rolldown is a JavaScript bundler written in Rust, intended to serve as the future bundler used in Vite. It provides Rollup-compatible APIs and a plugin interface, but will be closer to esbuild in scope. For more information, please check out the documentation at rolldown.rs.

Contributing

We would love to have mor…
Run sing-box/mihomo as a client in the shell

ShellCrash

中文 | English

Features:
- Convenient use of Crash in a shell environment through a management script
- Management of Crash functions
- Online import of Crash share, subscription, and configuration links
- Scheduled tasks, including scheduled configuration-file updates
- Online installation and use of a local web panel for managing built-in rules
- Support for routing mode, native mode and other mode s…
Chat Ollama

This is a Nuxt 3 + Ollama web application, built as an example of the Ollama JavaScript library.

Feature list:
- Model management (list, download, delete)
- Chat with models

Ollama Server

You will need an Ollama server running. You can run it locally by following the Ollama installation guide. By default, the Ollama server runs on http://localhost:11434.

Install and start ChromaDB

# https://hub.docker.com/r/chromadb/chroma/tags
docker pull chromadb/chroma
docker run -d -…
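Once the Ollama server is up, the app talks to it over its HTTP API. Below is a minimal sketch of a chat request payload, assuming the default http://localhost:11434 endpoint; the model name "llama2" is a placeholder for whatever model you have pulled, and actually sending the request requires the server to be running:

```python
import json

# Default Ollama endpoint; change it if your server runs elsewhere.
OLLAMA_URL = "http://localhost:11434/api/chat"

# "llama2" is a placeholder; substitute any model you have pulled.
payload = {
    "model": "llama2",
    "messages": [
        {"role": "user", "content": "Why is the sky blue?"}
    ],
    "stream": False,  # return one JSON response instead of a token stream
}

body = json.dumps(payload)
# To actually send it (requires a running Ollama server):
#   import urllib.request
#   req = urllib.request.Request(
#       OLLAMA_URL, body.encode(), {"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read())
```

Setting `"stream": False` is the simplest way to experiment; by default the endpoint streams one JSON object per token.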
The realtime kanban board for workgroups, built with React and Redux.

Planka

Elegant open source project tracking. Client demo (without server features).

Features
- Create projects, boards, lists, cards, labels and tasks
- Add card members, track time, set a due date, add attachments, write comments
- Markdown support in card descriptions and comments
- Filter by members and labels
- Customizable project background
- Real-time updates
- User notifications
- Internationalization

How to deploy Planka

There are …
🚀 Next-generation all-in-one AI solution: a one-stop Chat + relay API site supporting OpenAI, Midjourney, Claude, iFlytek Spark, Stable Diffusion, DALL·E, ChatGLM, Tongyi Qianwen, Tencent Hunyuan, 360 Zhinao, Baichuan AI, Volcengine Ark, the new Bing, Google Gemini (PaLM2), Moonshot, LocalAI and more. Supports conversation sharing, custom presets, cloud sync, a model marketplace, pay-as-you-go / subscription billing, image parsing, web search, model caching, and a rich, polished admin panel with dashboard statistics.

Chat Nio

🚀 Next Generation AI One-Stop Solution

Website | Docs | SDKs | QQ Group

📝 Features

✨ AI chat
- Rich format compatibility
- Vision model support, with both direct image upload and image URL / Base64 input (e.g. GPT-4 Vision Preview, Gemini Pro Vision)
- DALL-E image generation
- Midjourn…
Training LLMs with QLoRA + FSDP

fsdp_qlora

Training LLMs with Quantized LoRA + FSDP. Read our announcement blog post.

You should treat this script as an alpha/preview release. If you're not comfortable with testing and debugging models, we'd suggest holding off for a few months while the community tests the approach more fully.

Installation

The following steps should work (tested on CUDA 11.7, 11.8 and 12.1):

Clone https://github.com/AnswerDotAI/fsdp_qlora
pip install llama-recipes fastcore --e…
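At the core of (Q)LoRA, the frozen base weight W is augmented by a low-rank update scaled by alpha/r, so only the small A and B matrices are trained. The numpy sketch below illustrates that idea only; it is not this repository's code, and the toy dimensions and the alpha/r scaling convention follow the original LoRA formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 8, 16, 2, 4  # toy sizes; r << min(d_out, d_in)

W = rng.normal(size=(d_out, d_in))     # frozen base weight (quantized in QLoRA)
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialized

def lora_forward(x):
    # Base path plus low-rank update, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# Because B starts at zero, the adapter is initially a no-op:
# lora_forward(x) equals W @ x exactly at initialization.
```

The zero initialization of B is what lets fine-tuning start from the unmodified base model; training then moves only A and B, which hold r*(d_in + d_out) parameters instead of d_out*d_in.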
Learning English by constructing sentences with conjunctions

Earthworm

English | 中文

⚡ Introduction

Constructing sentences with conjunctions helps you learn English better 😊

🚀 How to start?

⚠️ Requirements
- pnpm version >= 8
- Node.js version >= v20
- MySQL version >= 8.0.0
- Redis version >= 5.0.0
- Docker: please make sure it is installed and running successfully on your local machine.

The operations mentioned below are based on the root directory of the…
A Distributed, Fault-Tolerant Task Queue

Documentation · Website · Issues

What is Hatchet?

Hatchet replaces difficult-to-manage legacy queues or pub/sub systems so you can design durable workloads that recover from failure and solve problems like concurrency, fairness, and rate limiting. Instead of managing your own task queue or pub/sub system, you can use Hatchet to distribute your functions betwe…
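Rate limiting of the kind described above is commonly implemented as a token bucket: each dispatch consumes a token, and tokens refill at a fixed rate, allowing short bursts while capping sustained throughput. This is a self-contained illustrative sketch of the concept, not Hatchet's actual API:

```python
import time

class TokenBucket:
    """Allow at most `rate` dispatches per second, bursting up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def try_dispatch(self) -> bool:
        now = time.monotonic()
        # Add tokens accrued since the last check, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=3)
# The first `capacity` tasks pass immediately; later ones wait for refill.
results = [bucket.try_dispatch() for _ in range(5)]
# → [True, True, True, False, False]
```

A real distributed queue has to enforce this across many workers (typically with shared state in a database or broker), which is exactly the operational burden the project takes off your hands.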
Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization (https://arxiv.org/pdf/2401.06118.pdf)

AQLM

Official PyTorch implementation of Extreme Compression of Large Language Models via Additive Quantization.

Inference Demo

Learn how to run the prequantized models using these Google Colab examples:
- Basic AQLM generation
- Streaming with GPU/CPU
- Inference with CUDA graphs (3x speedup)
- Fine-tuning with PEFT

Models

This repository is current…
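Additive quantization, the "AQ" in AQLM, represents each group of weights as the sum of one codeword from each of M learned codebooks, so only M small indices are stored per group instead of the full vector. The numpy sketch below shows the decoding step with toy sizes; it is illustrative only, not the repository's optimized kernels:

```python
import numpy as np

rng = np.random.default_rng(0)

M, K, g = 2, 256, 8  # M codebooks of K codewords each, group size g
codebooks = rng.normal(size=(M, K, g))  # learned offline during quantization

def decode(codes):
    # codes holds one index per codebook; the reconstructed group
    # is the elementwise sum of the selected codewords.
    return sum(codebooks[m, codes[m]] for m in range(M))

codes = np.array([3, 17])
w_hat = decode(codes)  # reconstructed weight group, shape (g,)
# Storage per group: M * log2(K) = 16 bits, versus g * 16 bits for fp16.
```

The compression win comes from amortizing the codebooks over the whole model: they are stored once, while every weight group costs only its M indices.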