TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones. Zhengqing Yuan❁, Zhaoxu Li❁, Lichao Sun❋ (❁Visiting Students at LAIR Lab, Lehigh University; ❋Lehigh University). News: [Dec. 28, 2023] Breaking! We release the code of our TinyGPT-V. TinyGPT-V Training Process, TinyGPT-V Model Structure, TinyGPT-V Results. Getting Started: Installation. 1. Prepare the code and the environment: Git clone our reposi…
GPIOViewer: an Arduino library to see live GPIO pins on ESP32 boards. It transforms the way you troubleshoot your microcontroller projects. YouTube tutorial: https://youtu.be/UxkOosaNohU. Installation (Arduino IDE, Version 2): ℹ️ Make sure you have the latest ESP32 boards by Espressif Systems in your Board Manager. Install the GPIOViewer library with the Arduino IDE Library Manager, or download the latest release and install the library in th…
Next Auth v5 - Advanced Guide (2024). This is a repository for Next Auth v5 - Advanced Guide (2024). VIDEO TUTORIAL. Key Features: 🔐 Next-auth v5 (Auth.js) 🚀 Next.js 14 with server actions 🔑 Credentials Provider 🌐 OAuth Provider (social login with Google & GitHub) 🔒 Forgot-password functionality ✉️ Email verification 📱 Two-factor verification 👥 User roles (Admin & User) 🔓 Login component (opens in redirect or modal) 📝 Register component 🤔 Forgot-password component ✅ Verification componen…
Instant voice cloning by MyShell. Join our Discord community https://discord.gg/myshell and select the Developer role upon joining to gain exclusive access to our developer-only channel! Don't miss out on valuable discussions and collaboration opportunities. Paper | Website.
AnyText: Multilingual Visual Text Generation And Editing. 📌News: [2023.12.28] - Online demo is available here! [2023.12.27] - 🧨We released the latest checkpoint (v1.1) and inference code; check it on ModelScope (in Chinese). [2023.12.05] - The paper is available here. 💡Methodology: AnyText comprises a diffusion pipeline with two primary elements: an auxiliary latent module and a text embedding module. The former uses inputs like text glyph, position, and masked image to generate latent features f…
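To make the two-module layout described above concrete, here is a minimal PyTorch-style sketch of how such a pipeline could be wired. It only illustrates the data flow from the excerpt; every class and argument name (AuxiliaryLatentModule, TextEmbeddingModule, glyph, position, masked_image) is hypothetical and not AnyText's actual code.

```python
import torch
import torch.nn as nn

class AuxiliaryLatentModule(nn.Module):
    """Hypothetical stand-in: fuses glyph, position, and masked-image inputs
    into a latent feature map that conditions the diffusion backbone."""
    def __init__(self, out_channels=4):
        super().__init__()
        # three single-channel conditioning maps -> latent channels
        self.fuse = nn.Conv2d(3, out_channels, kernel_size=3, padding=1)

    def forward(self, glyph, position, masked_image):
        x = torch.cat([glyph, position, masked_image], dim=1)
        return self.fuse(x)

class TextEmbeddingModule(nn.Module):
    """Hypothetical stand-in: maps tokenized prompt text to embeddings
    consumed by the diffusion model's cross-attention."""
    def __init__(self, vocab_size=49408, dim=768):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, token_ids):
        return self.embed(token_ids)

# Toy usage with random inputs, just to show shapes and data flow.
aux = AuxiliaryLatentModule()
txt = TextEmbeddingModule()
glyph = torch.rand(1, 1, 64, 64)
position = torch.rand(1, 1, 64, 64)
masked_image = torch.rand(1, 1, 64, 64)
latent_cond = aux(glyph, position, masked_image)       # (1, 4, 64, 64)
text_cond = txt(torch.randint(0, 49408, (1, 77)))      # (1, 77, 768)
print(latent_cond.shape, text_cond.shape)
```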
Owshen rewards and airdrops: 🔺 The Bermuda Airdrop/Testnet 🔺. Owshen is an innovative privacy platform developed for EVM-based blockchains. Here you will find the instructions to contribute to Owshen's very first airdrop and testnet, known as the Bermuda Testnet. Claim your airdrop! Before starting our testnet, we are running a zk-airdrop to distribute the very first DIVE tokens to our users. In order to participate in this airdrop, you will need an Owshen Address. This is how you can get an O…
atomicalsir: Atomicals mining manager. Usage: atomicalsir [OPTIONS] <PATH>. Arguments: <PATH> is the path to the atomicals-js repository's folder. Options: --max-fee <VALUE> sets the maximum acceptable fee; this value is passed to atomicals-js's `--satsbyte` flag if the current network's priority fee is larger than this value [default: 150]. --no-unconfirmed-txs-check Di…
Jan - Bring AI to your Desktop. Getting Started - Docs - Changelog - Bug reports - Discord. ⚠️ Jan is currently in development: expect breaking changes and bugs! Jan is an open-source ChatGPT alternative that runs 100% offline on your computer. Jan runs on any hardware: from PCs to multi-GPU clusters, Jan supports universal architectures: Nvidia GPUs (fast), Apple M-series …
Mixtral offloading: run Mixtral-8x7B models in Colab or on consumer desktops. This project implements efficient inference of Mixtral-8x7B models. How does it work? In summary, we achieve efficient inference of Mixtral-8x7B models through a combination of techniques: Mixed quantization with HQQ: we apply separate quantization schemes for attention layers and experts to fit the model into the combined GPU and CPU memory. MoE offloading strategy: each expert per layer is offloaded separately and only b…
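The offloading idea in the excerpt above (experts stay in CPU memory and are moved to the GPU only when the router selects them, within a fixed GPU budget) can be sketched in a few lines of PyTorch. This is a simplified illustration under assumed names (OffloadedExperts, max_on_gpu), not the project's actual implementation, which additionally combines offloading with HQQ mixed quantization.

```python
import copy
from collections import OrderedDict

import torch
import torch.nn as nn

class OffloadedExperts(nn.Module):
    """Hypothetical sketch: all experts live on the CPU; an expert is copied
    to the GPU only when selected, and the least-recently-used GPU copy is
    evicted once the budget (max_on_gpu) is exceeded."""
    def __init__(self, experts, max_on_gpu=2, device=None):
        super().__init__()
        self.experts = nn.ModuleList(experts)   # CPU-resident master copies
        self.max_on_gpu = max_on_gpu
        self.device = device or ("cuda" if torch.cuda.is_available() else "cpu")
        self._gpu_cache = OrderedDict()         # expert index -> device copy

    def _fetch(self, idx):
        if idx in self._gpu_cache:
            self._gpu_cache.move_to_end(idx)    # mark as recently used
        else:
            if len(self._gpu_cache) >= self.max_on_gpu:
                # drop the least-recently-used copy so its memory can be freed
                self._gpu_cache.popitem(last=False)
            self._gpu_cache[idx] = copy.deepcopy(self.experts[idx]).to(self.device)
        return self._gpu_cache[idx]

    def forward(self, x, expert_idx):
        return self._fetch(expert_idx)(x.to(self.device))

# Toy usage: 8 small CPU-resident "experts", at most 2 on the accelerator.
experts = [nn.Linear(16, 16) for _ in range(8)]
moe = OffloadedExperts(experts, max_on_gpu=2)
y = moe(torch.randn(1, 16), expert_idx=3)
print(y.shape)
```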