# Monad BFT

## Overview

This repository contains the implementation of the Monad consensus client and JSON-RPC server. Monad consensus collects transactions and produces blocks, which are written to a ledger filestream. These blocks are consumed by Monad execution, which then updates the state of the blockchain. The triedb is a database that stores block information and the blockchain state.

## Getting Started

`git submodule update --init --recursive`

### Using Docker

The most straightforward way to start a con…
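As a rough mental model of the pipeline described in the overview (consensus produces blocks, execution consumes them from the ledger and updates state), here is a toy Python sketch; every name in it is illustrative and unrelated to the actual Rust codebase:

```python
# Toy model of the consensus -> ledger -> execution flow described above.
# All names are illustrative; the real client is a separate Rust codebase.
from dataclasses import dataclass, field

@dataclass
class Block:
    number: int
    txs: list[str]

@dataclass
class Ledger:
    blocks: list[Block] = field(default_factory=list)   # stands in for the ledger filestream

    def append(self, block: Block) -> None:
        self.blocks.append(block)

@dataclass
class Execution:
    state: dict[str, int] = field(default_factory=dict)  # stands in for triedb-backed state

    def apply(self, block: Block) -> None:
        for tx in block.txs:
            account, amount = tx.split(":")
            self.state[account] = self.state.get(account, 0) + int(amount)

ledger, execution = Ledger(), Execution()
ledger.append(Block(number=1, txs=["alice:5", "bob:3"]))   # consensus produces a block
for block in ledger.blocks:                                # execution consumes the ledger
    execution.apply(block)
print(execution.state)                                     # {'alice': 5, 'bob': 3}
```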
# Monad Execution

## Overview

This repository contains the execution component of a Monad node. It handles transaction processing for new blocks and keeps track of the state of the blockchain. Consequently, this repository contains the source code for Category Labs' custom EVM implementation, its database implementation, and the high-level transaction scheduling. The other main repository is monad-bft, which contains the source code for the consensus component.

## Building the source code

Pack…
# Brush

3D Reconstruction for all

[Demo video: BrushSizzleCompressedFrame.mp4]

Massive thanks to @GradeEterna for the beautiful scenes.

Brush is a 3D reconstruction engine using Gaussian splatting. It works on a wide range of systems: macOS/Windows/Linux, AMD/Nvidia/Intel cards, Android, and in a browser. To achieve this, it uses WebGPU-compatible tech and the Burn machine learning framework. Machine learning for real-time rendering has tons of potential, but most…
# Open Battery Information

This project aims to provide tools and information about various batteries in order to aid repair. It is very common for manufacturers to lock the BMS when a fault is detected, to protect the device and the user. This is an important safety feature! So when is it a problem? There is always a chance of this protection triggering falsely, or the fault could have been temporary or even repaired. In that case it would be wasteful to throw out a perfectly good BMS just because its…
# MLX LM

Run LLMs with MLX.

MLX LM is a Python package for generating text and fine-tuning large language models on Apple silicon with MLX. Some key features include:

- Integration with the Hugging Face Hub to easily use thousands of LLMs with a single command.
- Support for quantizing and uploading models to the Hugging Face Hub.
- Low-rank and full-model fine-tuning, with support for quantized models.
- Distributed inference and fine-tuning with mx.distributed.

The easiest way to get started is to instal…
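Once the package is installed, a minimal generation call looks roughly like this (a sketch following the package's documented `load`/`generate` API; the model ID is only an example, and keyword arguments may differ slightly between versions):

```python
# Minimal text-generation sketch with MLX LM (assumes `pip install mlx-lm`).
# The model repo below is just an example Hugging Face Hub ID.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")
text = generate(
    model,
    tokenizer,
    prompt="Write a haiku about Apple silicon.",
    max_tokens=100,
)
print(text)
```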
# Awesome AI Apps

A collection of projects showcasing RAG, agents, workflows, and other AI use cases.

This repository is a comprehensive collection of practical examples, tutorials, and recipes for building powerful LLM-powered applications. From simple chatbots to advanced AI agents, these projects serve as a guide for developers working with various AI frameworks and tools.

Powered by Nebius AI Studio - your one-stop platform for building and deploying AI applications.

## 🚀 Featured AI Agent Frame…
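As a concrete reference point for the projects above, here is a minimal, dependency-free sketch of the RAG pattern many of them implement; the keyword-overlap retrieval and the `call_llm` placeholder are purely illustrative and not tied to any framework in this collection:

```python
# Toy retrieval-augmented generation loop: retrieve relevant snippets,
# stuff them into the prompt, then ask a model. Purely illustrative.

DOCS = [
    "Nebius AI Studio hosts and serves open models.",
    "RAG retrieves relevant context before generation.",
    "Agents call tools in a loop until the task is done.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by naive word overlap with the query.
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder; swap in a real model client here.
    return f"[model answer based on a prompt of {len(prompt)} chars]"

def rag_answer(query: str) -> str:
    context = "\n".join(retrieve(query, DOCS))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

print(rag_answer("What does RAG do before generation?"))
```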
# Docker MCP Plugin and Docker MCP Gateway

docker mcp CLI plugin / MCP Gateway

The MCP Toolkit in Docker Desktop allows developers to configure and consume MCP servers from the Docker MCP Catalog. Underneath, the Toolkit is powered by a Docker CLI plugin: docker-mcp. This repository contains the code of that CLI plugin. It can work in Docker Desktop or independently. The main feature of this CLI is the Docker MCP Gateway, which allows easy and secure running and deployment of MCP servers. See Features…
# DeepResearchAgent

English | 简体中文 | 🌐 Website

## Introduction

DeepResearchAgent is a hierarchical multi-agent system designed not only for deep research tasks but also for general-purpose task solving. The framework leverages a top-level planning agent to coordinate multiple specialized lower-level agents, enabling automated task decomposition and efficient execution across diverse and complex domains.
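The hierarchy described above, a planning agent decomposing a task and dispatching sub-tasks to specialized agents, can be sketched conceptually as follows; the class names and the hard-coded plan are illustrative only, not the project's actual API:

```python
# Conceptual sketch of a top-level planner coordinating specialized sub-agents.
# Names and behavior are illustrative only, not DeepResearchAgent's real API.

class SearchAgent:
    def run(self, subtask: str) -> str:
        return f"search results for '{subtask}'"

class AnalysisAgent:
    def run(self, subtask: str) -> str:
        return f"analysis of '{subtask}'"

class PlanningAgent:
    def __init__(self, workers: dict[str, object]):
        self.workers = workers

    def plan(self, task: str) -> list[tuple[str, str]]:
        # A real planner would use an LLM; here the decomposition is hard-coded.
        return [("search", f"background on {task}"),
                ("analysis", f"key findings about {task}")]

    def solve(self, task: str) -> str:
        results = [self.workers[name].run(subtask) for name, subtask in self.plan(task)]
        return "\n".join(results)

planner = PlanningAgent({"search": SearchAgent(), "analysis": AnalysisAgent()})
print(planner.solve("solid-state batteries"))
```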
# 💥 Flash Linear Attention

🚀 Efficient implementations of state-of-the-art linear attention models

This repo aims to provide a collection of efficient Triton-based implementations of state-of-the-art linear attention models. All implementations are written purely in PyTorch and Triton, making them platform-agnostic. Currently verified platforms include NVIDIA, AMD, and Intel. Pull requests are welcome!

News | Models | Installation | Usage | Token Mixing | Fused Modules | Generation | Hybrid M…
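For orientation, the causal linear attention recurrence that such kernels accelerate can be written as a naive PyTorch reference (a slow readability aid with an explicit per-step loop, not the repo's fused Triton implementations):

```python
# Naive reference for causal linear attention: maintain a running (d_k x d_v)
# state S and a normalizer z, so cost grows linearly with sequence length.
# Readability reference only, not the repo's fused Triton kernels.
import torch

def causal_linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, seq_len, dim); feature map elu(x) + 1 keeps values positive
    q, k = torch.nn.functional.elu(q) + 1, torch.nn.functional.elu(k) + 1
    b, t, d = q.shape
    s = q.new_zeros(b, d, v.shape[-1])   # running sum of outer products k_i ⊗ v_i
    z = q.new_zeros(b, d)                # running sum of k_i for normalization
    out = []
    for i in range(t):
        s = s + torch.einsum("bd,be->bde", k[:, i], v[:, i])
        z = z + k[:, i]
        num = torch.einsum("bd,bde->be", q[:, i], s)
        den = torch.einsum("bd,bd->b", q[:, i], z).unsqueeze(-1) + eps
        out.append(num / den)
    return torch.stack(out, dim=1)

q, k, v = (torch.randn(2, 16, 32) for _ in range(3))
print(causal_linear_attention(q, k, v).shape)  # torch.Size([2, 16, 32])
```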