slawlor / ractor
Rust actor framework
Pronounced R-aktor
A pure-Rust actor framework. Inspired by Erlang's gen_server, with the speed + performance of Rust!
ractor tries to solve the problem of building and maintaining an Erlang-like actor framework in Rust. It gives you a set of generic primitives and helps automate the supervision tree and management of your actors, along with the traditional actor message-processing logic. It's built heavily on tokio, which is a hard requirement for ractor.
ractor is a modern actor framework written in 100% Rust with NO unsafe code.
Additionally ractor has a companion library, ractor_cluster which is needed for ractor to be deployed in a distributed (cluster-like) scenario. ractor_cluster is not yet ready for public release, but is work-in-progress and coming shortly!
There are other actor frameworks written in Rust (Actix, riker, or just actors in Tokio) plus a whole list compiled on this Reddit post.
Ractor tries to be different by modelling more on a pure Erlang gen_server. This means that each actor can also simply be a supervisor to other actors with no additional cost (simply link them together!). Additionally we're aiming to maintain close logic with Erlang's patterns, as they work quite well and are well utilized in the industry.
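To illustrate that supervision model, here is a minimal sketch of linking a child actor under a supervisor at spawn time. MySupervisor and MyChild are hypothetical actors (both implementing ractor::Actor with unit Arguments, as in the example later in this README); the spawn_linked call is the assumed linking entry point.

// Sketch: spawning a child linked under a supervisor. MySupervisor and
// MyChild are hypothetical stand-ins for actors implementing ractor::Actor.
async fn spawn_tree() -> Result<(), ractor::ActorProcessingErr> {
    let (supervisor, _handle) = ractor::Actor::spawn(None, MySupervisor, ()).await?;
    // spawn_linked places the child under the supervisor in the supervision
    // tree, so the supervisor receives supervision events if the child
    // panics, stops, or exits
    let (_child, _child_handle) =
        ractor::Actor::spawn_linked(None, MyChild, (), supervisor.get_cell()).await?;
    Ok(())
}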
Additionally, we wrote ractor without building it on some kind of "Runtime" or "System" which needs to be spawned first. Actors can be run independently, in conjunction with other basic tokio runtimes, with little additional overhead.
We currently have full support for:
- Named actor registry (ractor::registry), from Erlang's Registered processes
- Process groups (ractor::pg), from Erlang's pg module

On our roadmap is to add more of the Erlang functionality, including potentially a distributed actor cluster.
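As a rough illustration of the registry and pg support listed above, a named lookup plus a group join might look something like the following sketch (signatures assumed from the crate's registry and pg modules; the actor name and group name are hypothetical).

use ractor::ActorCell;

// Sketch: using the named registry and process groups, assuming an actor was
// already spawned with a name, e.g. Actor::spawn(Some("my_actor".to_string()), ...).
fn registry_and_pg_example() {
    // Named registry lookup, akin to Erlang's registered processes
    let maybe_actor: Option<ActorCell> = ractor::registry::where_is("my_actor".to_string());

    if let Some(actor) = maybe_actor {
        // Join the actor to a named process group, akin to Erlang's pg module
        ractor::pg::join("my_group".to_string(), vec![actor]);
    }

    // Retrieve all current members of the group
    let _members: Vec<ActorCell> = ractor::pg::get_members(&"my_group".to_string());
}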
Actors in ractor are generally quite lightweight, and there are benchmarks which you are welcome to run on your own host system with:

cargo bench -p ractor

Install ractor by adding the following to your Cargo.toml dependencies.
[dependencies]
ractor = "0.7"

ractor exposes a single feature currently, namely:
cluster, which exposes various functionality required for ractor_cluster to set up and manage a cluster of actors over a network link. This is work-in-progress and is being tracked in #16.
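For example, a minimal sketch of a Cargo.toml entry enabling that feature would look something like:

[dependencies]
ractor = { version = "0.7", features = ["cluster"] }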
Actors in ractor are very lightweight and can be treated as thread-safe. Each actor will only call one of its handler functions at a time, and they will never be executed in parallel. Following the actor model leads to microservices with well-defined state and processing logic.
An example ping-pong actor might be the following:
use ractor::{cast, Actor, ActorProcessingErr, ActorRef};

/// [PingPong] is a basic actor that will print
/// ping..pong.. repeatedly until some exit
/// condition is met (a counter hits 10). Then
/// it will exit
pub struct PingPong;

/// This is the type of message [PingPong] supports
#[derive(Debug, Clone)]
pub enum Message {
    Ping,
    Pong,
}

impl Message {
    // retrieve the next message in the sequence
    fn next(&self) -> Self {
        match self {
            Self::Ping => Self::Pong,
            Self::Pong => Self::Ping,
        }
    }
    // print out this message
    fn print(&self) {
        match self {
            Self::Ping => print!("ping.."),
            Self::Pong => print!("pong.."),
        }
    }
}

// the implementation of our actor's "logic"
#[async_trait::async_trait]
impl Actor for PingPong {
    // An actor has a message type
    type Msg = Message;
    // and (optionally) internal state
    type State = u8;
    // Startup initialization args
    type Arguments = ();

    // Initially we need to create our state, and potentially
    // start some internal processing (by posting a message for
    // example)
    async fn pre_start(
        &self,
        myself: ActorRef<Self>,
        _: (),
    ) -> Result<Self::State, ActorProcessingErr> {
        // startup the event processing
        cast!(myself, Message::Ping)?;
        // create the initial state
        Ok(0u8)
    }

    // This is our main message handler
    async fn handle(
        &self,
        myself: ActorRef<Self>,
        message: Self::Msg,
        state: &mut Self::State,
    ) -> Result<(), ActorProcessingErr> {
        if *state < 10u8 {
            message.print();
            cast!(myself, message.next())?;
            *state += 1;
        } else {
            println!();
            myself.stop(None);
            // don't send another message, rather stop the actor after 10 iterations
        }
        Ok(())
    }
}

#[tokio::main]
async fn main() {
    let (_actor, handle) = Actor::spawn(None, PingPong, ())
        .await
        .expect("Failed to start ping-pong actor");
    handle
        .await
        .expect("Ping-pong actor failed to exit properly");
}

which will output:
$ cargo run
ping..pong..ping..pong..ping..pong..ping..pong..ping..pong..
$

The means of communication between actors is that they pass messages to each other. A developer can define any message type which is Send + 'static, and it will be supported by ractor. There are 4 concurrent message types, which are listened to in priority order: signals (there is only one signal today, Signal::Kill, and it immediately terminates all work, including message processing and supervision event processing), stop requests, supervision events, and regular messages.
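As a small illustration of the difference between stopping and killing an actor, here is a sketch reusing the PingPong actor from the example above (the shutdown reason string is arbitrary):

// Sketch: graceful stop vs. immediate kill, given some already-spawned actor
// (e.g. the PingPong actor from the earlier example).
fn shutdown_examples(actor: ractor::ActorRef<PingPong>) {
    // Stop: the actor finishes the message it is currently handling,
    // then exits cleanly with the provided reason
    actor.stop(Some("work complete".to_string()));

    // Kill: sends Signal::Kill, which terminates all work immediately,
    // interrupting any in-flight message or supervision processing
    // actor.kill();
}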
Ractor actors can also be used to build a distributed pool of actors, similar to Erlang's EPMD, which manages inter-node connections + node naming. In our implementation, we have ractor_cluster to facilitate distributed ractor actors.

ractor_cluster has a single main type in it, namely the NodeServer, which represents a host of a node() process. It additionally has some macros and procedural macros to facilitate developer efficiency when building distributed actors. The NodeServer is responsible for:
- the NodeSession actors, which represent a remote node connected to this host;
- the TcpListener, which hosts the server socket to accept incoming session requests.

The bulk of the logic for node interconnections, however, is held in the NodeSession, which manages the session with the connected remote node (authentication, transfer of inter-node messages, etc.).
The NodeSession makes local actors available on a remote system by spawning RemoteActors, which are essentially untyped actors that only handle serialized messages, leaving message deserialization up to the originating system. It also keeps track of pending RPC requests, to match requests to responses upon reply. There are special extension points in ractor which were added specifically to support RemoteActors; they aren't generally meant to be used outside of the standard Actor::spawn(Some("name".to_string()), MyActor).await pattern.
Note: not all actors are created equal. Actors need to support having their message types sent over the network link. This is done by overriding specific methods of the ractor::Message trait, which all messages need to implement. Due to the lack of specialization support in Rust, if you choose to use ractor_cluster you'll need to derive the ractor::Message trait for all message types in your crate. To support this, however, we have a few procedural macros to make it a more painless process.
Many actors are going to be local-only and have no need to send messages over the network link. This is the most basic scenario, and in this case the default ractor::Message trait implementation is fine. You can derive it quickly with:
use ractor_cluster::RactorMessage;
use ractor::RpcReplyPort;

#[derive(RactorMessage)]
enum MyBasicMessageType {
    Cast1(String, u64),
    Call1(u8, i64, RpcReplyPort<Vec<String>>),
}

This will implement the default ractor::Message trait for you without you having to write it out by hand.
If you want your actor to support remoting, then you should use a different derive statement, namely:
use ractor_cluster::RactorClusterMessage;
use ractor::RpcReplyPort;

#[derive(RactorClusterMessage)]
enum MyBasicMessageType {
    Cast1(String, u64),
    #[rpc]
    Call1(u8, i64, RpcReplyPort<Vec<String>>),
}

which adds a significant amount of underlying boilerplate for the implementation (take a look yourself with cargo expand!). The short answer is: each enum variant needs to serialize to a byte array of arguments and a variant name, and if it's an RPC, it additionally provides a port that receives a byte array and deserializes the reply back. Each of the types inside either the arguments or the reply type needs to implement the ractor_cluster::BytesConvertable trait, which simply says this value can be written to a byte array and decoded from a byte array. If you're using prost for your message type definitions (protobuf), we have a macro to auto-implement this for your types.
ractor_cluster::derive_serialization_for_prost_type! {MyProtobufType}

Besides that, just write your actor as you would. The actor itself will live where you define it and will be capable of receiving messages sent over the network link from other clusters!
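For reference, a hand-written BytesConvertable implementation for a custom type might look roughly like the sketch below. This assumes the trait exposes into_bytes/from_bytes conversions as its name suggests (check the ractor_cluster docs for the exact shape); MyPoint is a hypothetical type.

use ractor_cluster::BytesConvertable;

// Hypothetical user-defined type carried inside a distributed message
struct MyPoint {
    x: u32,
    y: u32,
}

// A minimal sketch, assuming BytesConvertable's into_bytes/from_bytes shape:
// encode the two fields as little-endian u32s back-to-back.
impl BytesConvertable for MyPoint {
    fn into_bytes(self) -> Vec<u8> {
        let mut out = Vec::with_capacity(8);
        out.extend_from_slice(&self.x.to_le_bytes());
        out.extend_from_slice(&self.y.to_le_bytes());
        out
    }
    fn from_bytes(bytes: Vec<u8>) -> Self {
        // no error path in this sketch; a real implementation should validate the length
        let x = u32::from_le_bytes(bytes[0..4].try_into().unwrap());
        let y = u32::from_le_bytes(bytes[4..8].try_into().unwrap());
        Self { x, y }
    }
}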
The original authors of ractor are Sean Lawlor (@slawlor), Dillon George (@dillonrg), and Evan Au (@afterdusk). To learn more about contributing to ractor please see CONTRIBUTING.md.
This project is licensed under MIT.