author    kalkyl <[email protected]>  2024-07-08 17:16:35 +0200
committer kalkyl <[email protected]>  2024-07-08 17:16:35 +0200
commit    028ca55f9ca3bfa2e4aa99b16bc0e0e29241fe70 (patch)
tree      34d2b7fff1f4c8e39f2ebd387c7c0ff0db546328
parent    87f66343493a5ae99f0f9b27602b96524111c94a (diff)

Add more docs and cross-links
 docs/pages/faq.adoc                 |  3
 docs/pages/sharing_peripherals.adoc |  2
 examples/rp/src/bin/sharing.rs      | 22
 3 files changed, 20 insertions(+), 7 deletions(-)
diff --git a/docs/pages/faq.adoc b/docs/pages/faq.adoc
index a2f56a539..fc1316062 100644
--- a/docs/pages/faq.adoc
+++ b/docs/pages/faq.adoc
@@ -352,7 +352,8 @@ There are two main ways to handle concurrency in Embassy:
 
 In general, either of these approaches will work. The main differences of these approaches are:
 
 When using **separate tasks**, each task needs its own RAM allocation, so there's a little overhead for each task, so one task that does three things will likely be a little bit smaller than three tasks that do one thing (not a lot, probably a couple dozen bytes). In contrast, with **multiple futures in one task**, you don't need multiple task allocations, and it will generally be easier to share data, or use borrowed resources, inside of a single task.
+An example showcasing some methods for sharing things between tasks link:https://github.com/embassy-rs/embassy/blob/main/examples/rp/src/bin/sharing.rs[can be found here].
 
 But when it comes to "waking" tasks, for example when a data transfer is complete or a button is pressed, it's faster to wake a dedicated task, because that task does not need to check which future is actually ready. `join` and `select` must check ALL of the futures they are managing to see which one (or which ones) are ready to do more work. This is because all Rust executors (like Embassy or Tokio) only have the ability to wake tasks, not specific futures. This means you will use slightly less CPU time juggling futures when using dedicated tasks.
 
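The cost described in the changed paragraph (`join` re-polls every future it manages, because wakes are task-granular) can be sketched in plain std Rust, with no Embassy involved. `CountingFuture`, `noop_waker`, and `join_polls` below are illustrative inventions for this sketch, not Embassy or Tokio API: the driver loop re-polls each still-pending future on every pass, just as `join` does when its task is woken.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// A toy future that stays Pending for `remaining` polls and
/// records how many times it was polled in total.
struct CountingFuture {
    remaining: u32,
    polls: u32,
}

impl Future for CountingFuture {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        self.polls += 1;
        if self.remaining == 0 {
            Poll::Ready(self.polls)
        } else {
            self.remaining -= 1;
            Poll::Pending
        }
    }
}

/// A waker that does nothing; enough to drive futures in a busy loop.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

/// Drive two futures to completion the way `join` does: every pass
/// re-polls each future that has not finished yet, even if only one of
/// them actually had an event. Returns each future's total poll count.
fn join_polls(mut a: CountingFuture, mut b: CountingFuture) -> (u32, u32) {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let (mut ra, mut rb) = (None, None);
    while ra.is_none() || rb.is_none() {
        if ra.is_none() {
            if let Poll::Ready(n) = Pin::new(&mut a).poll(&mut cx) {
                ra = Some(n);
            }
        }
        if rb.is_none() {
            if let Poll::Ready(n) = Pin::new(&mut b).poll(&mut cx) {
                rb = Some(n);
            }
        }
    }
    (ra.unwrap(), rb.unwrap())
}
```

With a future needing four polls joined to one that is immediately ready, the slow future is still polled on every pass; a dedicated task per future would avoid those extra checks.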
diff --git a/docs/pages/sharing_peripherals.adoc b/docs/pages/sharing_peripherals.adoc
index 6bcd56b01..ebd899c4e 100644
--- a/docs/pages/sharing_peripherals.adoc
+++ b/docs/pages/sharing_peripherals.adoc
@@ -126,3 +126,5 @@ async fn toggle_led(control: Sender<'static, ThreadModeRawMutex, LedState, 64>,
 
 This example replaces the Mutex with a Channel, and uses another task (the main loop) to drive the LED. The advantage of this approach is that only a single task references the peripheral, separating concerns. However, using a Mutex has a lower overhead and might be necessary if you need to ensure
 that the operation is completed before continuing to do other work in your task.
+
+An example showcasing more methods for sharing link:https://github.com/embassy-rs/embassy/blob/main/examples/rp/src/bin/sharing.rs[can be found here].
\ No newline at end of file
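The channel pattern the doc describes (one task owns the peripheral, other tasks only send `LedState` messages) can be sketched in std Rust, with threads and `mpsc` standing in for Embassy tasks and `embassy_sync::channel::Channel`. The `Led`, `LedState`, and `run_led_driver` names here are hypothetical stand-ins, not the example's real types.

```rust
use std::sync::mpsc;
use std::thread;

/// Mirrors the LedState messages sent over the Channel in the doc.
enum LedState {
    On,
    Off,
}

/// Stand-in for the LED peripheral; only the driver loop touches it.
struct Led {
    is_on: bool,
    toggles: u32,
}

impl Led {
    fn set(&mut self, state: &LedState) {
        self.is_on = matches!(state, LedState::On);
        self.toggles += 1;
    }
}

/// One owner drives the LED; other tasks only send messages.
/// std threads + mpsc stand in for Embassy tasks + Channel here.
fn run_led_driver(commands: Vec<LedState>) -> Led {
    let (tx, rx) = mpsc::channel();
    let sender = thread::spawn(move || {
        for cmd in commands {
            tx.send(cmd).unwrap(); // like `control.send(LedState::On).await`
        }
        // dropping tx closes the channel, ending the driver loop below
    });
    let mut led = Led { is_on: false, toggles: 0 };
    for cmd in rx {
        led.set(&cmd); // the single place the "peripheral" is referenced
    }
    sender.join().unwrap();
    led
}
```

The separation of concerns is the same as in the doc: senders never see the peripheral, so no lock is needed around it; the trade-off is that a sender cannot know when its command has actually been applied.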
diff --git a/examples/rp/src/bin/sharing.rs b/examples/rp/src/bin/sharing.rs
index 0761500ef..5416e20ce 100644
--- a/examples/rp/src/bin/sharing.rs
+++ b/examples/rp/src/bin/sharing.rs
@@ -1,4 +1,14 @@
 //! This example shows some common strategies for sharing resources between tasks.
+//!
+//! We demonstrate five different ways of sharing, covering different use cases:
+//! - Atomics: This method is used for simple values, such as bool and u8..u32
+//! - Blocking Mutex: This is used for sharing non-async things, using Cell/RefCell for interior mutability.
+//! - Async Mutex: This is used for sharing async resources, where you need to hold the lock across await points.
+//!   The async Mutex has interior mutability built-in, so no RefCell is needed.
+//! - Cell: For sharing Copy types between tasks running on the same executor.
+//! - RefCell: When you want &mut access to a value shared between tasks running on the same executor.
+//!
+//! More information: https://embassy.dev/book/#_sharing_peripherals_between_tasks
 
 #![no_std]
 #![no_main]
@@ -21,7 +31,7 @@ use rand::RngCore;
 use static_cell::{ConstStaticCell, StaticCell};
 use {defmt_rtt as _, panic_probe as _};
 
-type UartMutex = mutex::Mutex<CriticalSectionRawMutex, UartTx<'static, UART0, uart::Async>>;
+type UartAsyncMutex = mutex::Mutex<CriticalSectionRawMutex, UartTx<'static, UART0, uart::Async>>;
 
 struct MyType {
     inner: u32,
@@ -53,7 +63,7 @@ fn main() -> ! {
 
     let uart = UartTx::new(p.UART0, p.PIN_0, p.DMA_CH0, uart::Config::default());
     // Use the async Mutex for sharing async things (built-in interior mutability)
-    static UART: StaticCell<UartMutex> = StaticCell::new();
+    static UART: StaticCell<UartAsyncMutex> = StaticCell::new();
     let uart = UART.init(mutex::Mutex::new(uart));
 
     // High-priority executor: runs in interrupt mode
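The `StaticCell` pattern in this hunk (create the driver at runtime, promote it to a `&'static` shared mutex) can be sketched with std's `OnceLock` playing `StaticCell`'s role. A std `Mutex` and a `String` stand in for the async mutex and the `UartTx` driver; all three names below are stand-ins for this sketch, not the example's real types.

```rust
use std::sync::{Mutex, OnceLock};

/// Stand-in for the UART driver; a String records what was "transmitted".
type FakeUart = String;

// Like `static UART: StaticCell<UartAsyncMutex>` in the hunk above:
// a static slot, filled once at runtime, that hands out a `&'static`
// reference usable by any number of tasks.
static UART: OnceLock<Mutex<FakeUart>> = OnceLock::new();

fn init_uart() -> &'static Mutex<FakeUart> {
    UART.get_or_init(|| Mutex::new(FakeUart::new()))
}

/// Mirror of `task_a`'s signature: the task borrows the shared mutex
/// for `'static`, then locks it whenever it wants to transmit.
fn task_a(uart: &'static Mutex<FakeUart>) {
    uart.lock().unwrap().push_str("hello from task_a\n");
}
```

The `'static` lifetime is what lets the reference be handed to spawned tasks; Embassy's async mutex additionally allows the lock to be held across await points, which a blocking mutex must never do.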
@@ -80,7 +90,7 @@ fn main() -> ! {
 }
 
 #[embassy_executor::task]
-async fn task_a(uart: &'static UartMutex) {
+async fn task_a(uart: &'static UartAsyncMutex) {
     let mut ticker = Ticker::every(Duration::from_secs(1));
     loop {
         let random = RoscRng.next_u32();
@@ -100,7 +110,7 @@ async fn task_a(uart: &'static UartMutex) {
 }
 
 #[embassy_executor::task]
-async fn task_b(uart: &'static UartMutex, cell: &'static Cell<[u8; 4]>, ref_cell: &'static RefCell<MyType>) {
+async fn task_b(uart: &'static UartAsyncMutex, cell: &'static Cell<[u8; 4]>, ref_cell: &'static RefCell<MyType>) {
     let mut ticker = Ticker::every(Duration::from_secs(1));
     loop {
         let random = RoscRng.next_u32();
@@ -121,8 +131,8 @@ async fn task_c(cell: &'static Cell<[u8; 4]>, ref_cell: &'static RefCell<MyType>
     loop {
         info!("=======================");
 
-        let atomic = ATOMIC.load(Ordering::Relaxed);
-        info!("atomic: {}", atomic);
+        let atomic_val = ATOMIC.load(Ordering::Relaxed);
+        info!("atomic: {}", atomic_val);
 
         MUTEX_BLOCKING.lock(|x| {
             let val = x.borrow().inner;
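The `MUTEX_BLOCKING.lock(|x| ...)` shape in this hunk, Embassy's blocking mutex wrapping a `RefCell`, can be sketched in std Rust. The `BlockingMutex` type below is a hand-rolled stand-in (a std `Mutex` in place of critical sections), not the real `embassy_sync::blocking_mutex` type; only the closure-based `lock` shape is the point.

```rust
use std::cell::RefCell;
use std::sync::Mutex;

struct MyType {
    inner: u32,
}

/// Stand-in for Embassy's blocking mutex: the data is only reachable
/// inside a closure passed to `lock`, so the lock can never be held
/// across an await point. RefCell supplies the interior mutability.
struct BlockingMutex<T>(Mutex<T>);

impl<T> BlockingMutex<T> {
    const fn new(value: T) -> Self {
        BlockingMutex(Mutex::new(value))
    }
    /// Like `blocking_mutex::Mutex::lock`: run a short closure with
    /// shared access to the data, then release immediately.
    fn lock<R>(&self, f: impl FnOnce(&T) -> R) -> R {
        f(&self.0.lock().unwrap())
    }
}

static MUTEX_BLOCKING: BlockingMutex<RefCell<MyType>> =
    BlockingMutex::new(RefCell::new(MyType { inner: 0 }));

/// Increment the shared value and return the new contents, mirroring
/// the borrow-then-read pattern in the hunk above.
fn bump() -> u32 {
    MUTEX_BLOCKING.lock(|x| {
        x.borrow_mut().inner += 1; // &mut via RefCell, inside the lock
        x.borrow().inner
    })
}
```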