Hacker News

You can go full circle and also make operations on a mutex asynchronous. Hence the realization that message passing and shared memory are truly dual.


The very idea of a mutex is that it is synchronous. You wait until you can acquire the mutex.

If it's asynchronous, it's not a mutex anymore, or it's just used to synchronously set up some other asynchronous mechanism.


A mutex is a way to guarantee mutual exclusion, nothing more, nothing less. You can recover synchronous behaviour if you really want:

    synchronized<Something> something;
    ...
    co_await something.async_visit([&](Something& x) {
        /* critical section here */ 
    });


That isn't a mutex; that's delegating work asynchronously and arranging for something else to run when it is complete (the implicitly defined continuation through coroutines).

In systems programming parlance, a mutex is a resource that can be acquired and released, held by at most one owner at a time, and that blocks on acquire if already held.


Do a CPS transform of your typical std::mutex critical section and you'll find they are exactly the same.
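To make that transform concrete, here's a toy single-threaded sketch (this `cps_mutex` and its callback-taking `lock()` are invented for illustration, not a real library): the blocking std::mutex version and its CPS counterpart follow the same acquire/run/release protocol, with the rest of the computation passed as a continuation instead of running after a blocking call returns.

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <mutex>

int counter = 0;

// Blocking version: acquire, run the critical section, release.
std::mutex m;
void sync_increment() {
    std::lock_guard<std::mutex> _{m};
    ++counter;                        // critical section
}

// CPS version: lock() takes the rest of the computation as a continuation.
// Same protocol -- acquire, run, release -- but nothing ever blocks:
// a contended continuation is queued and resumed by unlock().
struct cps_mutex {
    bool locked = false;
    std::deque<std::function<void()>> waiters;
    void lock(std::function<void()> cont) {
        if (!locked) { locked = true; cont(); }
        else waiters.push_back(std::move(cont));
    }
    void unlock() {
        if (waiters.empty()) { locked = false; return; }
        auto next = std::move(waiters.front());
        waiters.pop_front();
        next();                       // hand the lock to the next waiter
    }
};

cps_mutex cm;
void cps_increment() {
    cm.lock([] {
        ++counter;                    // critical section, now a continuation
        cm.unlock();
    });
}
```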


They're not, the interactions with the memory model are different, as are the guarantees.

CPS shouldn't be able to deadlock, for example?


CPS can trivially deadlock for all meaningful definitions of deadlock.

Would you consider this a mutex?

   async_mutex mux;

   co_await mux.lock();
   /* critical section */
   co_await mux.unlock();
   
What about:

    my_mutex mux;
    {
       std::lock_guard _{mux};
       /* critical section */
    }

where the code runs in a user-space fiber?

Would you consider boost synchronized a mutex?

Don't confuse the semantics with the implementation details (yes async/await leaks implementation details).
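For what it's worth, "CPS can trivially deadlock" is easy to demonstrate without any threads. A toy sketch (this callback-based `async_mutex` and the hand-rolled ready queue are invented for illustration): two tasks take two mutexes in opposite order; every call returns immediately, yet both inner continuations end up parked in wait queues forever. A classic ABBA deadlock, with no blocked thread in sight.

```cpp
#include <cassert>
#include <deque>
#include <functional>

// Minimal callback-based async mutex: lock() runs the continuation at
// once if free, otherwise queues it; unlock() resumes the next waiter.
struct async_mutex {
    bool locked = false;
    std::deque<std::function<void()>> waiters;
    void lock(std::function<void()> cont) {
        if (!locked) { locked = true; cont(); }
        else waiters.push_back(std::move(cont));
    }
    void unlock() {
        if (waiters.empty()) { locked = false; return; }
        auto next = std::move(waiters.front());
        waiters.pop_front();
        next();
    }
};

async_mutex a, b;
bool task1_done = false, task2_done = false;
std::deque<std::function<void()>> ready;  // toy event loop

// Task 1 acquires a then b; task 2 acquires b then a. Step one of each
// task grabs the first mutex and schedules step two on the event loop.
void run_tasks() {
    a.lock([] { ready.push_back([] {      // task 1 holds a, wants b
        b.lock([] { task1_done = true; b.unlock(); a.unlock(); }); }); });
    b.lock([] { ready.push_back([] {      // task 2 holds b, wants a
        a.lock([] { task2_done = true; a.unlock(); b.unlock(); }); }); });
    while (!ready.empty()) {              // drain the loop
        auto f = std::move(ready.front());
        ready.pop_front();
        f();
    }
    // The loop is empty, yet neither task finished: both continuations
    // sit in the mutexes' wait queues forever. Deadlock, no blocking.
}
```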


You only achieved a deadlock by re-introducing mutexes.


Given:

    Something something;
    async_mutex mtx;
    void my_critical_section(Something&);
1:

    co_await mtx.lock();
    my_critical_section(something);
    co_await mtx.unlock();
2:

    auto my_locked_critical_section() {
      co_await mtx.lock();
      my_critical_section(something);
      co_await mtx.unlock();
    }
    ...    
    co_await my_locked_critical_section();
3:

    auto locked(auto& mtx, auto critical_section) {
      co_await mtx.lock();
      critical_section();
      co_await mtx.unlock();
    }

    ...    
    co_await locked(mtx, [&]{ my_critical_section(something); });
4:

    template<class T>
    struct synchronized {
       async_mutex mtx;
       T data;
       auto async_visit(auto fn) { return locked(mtx, [fn, this]{ fn(data); }); }
    };

    synchronized<Something> something;
    co_await something.async_visit([](Something& data) { my_critical_section(data); });
If 1 is a mutex, at which point does it stop being a mutex? Note that 4 is my initial example.


It's a mutex iff it's acquiring a resource exclusively.

Which you don't need to do for synchronization of coroutines, since you can control the order in which things are scheduled and whether that's done concurrently or not.


Not if you have multiple schedulers. Case in point: asio.strand or execution::on [1].

And even with one scheduler it makes sense to explicitly mark your critical sections.

Really, at the end of the day the primary purpose of a mutex is serialization of all operations on some data. The blocking behaviour is just a way to implement it.

[1] https://en.cppreference.com/w/cpp/execution/on.html
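One non-blocking way to get that serialization is a strand-style queue. A toy single-threaded sketch loosely modeled on asio's strand concept (this `strand` type and its `post()` are invented for illustration, not asio's actual API): work posted to the strand runs strictly one item at a time, so all operations on data it guards are mutually exclusive even though nothing ever blocks.

```cpp
#include <cassert>
#include <deque>
#include <functional>

struct strand {
    std::deque<std::function<void()>> queue;
    bool draining = false;
    // post() never blocks: it enqueues work, and the first caller drains
    // the queue in FIFO order. A nested post() made while an item runs
    // is queued, not run inline, so items never overlap: mutual
    // exclusion by serialization rather than by locking.
    void post(std::function<void()> fn) {
        queue.push_back(std::move(fn));
        if (draining) return;
        draining = true;
        while (!queue.empty()) {
            auto f = std::move(queue.front());
            queue.pop_front();
            f();
        }
        draining = false;
    }
};

strand s;
int trace[3];
int n = 0;

// The nested post() below runs only after the current item finishes,
// so the recorded order is 1, 2, 3 and demo() returns 123.
int demo() {
    s.post([] {
        trace[n++] = 1;
        s.post([] { trace[n++] = 3; });  // queued, runs after this item
        trace[n++] = 2;
    });
    return trace[0] * 100 + trace[1] * 10 + trace[2];
}
```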


Mutexes are a problematic pattern that doesn't compose; see the article.



