
List:       ruby-core
Subject:    [ruby-core:74068] [CommonRuby Feature#12020] Documenting Ruby memory model
From:       email@pitr.ch
Date:       2016-02-29 23:06:52
Message-ID: redmine.journal-57219.20160229230651.1f51a8f8d8052c7c@ruby-lang.org

Issue #12020 has been updated by Petr Chalupa.


I understand your point. I would like to explore how it could be solved in MRI before
relaxing the guarantees around constant and method redefinition, though. The relaxation
could lead to undesirable and unpredictable behaviour for users.

As you've mentioned, the version would have to be a volatile (Java) or an atomic
(C++11) variable to guarantee that the value is up to date. That would mean a volatile
read before each method call or constant read; volatile reads are not terribly
expensive, though. E.g. on x86 it's just a mov instruction (the same as a regular
load); I am not sure what other platforms MRI targets. Volatile writes are more
expensive, but they happen only on the rare path, i.e. method or constant
redefinition. Without a JIT and further optimisations it might have only a small
overhead, or none, in MRI, which could be measured in the current MRI with the GIL
just by making the version number atomic (in C terminology). (I am not capable of
altering MRI's source code to measure it myself, though.)
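
For illustration only, here is a minimal C11 sketch of what "making the version
number atomic" could mean. The names (global_constant_serial, inline_cache, and so
on) are invented for this sketch and are not MRI's actual internals:

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Invented stand-in for MRI's method/constant state counter. */
    static _Atomic uint64_t global_constant_serial;

    /* Per-call-site inline cache (illustrative only). */
    struct inline_cache {
        uint64_t serial;   /* serial observed when the cache was filled */
        void    *value;    /* cached constant or method entry */
    };

    /* Fast path: one atomic (acquire) load per lookup.
     * On x86 this compiles to a plain mov, same as a regular load. */
    static void *
    cached_lookup(struct inline_cache *ic,
                  void *(*slow_path)(struct inline_cache *, uint64_t))
    {
        uint64_t current = atomic_load_explicit(&global_constant_serial,
                                                memory_order_acquire);
        if (ic->value != NULL && ic->serial == current)
            return ic->value;              /* cache hit, no extra work */
        return slow_path(ic, current);     /* refill cache with `current` */
    }

    /* Slow path: runs only on method or constant redefinition. */
    static void
    bump_serial_on_redefinition(void)
    {
        atomic_fetch_add_explicit(&global_constant_serial, 1,
                                  memory_order_release);
    }

With the GIL still in place, comparing this against a plain non-atomic counter would
give a rough idea of the fast-path overhead.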

But as Benoit has suggested:

> You are right, inline caches would have overhead on some platforms,
> unless some form of safepoints/yieldpoints are available to the VM to clear the
> caches or ensure visibility (with a serial number check, it could just ensure
> visibility of the new serial to every thread). If the VM actually runs Ruby code in
> parallel, then it also most likely uses safepoints for the GC so I would guess Ruby
> VMs either have them or do not run Ruby code in parallel.

when MRI has no GIL it will need some kind of safepoint to park threads and allow the
GC to run. That would make it possible to remove any overhead on the fast path, i.e.
the version checking. Roughly it would work as follows (a simplified sketch is given
after this paragraph): a constant redefinition would change the constant, update the
version number, wait for all threads to reach a safepoint (to make sure that all
threads will see the new version number on their next read), and then finish the
constant redefinition.
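
A very rough sketch of that protocol, assuming a simplistic cooperative safepoint
polled by each thread (real implementations use signal- or guard-page-based polling,
handle threads blocked in native calls, and do not spin-wait; all names here are
invented):

    #include <stdatomic.h>
    #include <stdint.h>

    static _Atomic uint64_t global_constant_serial;
    static _Atomic int      safepoint_requested;
    static _Atomic int      threads_parked;
    static int              other_thread_count;  /* other Ruby threads running code */

    /* Called by the redefining thread. */
    static void
    redefine_constant(void (*do_redefine)(void))
    {
        do_redefine();                                 /* 1. change the constant     */
        atomic_fetch_add(&global_constant_serial, 1);  /* 2. update the version      */

        atomic_store(&safepoint_requested, 1);         /* 3. ask all threads to park */
        while (atomic_load(&threads_parked) < other_thread_count)
            ;                                          /*    wait until they do      */

        /* 4. every other thread has crossed a safepoint, so it is guaranteed to
         *    see the new serial on its next lookup; the fast path itself needs
         *    no per-call barrier.                                               */
        atomic_store(&safepoint_requested, 0);         /* 5. finish, resume threads  */
    }

    /* Polled by every Ruby thread at safepoint checks (e.g. between bytecodes). */
    static void
    safepoint_poll(void)
    {
        if (atomic_load(&safepoint_requested)) {
            atomic_fetch_add(&threads_parked, 1);
            while (atomic_load(&safepoint_requested))
                ;                                      /* parked (GC could run here) */
            atomic_fetch_sub(&threads_parked, 1);
        }
    }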

I feel silly for such a late answer; I did not get any email about the new comment
even though I watch the issue.



----------------------------------------
Feature #12020: Documenting Ruby memory model
https://bugs.ruby-lang.org/issues/12020#change-57219

* Author: Petr Chalupa
* Status: Open
* Priority: Normal
* Assignee: 
----------------------------------------
Defining a memory model for a language is necessary to be able to reason about a
program's behavior in a concurrent or parallel environment.

A document describing a Ruby memory model was created for the concurrent-ruby gem; it
fits several Ruby language implementations. It was necessary in order to build a
lower-level unifying layer that enables the creation of concurrency abstractions. They
can be implemented just once against that layer, which ensures they run on all Ruby
implementations.

Because of its GIL semantics, the Ruby MRI implementation has stronger (undocumented)
guarantees than the memory model, but the few relaxations from MRI's behaviour allow
other implementations to fit the model as well and to improve performance.

This issue proposes to document the Ruby memory model. The above-mentioned memory
model document created for concurrent-ruby can be used as a starting point:
https://docs.google.com/document/d/1pVzU8w_QF44YzUCCab990Q_WZOdhpKolCIHaiXG-sPw/edit#.
Please comment in the document or here.

The aggregating issue of this effort can be found [here](https://bugs.ruby-lang.org/issues/12019).



-- 
https://bugs.ruby-lang.org/

Unsubscribe: <mailto:ruby-core-request@ruby-lang.org?subject=unsubscribe>
<http://lists.ruby-lang.org/cgi-bin/mailman/options/ruby-core>

