List:       squeak-dev
Subject:    [squeak-dev] SqueakSource instances (was: Process scheduling)
From:       Chris Muller <asqueaker () gmail ! com>
Date:       2024-03-29 5:02:29
Message-ID: CANzdToFNOzDySLmrVu3nADdFY1o5aDH3F0be58vW3mf9LtxfUg () mail ! gmail ! com
[Download RAW message or body]

Hi Dave,

I just downloaded squeaksource.8.image from dan and took a look.  I see you
abandoned the PersonalSqueakSource codebase back in Nov-2022.  That's too
bad.  Part of what I'd hoped to accomplish with the renovation was not only
a more responsive and resilient server, but also, through the relocation to
/ss on source.squeak.org, to encourage your and the community's
collaboration, so that we would eventually get to a point where questions
like this:

> I'd happily collaborate on this but I need pointers to the code and
> instructions on how to interact with the running server.

would be as universally known and natural as the Inbox process (although
maybe that isn't saying much anyway).  Your comment in the unmerge version
(SqueakSource.sscom-dtl.1147) mentions merge issues and startup problems.
I would've tried to help if you'd reached out.  Perhaps we can learn and
gain just as much by remaining forked and cherry-picking from each other
whatever we deem most appropriate.  I just noticed the performance
improvement from Levente last September.  See, before, I dreamed that
something like that would simply be committed to /ss by him, and maybe it
would send an email, as commits to /trunk and /inbox do.  Then, we admins
could merge fixes into the servers whenever it was worthwhile to do so.

> Note that my observations were based on watching files being slowly
> written to disc while also watching /usr/bin/top. The activity also
> correlates with log messages written to the ss.log log file, so that's what
> made me suspect issues with the repository save mechanism.
>
I don't think saving data.obj was, or is, related to the client
slowness issues.  Why?  Because you're still rightly using SSFilesystem
from PersonalSqueakSource (which is good!), which essentially does what
Eliot described.  It forks the save at Processor userBackgroundPriority - 1
(29), which is lower than the priority of client operations (30).  And
although there appears to be a bug that will cause other client save
operations to be blocked during the long serialization process (see the
attached fix for that, if you wish), *read* operations don't wait on any
mutex, so they should remain completely unblocked.  You'd still see 100%
CPU during serialization, yes, but client responsiveness should still be
fine, because the clients' priority-30 processes preempt the
serialization process.
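
For concreteness, here is a minimal sketch of the pattern I'm describing.
It is not the actual SSFilesystem code; "saveMutex" and "repository" are
made-up stand-ins for illustration:

	| saveMutex repository |
	saveMutex := Mutex new.
	repository := Dictionary new.	"stand-in for the real repository model"
	[saveMutex critical:
		[ | stream |
		"The long serialization runs at priority 29, so any runnable
		priority-30 client process preempts it immediately."
		stream := ReferenceStream fileNamed: 'data.obj'.
		[stream nextPut: repository] ensure: [stream close]]]
			forkAt: Processor userBackgroundPriority - 1.

Note that only other saves wait on saveMutex; reads never touch it, which
is why they stay responsive even while serialization keeps the CPU pegged.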

 - Chris

["SSFilesystem-saveRepositoryNow.st" (application/octet-stream)]


