List: python-ideas
Subject: Re: [Python-ideas] Keep free list of popular iterator objects
From: Kyle Fisher <anthonyfk () gmail ! com>
Date: 2013-09-17 20:18:32
Message-ID: CALRznwTidcxn5eVJ9qcucrwnM3M_TrLSAkJrbd0RjGSw6Mi-tA () mail ! gmail ! com
Story time.
I was able to make a build at work with freelists enabled for iterators in
dictobject.c, listobject.c and iterobject.c. When running this through our
application I saw:
1) When loading several datapoints from the database: 0.1% improvement (with a
wider-but-forgotten standard deviation). So, no improvement, but no regression
either. Makes sense, since this was a mostly I/O-bound task.
2) When parsing in-memory data files: 1.5% improvement. This is
approximately what I was expecting, so far so good!
At this point I decided to run the benchmark suite Antoine pointed me to.
I also realized that I had been testing without some optimizations turned
on. I made two new builds, both with "-O3 -DNDEBUG -march=native" and
profile-guided optimizations turned on. I then added a benchmark to
explicitly test tight inner loops. I ran the benchmarks and saw... a 1.02x
improvement on the benchmark I made and a 1.04x slowdown on two others
(nbody, slowunpickle). I then ran our application again and confirmed that
all the initial speed-ups I saw were now lost in the noise.
So, thank you everyone for letting me entertain this idea, but it looks
like Raymond's hunch was right. :)
Cheers,
-Kyle
_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas