
List:       kde-bugs-dist
Subject:    [Bug 191182] New: VALGRIND_LEAK_CHECK quadratic when big nr of chunks or big nr of errors + patch
From:       <philippe.waroquiers () eurocontrol ! int>
Date:       2009-04-30 22:15:06
Message-ID: bug-191182-17878 () http ! bugs ! kde ! org/
[Download RAW message or body]

https://bugs.kde.org/show_bug.cgi?id=191182

           Summary: VALGRIND_LEAK_CHECK quadratic when big nr of chunks or
                    big nr of errors + patch.
           Product: valgrind
           Version: 3.5 SVN
          Platform: Compiled Sources
        OS/Version: Linux
            Status: UNCONFIRMED
          Severity: normal
          Priority: NOR
         Component: memcheck
        AssignedTo: jseward@acm.org
        ReportedBy: philippe.waroquiers@eurocontrol.int


VALGRIND_LEAK_CHECK quadratic when big nr of chunks or big nr of errors +
patch.

When a leak search is done, printing the report can be extremely slow
(quadratic behaviour) when the number of stacks in lc_chunks and/or in errlist
becomes big.
A first cause of slowness was identified in the "Common up the lost blocks"
phase, where each lc_extras ExeContext is compared with all the ExeContexts in
the errlist.
A 2nd cause of slowness is in the "Print out the commoned-up blocks" phase,
where the errlist is output ordered by size, but this ordered output is
implemented using a double loop (quadratic).
We encountered this quadratic behaviour at work, where we start up a big
system, run tests, and call VALGRIND_LEAK_CHECK between each test.
I have reproduced it with a test program, sheap.c (Simulate HEAP).

The below gives the timings with the original valgrind (SVN 9680, VEX 1888)
and with the patched version, which avoids these 2 causes of slowness.
As you can see, the patch makes the sheap leak search much less expensive :).
These problems are mostly triggered by high leak resolution, but if enough
different "medium resolution" stack traces or errlist entries are created, I
suspect they can also cause performance degradation.

You will find below:
    an explanation of the patch   
    the diff -c -p of the patch
    the source of the sheap.c test program

[philippe@soleil valgrind_elem]$ time ../install_orig/bin/valgrind --leak-resolution=high --time-stamp=yes ./sheap
==00:00:00:00.000 14801== Memcheck, a memory error detector.
....
==00:00:00:03.536 14801== checked 1,668,332 bytes.
==00:00:02:04.158 14801== 
....
real    2m4.189s
user    2m3.542s
sys    0m0.448s
[philippe@soleil valgrind_elem]$ 

[philippe@soleil valgrind_elem]$ time ../install_elem/bin/valgrind --leak-resolution=high --time-stamp=yes ./sheap
==00:00:00:00.000 14903== Memcheck, a memory error detector.
....
==00:00:00:03.351 14903== checked 1,668,332 bytes.
==00:00:00:03.605 14903== 
....
real    0m3.628s
user    0m3.235s
sys    0m0.386s
[philippe@soleil valgrind_elem]$ 


The patch consists of adding a hash table over the errlist. This solves the
O(N lc_chunks * M errlist) comparison problem.
The 2nd problem is solved by creating an array from the hash table
and sorting it.

The errlist hash table cannot, however, be a simple hash table, as two
different ExeContexts might have to be added to the same errlist entry, so
pub_tool_hashtable.h was extended with a more general HT construct that allows
providing eq_key and hash_key functions.
This more general HT is used to implement the "commoning up"
hash table.

Regression tests show only 2 differences with the patch
(memcheck/tests/leak-cases-full, memcheck/tests/mempool), but from what I can
see this is just an inversion of output: the leaked blocks of various stacks
have the same size, and their order is now inverted by the sort.

All this has been compiled and tested only on Fedora Core 10, x86.

Note that I have *not* changed the function HT_to_array; I have only put a
comment there to indicate it should be changed.
With the patch (and no change to HT_to_array), the performance tests show no
significant differences (but I have to admit I have lost quite some faith in
these regression tests :).


Philippe


*** valgrind_orig/include/pub_tool_execontext.h    2009-04-29 22:02:55.000000000 +0200
--- valgrind_elem/include/pub_tool_execontext.h    2009-04-30 21:54:04.000000000 +0200
*************** extern void VG_(apply_ExeContext)( void(
*** 81,86 ****
--- 81,90 ----
  //   Vg_HighRes: all
  extern Bool VG_(eq_ExeContext) ( VgRes res, ExeContext* e1, ExeContext* e2 );

+ // Produce a hash value for the ExeContext. Two ExeContexts which are "eq_ExeContext"
+ // with the given resolution will give the same hash value.
+ extern UWord VG_(hash_ExeContext)( VgRes res, ExeContext* e);
+ 
  // Print an ExeContext.
  extern void VG_(pp_ExeContext) ( ExeContext* ec );

*** valgrind_orig/coregrind/m_execontext.c    2009-04-29 22:02:48.000000000 +0200
--- valgrind_elem/coregrind/m_execontext.c    2009-04-30 21:56:02.000000000 +0200
*************** void VG_(pp_ExeContext) ( ExeContext* ec
*** 172,179 ****
     VG_(pp_StackTrace)( ec->ips, ec->n_ips );
  }

! 
! /* Compare two ExeContexts, comparing all callers. */
  Bool VG_(eq_ExeContext) ( VgRes res, ExeContext* e1, ExeContext* e2 )
  {
     Int i;
--- 172,178 ----
     VG_(pp_StackTrace)( ec->ips, ec->n_ips );
  }

! /* Compare two ExeContexts. The nr of callers compared depends on res. */
  Bool VG_(eq_ExeContext) ( VgRes res, ExeContext* e1, ExeContext* e2 )
  {
     Int i;
*************** Bool VG_(eq_ExeContext) ( VgRes res, Exe
*** 218,223 ****
--- 217,252 ----
     }
  }

+ 
+ UWord VG_(hash_ExeContext)( VgRes res, ExeContext* e)
+ {
+    Int i;
+    UWord hash;
+    // We need to ensure that hashing two "equal" exe contexts gives the same hash value.
+    // In high resolution, the hash can be the pointer value itself.
+    // In low or med resolution, loss records are equal if up to 2 (or 4)
+    // callers are equal.
+    // ???? a more intelligent hash function might be appropriate.
+    // ???? maybe like calc_hash
+    switch (res) {
+    case Vg_LowRes:
+       return (UWord) e->ips[0] + (e->n_ips > 1 ? (UWord) e->ips[1] : 0);
+    case Vg_MedRes:
+       hash = 0;
+       for (i = 0; i < (e->n_ips > 4 ? 4 : e->n_ips) ; i++) {
+             hash += (UWord) e->ips[i];
+          }
+       return hash;
+    case Vg_HighRes:
+       return (UWord) e;
+ 
+    default:
+       VG_(core_panic)("VG_(hash_ExeContext): unrecognised VgRes");
+    }
+ }
+ 
+ 
+ 
  /* VG_(record_ExeContext) is the head honcho here.  Take a snapshot of
     the client's stack.  Search our collection of ExeContexts to see if
     we already have it, and if not, allocate a new one.  Either way,
*** valgrind_orig/include/pub_tool_hashtable.h    2009-04-29 22:02:54.000000000 +0200
--- valgrind_elem/include/pub_tool_hashtable.h    2009-04-30 21:51:20.000000000 +0200
*************** extern void* VG_(HT_Next) ( VgHashTable 
*** 93,98 ****
--- 93,120 ----
  extern void VG_(HT_destruct) ( VgHashTable t );


+ /*--------------------------------------------------------------------*/
+ /*--- Creating a new table with more general key and hash handling ---*/
+ /*--------------------------------------------------------------------*/
+ /* In this interface, the content of key is "private" to the user
+    of the hash table.
+    Hashing is done using the OHTHash_key_t function.
+    Comparison is done using the OHTEq_key_t function.
+ */
+ 
+ typedef Bool  (*OHTEq_key_t)       ( const UWord key1, const UWord key2);
+ typedef UWord (*OHTHash_key_t)     ( const UWord key);
+ 
+ /* Make a new table. Behaviour similar to VG_(HT_construct) but
+    providing a more general behaviour thanks to Eq and Hash.
+    The extension from VG_(HT_construct) to VG_(HTGen_construct) is similar to
+    the extension from VG_(OSetWord_Create) to VG_(OSetGen_Create)
+ */
+ extern VgHashTable VG_(HTGen_construct) ( HChar* name ,
+                                           OHTEq_key_t eq_key, 
+                                           OHTHash_key_t hash_key);
+ 
+ 
  #endif   // __PUB_TOOL_HASHTABLE_H

  /*--------------------------------------------------------------------*/
*** valgrind_orig/coregrind/m_hashtable.c    2009-04-29 22:02:48.000000000 +0200
--- valgrind_elem/coregrind/m_hashtable.c    2009-04-30 22:09:06.000000000 +0200
***************
*** 38,45 ****
  /*--- Declarations                                                 ---*/
  /*--------------------------------------------------------------------*/

- #define CHAIN_NO(key,tbl) (((UWord)(key)) % tbl->n_chains)
- 
  struct _VgHashTable {
     UInt         n_chains;   // should be prime
     UInt         n_elements;
--- 38,43 ----
*************** struct _VgHashTable {
*** 48,55 ****
--- 46,60 ----
     VgHashNode** chains;     // expanding array of hash chains
     Bool         iterOK;     // table safe to iterate over?
     HChar*       name;       // name of table (for debugging only)
+    Bool          fastOK;     // can fast key/hash be used? (i.e. not a Gen HT)
+    OHTEq_key_t   eq_key;     // key equality
+    OHTHash_key_t hash_key;   // key hash
  };

+ // CHAIN_NO and KEY_EQ, usable for a Gen or a plain UWord hash table
+ #define CHAIN_NO(key,tbl) ((tbl->fastOK ? ((UWord)(key)) : ((UWord)(tbl->hash_key (key)))) % tbl->n_chains)
+ #define KEY_EQ(k1, k2, tbl) ((tbl->fastOK ? k1 == k2 : tbl->eq_key(k1, k2)))
+ 
  #define N_HASH_PRIMES 20

  static SizeT primes[N_HASH_PRIMES] = {
*************** static SizeT primes[N_HASH_PRIMES] = {
*** 64,70 ****
  /*--- Functions                                                    ---*/
  /*--------------------------------------------------------------------*/

! VgHashTable VG_(HT_construct) ( HChar* name )
  {
     /* Initialises to zero, ie. all entries NULL */
     SizeT       n_chains = primes[0];
--- 69,76 ----
  /*--- Functions                                                    ---*/
  /*--------------------------------------------------------------------*/

! VgHashTable VG_(HTGen_construct) ( HChar* name ,
!                                    OHTEq_key_t eq_key, OHTHash_key_t hash_key )
  {
     /* Initialises to zero, ie. all entries NULL */
     SizeT       n_chains = primes[0];
*************** VgHashTable VG_(HT_construct) ( HChar* n
*** 76,85 ****
--- 82,101 ----
     table->n_elements    = 0;
     table->iterOK        = True;
     table->name          = name;
+    table->fastOK        = False;
+    table->eq_key        = eq_key;
+    table->hash_key      = hash_key;
     vg_assert(name);
     return table;
  }

+ VgHashTable VG_(HT_construct) ( HChar* name )
+ {
+    VgHashTable table = VG_(HTGen_construct) (name, 0, 0);
+    table->fastOK = True;
+    return table;
+ }
+ 
  Int VG_(HT_count_nodes) ( VgHashTable table )
  {
     return table->n_elements;
*************** void* VG_(HT_lookup) ( VgHashTable table
*** 160,166 ****
     VgHashNode* curr = table->chains[ CHAIN_NO(key, table) ];

     while (curr) {
!       if (key == curr->key) {
           return curr;
        }
        curr = curr->next;
--- 176,182 ----
     VgHashNode* curr = table->chains[ CHAIN_NO(key, table) ];

     while (curr) {
!       if (KEY_EQ (key, curr->key, table)) {
           return curr;
        }
        curr = curr->next;
*************** void* VG_(HT_remove) ( VgHashTable table
*** 179,185 ****
     table->iterOK = False;

     while (curr) {
!       if (key == curr->key) {
           *prev_next_ptr = curr->next;
           table->n_elements--;
           return curr;
--- 195,201 ----
     table->iterOK = False;

     while (curr) {
!       if (KEY_EQ (key, curr->key, table)) {
           *prev_next_ptr = curr->next;
           table->n_elements--;
           return curr;
*************** VgHashNode** VG_(HT_to_array) ( VgHashTa
*** 201,206 ****
--- 217,226 ----
     VgHashNode** arr;
     VgHashNode*  node;

+    /* replacing the loop below by this assignment:
+       *n_elems = table->n_elements;
+       slows down significantly a.o. perf/bz2 ????? */
+ 
     *n_elems = 0;
     for (i = 0; i < table->n_chains; i++) {
        for (node = table->chains[i]; node != NULL; node = node->next) {
*** valgrind_orig/memcheck/mc_leakcheck.c    2009-04-29 22:02:48.000000000 +0200
--- valgrind_elem/memcheck/mc_leakcheck.c    2009-04-30 22:01:21.000000000 +0200
*************** static void lc_process_markstack(Int cli
*** 750,761 ****
     }
  }

  static void print_results(ThreadId tid, Bool is_full_check)
  {
!    Int         i, n_lossrecords;
     LossRecord* errlist;
     LossRecord* p;
     Bool        is_suppressed;

     // Common up the lost blocks so we can print sensible error messages.
     n_lossrecords = 0;
--- 750,822 ----
     }
  }

+ // Used to store a LossRecord in a hash table, so it can be retrieved fast
+ // starting from an exe context and a loss mode.
+ // The 2nd element of the record is the key used by the hash table:
+ // we use the general form of the hash table, with "fake" inheritance,
+ // so as to hash and compare the key using the "extended" attributes in p.
+ // So, as key, we use a ptr to "ourself";
+ // this allows retrieving/hashing/comparing using the data in p.
+ typedef
+    struct _Hashed_LossRecord {
+       struct _Hashed_LossRecord* next;
+       LossRecord* p; // our key is "ourself"
+    }
+    Hashed_LossRecord;
+ 
+ // comparison of two keys of the Hashed_LossRecord
+ static
+ Bool Hashed_LossRecord_eq (const UWord key1, const UWord key2)
+ {
+    LossRecord* p1 = (LossRecord*) key1; 
+    LossRecord* p2 = (LossRecord*) key2; 
+    return (p1->loss_mode == p2->loss_mode
+       && VG_(eq_ExeContext) ( MC_(clo_leak_resolution),
+                               p1->allocated_at, 
+                               p2->allocated_at) );
+ }
+ 
+ // hash the key of the Hashed_LossRecord using elements of the inner LossRecord
+ static
+ UWord Hashed_LossRecord_hash (const UWord key)
+ {
+    LossRecord* p = (LossRecord*) key; 
+    return p->loss_mode + VG_(hash_ExeContext) (MC_(clo_leak_resolution), p->allocated_at);
+ }
+ 
+ // Compare the Hashed_LossRecord by 'size' (i.e. addition of total bytes and indirect size)
+ static Int compare_Hashed_LossRecord_size(void* n1, void* n2)
+ {
+    Hashed_LossRecord* mc1 = *(Hashed_LossRecord**)n1;
+    Hashed_LossRecord* mc2 = *(Hashed_LossRecord**)n2;
+    SizeT s1 = mc1->p->total_bytes + mc1->p->indirect_szB;
+    SizeT s2 = mc2->p->total_bytes + mc2->p->indirect_szB;
+    
+    if (s1 < s2) return -1;
+    if (s1 > s2) return  1;
+    return 0;
+ }
+ 
  static void print_results(ThreadId tid, Bool is_full_check)
  {
!    Int         i, n_lossrecords, n_errlist;
     LossRecord* errlist;
     LossRecord* p;
+    Hashed_LossRecord* hashed_p;
+ 
     Bool        is_suppressed;
+    VgHashTable errlist_ht;   // hash table of errlist, key-ed using execontext and loss_mode
+    Hashed_LossRecord** errlist_array;
+ 
+    // setup the Hashed_LossRecord we use to search an equivalent loss record
+    Hashed_LossRecord hashed_search; // used to build a search key
+    LossRecord search;   
+    hashed_search.next = NULL;
+    hashed_search.p = &search;
+ 
+    errlist_ht = VG_(HTGen_construct) ( "errlist_ht" ,
+                                        Hashed_LossRecord_eq, 
+                                        Hashed_LossRecord_hash);

     // Common up the lost blocks so we can print sensible error messages.
     n_lossrecords = 0;
*************** static void print_results(ThreadId tid, 
*** 764,781 ****
        MC_Chunk* ch = lc_chunks[i];
        LC_Extra* ex = &(lc_extras)[i];

!       for (p = errlist; p != NULL; p = p->next) {
!          if (p->loss_mode == ex->state
!              && VG_(eq_ExeContext) ( MC_(clo_leak_resolution),
!                                      p->allocated_at, 
!                                      ch->where) ) {
!             break;
!          }
!       }
!       if (p != NULL) {
!          p->num_blocks++;
!          p->total_bytes  += ch->szB;
!          p->indirect_szB += ex->indirect_szB;
        } else {
           n_lossrecords++;
           p = VG_(malloc)( "mc.fr.1", sizeof(LossRecord));
--- 825,839 ----
        MC_Chunk* ch = lc_chunks[i];
        LC_Extra* ex = &(lc_extras)[i];

!       search.loss_mode = ex->state;
!       search.allocated_at = ch->where;
!       hashed_p = VG_(HT_lookup) ( errlist_ht, (UWord) hashed_search.p );
! 
! 
!       if (hashed_p != NULL) {
!          hashed_p->p->num_blocks++;
!          hashed_p->p->total_bytes  += ch->szB;
!          hashed_p->p->indirect_szB += ex->indirect_szB;
        } else {
           n_lossrecords++;
           p = VG_(malloc)( "mc.fr.1", sizeof(LossRecord));
*************** static void print_results(ThreadId tid, 
*** 786,791 ****
--- 844,852 ----
           p->num_blocks   = 1;
           p->next         = errlist;
           errlist         = p;
+          hashed_p        = VG_(malloc)( "mc.fr.2", sizeof(Hashed_LossRecord));
+          hashed_p->p     = p;
+          VG_(HT_add_node) ( errlist_ht, hashed_p );
        }
     }

*************** static void print_results(ThreadId tid, 
*** 796,812 ****
     MC_(blocks_suppressed) = MC_(bytes_suppressed) = 0;

     // Print out the commoned-up blocks and collect summary stats.
     for (i = 0; i < n_lossrecords; i++) {
        Bool        print_record;
!       LossRecord* p_min = NULL;
!       SizeT       n_min = ~(0x0L);
!       for (p = errlist; p != NULL; p = p->next) {
!          if (p->num_blocks > 0 && p->total_bytes < n_min) {
!             n_min = p->total_bytes + p->indirect_szB;
!             p_min = p;
!          }
!       }
!       tl_assert(p_min != NULL);

        // Rules for printing:
        // - We don't show suppressed loss records ever (and that's controlled
--- 857,868 ----
     MC_(blocks_suppressed) = MC_(bytes_suppressed) = 0;

     // Print out the commoned-up blocks and collect summary stats.
+    // First get all the loss records and sort them by size
+    errlist_array = (Hashed_LossRecord**) VG_(HT_to_array) (errlist_ht, &n_errlist);
+    VG_(ssort)(errlist_array, n_errlist, sizeof(Hashed_LossRecord*), compare_Hashed_LossRecord_size);
     for (i = 0; i < n_lossrecords; i++) {
        Bool        print_record;
!       LossRecord* p_min = errlist_array[i]->p;

        // Rules for printing:
        // - We don't show suppressed loss records ever (and that's controlled
*************** static void print_results(ThreadId tid, 
*** 851,858 ****
        } else {
           VG_(tool_panic)("unknown loss mode");
        }
-       p_min->num_blocks = 0;
     }

     if (VG_(clo_verbosity) > 0 && !VG_(clo_xml)) {
        UMSG("");
--- 907,916 ----
        } else {
           VG_(tool_panic)("unknown loss mode");
        }
     }
+    VG_(free) ( errlist_array );
+    VG_(HT_destruct) ( errlist_ht );
+ 

     if (VG_(clo_verbosity) > 0 && !VG_(clo_xml)) {
        UMSG("");










#include <stdlib.h>
#include <stdio.h>

/* parameters */

int stack_fan_out = 15;
int stack_depth = 4; 
/* we will create stack_fan_out ^ stack_depth different call stacks */

int malloc_fan = 4;
/* for each call stack, allocate malloc_fan blocks */

int malloc_depth = 2; 
/* for each call stack, allocate data structures having malloc_depth
indirection
   at each malloc-ed level */

int malloc_data = 5;
/* in addition to the pointer needed to maintain the levels; some more
   bytes are allocated simulating the data stored in the data structure */


int free_every_n = 2;
/* every n top blocks, 1 block and all its children will be freed instead of
being kept */

int leak_every_n = 250;
/* every n release block operation, 1 block and its children will be leaked */



struct Chunk {
   struct Chunk* child;
   char   s[];
};

struct Chunk** topblocks;
int freetop = 0;

/* statistics */
long total_malloced = 0;
int blocknr = 0;
int blockfreed = 0;
int blockleaked = 0;
int total_stacks = 0;
int releaseop = 0;

void free_chunks (struct Chunk ** mem)
{
   if (*mem == 0)
      return;

   free_chunks ((&(*mem)->child));

   blockfreed++;
   free (*mem);
   *mem = 0; 
}

void release (struct Chunk ** mem)
{
   releaseop++;

   if (releaseop % leak_every_n == 0)
      {
         blockleaked++;
         *mem = 0; // lose the pointer without free-ing the blocks
      }
   else
      free_chunks (mem);
}

void call_stack (int level)
{
   int call_fan_out = 1;

   if (level == stack_depth)
      {  
         int sz = sizeof(struct Chunk*) + malloc_data;
         int d;
         int f;

         for (f = 0; f < malloc_fan; f++)
            {
               struct Chunk *new;
               struct Chunk *prev = NULL;

               for (d = 0; d < malloc_depth; d++)
                  {
                     new = malloc (sz);
                     total_malloced += sz;
                     blocknr++;
                     new->child = prev;
                     prev = new;
                  }
               topblocks[freetop] = new;

               if (freetop % free_every_n == 0)
                  {
                     release (&topblocks[freetop]);
                  }
               freetop++;
            }

         total_stacks++;
      }
   else
      {
         /* The call below is repeated textually (rather than looped) so that
            each call site has a distinct return address, i.e. each produces a
            different stack trace. */
         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;

         call_stack (level + 1);
         if (call_fan_out == stack_fan_out) return;
         call_fan_out++;


         printf ("maximum stack_fan_out exceeded\n");
    }
}

int main(void)
{
   int d;
   int stacks = 1;
   for (d = 0; d < stack_depth; d++)
      stacks *= stack_fan_out;
   printf ("will generate %d different stacks\n", stacks);
   topblocks = malloc(sizeof(struct Chunk*) * stacks * malloc_fan);
   call_stack (0);
   printf ("total stacks %d\n", total_stacks);
   printf ("total bytes malloc-ed: %ld\n", total_malloced);
   printf ("total blocks malloc-ed: %d\n", blocknr);
   printf ("total blocks free-ed: %d\n", blockfreed);
   printf ("total blocks leak-ed: %d\n", blockleaked);
}
