Hi,

Recently Waldo fixed the dcopserver to handle possibly corrupt data better (I had a crash because of this, twice). Just 15 minutes ago a friend here at the uni had konqueror crashing because of a corrupted konq_history file.

What do the two have in common? Both (konq/dcop) use QDataStream to marshal data and demarshal it without any checks. This can result in errors like an attempt to demarshal a QString with a length of 5124361243561 bytes (which ends up in a nice malloc() call :-), just because the length field (or the data in general) was corrupted.

Sure, we can't recover from corrupted data, but at least we could bail out if we had a way to check the consistency of the data before starting to extract and interpret it.

So how about using checksums to verify correctness? It is surely nonsense to add a checksum for each and every object, but how about doing it once for the whole chunk of data?

What I'm thinking of in detail is a KCheckSumByteArray (TODO: find a better name :), which inherits from QByteArray and adds a checksum property (a simple CRC32 would do) when being marshalled into a QDataStream, and likewise when reading from a QDataStream checks the checksum and sets an isValid() boolean property appropriately.

(I volunteer to implement this, just in case. A rough sketch follows below the signature.)

What do you think?

Bye,
Simon
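
P.S. Here is a minimal sketch of what I have in mind, just to make the idea concrete. Names, the stream layout (checksum first, then the raw array) and the use of Qt's built-in qChecksum() (a 16-bit CRC, not CRC32) are all placeholders to keep the sketch short and self-contained; it is written against Qt 4/5-style headers and types (quint16), not the Qt 3 spellings.

#include <QByteArray>
#include <QDataStream>
#include <QtGlobal>   // qChecksum()

class KCheckSumByteArray : public QByteArray
{
public:
    KCheckSumByteArray() : m_valid( true ) {}
    KCheckSumByteArray( const QByteArray &other )
        : QByteArray( other ), m_valid( true ) {}

    // True if the checksum matched when this array was read from a stream.
    bool isValid() const { return m_valid; }
    void setValid( bool valid ) { m_valid = valid; }

private:
    bool m_valid;
};

// Marshalling: write the checksum first, then the plain QByteArray.
QDataStream &operator<<( QDataStream &stream, const KCheckSumByteArray &array )
{
    quint16 sum = qChecksum( array.constData(), array.size() );
    stream << sum;
    stream << static_cast<const QByteArray &>( array );
    return stream;
}

// Demarshalling: read checksum and data, then verify before anyone starts
// interpreting the contents.
QDataStream &operator>>( QDataStream &stream, KCheckSumByteArray &array )
{
    quint16 expected;
    stream >> expected;
    stream >> static_cast<QByteArray &>( array );
    array.setValid( qChecksum( array.constData(), array.size() ) == expected );
    return stream;
}

The reading side (say, the konq_history loader or dcopserver) would then check isValid() right after the >> and bail out early, instead of handing a corrupted buffer to the real demarshalling code and ending up in that bogus malloc().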