List:       linux-admin
Subject:    Re: 2GB max file size
From:       Glynn Clements <glynn () gclements ! plus ! com>
Date:       2005-04-02 12:35:42
Message-ID: 16974.37278.127245.186636 () gargle ! gargle ! HOWL


Luca Ferrari wrote:

> > tar will handle input files greater than 2GB.
> >
> > I use files greater than 2GB on a RedHat 8 system with
> > a generic 2.4.20 kernel and tar (version = tar (GNU tar) 1.13.25)
> >
> > I can also use dump/restore for backing up to tape.
> 
> Actually I'm trying with zip, which fails if the file becomes bigger than 2GB. 
> Since I remember that, due to the i-node structure, Unix cannot handle a file 
> greater than 2GB, I was wondering whether the problem was with the filesystem 
> rather than with the program itself. However, before giving up on zip, is there 
> something I can do on the filesystem to handle bigger files?

The issue almost certainly lies with the program, not the filesystem.
The 2GB limit was lifted for ext2 so long ago that it's a distant
memory. Certainly, 2.4.20 has no problems with files larger than 2GB.

The main problem is that the historical Unix API used a "long" to
represent offsets within files (including the offset of the end of the
file, i.e. its size). On a 32-bit system, a "long" is only 32 bits, so
it can only hold values up to 2^31 - 1, i.e. just under 2GB.

A later revision of the API introduced the off_t type to hold file
offsets (and sizes). Because of the amount of historical code which
uses the "long" type, off_t is equivalent to "long" by default. It can
be changed to a 64-bit type at compile time, but you need to ensure
that all code which accesses the file (including any libraries)
supports large files, which is why 64 bits isn't the default.

The most likely reason why you might have problems is that the zip
program was compiled to use a 32-bit off_t type. If the other programs
from your distribution support large files, that may indicate that the
zip program itself would need code modifications (and not just
compilation options) before it will support large files. Early
versions of the zip file format cannot store large files because they
only use 32 bits for the size field.

In any case, if you're backing up to tape, you should be using tar,
cpio or dump rather than zip. The zip format was designed for
random-access devices (e.g. disks), not sequential-access devices
(e.g. tapes), whereas tar, cpio and dump were designed for tapes.

To create a zip archive on tape, you first have to create the file on
disk then copy it to tape; similarly, to read a zip archive, you have
to copy it to disk first. With tar, cpio or dump, you can read/write
directly from/to tape.
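For example (the device name /dev/st0 is an assumption, the usual name for the first SCSI tape on Linux), tar writes its archive as a single sequential stream, so the output can go straight to a tape device or a pipe with no intermediate file:

```shell
# Set up a small directory to archive.
mkdir -p /tmp/lfs_demo
echo "hello" > /tmp/lfs_demo/a.txt

# Straight to tape (hypothetical device name):
#   tar -cf /dev/st0 -C /tmp lfs_demo

# The same archive streamed to a pipe instead of a tape:
tar -cf - -C /tmp lfs_demo | wc -c
```

zip, by contrast, needs a seekable output file (it rewrites headers after compressing), hence the extra staging step on disk.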

-- 
Glynn Clements <glynn@gclements.plus.com>
-
To unsubscribe from this list: send the line "unsubscribe linux-admin" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html