GHC 6.4/mingw32: files larger than 4 GB
and hFileSize/hSetFileSize/c_stat
Bulat Ziganshin
bulatz at HotPOP.com
Tue Apr 12 10:21:54 EDT 2005
Hello Simon,
mingw32 has a 32-bit off_t type; access to large files is supported
by special functions and structures defined in msvcrt.dll:
_CRTIMP __int64 __cdecl _lseeki64(int, __int64, int);
_CRTIMP __int64 __cdecl _telli64(int);
_CRTIMP int __cdecl _fstati64(int, struct _stati64 *);
_CRTIMP int __cdecl _stati64(const char *, struct _stati64 *);
struct _stati64 {
....
__int64 st_size;
....
};
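since off_t is only 32 bits wide here, narrowing a 64-bit size to it keeps only the low 32 bits — which is exactly the truncation reported in the quoted bug. a small self-contained Haskell illustration of that narrowing (the 2^32 + 100 byte size is a made-up example value):

```haskell
import Data.Bits (shiftL)
import Data.Int (Int32, Int64)

-- A file size just over 4 GB: 2^32 + 100 bytes (illustrative value).
bigSize :: Int64
bigSize = (1 `shiftL` 32) + 100   -- 4294967396

-- Narrowing to a 32-bit off_t keeps only the low 32 bits.
truncated :: Int32
truncated = fromIntegral bigSize  -- 100

main :: IO ()
main = print truncated
```

running this prints 100, not 4294967396 — the high bits of the real size are silently lost.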
so we need to either

1) define COff as __int64 instead of off_t and rewrite the existing
functions to work with 64-bit file sizes

or

2) define a new set of low-level functions that work with 64-bit file
sizes, and define the high-level functions in terms of these 64-bit
functions on Win32

which solution would be better? in the first case COff will no longer
be equal to off_t, which some applications may assume; in the second
case we add a lot of duplicated code
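the shape of option 2 can be sketched as a thin dispatch layer: keep the existing 32-bit entry points and add 64-bit primitives that the high-level functions call on Win32. in this sketch the foreign calls are stubbed out with hypothetical stand-ins (in a real patch they would be `foreign import`s of stat() and _stati64() from msvcrt.dll); all names and the simulated 2^32 + 100 byte size are illustrative, not GHC's actual API:

```haskell
import Data.Bits (shiftL)
import Data.Int (Int32, Int64)

-- Hypothetical stand-ins for the low-level C calls; a real patch
-- would foreign-import stat() (32-bit off_t) and _stati64() here.
-- The simulated file is 2^32 + 100 bytes.
c_stat_size :: FilePath -> IO Int32
c_stat_size _ = return 100                         -- only low 32 bits survive

c_stati64_size :: FilePath -> IO Int64
c_stati64_size _ = return ((1 `shiftL` 32) + 100)  -- full 64-bit size

-- In option 2, the high-level function is defined (on Win32) in terms
-- of the new 64-bit primitive, so sizes above 4 GB are reported intact.
hFileSize64 :: FilePath -> IO Integer
hFileSize64 path = fromIntegral <$> c_stati64_size path

main :: IO ()
main = hFileSize64 "big.bin" >>= print
```

the high-level code stays the same on every platform; only the low-level primitive differs per OS, which is the duplication the second option trades for keeping COff untouched.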
are there any other (GHC-supported) environments that support 64-bit
file sizes but have only a 32-bit off_t?
(sorry for my bad English, I'm far from a native speaker)
Friday, April 08, 2005, 3:33:36 PM, you wrote:
SPJ> By all means. If someone cares to send us a patch, we'll incorporate
SPJ> it.
SPJ> | While the GHC I/O library on the mingw32 platform perfectly reads and
SPJ> | writes files larger than 4 GB, the functions hFileSize/c_fstat,
SPJ> | hSetFileSize/c_ftruncate and c_stat are still tied to C functions
SPJ> | returning 32-bit values, and as a result truncate larger sizes to
SPJ> | their low 32 bits. Can this behaviour be fixed in the next bug-fix
SPJ> | version?
--
Best regards,
Bulat mailto:bulatz at HotPOP.com
More information about the Glasgow-haskell-users
mailing list