zipindex

package module
v0.5.0
Published: Jan 7, 2026 License: Apache-2.0 Imports: 18 Imported by: 10

README

zipindex


zipindex provides a size-optimized representation of a zip file directory, allowing individual files inside a ZIP file to be decompressed without re-reading the file index for every file.

It provides only the minimal data needed for successful decompression and CRC checks.

Custom metadata can be stored per file and filtering can be performed on the incoming files.

Currently, up to 100 million files per zip file are supported. If a streaming format is added, this limit may be lifted.

Usage

Indexing

Indexing is performed on the last part of a complete ZIP file.

Three methods can be used:

The zipindex.ReadDir function parses a raw buffer taken from the end of the file. If the buffer isn't enough to read the directory, zipindex.ErrNeedMoreData is returned, which reports how much data from the end of the file is needed to read the directory.

Alternatively, zipindex.ReadFile will open a file on disk and read the directory from that.

Finally, zipindex.ReaderAt reads the index from anything implementing the io.ReaderAt interface.

By default, only "regular" files are indexed, meaning directories and other entries are skipped, as well as files for which a decompressor isn't registered.

A custom filter function can be provided to change the default filtering. This also allows adding custom data for each file if more information is needed.

See examples in the documentation

Serializing

Before serializing, it is recommended to call OptimizeSize() on the returned files. This sorts the entries and removes any redundant CRC information.

The files are serialized using the Serialize() method. The information can later be recreated using zipindex.DeserializeFiles, or a single file can be located with zipindex.FindSerialized.

See examples in the documentation

Accessing File Content

To read a file, you will need 1) the serialized index and 2) access to the ZIP file itself. Once you have retrieved the information for the file you want to decompress from the index, forward the zip file reader to the offset specified by the returned information.

A file contains the following information:

type File struct {
    Name               string // Name of the file as stored in the zip.
    CompressedSize64   uint64 // Size of compressed data, excluding ZIP headers.
    UncompressedSize64 uint64 // Size of the Uncompressed data.
    Offset             int64  // Offset where file data header starts.
    CRC32              uint32 // CRC of the uncompressed data.
    Method             uint16 // Storage method.
    Flags              uint16 // General purpose bit flag

    Custom map[string]string
}

First an io.Reader must be forwarded to the absolute offset in Offset. It is up to the caller to decide how to achieve that.

To open an individual file from the index, use (*File).Open(r io.Reader) with the forwarded Reader to open the content.

Similar to the stdlib zip package, not all methods/flags may be supported.

For expert users, (*File).OpenRaw allows access to the compressed data.

Compression Methods

By default, zipindex keeps files stored uncompressed or deflate compressed. This covers the most commonly seen compression methods.

Furthermore, files compressed with zstandard as method 93 will be preserved and can be read back.

Use RegisterDecompressor to register non-standard decompressors.

Layered Indexes

The LayeredIndex[T] type provides a way to combine multiple zip indexes into a single searchable entity. This is useful when you need to overlay multiple archives or apply incremental updates without rebuilding the entire index.

Key Features

  • Generic type parameter: Each layer is associated with a comparable reference type T (e.g., version number, timestamp, file path)
  • Override semantics: Files in newer layers override files with the same path in older layers
  • Delete layers: Special layers that remove files from previous layers
  • Efficient lookups: Find files across all layers with proper precedence

Basic Usage

// Create a new layered index with string references
layered := zipindex.NewLayeredIndex[string]()

// Add base layer
baseFiles, _ := zipindex.ReadFile("base.zip")
err := layered.AddLayer(baseFiles, "v1.0")

// Add update layer (overrides files from base)
updateFiles, _ := zipindex.ReadFile("update.zip")
err = layered.AddLayer(updateFiles, "v1.1")

// Add a delete layer (removes specified files)
deleteFiles := zipindex.Files{{Name: "obsolete.txt"}}
err = layered.AddDeleteLayer(deleteFiles, "cleanup")

// Find a file across all layers
file, found := layered.Find("readme.txt")
if found {
    // file.File contains the file info
    // file.LayerRef contains the layer reference (e.g., "v1.1")
}

// Iterate over all files
for ref, file := range layered.FilesIter() {
    fmt.Printf("File %s from layer %v\n", file.Name, ref)
}

// Merge all layers into a single index; this loses the reference information
merged := layered.ToSingleIndex()
serialized, _ := merged.Serialize()

API Reference

Creation and Layer Management
  • NewLayeredIndex[T]() - Create a new empty layered index
  • AddLayer(files, ref) - Add a layer (returns error if ref already exists)
  • AddDeleteLayer(files, ref) - Add a delete layer to remove files from previous layers
  • RemoveLayer(index) - Remove layer by index
  • RemoveLayerByRef(ref) - Remove all layers with the given reference
  • Clear() - Remove all layers
File Access
  • Find(name) - Find a file across all layers, returns (*FileWithRef[T], bool)
  • FindInLayer(name, ref) - Find a file in a specific layer only
  • FilesIter() - Iterator that yields (T, File) pairs on merged indexes
  • Files() - Get all files as []FileWithRef[T] after applying layer operations
  • HasFile(name) - Check if a file exists
Layer Information
  • LayerCount() - Number of layers
  • GetLayerRef(index) - Get reference for a layer
  • FileCount() - Total unique files after applying operations
  • IsEmpty() - True if no files remain after applying all operations
Conversion
  • ToSingleIndex() - Merge all layers into a single Files collection
Serialization
  • SerializeLayered(RefSerializer[T]) - Serialize the layered index preserving all layers
  • DeserializeLayered[T](data, RefSerializer[T]) - Reconstruct a layered index from serialized data

The serialization requires providing a RefSerializer[T] with functions to convert your reference type to/from bytes:

// Example for string references
stringSerializer := RefSerializer[string]{
    Marshal: func(s string) ([]byte, error) {
        return []byte(s), nil
    },
    Unmarshal: func(b []byte) (string, error) {
        return string(b), nil
    },
}

// Serialize
data, err := layered.SerializeLayered(stringSerializer)

// Deserialize
layered2, err := DeserializeLayered(data, stringSerializer)
Important Notes
  1. Deletion semantics: Delete layers only remove files that exist in previous layers. Files added in subsequent layers are not affected.

  2. Directory handling: When a file is deleted, empty parent directories are automatically removed. A directory is kept if it still contains any files.

  3. Duplicate references: The same reference cannot be used twice. Attempting to add a layer with an existing reference returns an error.

  4. Performance: The layered index maintains files in memory. For large numbers of layers or files, consider merging to a single index periodically.

License

zipindex is released under the Apache License v2.0. You can find the complete text in the file LICENSE.

zipindex contains code that is Copyright (c) 2009 The Go Authors. See GO_LICENSE file for license.

Contributing

Contributions are welcome, please send PRs for any enhancements.

Documentation

Overview

Package zipindex provides a size-optimized representation of a zip file to allow decompressing files without reading the full zip file index.

It provides only the minimal data needed for successful decompression and CRC checks.

Custom metadata can be stored per file and filtering can be performed on the incoming files.


Constants

const (
	Store   uint16 = 0                    // no compression
	Deflate uint16 = 8                    // DEFLATE compressed
	Zstd    uint16 = zstd.ZipMethodWinZip // Zstd in zip.
)

Compression methods.

const MaxCustomEntries = 1000

MaxCustomEntries is the maximum number of custom entries per file.

const MaxFiles = 1_000_000_000

MaxFiles is the maximum number of files inside a zip file.

const MaxIndexSize = 128 << 20

MaxIndexSize is the maximum index size, uncompressed.

Variables

var (
	// ErrFormat is returned when zip file cannot be parsed.
	ErrFormat = errors.New("zip: not a valid zip file")
	// ErrAlgorithm is returned if an unsupported compression type is used.
	ErrAlgorithm = errors.New("zip: unsupported compression algorithm")
	// ErrChecksum is returned if a file fails a CRC check.
	ErrChecksum = errors.New("zip: checksum error")
)
var ErrMaxSizeExceeded = errors.New("index maximum size exceeded")

ErrMaxSizeExceeded is returned if the maximum size of data is exceeded.

var ErrTooManyCustomEntries = errors.New("custom entry count exceeded")

ErrTooManyCustomEntries is returned when a file in a zip archive has too many custom entries.

var ErrTooManyFiles = errors.New("too many files")

ErrTooManyFiles is returned when a zip file contains too many files.

Functions

func RegisterDecompressor

func RegisterDecompressor(method uint16, dcomp Decompressor)

RegisterDecompressor allows custom decompressors for a specified method ID. The common methods Store (0), Deflate (8), and Zstandard (93) are built in.

Types

type Decompressor

type Decompressor func(r io.Reader) io.ReadCloser

A Decompressor returns a new decompressing reader, reading from r. The ReadCloser's Close method must be used to release associated resources. The Decompressor itself must be safe to invoke from multiple goroutines simultaneously, but each returned reader will be used only by one goroutine at a time.

type ErrNeedMoreData

type ErrNeedMoreData struct {
	FromEnd int64
}

ErrNeedMoreData is returned by ReadDir when more data is required to read the directory. The exact number of bytes from the end of the file is provided. It is reasonable to reject values that are too large, to avoid running out of memory.

func (ErrNeedMoreData) Error

func (e ErrNeedMoreData) Error() string

Error returns the error as string.

type File

type File struct {
	Name               string // Name of the file as stored in the zip.
	CompressedSize64   uint64 // Size of compressed data, excluding ZIP headers.
	UncompressedSize64 uint64 // Size of the Uncompressed data.
	Offset             int64  // Offset where file data header starts.
	CRC32              uint32 // CRC of the uncompressed data.
	Method             uint16 // Storage method.
	Flags              uint16 // General purpose bit flag

	// Custom data.
	Custom map[string]string
}

File is a sparse representation of a File inside a zip file.

func DefaultFileFilter

func DefaultFileFilter(dst *File, entry *ZipDirEntry) *File

DefaultFileFilter will filter out all entries that are not regular files or cannot be decompressed.

func FindSerialized

func FindSerialized(b []byte, name string) (*File, error)

FindSerialized will locate a file by name and return it. This is less resource intensive than decoding all files if only one is requested. Expected speed scales O(n) for n files. Returns nil, io.EOF if not found.

Example

This example demonstrates how to read the index of a file on disk and find a single file in the serialized index.

package main

import (
	"fmt"

	"github.com/minio/zipindex"
)

func main() {
	files, err := zipindex.ReadFile("testdata/go-with-datadesc-sig.zip", nil)
	if err != nil {
		panic(err)
	}
	files.OptimizeSize()
	serialized, err := files.Serialize()
	if err != nil {
		panic(err)
	}

	file, err := zipindex.FindSerialized(serialized, "bar.txt")
	if err != nil {
		panic(err)
	}
	fmt.Printf("bar.txt: %+v", *file)
}
Output:

bar.txt: {Name:bar.txt CompressedSize64:4 UncompressedSize64:4 Offset:57 CRC32:0 Method:0 Flags:8 Custom:map[]}

func (*File) DecodeMsg

func (z *File) DecodeMsg(dc *msgp.Reader) (err error)

DecodeMsg implements msgp.Decodable

func (*File) EncodeMsg

func (z *File) EncodeMsg(en *msgp.Writer) (err error)

EncodeMsg implements msgp.Encodable

func (*File) MarshalMsg

func (z *File) MarshalMsg(b []byte) (o []byte, err error)

MarshalMsg implements msgp.Marshaler

func (*File) Msgsize

func (z *File) Msgsize() (s int)

Msgsize returns an upper bound estimate of the number of bytes occupied by the serialized message

func (*File) Open

func (f *File) Open(r io.Reader) (io.ReadCloser, error)

Open returns a ReadCloser that provides access to the File's contents. The Reader 'r' must be forwarded to f.Offset before being provided.

func (*File) OpenRaw

func (f *File) OpenRaw(r io.Reader) (io.Reader, error)

OpenRaw returns a Reader that returns the *compressed* output of the file.

func (*File) UnmarshalMsg

func (f *File) UnmarshalMsg(bts []byte) (o []byte, err error)

UnmarshalMsg implements msgp.Unmarshaler

type FileFilter

type FileFilter = func(dst *File, entry *ZipDirEntry) *File

FileFilter allows transforming the incoming data. If the returned file is nil it will not be added. Custom fields can be added. Note the Custom field will usually be nil.

Example

ExampleFileFilter demonstrates how to filter incoming files.

package main

import (
	"fmt"

	"github.com/minio/zipindex"
)

func main() {
	files, err := zipindex.ReadFile("testdata/unix.zip",
		func(dst *zipindex.File, entry *zipindex.ZipDirEntry) *zipindex.File {
			if dst.Name == "hello" {
				// Filter out on specific properties.
				return nil
			}
			// Add custom data.
			if dst.Custom == nil {
				dst.Custom = make(map[string]string, 3)
			}
			dst.Custom["modified"] = entry.Modified.String()
			dst.Custom["perm"] = fmt.Sprintf("0%o", entry.Mode().Perm())
			if len(entry.Comment) > 0 {
				dst.Custom["comment"] = entry.Comment
			}
			return dst
		})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Got %d files\n", len(files))
	for i, file := range files {
		fmt.Printf("%d: %+v\n", i, file)
	}
}
Output:

Got 3 files
0: {Name:dir/bar CompressedSize64:6 UncompressedSize64:6 Offset:71 CRC32:2055117726 Method:0 Flags:0 Custom:map[modified:2011-12-08 10:04:50 +0000 +0000 perm:0666]}
1: {Name:dir/empty/ CompressedSize64:0 UncompressedSize64:0 Offset:142 CRC32:0 Method:0 Flags:0 Custom:map[modified:2011-12-08 10:08:06 +0000 +0000 perm:0777]}
2: {Name:readonly CompressedSize64:12 UncompressedSize64:12 Offset:210 CRC32:3127775578 Method:0 Flags:0 Custom:map[modified:2011-12-08 10:06:08 +0000 +0000 perm:0444]}

type FileWithRef added in v0.5.0

type FileWithRef[T comparable] struct {
	File
	LayerRef T
}

FileWithRef represents a file with its layer reference.

type Files

type Files []File

Files is a collection of files.

func DeserializeFiles

func DeserializeFiles(b []byte) (Files, error)

DeserializeFiles will deserialize the files.

Example
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"os"

	"github.com/minio/zipindex"
)

func main() {
	exitOnErr := func(err error) {
		if err != nil {
			log.Fatalln(err)
		}
	}

	b, err := os.ReadFile("testdata/big.zip")
	exitOnErr(err)
	// We only need the end of the file to parse the directory.
	// Usually this should be at least 64K on initial try.
	sz := 64 << 10
	var files zipindex.Files
	files, err = zipindex.ReadDir(b[len(b)-sz:], int64(len(b)), nil)
	// Omitted: Check if ErrNeedMoreData and retry with more data
	exitOnErr(err)

	// Calling OptimizeSize will make the size as efficient as possible
	// without losing data.
	files.OptimizeSize()

	// Serialize files to binary.
	serialized, err := files.Serialize()
	exitOnErr(err)

	// This output may change if compression is improved.
	// Output is rounded up.
	fmt.Printf("Size of serialized data: %dKB\n", (len(serialized)+1023)/1024)

	// StripCRC(true) will strip CRC, even if there is no file descriptor.
	files.StripCRC(true)
	// StripFlags(1<<3) will strip all flags that aren't a file descriptor flag (bit 3).
	files.StripFlags(1 << 3)
	noCRC, err := files.Serialize()
	exitOnErr(err)

	// This output may change if compression is improved.
	// Output is rounded up.
	fmt.Printf("Size of serialized data without CRC: %dKB\n", (len(noCRC)+1023)/1024)

	// Deserialize the content (with CRC).
	files, err = zipindex.DeserializeFiles(serialized)
	exitOnErr(err)

	file := files.Find("file-10.txt")
	fmt.Printf("Reading file: %+v\n", *file)

	// Create a reader with entire zip file...
	rs := bytes.NewReader(b)
	// Seek to the file offset.
	_, err = rs.Seek(file.Offset, io.SeekStart)
	exitOnErr(err)

	// Provide the forwarded reader.
	rc, err := file.Open(rs)
	exitOnErr(err)
	defer rc.Close()

	// Read the zip file content.
	content, err := io.ReadAll(rc)
	exitOnErr(err)

	fmt.Printf("File content is '%s'\n", string(content))

}
Output:

Size of serialized data: 6KB
Size of serialized data without CRC: 1KB
Reading file: {Name:file-10.txt CompressedSize64:2 UncompressedSize64:2 Offset:410 CRC32:2707236321 Method:0 Flags:0 Custom:map[]}
File content is '10'

func ReadDir

func ReadDir(buf []byte, zipSize int64, filter FileFilter) (Files, error)

ReadDir will read the directory from the provided buffer. Regular files that are expected to be decompressible will be returned. ErrNeedMoreData may be returned if more data is required to read the directory. For the initial scan, at least 64KiB (or the entire file, if smaller) should be provided; more data makes it more likely that the entire directory can be read. The total size of the zip file must be provided. A custom filter can be provided; if nil, DefaultFileFilter is used.

Example
package main

import (
	"errors"
	"fmt"
	"os"

	"github.com/minio/zipindex"
)

func main() {
	b, err := os.ReadFile("testdata/big.zip")
	if err != nil {
		panic(err)
	}
	// We only need the end of the file to parse the directory.
	// Usually this should be at least 64K on initial try.
	sz := 10 << 10
	var files zipindex.Files
	for {
		files, err = zipindex.ReadDir(b[len(b)-sz:], int64(len(b)), nil)
		if err == nil {
			fmt.Printf("Got %d files\n", len(files))
			break
		}
		var terr zipindex.ErrNeedMoreData
		if errors.As(err, &terr) {
			if terr.FromEnd > 1<<20 {
				panic("we will only provide max 1MB data")
			}
			sz = int(terr.FromEnd)
			fmt.Printf("Retrying with %d bytes at the end of file\n", sz)
		} else {
			// Unable to parse...
			panic(err)
		}
	}

	fmt.Printf("First file: %+v", files[0])
}
Output:

Retrying with 57912 bytes at the end of file
Got 1000 files
First file: {Name:file-0.txt CompressedSize64:1 UncompressedSize64:1 Offset:0 CRC32:4108050209 Method:0 Flags:0 Custom:map[]}

func ReadFile

func ReadFile(name string, filter FileFilter) (Files, error)

ReadFile will read the directory from a file. If the ZIP file directory exceeds 100MB it will be rejected.

Example

ExampleReadFile demonstrates how to read the index of a file on disk.

package main

import (
	"fmt"

	"github.com/minio/zipindex"
)

func main() {
	files, err := zipindex.ReadFile("testdata/go-with-datadesc-sig.zip", nil)
	if err != nil {
		panic(err)
	}
	fmt.Printf("Got %d files\n", len(files))
	fmt.Printf("First file: %+v", files[0])
}
Output:

Got 2 files
First file: {Name:foo.txt CompressedSize64:4 UncompressedSize64:4 Offset:0 CRC32:2117232040 Method:0 Flags:8 Custom:map[]}

func ReaderAt

func ReaderAt(r io.ReaderAt, size, maxDir int64, filter FileFilter) (Files, error)

ReaderAt will read the directory from an io.ReaderAt. The total size of the zip file must be provided. If the ZIP file directory exceeds maxDir bytes, it will be rejected.

Example

This example demonstrates how to read the index from an io.ReaderAt.

package main

import (
	"fmt"
	"os"

	"github.com/minio/zipindex"
)

func main() {
	f, err := os.Open("testdata/big.zip")
	if err != nil {
		panic(err)
	}
	fi, err := f.Stat()
	if err != nil {
		panic(err)
	}

	// Read and allow up to 10MB index.
	files, err := zipindex.ReaderAt(f, fi.Size(), 10<<20, nil)
	if err != nil {
		panic(err)
	}
	fmt.Printf("Got %d files\n", len(files))
	fmt.Printf("First file: %+v", files[0])
}
Output:

Got 1000 files
First file: {Name:file-0.txt CompressedSize64:1 UncompressedSize64:1 Offset:0 CRC32:4108050209 Method:0 Flags:0 Custom:map[]}

func (Files) Find

func (f Files) Find(name string) *File

Find the file with the provided name. Search is linear.

func (Files) OptimizeSize

func (f Files) OptimizeSize()

OptimizeSize will sort entries and strip CRC data when the file has a file descriptor.

func (*Files) RemoveInsecurePaths added in v0.3.1

func (f *Files) RemoveInsecurePaths()

RemoveInsecurePaths will remove any file with a path deemed insecure. These are files for which !filepath.IsLocal(file.Name) is true or whose name contains a backslash.

func (Files) Serialize

func (f Files) Serialize() ([]byte, error)

Serialize the files.

func (Files) Sort

func (f Files) Sort()

Sort files by offset in zip file. Typically, directories are already sorted by offset. This will usually provide the smallest possible serialized size.

func (Files) SortByName added in v0.4.0

func (f Files) SortByName()

SortByName will sort files by file name in zip file.

func (Files) StripCRC

func (f Files) StripCRC(all bool)

StripCRC will zero out the CRC for all files that have a data descriptor (which will contain a CRC), or for all files if 'all' is true.

func (Files) StripFlags

func (f Files) StripFlags(mask uint16)

StripFlags will zero out the Flags, except the ones provided in mask.

type LayeredIndex added in v0.5.0

type LayeredIndex[T comparable] struct {
	// contains filtered or unexported fields
}

LayeredIndex represents multiple indexes layered on top of each other. Files from newer layers override files from older layers with the same path.

func DeserializeLayered added in v0.5.0

func DeserializeLayered[T comparable](data []byte, refSerializer RefSerializer[T]) (*LayeredIndex[T], error)

DeserializeLayered reconstructs a layered index from serialized data. Uses concurrent deserialization for better performance with large indexes.

func NewLayeredIndex added in v0.5.0

func NewLayeredIndex[T comparable]() *LayeredIndex[T]

NewLayeredIndex creates a new empty layered index.

func (*LayeredIndex[T]) AddDeleteLayer added in v0.5.0

func (l *LayeredIndex[T]) AddDeleteLayer(index Files, ref T) error

AddDeleteLayer adds a deletion layer with the given reference. Files in this layer will be removed from the final result. Returns an error if a layer with the same reference already exists. Files are sorted by name for efficient lookups.

func (*LayeredIndex[T]) AddLayer added in v0.5.0

func (l *LayeredIndex[T]) AddLayer(index Files, ref T) error

AddLayer adds a new index layer with the given reference. Files in this layer will override files with the same path in previous layers. Returns an error if a layer with the same reference already exists. Files are sorted by name for efficient lookups.

func (*LayeredIndex[T]) Clear added in v0.5.0

func (l *LayeredIndex[T]) Clear()

Clear removes all layers from the index.

func (*LayeredIndex[T]) FileCount added in v0.5.0

func (l *LayeredIndex[T]) FileCount() int

FileCount returns the total number of unique files after applying all layer operations.

func (*LayeredIndex[T]) Files added in v0.5.0

func (l *LayeredIndex[T]) Files() []FileWithRef[T]

Files returns all files in the layered index after applying layer operations. Files from newer layers override files from older layers with the same path. Delete layers remove files that exist in previous layers.

func (*LayeredIndex[T]) FilesIter added in v0.5.0

func (l *LayeredIndex[T]) FilesIter() iter.Seq2[T, File]

FilesIter returns an iterator over all files in the layered index. Each iteration yields the layer reference and the file. Files are returned in name order after applying all layer operations.

func (*LayeredIndex[T]) Find added in v0.5.0

func (l *LayeredIndex[T]) Find(name string) (*FileWithRef[T], bool)

Find searches for a file by name across all layers using binary search. Returns the file and its layer reference if found. Delete layers remove the file if it exists in previous layers. Empty directories are automatically considered deleted.

func (*LayeredIndex[T]) FindInLayer added in v0.5.0

func (l *LayeredIndex[T]) FindInLayer(name string, ref T) (*File, bool)

FindInLayer searches for a file by name in a specific layer using binary search. Returns the file if found in the specified layer.

func (*LayeredIndex[T]) GetLayerRef added in v0.5.0

func (l *LayeredIndex[T]) GetLayerRef(index int) (T, bool)

GetLayerRef returns the reference for the layer at the given index. Returns the zero value of T and false if the index is out of bounds.

func (*LayeredIndex[T]) HasFile added in v0.5.0

func (l *LayeredIndex[T]) HasFile(name string) bool

HasFile returns true if the file exists in the layered index after applying all operations.

func (*LayeredIndex[T]) IsEmpty added in v0.5.0

func (l *LayeredIndex[T]) IsEmpty() bool

IsEmpty returns true if the index has no files after applying all layer operations. This accounts for files that have been deleted by delete layers.

func (*LayeredIndex[T]) LayerCount added in v0.5.0

func (l *LayeredIndex[T]) LayerCount() int

LayerCount returns the number of layers in the index.

func (*LayeredIndex[T]) RemoveLayer added in v0.5.0

func (l *LayeredIndex[T]) RemoveLayer(index int) error

RemoveLayer removes the layer at the given index. Returns an error if the index is out of bounds.

func (*LayeredIndex[T]) RemoveLayerByRef added in v0.5.0

func (l *LayeredIndex[T]) RemoveLayerByRef(ref T) int

RemoveLayerByRef removes all layers with the given reference. Returns the number of layers removed.

func (*LayeredIndex[T]) SerializeLayered added in v0.5.0

func (l *LayeredIndex[T]) SerializeLayered(refSerializer RefSerializer[T]) ([]byte, error)

SerializeLayered serializes the layered index with all layers preserved. Uses concurrent serialization for better performance with large indexes.

func (*LayeredIndex[T]) ToSingleIndex added in v0.5.0

func (l *LayeredIndex[T]) ToSingleIndex() Files

ToSingleIndex merges all layers into a single Files collection. Files from newer layers override files from older layers with the same path. Files in delete layers are removed from the result.

type RefSerializer added in v0.5.0

type RefSerializer[T comparable] struct {
	// Marshal converts a reference to bytes
	Marshal func(T) ([]byte, error)
	// Unmarshal converts bytes to a reference
	Unmarshal func([]byte) (T, error)
}

RefSerializer provides functions to convert layer references to/from byte slices.

type ZipDirEntry

type ZipDirEntry struct {
	// Name is the name of the file.
	//
	// It must be a relative path, not start with a drive letter (such as "C:"),
	// and must use forward slashes instead of back slashes. A trailing slash
	// indicates that this file is a directory and should have no data.
	//
	// When reading zip files, the Name field is populated from
	// the zip file directly and is not validated for correctness.
	// It is the caller's responsibility to sanitize it as
	// appropriate, including canonicalizing slash directions,
	// validating that paths are relative, and preventing path
	// traversal through filenames ("../../../").
	Name string

	// Comment is any arbitrary user-defined string shorter than 64KiB.
	Comment string

	// NonUTF8 indicates that Name and Comment are not encoded in UTF-8.
	//
	// By specification, the only other encoding permitted should be CP-437,
	// but historically many ZIP readers interpret Name and Comment as whatever
	// the system's local character encoding happens to be.
	//
	// This flag should only be set if the user intends to encode a non-portable
	// ZIP file for a specific localized region. Otherwise, the Writer
	// automatically sets the ZIP format's UTF-8 flag for valid UTF-8 strings.
	NonUTF8 bool

	CreatorVersion uint16
	ReaderVersion  uint16
	Flags          uint16

	// Method is the compression method. If zero, Store is used.
	Method uint16

	// Modified is the modified time of the file.
	//
	// When reading, an extended timestamp is preferred over the legacy MS-DOS
	// date field, and the offset between the times is used as the timezone.
	// If only the MS-DOS date is present, the timezone is assumed to be UTC.
	//
	// When writing, an extended timestamp (which is timezone-agnostic) is
	// always emitted. The legacy MS-DOS date field is encoded according to the
	// location of the Modified time.
	Modified time.Time

	CRC32              uint32
	CompressedSize64   uint64
	UncompressedSize64 uint64
	Extra              []byte
	ExternalAttrs      uint32 // Meaning depends on CreatorVersion
	// contains filtered or unexported fields
}

ZipDirEntry describes a file within a zip file. See the zip spec for details.

func (*ZipDirEntry) Mode

func (h *ZipDirEntry) Mode() (mode os.FileMode)

Mode returns the permission and mode bits for the ZipDirEntry.
