torchreid.utils

Average Meter

class torchreid.utils.avgmeter.AverageMeter[source]

Computes and stores the average and current value.

Examples::
>>> # Initialize a meter to record loss
>>> losses = AverageMeter()
>>> # Update meter after every minibatch update
>>> losses.update(loss_value, batch_size)
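The running statistics can then be read back; a minimal sketch, assuming the standard val and avg attributes of this meter:
>>> # Latest value and running average over all updates so far
>>> print('last loss {:.4f}  avg loss {:.4f}'.format(losses.val, losses.avg))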
class torchreid.utils.avgmeter.MetricMeter(delimiter='\t')[source]

A collection of metrics.

Source: https://github.com/KaiyangZhou/Dassl.pytorch

Examples::
>>> # 1. Create an instance of MetricMeter
>>> metric = MetricMeter()
>>> # 2. Update using a dictionary as input
>>> input_dict = {'loss_1': value_1, 'loss_2': value_2}
>>> metric.update(input_dict)
>>> # 3. Convert to string and print
>>> print(str(metric))

Loggers

class torchreid.utils.loggers.Logger(fpath=None)[source]

Writes console output to an external text file.

Imported from https://github.com/Cysu/open-reid/blob/master/reid/utils/logging.py

Parameters

fpath (str) – path to save the logging file.

Examples::
>>> import sys
>>> import os
>>> import os.path as osp
>>> from torchreid.utils import Logger
>>> save_dir = 'log/resnet50-softmax-market1501'
>>> log_name = 'train.log'
>>> sys.stdout = Logger(osp.join(save_dir, log_name))
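Once sys.stdout is redirected, ordinary print calls are mirrored to both the console and the log file; a minimal follow-up sketch using the names defined above:
>>> print('=> Start training')  # appears on screen and in log/resnet50-softmax-market1501/train.log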
class torchreid.utils.loggers.RankLogger(sources, targets)[source]

Records the rank-1 matching accuracy obtained on each test dataset at specified evaluation steps, and provides a method to print a summary of the results for easy analysis.

Parameters
  • sources (str or list) – source dataset name(s).

  • targets (str or list) – target dataset name(s).

Examples::
>>> from torchreid.utils import RankLogger
>>> s = 'market1501'
>>> t = 'market1501'
>>> ranklogger = RankLogger(s, t)
>>> ranklogger.write(t, 10, 0.5)
>>> ranklogger.write(t, 20, 0.7)
>>> ranklogger.write(t, 30, 0.9)
>>> ranklogger.show_summary()
>>> # You will see:
>>> # => Show performance summary
>>> # market1501 (source)
>>> # - epoch 10   rank1 50.0%
>>> # - epoch 20   rank1 70.0%
>>> # - epoch 30   rank1 90.0%
>>> # If there are multiple test datasets
>>> t = ['market1501', 'dukemtmcreid']
>>> ranklogger = RankLogger(s, t)
>>> ranklogger.write(t[0], 10, 0.5)
>>> ranklogger.write(t[0], 20, 0.7)
>>> ranklogger.write(t[0], 30, 0.9)
>>> ranklogger.write(t[1], 10, 0.1)
>>> ranklogger.write(t[1], 20, 0.2)
>>> ranklogger.write(t[1], 30, 0.3)
>>> ranklogger.show_summary()
>>> # You can see:
>>> # => Show performance summary
>>> # market1501 (source)
>>> # - epoch 10   rank1 50.0%
>>> # - epoch 20   rank1 70.0%
>>> # - epoch 30   rank1 90.0%
>>> # dukemtmcreid (target)
>>> # - epoch 10   rank1 10.0%
>>> # - epoch 20   rank1 20.0%
>>> # - epoch 30   rank1 30.0%
show_summary()[source]

Shows saved results.

write(name, epoch, rank1)[source]

Writes result.

Parameters
  • name (str) – dataset name.

  • epoch (int) – current epoch.

  • rank1 (float) – rank1 result.

Generic Tools

torchreid.utils.tools.mkdir_if_missing(dirname)[source]

Creates dirname if it is missing.
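A minimal usage sketch (the directory name is illustrative):
>>> from torchreid.utils.tools import mkdir_if_missing
>>> mkdir_if_missing('log/my_experiment')  # no error if the directory already exists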

torchreid.utils.tools.check_isfile(fpath)[source]

Checks if the given path is a file.

Parameters

fpath (str) – file path.

Returns

bool
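A minimal usage sketch (the checkpoint path is illustrative):
>>> from torchreid.utils.tools import check_isfile
>>> if check_isfile('log/my_model/model.pth.tar-10'):
>>>     print('checkpoint found')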

torchreid.utils.tools.read_json(fpath)[source]

Reads json file from a path.

torchreid.utils.tools.write_json(obj, fpath)[source]

Writes to a json file.
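A minimal round-trip sketch covering both read_json and write_json (the file path is illustrative):
>>> from torchreid.utils.tools import read_json, write_json
>>> write_json({'epoch': 10, 'rank1': 0.5}, 'log/results.json')
>>> results = read_json('log/results.json')
>>> print(results['rank1'])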

torchreid.utils.tools.download_url(url, dst)[source]

Downloads file from a url to a destination.

Parameters
  • url (str) – url to download file.

  • dst (str) – destination path.
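A minimal usage sketch (the URL is a placeholder, not a real hosted file):
>>> from torchreid.utils.tools import download_url
>>> download_url('https://example.com/resnet50_market1501.pth', 'log/resnet50_market1501.pth')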

torchreid.utils.tools.read_image(path)[source]

Reads image from path using PIL.Image.

Parameters

path (str) – path to an image.

Returns

PIL image
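A minimal usage sketch (the image path is illustrative):
>>> from torchreid.utils.tools import read_image
>>> img = read_image('data/market1501/query/0001_c1s1_001051_00.jpg')
>>> print(img.size)  # (width, height) of the loaded PIL image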

torchreid.utils.tools.collect_env_info()[source]

Returns env info as a string.

Code source: github.com/facebookresearch/maskrcnn-benchmark
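A minimal usage sketch:
>>> from torchreid.utils.tools import collect_env_info
>>> print(collect_env_info())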

torchreid.utils.tools.listdir_nohidden(path, sort=False)[source]

Lists non-hidden items in a directory.

Parameters
  • path (str) – directory path.

  • sort (bool) – sort the items.
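A minimal usage sketch (the directory is illustrative):
>>> from torchreid.utils.tools import listdir_nohidden
>>> items = listdir_nohidden('data/market1501', sort=True)
>>> print(items)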

ReID Tools

torchreid.utils.reidtools.visualize_ranked_results(distmat, dataset, data_type, width=128, height=256, save_dir='', topk=10)[source]

Visualizes ranked results.

Supports both image-reid and video-reid.

For image-reid, ranks will be plotted in a single figure. For video-reid, ranks will be saved in folders each containing a tracklet.

Parameters
  • distmat (numpy.ndarray) – distance matrix of shape (num_query, num_gallery).

  • dataset (tuple) – a 2-tuple containing (query, gallery), each of which contains tuples of (img_path(s), pid, camid, dsetid).

  • data_type (str) – “image” or “video”.

  • width (int, optional) – resized image width. Default is 128.

  • height (int, optional) – resized image height. Default is 256.

  • save_dir (str) – directory to save output images.

  • topk (int, optional) – number of top-ranked images in the rank list to visualize. Default is 10.
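A minimal sketch, assuming distmat has been computed during evaluation and that query and gallery are lists of (img_path(s), pid, camid, dsetid) tuples as described above:
>>> from torchreid.utils import visualize_ranked_results
>>> visualize_ranked_results(
>>>     distmat,                 # (num_query, num_gallery) distance matrix
>>>     (query, gallery),        # the query and gallery splits
>>>     'image',                 # use 'video' for video-reid
>>>     width=128,
>>>     height=256,
>>>     save_dir='log/visrank',
>>>     topk=10
>>> )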

Torch Tools

torchreid.utils.torchtools.save_checkpoint(state, save_dir, is_best=False, remove_module_from_keys=False)[source]

Saves checkpoint.

Parameters
  • state (dict) – dictionary.

  • save_dir (str) – directory to save checkpoint.

  • is_best (bool, optional) – if True, this checkpoint will be copied and named model-best.pth.tar. Default is False.

  • remove_module_from_keys (bool, optional) – whether to remove “module.” from layer names. Default is False.

Examples::
>>> state = {
>>>     'state_dict': model.state_dict(),
>>>     'epoch': 10,
>>>     'rank1': 0.5,
>>>     'optimizer': optimizer.state_dict()
>>> }
>>> save_checkpoint(state, 'log/my_model')
torchreid.utils.torchtools.load_checkpoint(fpath)[source]

Loads checkpoint.

UnicodeDecodeError is handled gracefully, which means files saved with Python 2 can be read under Python 3.

Parameters

fpath (str) – path to checkpoint.

Returns

dict

Examples::
>>> from torchreid.utils import load_checkpoint
>>> fpath = 'log/my_model/model.pth.tar-10'
>>> checkpoint = load_checkpoint(fpath)
torchreid.utils.torchtools.resume_from_checkpoint(fpath, model, optimizer=None, scheduler=None)[source]

Resumes training from a checkpoint.

This loads (1) the model weights and (2) the optimizer's state_dict if optimizer is not None.

Parameters
  • fpath (str) – path to checkpoint.

  • model (nn.Module) – model.

  • optimizer (Optimizer, optional) – an Optimizer.

  • scheduler (LRScheduler, optional) – an LRScheduler.

Returns

start_epoch.

Return type

int

Examples::
>>> from torchreid.utils import resume_from_checkpoint
>>> fpath = 'log/my_model/model.pth.tar-10'
>>> start_epoch = resume_from_checkpoint(
>>>     fpath, model, optimizer, scheduler
>>> )
torchreid.utils.torchtools.open_all_layers(model)[source]

Opens all layers in model for training.

Examples::
>>> from torchreid.utils import open_all_layers
>>> open_all_layers(model)
torchreid.utils.torchtools.open_specified_layers(model, open_layers)[source]

Opens specified layers in model for training while keeping other layers frozen.

Parameters
  • model (nn.Module) – neural net model.

  • open_layers (str or list) – layers open for training.

Examples::
>>> from torchreid.utils import open_specified_layers
>>> # Only model.classifier will be updated.
>>> open_layers = 'classifier'
>>> open_specified_layers(model, open_layers)
>>> # Only model.fc and model.classifier will be updated.
>>> open_layers = ['fc', 'classifier']
>>> open_specified_layers(model, open_layers)
torchreid.utils.torchtools.count_num_param(model)[source]

Counts number of parameters in a model while ignoring self.classifier.

Parameters

model (nn.Module) – network model.

Examples::
>>> from torchreid.utils import count_num_param
>>> model_size = count_num_param(model)

Warning

This method is deprecated in favor of torchreid.utils.compute_model_complexity.

torchreid.utils.torchtools.load_pretrained_weights(model, weight_path)[source]

Loads pretrained weights into model.

Features::
  • Incompatible layers (unmatched in name or size) will be ignored.

  • Can automatically deal with keys containing “module.”.

Parameters
  • model (nn.Module) – network model.

  • weight_path (str) – path to pretrained weights.

Examples::
>>> from torchreid.utils import load_pretrained_weights
>>> weight_path = 'log/my_model/model-best.pth.tar'
>>> load_pretrained_weights(model, weight_path)
torchreid.utils.model_complexity.compute_model_complexity(model, input_size, verbose=False, only_conv_linear=True)[source]

Returns number of parameters and FLOPs.

Note

(1) This function provides only an estimate of the theoretical time complexity rather than the actual running time, which depends on the implementation and hardware; and (2) FLOPs are counted only for layers that are used at test time, which means redundant layers such as the person-ID classification layer are ignored because they are discarded when extracting features. Note that the inference graph depends on how the computations are constructed in forward().

Parameters
  • model (nn.Module) – network model.

  • input_size (tuple) – input size, e.g. (1, 3, 256, 128).

  • verbose (bool, optional) – shows detailed complexity of each module. Default is False.

  • only_conv_linear (bool, optional) – only considers convolution and linear layers when counting FLOPs. Default is True. If set to False, the FLOPs of all layers will be counted.

Examples::
>>> from torchreid import models, utils
>>> model = models.build_model(name='resnet50', num_classes=1000)
>>> num_params, flops = utils.compute_model_complexity(model, (1, 3, 256, 128), verbose=True)