einops.reduce
einops.reduce combines rearrangement and reduction using reader-friendly notation.
Some examples:
>>> x = np.random.randn(100, 32, 64)
# perform max-reduction over the first axis;
# axis t does not appear on the RHS, so it is reduced over
>>> y = reduce(x, 't b c -> b c', 'max')
# same as previous, but using verbose names for axes
>>> y = reduce(x, 'time batch channel -> batch channel', 'max')
# let's pretend now that x is a batch of images
# with 4 dims: batch=10, height=20, width=30, channel=40
>>> x = np.random.randn(10, 20, 30, 40)
# 2d max-pooling with kernel size = 2 * 2 for image processing
>>> y1 = reduce(x, 'b c (h1 h2) (w1 w2) -> b c h1 w1', 'max', h2=2, w2=2)
# same as previous, using anonymous axes,
# note: only reduced axes can be anonymous
>>> y1 = reduce(x, 'b c (h1 2) (w1 2) -> b c h1 w1', 'max')
# adaptive 2d max-pooling to a 3 * 4 grid;
# each output element is the max of a 10x10 tile in the original tensor
>>> reduce(x, 'b c (h1 h2) (w1 w2) -> b c h1 w1', 'max', h1=3, w1=4).shape
(10, 20, 3, 4)
# Global average pooling
>>> reduce(x, 'b c h w -> b c', 'mean').shape
(10, 20)
# subtracting mean over batch for each channel;
# similar to x - np.mean(x, axis=(0, 2, 3), keepdims=True)
>>> y = x - reduce(x, 'b c h w -> 1 c 1 1', 'mean')
# Subtracting per-image mean for each channel
>>> y = x - reduce(x, 'b c h w -> b c 1 1', 'mean')
# same as previous, but using empty compositions
>>> y = x - reduce(x, 'b c h w -> b c () ()', 'mean')
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`tensor` | `Union[Tensor, List[Tensor]]` | tensor of any supported library (e.g. numpy.ndarray, tensorflow, pytorch). A list of tensors is also accepted; they should be of the same type and shape | required |
`pattern` | `str` | string, reduction pattern | required |
`reduction` | `Reduction` | one of available reductions ('min', 'max', 'sum', 'mean', 'prod', 'any', 'all'). Alternatively, a callable `f(tensor, reduced_axes) -> tensor` can be provided. This allows using various reductions like `np.max`, `np.nanmean`, `tf.reduce_logsumexp`, `torch.var`, etc. | required |
`axes_lengths` | `Size` | any additional specifications for dimensions | `{}` |
Returns:

Type | Description |
---|---|
`Tensor` | tensor of the same type as input |