PyChubby

pychubby is a package for automated face warping. It allows the user to programmatically change facial expressions and shapes of any person in an image.

Installation

From PyPI

pip install pychubby

From source

pip install git+https://github.com/jankrepl/pychubby.git

Notes

By default, a pretrained landmark model from https://github.com/davisking/dlib-models is used.

Basic Example

To illustrate the simplest use case let us assume that we start with a photo with a single face in it.

Original image

pychubby implements a class LandmarkFace which stores all relevant data that enable face warping, namely the image itself and the 68 landmark points. To instantiate a LandmarkFace one uses the utility class method estimate.

import matplotlib.pyplot as plt
from pychubby.detect import LandmarkFace

img = plt.imread("path/to/the/image")
lf = LandmarkFace.estimate(img)
lf.plot()
Face with landmarks

Note that it might be necessary to upsample the image before the estimation. For convenience, the estimate method has an optional parameter n_upsamples.
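
For example, if the face is small relative to the image, one can request more upsampling; a minimal sketch reusing the img from above:

lf = LandmarkFace.estimate(img, n_upsamples=2)  # upsample the image twice before detection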

Once the landmark points are estimated we can move on with performing actions on the face. Let’s try to make the person smile:

from pychubby.actions import Smile

a = Smile(scale=0.2)
new_lf, df = a.perform(lf)  # lf defined above
new_lf.plot(show_landmarks=False)
Smiling face

There are two important things to note. Firstly, new_lf now contains both the warped version of the original image and the transformed landmark points. Secondly, the perform method also returns df, an instance of pychubby.base.DisplacementField that represents the pixel-by-pixel transformation between the old and the new (smiling) image.
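
For illustration, the displacement field can also be applied to the original image directly via its warp method (documented under pychubby.base below); a minimal sketch:

warped_img = df.warp(img)  # warp the original image into the new (smiling) coordinates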

To see all currently available actions go to Gallery.

To create an animation of the action we can use the visualization module.

from pychubby.visualization import create_animation

ani = create_animation(df, img) # the displacement field and the original image
https://i.imgur.com/jB6Vlnc.gif
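
To keep the result, the returned matplotlib animation can be saved to disk; a minimal sketch assuming the pillow writer is available:

ani.save("smile.gif", writer="pillow")  # write the animation as a GIF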

Pipelines

Rather than applying a single action at a time, pychubby enables piping multiple actions together. To achieve this, one can use the metaaction Pipeline.

Let us again assume that we start with an image with a single face in it.

Original image

Let’s try to make the person smile but also close her eyes slightly.

import matplotlib.pyplot as plt

from pychubby.actions import OpenEyes, Pipeline, Smile
from pychubby.detect import LandmarkFace

img = plt.imread("path/to/the/image")
lf = LandmarkFace.estimate(img)

a_s = Smile(0.1)
a_e = OpenEyes(-0.03)
a = Pipeline([a_s, a_e])

new_lf, df = a.perform(lf)
new_lf.plot(show_landmarks=False)
Warped image

To create an animation we can use the visualization module.

from pychubby.visualization import create_animation

ani = create_animation(df, img)
Animation

Multiple Faces

So far we have assumed that there is a single face in the image. However, the real power of pychubby lies in its ability to handle multiple faces. Let’s start with the following image.

Original image

If more than one face is detected in an image, pychubby uses the LandmarkFaces class rather than LandmarkFace. LandmarkFaces is essentially a container holding a LandmarkFace instance for each face in the image.

import matplotlib.pyplot as plt
from pychubby.detect import LandmarkFace

img = plt.imread("path/to/the/image")
lfs = LandmarkFace.estimate(img) # lfs is an instance of LandmarkFaces
lfs.plot(show_landmarks=False, show_numbers=True)
Original image

Each face is assigned a unique integer (starting from 0). This ordering is very important since it allows us to specify which action to apply to which face.

In order to apply actions we use the metaaction Multiple. It has two modes:

  1. Same action on each face
  2. Face specific actions

Same action

The first possibility is to apply exactly the same action to each face in the image. Below is an example of making all faces more chubby.

from pychubby.actions import Chubbify, Multiple

a_single = Chubbify(0.2)
a = Multiple(a_single)
new_lfs, df = a.perform(lfs)
new_lfs.plot(show_landmarks=False, show_numbers=False)
Single action image
from pychubby.visualization import create_animation

ani = create_animation(df, img)
Single action gif

Different actions

Alternatively, we might want to apply a different action to each face (or potentially no action at all). Below is an example of face-specific actions.

from pychubby.actions import Chubbify, LinearTransform, Multiple, OpenEyes, Pipeline, Smile

a_0 = LinearTransform(scale_x=0.9, scale_y=0.9)
a_1 = Smile(0.14)
a_2 = None
a_3 = Pipeline([OpenEyes(0.05), LinearTransform(scale_x=1.02, scale_y=1.02), Chubbify(0.2)])

a = Multiple([a_0, a_1, a_2, a_3])
new_lfs, df = a.perform(lfs)
new_lfs.plot(show_landmarks=False, show_numbers=False)
Multiple actions image
from pychubby.visualization import create_animation

ani = create_animation(df, img)
Multiple actions gif

Building Blocks

This page is dedicated to explaining the logic behind pychubby.

68 landmarks

pychubby relies on the standard 68 facial landmarks framework. Specifically, a pretrained dlib model is used to achieve this task. See pychubby.data for credits and references. Once the landmarks are detected one can query them via their index. Alternatively, to ease the definition of new actions, the dictionary pychubby.detect.LANDMARK_NAMES assigns a name to each of the 68 landmarks.

LandmarkFace

pychubby.detect.LandmarkFace is one of the most important classes that pychubby uses. To construct a LandmarkFace one needs to provide

  1. Image of the face
  2. 68 landmark points

Rather than using the lower-level constructor, the user will mostly create instances through the class method estimate, which detects the landmark points automatically.

Once instantiated, one can use actions (pychubby.actions.Action) to generate a new (warped) LandmarkFace.

LandmarkFaces

pychubby.detect.LandmarkFaces is a container holding multiple instances of LandmarkFace. It additionally provides functionality that allows for performing the Multiple action on them.

Action

An Action is a specific warping recipe that might depend on some parameters. Once instantiated, its perform method can be used to warp a LandmarkFace. To see the already available actions go to Gallery, or read how to create your own in Custom Actions.

ReferenceSpace

In general, faces in images appear in different positions, angles and sizes. Defining actions purely based on the coordinates of a given face in a given image is not a great idea, mainly for two reasons:

  1. Resizing, cropping, rotating, etc. of the image will render the action useless
  2. Such actions are image-specific and cannot be applied to any other image. One would be better off using some graphical interface.

One way to solve the above issues is to first transform all the landmarks into some reference space, define actions in this reference space and then map them back into the original domain. pychubby defines these reference spaces in the pychubby.reference module. Each reference space needs to implement the following three methods:

  • estimate
  • inp2ref
  • ref2inp

The default reference space is the DefaultRS and its logic is captured in the below figure.

https://i.imgur.com/HRBKTr4.gif

Five selected landmarks are used to estimate an affine transformation between the reference and input space. This transformation is encoded in a 2 x 3 matrix A. Transforming from reference to input space and vice versa is then just a simple matrix multiplication.

\[\textbf{x}_{inp}A = \textbf{x}_{ref}\]
\[\textbf{x}_{ref}A^{-1} = \textbf{x}_{inp}\]
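
As an illustration of the three methods, below is a minimal sketch that fits the DefaultRS on a LandmarkFace (the lf from the Basic Example) and maps its landmarks there and back, assuming the 68 landmarks are available as lf.points:

from pychubby.reference import DefaultRS

rs = DefaultRS()
rs.estimate(lf)                       # fit the affine transformation on the 5 selected landmarks
ref_coords = rs.inp2ref(lf.points)    # input space -> reference space
inp_coords = rs.ref2inp(ref_coords)   # reference space -> input space (round trip back)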

DisplacementField

A displacement field represents a 2D-to-2D transformation between two images. To instantiate a pychubby.base.DisplacementField one can either use the standard constructor (delta_x and delta_y arrays) or the factory method generate, which creates a DisplacementField based on the displacement of landmark points.
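
A minimal sketch of the factory method, using purely hypothetical landmark coordinates:

import numpy as np
from pychubby.base import DisplacementField

old_points = np.array([[20.0, 20.0], [80.0, 30.0], [50.0, 70.0]])        # hypothetical (x, y) landmarks
new_points = old_points + np.array([[0.0, 5.0], [3.0, 0.0], [0.0, -4.0]])  # their new positions

df = DisplacementField.generate((100, 120), old_points, new_points, anchor_corners=True)
# df.warp(some_img) can then warp any image of height 100 and width 120 into the new coordinates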

Custom Actions

pychubby makes it very easy to add custom actions. There are 3 main ingredients:

  1. Each action needs to be a subclass of pychubby.actions.Action

  2. All parameters of the action are specified via the constructor (__init__)

  3. The method perform needs to be implemented such that

    • It takes an instance of pychubby.detect.LandmarkFace as input
    • It returns a new instance of pychubby.detect.LandmarkFace together with a pychubby.base.DisplacementField representing the pixel-by-pixel transformation from the new image to the old one.

Clearly, the main workhorse is the third step. To avoid dealing with lower-level details, a good starting point is the utility action Lambda.

Lambda

The simplest way to implement a new action is to use the Lambda action. Before explaining the action itself, the reader is encouraged to review the DefaultRS reference space in ReferenceSpace, which Lambda uses by default.

https://i.imgur.com/dLcFQNI.gif

The Lambda action works purely in the reference space and expects the following input:

  • scale - float representing the absolute size (norm) of the largest displacement in the reference space (this would be the chin displacement in the figure)
  • specs - dictionary where keys are landmarks (either name or number) and the values are tuples (angle, relative size)

That means that the user simply specifies, for each landmark of interest, the displacement angle and its relative size with respect to all other displacements via the specs dictionary. The scale parameter then controls the absolute size of the biggest displacement, while the other displacements are scaled linearly based on the provided relative sizes.

See below an example that replicates the figure:

from pychubby.actions import Action, Lambda

class CustomAction(Action):

    def __init__(self, scale=0.3):
        self.scale = scale

    def perform(self, lf):
        a_l = Lambda(scale=self.scale,
                     specs={'CHIN': (90, 2),
                            'CHIN_L': (110, 1),
                            'CHIN_R': (70, 1),
                            'OUTER_NOSTRIL_L': (-135, 1),
                            'OUTER_NOSTRIL_R': (-45, 1)
                           }
                    )

        return a_l.perform(lf)
https://i.imgur.com/VqmXtzU.gif
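
Once defined, the custom action behaves like any built-in one; a usage sketch reusing the lf from the Basic Example:

a = CustomAction(scale=0.3)
new_lf, df = a.perform(lf)
new_lf.plot(show_landmarks=False)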

CLI

pychubby also offers a simple Command Line Interface that exposes some of the functionality of the Python package.

Usage

After installation of pychubby an entry point pc becomes available.

To see the basic information write pc --help:

Usage: pc [OPTIONS] COMMAND [ARGS]...

  Automated face warping tool.

Options:
  --help  Show this message and exit.

Commands:
  list     List available actions.
  perform  Take an action.

To perform actions one uses the perform subcommand. pc perform --help:

Usage: pc perform [OPTIONS] COMMAND [ARGS]...

  Take an action.

Options:
  --help  Show this message and exit.

Commands:
  Chubbify         Make a chubby face.
  LinearTransform  Linear transformation.
  OpenEyes         Open eyes.
  ...
  ...
  ...

The syntax for all actions is identical. The positional arguments are

  1. Input image path (required)
  2. Output image path (not required)

If the output image path is not provided the resulting image is simply going to be plotted.

All the options correspond to the keyword arguments of the constructor of the respective action classes in pychubby.actions module.

To give a specific example let us use the Smile action. To get info on the parameters write pc perform Smile --help:

Usage: pc perform Smile [OPTIONS] INP_IMG [OUT_IMG]

  Make a smiling face.

Options:
  --scale FLOAT
  --help         Show this message and exit.

In particular, one can then warp an image in the following fashion

pc perform Smile --scale 0.3 img_cousin.jpg img_cousin_smiling.jpg

Limitations

The features that are unavailable via the CLI are the following:

  1. AbsoluteMove, Lambda and Pipeline actions
  2. Different actions for different people
  3. Lower level control

Specifically, if the user provides a photo with multiple faces the same action will be performed on all of them.

pychubby.actions module

Definition of actions.

Note that for each action (class) the first line of the docstring as well as the default parameters of the constructor are used by the CLI.

class pychubby.actions.AbsoluteMove(x_shifts=None, y_shifts=None)

Bases: pychubby.actions.Action

Absolute offsets of any landmark points.

Parameters:
  • x_shifts (dict or None) – Keys are integers from 0 to 67 representing chosen landmark points. The values represent the shift in the x direction to be made. If a landmark is not specified, the assumed shift is 0.
  • y_shifts (dict or None) – Keys are integers from 0 to 67 representing chosen landmark points. The values represent the shift in the y direction to be made. If a landmark is not specified, the assumed shift is 0.
perform(lf)

Perform absolute move.

Specified landmarks will be shifted in either the x or y direction.

Parameters:lf (LandmarkFace) – Instance of a LandmarkFace.
Returns:
  • new_lf (LandmarkFace) – Instance of a LandmarkFace after taking the action.
  • df (DisplacementField) – Displacement field representing the transformation between the old and new image.
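
A minimal usage sketch, assuming lf is an existing LandmarkFace instance (landmark index 30 is chosen purely for illustration):

from pychubby.actions import AbsoluteMove

a = AbsoluteMove(x_shifts={30: 10}, y_shifts={30: 5})  # shift landmark 30 by 10 in x and 5 in y
new_lf, df = a.perform(lf)
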
class pychubby.actions.Action

Bases: abc.ABC

General Action class to be subclassed.

perform(lf, **kwargs)

Perform action on an instance of a LandmarkFace.

Parameters:
  • lf (LandmarkFace) – Instance of a LandmarkFace.
  • kwargs (dict) – Action specific parameters.
Returns:

new_lf – Instance of a LandmarkFace after a specified action was taken on the input lf.

Return type:

LandmarkFace

static pts2inst(new_points, lf, **interpolation_kwargs)

Generate instance of LandmarkFace via interpolation.

Parameters:
  • new_points (np.ndarray) – Array of shape (N, 2) representing the x and y coordinates of the new landmark points.
  • lf (LandmarkFace) – Instance of a LandmarkFace before taking any actions.
  • interpolation_kwargs (dict) – Interpolation parameters passed onto scipy.
Returns:

  • new_lf (LandmarkFace) – Instance of a LandmarkFace after taking an action.
  • df (DisplacementField) – Displacement field representing per pixel displacements between the old and new image.

class pychubby.actions.Chubbify(scale=0.2)

Bases: pychubby.actions.Action

Make a chubby face.

Parameters:scale (float) – Absolute shift size in the reference space.
perform(lf)

Perform an action.

Parameters:lf (LandmarkFace) – Instance of a LandmarkFace.
class pychubby.actions.Lambda(scale, specs, reference_space=None)

Bases: pychubby.actions.Action

Custom action for specifying actions with angles and norms in a reference space.

Parameters:
  • scale (float) – Absolute norm of the maximum shift. All the remaining shifts are scaled linearly.
  • specs (dict) –

    Dictionary where keys represent either the index or the name of a landmark. The values are tuples of two elements:

    1. Angle in degrees.
    2. Proportional shift. Only the relative size towards other landmarks matters.
  • reference_space (None or ReferenceSpace) – Reference space to be used.
perform(lf)

Perform action.

Parameters:lf (LandmarkFace) – Instance of a LandmarkFace before taking the action.
Returns:
  • new_lf (LandmarkFace) – Instance of a LandmarkFace after taking the action.
  • df (DisplacementField) – Displacement field representing the transformation between the old and new image.
class pychubby.actions.LinearTransform(scale_x=1.0, scale_y=1.0, rotation=0.0, shear=0.0, translation_x=0.0, translation_y=0.0, reference_space=None)

Bases: pychubby.actions.Action

Linear transformation.

Parameters:
  • scale_x (float) – Scaling of the x axis.
  • scale_y (float) – Scaling of the y axis.
  • rotation (float) – Rotation in radians.
  • shear (float) – Shear in radians.
  • translation_x (float) – Translation in the x direction.
  • translation_y (float) – Translation in the y direction.
  • reference_space (None or pychubby.reference.ReferenceSpace) – Instance of the ReferenceSpace class.
perform(lf)

Perform action.

Parameters:lf (LandmarkFace) – Instance of a LandmarkFace before taking the action.
Returns:
  • new_lf (LandmarkFace) – Instance of a LandmarkFace after taking the action.
  • df (DisplacementField) – Displacement field representing the transformation between the old and new image.
class pychubby.actions.Multiple(per_face_action)

Bases: pychubby.actions.Action

Applying actions to multiple faces.

Parameters:per_face_action (list or Action) – If a list, then the elements are instances of some actions (subclasses of Action) that exactly match the order of LandmarkFace instances within the LandmarkFaces instance. It is also possible to use None for no action. If an Action, then the same action will be performed on each available LandmarkFace.
perform(lfs)

Perform actions on multiple faces.

Parameters:lfs (LandmarkFaces) – Instance of LandmarkFaces.
Returns:
  • new_lfs (LandmarkFaces) – Instance of a LandmarkFaces after taking the action on each face.
  • df (DisplacementField) – Displacement field representing the transformation between the old and new image.
class pychubby.actions.OpenEyes(scale=0.1)

Bases: pychubby.actions.Action

Open eyes.

Parameters:scale (float) – Absolute shift size in the reference space.
perform(lf)

Perform action.

Parameters:lf (LandmarkFace) – Instance of a LandmarkFace before taking the action.
Returns:
  • new_lf (LandmarkFace) – Instance of a LandmarkFace after taking the action.
  • df (DisplacementField) – Displacement field representing the transformation between the old and new image.
class pychubby.actions.Pipeline(steps)

Bases: pychubby.actions.Action

Pipe multiple actions together.

Parameters:steps (list) – List of different actions that are going to be performed in the given order.
perform(lf)

Perform action.

Parameters:lf (LandmarkFace) – Instance of a LandmarkFace before taking the action.
Returns:
  • new_lf (LandmarkFace) – Instance of a LandmarkFace after taking the action.
  • df (DisplacementField) – Displacement field representing the transformation between the old and new image.
class pychubby.actions.RaiseEyebrow(scale=0.1, side='both')

Bases: pychubby.actions.Action

Raise an eyebrow.

Parameters:
  • scale (float) – Absolute shift size in the reference space.
  • side (str, {'left', 'right', 'both'}) – Which eyebrow to raise.
perform(lf)

Perform action.

Parameters:lf (LandmarkFace) – Instance of a LandmarkFace before taking the action.
Returns:
  • new_lf (LandmarkFace) – Instance of a LandmarkFace after taking the action.
  • df (DisplacementField) – Displacement field representing the transformation between the old and new image.
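
A usage sketch raising only the left eyebrow, assuming lf is an existing LandmarkFace instance:

from pychubby.actions import RaiseEyebrow

a = RaiseEyebrow(scale=0.1, side='left')
new_lf, df = a.perform(lf)
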
class pychubby.actions.Smile(scale=0.1)

Bases: pychubby.actions.Action

Make a smiling face.

Parameters:scale (float) – Absolute shift size in the reference space.
perform(lf)

Perform action.

Parameters:lf (LandmarkFace) – Instance of a LandmarkFace before taking the action.
Returns:
  • new_lf (LandmarkFace) – Instance of a LandmarkFace after taking the action.
  • df (DisplacementField) – Displacement field representing the transformation between the old and new image.
class pychubby.actions.StretchNostrils(scale=0.1)

Bases: pychubby.actions.Action

Stretch nostrils.

Parameters:scale (float) – Absolute shift size in the reference space.
perform(lf)

Perform action.

Parameters:lf (LandmarkFace) – Instance of a LandmarkFace before taking the action.
Returns:
  • new_lf (LandmarkFace) – Instance of a LandmarkFace after taking the action.
  • df (DisplacementField) – Displacement field representing the transformation between the old and new image.

pychubby.base module

Base classes and functions.

class pychubby.base.DisplacementField(delta_x, delta_y)

Bases: object

Represents a coordinate transformation.

classmethod generate(shape, old_points, new_points, anchor_corners=True, **interpolation_kwargs)

Create a displacement field based on old and new landmark points.

Parameters:
  • shape (tuple) – Tuple representing the height and the width of the displacement field.
  • old_points (np.ndarray) – Array of shape (N, 2) representing the x and y coordinates of the old landmark points.
  • new_points (np.ndarray) – Array of shape (N, 2) representing the x and y coordinates of the new landmark points.
  • anchor_corners (bool) – If True, then it is assumed that the 4 corners of the image correspond.
  • interpolation_kwargs (dict) – Additional parameters related to the interpolation.
Returns:

df – DisplacementField instance representing the transformation that allows for warping the old image with old landmarks into the new coordinate space.

Return type:

DisplacementField

is_valid

Check whether both delta_x and delta_y are finite.

norm

Compute the per-element Euclidean norm.

transformation

Compute the actual transformation rather than the displacements.

warp(img, order=1)

Warp image into new coordinate system.

Parameters:
  • img (np.ndarray) – Image to be warped. Any number of channels and dtype either uint8 or float32.
  • order (int) –
    Interpolation order.
    • 0 - nearest neighbours
    • 1 - linear
    • 2 - cubic
Returns:

warped_img – Warped image. The same number of channels and same dtype as the img.

Return type:

np.ndarray

pychubby.data module

Collection of functions focused on obtaining data.

pychubby.data.get_pretrained_68(folder=None, verbose=True)

Get pretrained landmarks model for dlib.

Parameters:
  • folder (str or pathlib.Path or None) – Folder where to save the .dat file.
  • verbose (bool) – Print some output.

References

[1] C. Sagonas, E. Antonakos, G. Tzimiropoulos, S. Zafeiriou, M. Pantic.
300 faces In-the-wild challenge: Database and results. Image and Vision Computing (IMAVIS), Special Issue on Facial Landmark Localisation “In-The-Wild”. 2016.
[2] C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, M. Pantic.
A semi-automatic methodology for facial landmark annotation. Proceedings of IEEE Int’l Conf. Computer Vision and Pattern Recognition (CVPR-W), 5th Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2013). Oregon, USA, June 2013.
[3] C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, M. Pantic.
300 Faces in-the-Wild Challenge: The first facial landmark localization Challenge. Proceedings of IEEE Int’l Conf. on Computer Vision (ICCV-W), 300 Faces in-the-Wild
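
A minimal usage sketch, downloading the pretrained .dat file into a local folder (the folder name is purely illustrative and assumed to exist):

from pychubby.data import get_pretrained_68

get_pretrained_68(folder="models", verbose=True)  # "models" is a hypothetical target folder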

pychubby.detect module

Collection of detection algorithms.

class pychubby.detect.LandmarkFace(points, img, rectangle=None)

Bases: object

Class representing a combination of a face image and its landmarks.

Parameters:
  • points (np.ndarray) – Array of shape (68, 2) where rows are different landmark points and the columns are x and y coordinates.
  • img (np.ndarray) – Representing an image of a face. Any dtype and any number of channels.
  • rectangle (tuple) – Tuple containing two tuples where the first one represents the top left corner of a rectangle and the second one the bottom right corner of a rectangle.
shape

Tuple representing the height and width of the image.

Type:tuple
angle(landmark_1, landmark_2, reference_vector=None, use_radians=False)

Angle between two landmarks and positive part of the x axis.

The possible values range from (-180, 180) in degrees.

Parameters:
  • landmark_1 (int) – An integer from [0, 67] representing a landmark point. The start of the vector.
  • landmark_2 (int) – An integer from [0, 67] representing a landmark point. The end of the vector.
  • reference_vector (None or tuple) – If None, then positive part of the x axis used (1, 0). Otherwise specified by the user.
  • use_radians (bool) – If True, then radians used. Otherwise degrees.
Returns:

angle – The angle between the two landmarks and positive part of the x axis.

Return type:

float

classmethod estimate(img, model_path=None, n_upsamples=1, allow_multiple=True)

Estimate the 68 landmarks.

Parameters:
  • img (np.ndarray) – Array representing an image of a face. Any dtype and any number of channels.
  • model_path (str or pathlib.Path, default=None) – Path to where the pretrained model is located. If None then using the CACHE_FOLDER model.
  • n_upsamples (int) – Upsample factor to apply to the image before detection. Allows recognizing more faces.
  • allow_multiple (bool) – If True, multiple faces are allowed. In case more than one face is detected, an instance of LandmarkFaces is returned. If False, an error is raised if more than one face is detected.
Returns:

If only one face is detected, then an instance of LandmarkFace is returned. If multiple faces are detected and allow_multiple=True, then an instance of LandmarkFaces is returned.

Return type:

LandmarkFace or LandmarkFaces

euclidean_distance(landmark_1, landmark_2)

Euclidean distance between 2 landmarks.

Parameters:
  • landmark_1 (int) – An integer from [0, 67] representing a landmark point.
  • landmark_2 (int) – An integer from [0, 67] representing a landmark point.
Returns:

dist – Euclidean distance between landmark_1 and landmark_2.

Return type:

float

plot(figsize=(12, 12), show_landmarks=True)

Plot face together with landmarks.

Parameters:
  • figsize (tuple) – Size of the figure - (height, width).
  • show_landmarks (bool) – Show all 68 landmark points on the face.
class pychubby.detect.LandmarkFaces(*lf_list)

Bases: object

Class enclosing multiple instances of LandmarkFace.

Parameters:*lf_list (list) – Sequence of LandmarkFace instances.
plot(figsize=(12, 12), show_numbers=True, show_landmarks=False)

Plot.

Parameters:
  • figsize (tuple) – Size of the figure - (height, width).
  • show_numbers (bool) – If True, then a number is shown on each face representing its order. This order is useful when using the metaaction Multiple.
  • show_landmarks (bool) – Show all 68 landmark points on each of the faces.
pychubby.detect.face_rectangle(img, n_upsamples=1)

Find a face rectangle.

Parameters:
  • img (np.ndarray) – Image of any dtype and number of channels.
  • n_upsamples (int) – Upsample factor to apply to the image before detection. Allows recognizing more faces.
Returns:
  • corners (list) – List of tuples where each tuple represents the top left and bottom right coordinates of the face rectangle. Note that these coordinates use the (row, column) convention. The length of the list is equal to the number of detected faces.
  • faces (list) – Instances of dlib.rectangles that can be used in other algorithms.
pychubby.detect.landmarks_68(img, rectangle, model_path=None)

Predict 68 face landmarks.

Parameters:
  • img (np.ndarray) – Image of any dtype and number of channels.
  • rectangle (dlib.rectangle) – Rectangle that represents the bounding box around a single face.
  • model_path (str or pathlib.Path, default=None) – Path to where the pretrained model is located. If None then using the CACHE_FOLDER model.
Returns:

  • lm_points (np.ndarray) – Array of shape (68, 2) where rows are different landmark points and the columns are x and y coordinates.
  • original (dlib.full_object_detection) – Instance of dlib.full_object_detection.
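
For illustration, the two functions above can be chained to replicate the landmark detection that LandmarkFace.estimate performs; a minimal sketch assuming at least one face is detected:

import matplotlib.pyplot as plt
from pychubby.detect import face_rectangle, landmarks_68

img = plt.imread("path/to/the/image")
corners, faces = face_rectangle(img, n_upsamples=1)
lm_points, original = landmarks_68(img, faces[0])  # 68 landmarks of the first detected face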

pychubby.reference module

Module focused on creation of reference spaces.

class pychubby.reference.DefaultRS

Bases: pychubby.reference.ReferenceSpace

Default reference space.

tform

Affine transformation.

Type:skimage.transform.GeometricTransform
keypoints

Defining landmarks used for estimating the parameters of the model.

Type:dict
estimate(lf)

Estimate parameters of the affine transformation.

Parameters:lf (pychubby.detect.LandmarkFace) – Instance of the LandmarkFace.
inp2ref(coords)

Transform from input to reference space.

Parameters:coords (np.ndarray) – Array of shape (N, 2) where the columns represent x and y coordinates in the input space.
Returns:tformed_coords – Array of shape (N, 2) where the columns represent x and y coordinates in the reference space corresponding row-wise to coords.
Return type:np.ndarray
ref2inp(coords)

Transform from reference to input space.

Parameters:coords (np.ndarray) – Array of shape (N, 2) where the columns represent x and y reference coordinates.
Returns:tformed_coords – Array of shape (N, 2) where the columns represent x and y coordinates in the input image corresponding row-wise to coords.
Return type:np.ndarray
class pychubby.reference.ReferenceSpace

Bases: abc.ABC

Abstract class for reference spaces.

estimate(**kwargs)

Fit parameters of the model.

inp2ref(**kwargs)

Transform from input to reference.

ref2inp(**kwargs)

Transform from reference to input.

pychubby.utils module

Collection of utilities.

pychubby.utils.points_to_rectangle_mask(shape, top_left, bottom_right, width=1)

Convert two points into a rectangle boolean mask.

Parameters:
  • shape (tuple) – Represents the (height, width) of the final mask.
  • top_left (tuple) – Two element tuple representing (row, column) of the top left corner of the inner rectangle.
  • bottom_right (tuple) – Two element tuple representing (row, column) of the bottom right corner of the inner rectangle.
  • width (int) – Width of the edge of the rectangle. Note that it is generated by dilation.
Returns:

rectangle_mask – Boolean mask of shape shape where True entries represent the edge of the rectangle.

Return type:

np.ndarray

Notes

The output can be easily used for quickly visualizing a rectangle in an image. One simply does something like img[rectangle_mask] = 255.
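
A minimal sketch of that trick, using a hypothetical grayscale image and rectangle coordinates:

import numpy as np
from pychubby.utils import points_to_rectangle_mask

img = np.zeros((100, 100), dtype=np.uint8)                                         # hypothetical grayscale image
rectangle_mask = points_to_rectangle_mask(img.shape, (20, 30), (60, 80), width=2)  # (row, column) corners
img[rectangle_mask] = 255                                                          # draw the rectangle edge in white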

pychubby.visualization module

Collection of tools for visualization.

pychubby.visualization.create_animation(df, img, include_backwards=True, fps=24, n_seconds=2, figsize=(8, 8), repeat=True)

Create animation from a displacement field.

Parameters:
  • df (DisplacementField) – Instance of the DisplacementField representing the coordinate transformation.
  • img (np.ndarray) – Image.
  • include_backwards (bool) – If True, then the animation is also played backwards after it is played forwards.
  • fps (int) – Frames per second.
  • n_seconds (int) – Number of seconds to play the animation forwards.
  • figsize (tuple) – Size of the figure.
  • repeat (bool) – If True, then the animation is always replayed at the end.
Returns:

ani – Animation showing the transformation.

Return type:

matplotlib.animation.ArtistAnimation

Notes

To enable animation viewing in a Jupyter notebook write:

from matplotlib import rc
rc('animation', html='html5')
