Guodong Chen, Filip Hácha, Libor Váša, Mallesham Dasari
This repository contains the official authors' implementation associated with the paper "TVMC: Time-Varying Mesh Compression Using Volume-Tracked Reference Meshes", accepted at ACM MMSys 2025, which can be found here.
- Operating System: Windows 11 or Ubuntu 20.04
- Python: 3.8
- Dependencies:
- numpy
- open3d==0.18.0
- scikit-learn
- scipy
- trimesh==4.1.0
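Assuming a conda (or venv) workflow, the dependencies above can be installed like this; the environment name `tvmc` is illustrative:

```shell
# Create and activate a Python 3.8 environment (conda shown; venv works too)
conda create -n tvmc python=3.8 -y
conda activate tvmc

# Install the pinned dependencies listed above
pip install numpy open3d==0.18.0 scikit-learn scipy trimesh==4.1.0
```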
Clone this project:
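The clone step might look like the following; `<repository-url>` is a placeholder for this repository's actual URL, and the directory name is assumed:

```shell
# <repository-url> is a placeholder -- substitute this repository's URL
git clone <repository-url>
cd TVMC
```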
Follow these steps to build and run the Docker image:
To begin, build the Docker image from the provided Dockerfile:
After building the image, run the Docker container with the following command:
Once inside the Docker container, grant execute permissions to the run_pipeline.sh script and execute it:
The pipeline will start, and the required tasks will be executed sequentially.
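The three Docker steps above can be sketched as follows; the image tag `tvmc` is illustrative, not taken from the repository:

```shell
# Build the image from the Dockerfile in the repository root (tag is illustrative)
docker build -t tvmc .

# Start an interactive container from the image
docker run -it tvmc /bin/bash

# Inside the container: make the pipeline script executable and run it
chmod +x run_pipeline.sh
./run_pipeline.sh
```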
If you want to run TVMC on your own machine using your own dataset, here’s how you can set it up. We've tested this on Windows 11 and Ubuntu 20.04.
Install .NET 7.0
For Linux:
For Windows, you can use Visual Studio to install .NET 7.0 and Anaconda to create the Python environment.
ARAP volume tracking is written in C# and targets .NET 7.0.
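On Ubuntu 20.04, one common way to get the .NET 7.0 SDK is via Microsoft's package feed; the steps below are our sketch, not taken from the repository:

```shell
# Register Microsoft's package repository (Ubuntu 20.04)
wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb

# Install the .NET 7.0 SDK
sudo apt-get update
sudo apt-get install -y dotnet-sdk-7.0

# Verify the installation
dotnet --version
```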
Navigate to the root directory:
Build the project:
Run the tracking process:
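A typical build-and-run sequence for the C# project might look like this; the config-file argument is an assumption, since the exact invocation depends on the repository layout (config files live in `./arap-volume-tracking/config`):

```shell
cd arap-volume-tracking

# Build the tracker in Release mode
dotnet build -c Release

# Run the tracking process with one of the provided config files
# (<config-file> is a placeholder)
dotnet run -c Release -- ./config/<config-file>
```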
Volume tracking results are saved in the <outDir> folder:
- .xyz files: coordinates of volume centers
- .txt files: transformations of centers between frames
Global optimization refines the volume centers by removing abnormal centers and adjusting the positions of the remaining ones to reduce distortion.
Results are stored in <outDir>/impr.
Additional time-varying mesh sequences are available in ./arap-volume-tracking/data. Configuration files are in ./arap-volume-tracking/config. Different tracking modes (./iir, ./impr, ./max) use distinct configurations. More details here.
To run ARAP volume tracking on a custom dataset:
Navigate to TVMC root:
Run the MDS script:
The number of centers (--num_centers) must match the global optimization results. Output is stored in ./arap-volume-tracking/data/basketball-output-max-2000/impr/reference/reference_centers_aligned.xyz.
If the number of volume centers is large, experiment with different `random_state` values for better results.
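Putting the two steps above together, the invocation might look like the following; the script name `mds.py` and the argument values are our guesses, not taken from the repository:

```shell
# From the TVMC repository root: compute the reference centers via MDS.
# --num_centers must match the number used during global optimization;
# the script name and --random_state flag are placeholders.
python ./mds.py --num_centers 2000 --random_state 0
```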
Then, we compute the transformations for each center, mapping their original positions to the reference space, along with their inverses. These transformations are then used to deform the mesh surface based on the movement of volume centers.
For Linux, switch to .NET 5.0.
Navigate to the tvm-editing directory and build:
(On Windows, the author used .NET 8.0, so there is no need to install .NET 5.0 or run `dotnet new globaljson --sdk-version 5.0.408`. If you encounter an error about the .NET version, install the matching SDK on your machine.)
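On Linux, the build steps above might be sketched as follows (the `globaljson` command is the one mentioned in the note above):

```shell
cd tvm-editing

# Pin the SDK to .NET 5.0 for this project (Linux only)
dotnet new globaljson --sdk-version 5.0.408

# Build the project
dotnet build -c Release
```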
Run the mesh deformation:
For Windows:
For Linux:
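The exact deformation command is repository-specific; as a rough sketch, a .NET invocation with placeholder arguments might look like:

```shell
# All arguments below are placeholders -- consult the repository for the
# real invocation. On Windows, run the equivalent command from Visual Studio
# or the dotnet CLI.
cd tvm-editing
dotnet run -c Release -- <input-mesh-dir> <centers-dir> <output-dir>
```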
Navigate to TVMC root again:
Extract the reference mesh:
Navigate to the tvm-editing directory,
Then run:
For Linux:
Navigate to ./TVMC again:
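As a sketch of the sequence above (the script and argument names are our placeholders, not from the repository):

```shell
# From the TVMC root: extract the reference mesh (script name is a placeholder)
python ./extract_reference_mesh.py

# From tvm-editing: run the editing step (arguments are placeholders)
cd tvm-editing
dotnet run -c Release -- <reference-mesh> <output-dir>

# Return to ./TVMC for the next step
cd ../TVMC
```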
The displacement fields are stored as .ply files. For compression, Draco is used to encode both the reference mesh and displacements.
Tip: at this point we have everything needed to compress a group of time-varying meshes: a self-contact-free reference mesh and the displacement fields. Any other compression method can be applied to them; for example, video coding could be used to compress the displacements for even better compression performance.
Navigate to TVMC root:
Clone and build Draco:
On Windows:
On Linux:
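The standard CMake build for Draco looks like this (the clone URL is Draco's official repository):

```shell
# Clone Draco and build it with CMake
git clone https://github.com/google/draco.git
cd draco
mkdir build && cd build

# Linux: generate Makefiles and build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j4

# Windows (instead of the two lines above): build with MSVC
# cmake .. && cmake --build . --config Release
```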
Draco paths (adjust to match your project):
- Encoder: `./draco/build/Release/draco_encoder.exe` (Windows) or `./draco/build/draco_encoder` (Linux)
- Decoder: `./draco/build/Release/draco_decoder.exe` (Windows) or `./draco/build/draco_decoder` (Linux)
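A typical encode/decode round trip with the Draco CLI looks like this; the file names are illustrative. `-qp` sets the position quantization bits and `-cl` the compression level:

```shell
# Encode the reference mesh (Linux binary path; use the .exe path on Windows)
./draco/build/draco_encoder -i reference_mesh.ply -o reference_mesh.drc -qp 11 -cl 10

# Decode it back to PLY for evaluation
./draco/build/draco_decoder -i reference_mesh.drc -o reference_mesh_decoded.ply
```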
Navigate to TVMC:
Run the evaluation:
For Linux:
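The evaluation entry point depends on the repository; as a minimal sketch with a placeholder script name:

```shell
# Script name is a placeholder -- use the evaluation script shipped in ./TVMC
python ./evaluation.py
```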
We provide scripts to generate the figures presented in the paper based on the collected results.
- If you have gone through the above steps for all four datasets, execute `python ./objective_results_all.py`. This script uses the newly generated results to produce Rate-Distortion (RD) performance and Cumulative Distribution Function (CDF) figures.
(In the Docker image, you may need to copy the generated figures to the host using `docker cp`.)
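If you ran everything inside Docker, `docker cp` can copy the figures to the host; the container ID and paths below are illustrative:

```shell
# Find the container ID, then copy the figures directory to the host
docker ps -a
docker cp <container-id>:/workspace/TVMC/figures ./figures
```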
- If you followed the recommended commands above (which generate results only for the Basketball Player dataset), execute `python ./objective_results_basic.py`. This version primarily uses the original data from the paper to replicate the key figures.
This provides a straightforward way to replicate the results.