CREStereo Repository for the 'Towards accurate and robust depth estimation' project

CREStereo-Pytorch

Unofficial PyTorch implementation of the CREStereo model (CVPR 2022 Oral), converted from the original MegEngine implementation.

Example output: stereo depth estimation on the cones images from the Middlebury dataset (https://vision.middlebury.edu/stereo/data/scenes2003/).
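
For reference, a minimal inference sketch is shown below. The import path (nets.Model), the constructor arguments, the converted checkpoint name (models/crestereo_eth3d.pth), and the image preprocessing are assumptions based on this repository's layout; test_model.py is the reference script that produced the image above.

```python
# Minimal inference sketch (names and arguments are assumptions, see note above).
import cv2
import torch

from nets import Model  # import path assumed from this repository's layout

device = "cuda" if torch.cuda.is_available() else "cpu"

# Constructor arguments mirror the original MegEngine code; check nets/ for the actual signature.
model = Model(max_disp=256, mixed_precision=False, test_mode=True)
model.load_state_dict(torch.load("models/crestereo_eth3d.pth", map_location=device))
model = model.to(device).eval()

def to_tensor(path):
    # Read an image with OpenCV and convert HWC uint8 into a 1x3xHxW float tensor.
    img = cv2.imread(path)
    return torch.from_numpy(img.transpose(2, 0, 1)).float()[None].to(device)

left, right = to_tensor("left.png"), to_tensor("right.png")

with torch.no_grad():
    # flow_init is optional in this port; omitting it runs a single full-resolution pass.
    flow = model(left, right)

# The horizontal component of the predicted flow corresponds to the disparity map.
disparity = flow[0, 0].cpu().numpy()
```

test_model.py may additionally run a lower-resolution pass first and feed its prediction back through flow_init, following the cascaded inference scheme of the original implementation.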

Important

  • This is an effort to implement the CREStereo model in PyTorch, since the original MegEngine framework is difficult to convert to other formats (https://github.com/megvii-research/CREStereo/issues/3).
  • I am not the author of the paper, and I don't fully understand everything the model is doing. Therefore, there might be small differences from the original model that impact the performance.
  • I have not added any license, since the repository uses code from different repositories. Check the Licences section below for more details.

Pretrained model

  • Download the model from here and save it into the models folder.
  • The model was converted from the original MegEngine weights using the convert_weights.py script. Place the MegEngine weights file (crestereo_eth3d.mge) into the models folder before running the conversion; a rough sketch of the conversion idea follows this list.
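
Conceptually, the conversion is a key-for-key copy of the MegEngine parameters into torch tensors. The sketch below only illustrates that idea, assuming the parameter names already line up and that the .mge checkpoint is a flat name-to-array dict (MegEngine state dicts typically hold numpy arrays); convert_weights.py is the authoritative implementation, and the output filename used here is an assumption.

```python
# Rough sketch of the MegEngine -> PyTorch conversion idea; the authoritative
# logic lives in convert_weights.py, and the names used here are assumptions.
import megengine
import numpy as np
import torch

from nets import Model  # import path assumed from this repository's layout

# Assumed to be a flat {parameter name: numpy array} dict.
mge_state = megengine.load("models/crestereo_eth3d.mge")
torch_state = {
    name: torch.from_numpy(np.ascontiguousarray(value))
    for name, value in mge_state.items()
}

# Sanity check against the PyTorch model; report any keys that do not line up.
model = Model(max_disp=256, mixed_precision=False, test_mode=True)  # constructor args are assumptions
missing, unexpected = model.load_state_dict(torch_state, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)

torch.save(torch_state, "models/crestereo_eth3d.pth")  # output filename is an assumption
```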

Licences:

References: