[
  {
    "path": "LICENSE",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2022 Jia Zheng\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE."
  },
  {
    "path": "README.md",
    "content": "# Awesome CAD [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)\n\nA curated list of awesome Computer-Aided Design (CAD) papers, inspired by [awesome-computer-vision](https://github.com/jbhuang0604/awesome-computer-vision).\n\n## Survey\n| Papers | Venue | Links |\n|--------|-------|-------|\n| [Geometric Deep Learning for Computer-Aided Design: A Survey](https://arxiv.org/abs/2402.17695) | arXiv 2025 |  |\n| [Large Language Models for Computer-Aided Design: A Survey](https://arxiv.org/abs/2505.08137) | arXiv 2025 | [[project]](https://github.com/lichengzhanguom/LLMs-CAD-Survey-Taxonomy) |\n\n## Datasets\n\n| Papers | Venue | Links |\n|--------|-------|-------|\n| [CADOps-Net: Jointly Learning CAD Operation Types and Steps from Boundary-Representations](https://arxiv.org/abs/2208.10555) | 3DV 2022 | [[project](https://cvi2.uni.lu/cc3d-ops/)] |\n| [Fusion 360 Gallery: A Dataset and Environment for Programmatic CAD Construction from Human Design Sequences](https://arxiv.org/abs/2010.02392) | SIGGRAPH 2021 | [[project](https://github.com/AutodeskAILab/Fusion360GalleryDataset)] |\n| [AutoMate: A Dataset and Learning Approach for Automatic Mating of CAD Assemblies](https://arxiv.org/abs/2105.12238) | SIGGRAPH Asia 2021 | [[project](https://github.com/deGravity/automate)] |\n| [PVDeconv: Point-voxel deconvolution for autoencoding cad construction in 3D](https://arxiv.org/abs/2101.04493) | ICIP 2020 | [[project](https://cvi2.uni.lu/cc3d-dataset/)] |\n| [SketchGraphs: A Large-Scale Dataset for Modeling Relational Geometry in Computer-Aided Design](https://arxiv.org/abs/2007.08506) | ICML Workshop 2020 | [[project]](https://github.com/PrincetonLIPS/SketchGraphs) |\n| [A Large-scale Annotated Mechanical Components Benchmark for Classification and Retrieval Tasks with Deep Neural Networks (MCB)](https://www.cs.utexas.edu/~huangqx/mcb_benchmark.pdf) | ECCV 
2020 | [[project](https://engineering.purdue.edu/cdesign/wp/a-large-scale-annotated-mechanical-components-benchmark-for-classification-and-retrieval-tasks-with-deep-neural-networks/)] |\n| [ABC: A Big CAD Model Dataset For Geometric Deep Learning](https://arxiv.org/abs/1812.06216) | CVPR 2019 | [[project](https://deep-geometry.github.io/abc-dataset/)] |\n\n## CAD Reconstruction\n\n| Papers | Venue | Links |\n|--------|-------|-------|\n| [BRep Boundary and Junction Detection for CAD Reverse Engineering](https://arxiv.org/abs/2409.14087) | ICMI 2024 | [[project](https://skazizali.com/brepdetnet.github.io/)] [[code](https://github.com/saali14/Scan-to-BRep)] |\n| [CAD-GPT: Synthesising CAD Construction Sequence with Spatial Reasoning-Enhanced Multimodal LLMs](https://arxiv.org/abs/2412.19663) | arXiv 2025 | |\n| [CAD-Recode: Reverse Engineering CAD Code from Point Clouds](https://arxiv.org/abs/2412.14042) | arXiv 2025 | |\n| [From 2D CAD Drawings to 3D Parametric Models: A Vision-Language Approach](https://arxiv.org/abs/2412.11892) | AAAI 2025 | [[project](https://manycore-research.github.io/CAD2Program/)] |\n| [PlankAssembly: Robust 3D Reconstruction from Three Orthographic Views with Learnt Shape Programs](https://arxiv.org/abs/2308.05744) | ICCV 2023 | [[project](https://manycore-research.github.io/PlankAssembly/)] [[code](https://github.com/manycore-research/PlankAssembly/)] |\n| [SolidGen: An Autoregressive Model for Direct B-rep Synthesis](https://arxiv.org/abs/2203.13944) | TMLR 2023 | |\n| [Reconstructing Editable Prismatic CAD from Rounded Voxel Models](https://arxiv.org/abs/2209.01161) | SIGGRAPH Asia 2022 | |\n| [ComplexGen: CAD Reconstruction by B-Rep Chain Complex Generation](https://arxiv.org/abs/2205.14573) | SIGGRAPH 2022 | [[project](https://haopan.github.io/complexgen.html)] [[code](https://github.com/guohaoxiang/ComplexGen)] |\n| [Point2Cyl: Reverse Engineering 3D Objects from Point Clouds to Extrusion Cylinders](https://arxiv.org/abs/2112.09329) | 
CVPR 2022 | [[project](https://point2cyl.github.io/)] [[code](https://github.com/mikacuy/point2cyl)] |\n| [PC2WF: 3D Wireframe Reconstruction from Raw Point Clouds](https://arxiv.org/abs/2103.02766) | ICLR 2021 | [[code](https://github.com/YujiaLiu76/PC2WF)] |\n| [PIE-NET: Parametric Inference of Point Cloud Edges](https://arxiv.org/abs/2007.04883) | NeurIPS 2020 | [[code](https://github.com/wangxiaogang866/PIE-NET)] |\n| [ParSeNet: A Parametric Surface Fitting Network for 3D Point Clouds](https://arxiv.org/abs/2003.12181) | ECCV 2020 | [[project](https://hippogriff.github.io/parsenet/)] [[code](https://github.com/Hippogriff/parsenet-codebase)] |\n| [Supervised Fitting of Geometric Primitives to 3D Point Clouds](https://arxiv.org/abs/1811.08988) | CVPR 2019 | [[code](https://github.com/lingxiaoli94/SPFN)] |\n\n## CAD Generation\n\n| Papers | Venue | Links |\n|--------|-------|-------|\n| [BrepDiff: Single-stage B-rep Diffusion Model](https://drive.google.com/file/d/1ZkdjmljmbJer5Lbn55UwKRqR9AcHBydA/view) | SIGGRAPH 2025 | [[project](https://brepdiff.github.io/)] [[code](https://github.com/brepdiff/brepdiff)] |\n| [HoLa: B-Rep Generation using a Holistic Latent Representation](https://arxiv.org/abs/2504.14257) | SIGGRAPH 2025 | [[project](https://vcc.tech/research/2025/HolaBrep)] |\n| [Text-to-CadQuery: A New Paradigm for CAD Generation with Scalable Large Model Capabilities](https://arxiv.org/abs/2505.06507) | arXiv 2025 | [[code](https://github.com/Text-to-CadQuery/Text-to-CadQuery)] |\n| [FlexCAD: Unified and Versatile Controllable CAD Generation with Fine-tuned Large Language Models](https://arxiv.org/abs/2411.05823) | ICLR 2025 | |\n| [Don’t Mesh with Me: Generating Constructive Solid Geometry Instead of Meshes by Fine-Tuning a Code-Generation LLM](https://arxiv.org/abs/2411.15279) | arXiv 2024 | |\n| [Text2CAD: Text to 3D CAD Generation via Technical Drawings](https://arxiv.org/abs/2411.06206) | NeurIPS 2024 | 
[[project](https://sadilkhan.github.io/text2cad-project/)] |\n| [CadVLM: Bridging Language and Vision in the Generation of Parametric CAD Sketches](https://arxiv.org/abs/2409.17457) | NeurIPS Workshop 2024 | |\n| [BrepGen: A B-rep Generative Diffusion Model with Structured Latent Geometry](https://arxiv.org/abs/2401.15563) | SIGGRAPH 2024 | [[code](https://github.com/samxuxiang/BrepGen)] |\n| [3DALL-E: Integrating Text-to-Image AI in 3D Design Workflows](https://arxiv.org/abs/2210.11603) | arXiv 2022 | |\n| Free2CAD: Parsing Freehand Drawings into CAD Commands | SIGGRAPH 2022 | [[project](https://geometry.cs.ucl.ac.uk/projects/2022/free2cad/)] [[code](https://github.com/Enigma-li/Free2CAD)] |\n| [SkexGen: Autoregressive Generation of CAD Construction Sequences with Disentangled Codebooks](https://arxiv.org/abs/2207.04632) | ICML 2022 | [[project](https://samxuxiang.github.io/skexgen)] [[code](https://github.com/samxuxiang/SkexGen)] |\n| [Vitruvion: A Generative Model of Parametric CAD Sketches](https://arxiv.org/abs/2109.14124) | ICLR 2022 | [[project](https://lips.cs.princeton.edu/vitruvion/)] [[code](https://github.com/PrincetonLIPS/vitruvion)] |\n| [JoinABLe: Learning Bottom-up Assembly of Parametric CAD Joints](https://arxiv.org/abs/2111.12772) | CVPR 2022 | [[code](https://github.com/AutodeskAILab/JoinABLe)] |\n| [SketchGen: Generating Constrained CAD Sketches](https://arxiv.org/abs/2106.02711) | NeurIPS 2021 | |\n| [Computer-Aided Design as Language](https://arxiv.org/abs/2105.02769) | NeurIPS 2021 | [[data](http://github.com/deepmind/deepmind-research/blob/master/cadl)] |\n| [DeepCAD: A Deep Generative Network for Computer-Aided Design Models](https://arxiv.org/abs/2105.09492) | ICCV 2021 | [[project](http://www.cs.columbia.edu/cg/deepcad/)] [[code](https://github.com/ChrisWu1997/DeepCAD)] |\n| [Engineering Sketch Generation for Computer-Aided Design](https://arxiv.org/abs/2104.09621) | CVPR Workshop 2021 | |\n| [Sketch2CAD: Sequential CAD Modeling by 
Sketching in Context](https://arxiv.org/abs/2009.04927) | SIGGRAPH Asia 2020 | [[project](http://geometry.cs.ucl.ac.uk/projects/2020/sketch2cad/)] [[code](https://github.com/Enigma-li/Sketch2CAD)] |\n| [PolyGen: An Autoregressive Generative Model of 3D Meshes](https://arxiv.org/abs/2002.10880) | ICML 2020 | [[code](https://github.com/deepmind/deepmind-research/blob/master/polygen/)] |\n\n## CAD Representation\n\n| Papers | Venue | Links |\n|--------|-------|-------|\n| [DualCSG: Learning Dual CSG Trees for General and Compact CAD Modeling](https://arxiv.org/abs/2301.11497) | arXiv 2023 | |\n| [Discovering Design Concepts for CAD Sketches](https://arxiv.org/abs/2210.14451) | NeurIPS 2022 | [[code](https://github.com/yyuezhi/SketchConcept)] |\n| [Self-Supervised Representation Learning for CAD](https://arxiv.org/abs/2210.10807) | CVPR 2023 | |\n| [CADOps-Net: Jointly Learning CAD Operation Types and Steps from Boundary-Representations](https://arxiv.org/abs/2208.10555) | 3DV 2022 | |\n| [CSG-Stump: A Learning Friendly CSG-Like Representation for Interpretable Shape Parsing](https://arxiv.org/abs/2108.11305) | ICCV 2021 | [[code](https://github.com/kimren227/CSGStumpNet)] |\n| [UV-Net: Learning from Boundary Representations](https://arxiv.org/abs/2006.10211) | CVPR 2021 | [[code](https://github.com/AutodeskAILab/UV-Net)] |\n| [BRepNet: A Topological Message Passing System for Solid Models](https://arxiv.org/abs/2104.00706) | CVPR 2021 | [[code](https://github.com/AutodeskAILab/BRepNet)] |\n| [CSGNet: Neural Shape Parser for Constructive Solid Geometry](https://arxiv.org/abs/1712.08290) | CVPR 2018 | [[code](https://github.com/Hippogriff/CSGNet)] |\n\n## CAD Recognition\n\n| Papers | Venue | Links |\n|--------|-------|-------|\n| [ArchCAD-400K: An Open Large-Scale Architectural CAD Dataset and New Baseline for Panoptic Symbol Spotting](https://arxiv.org/abs/2503.22346) | arXiv 2025 | |\n| [CADSpotting: Robust Panoptic Symbol Spotting on Large-Scale CAD 
Drawings](https://arxiv.org/abs/2412.07377) | arXiv 2024 | |\n| [Symbol as Points: Panoptic Symbol Spotting via Point-based Representation](https://arxiv.org/abs/2401.10556) | ICLR 2024 | [[code](https://github.com/nicehuster/SymPoint)] |\n| [VectorFloorSeg: Two-Stream Graph Attention Network for Vectorized Roughcast Floorplan Segmentation](https://openaccess.thecvf.com/content/CVPR2023/html/Yang_VectorFloorSeg_Two-Stream_Graph_Attention_Network_for_Vectorized_Roughcast_Floorplan_Segmentation_CVPR_2023_paper.html) | CVPR 2023 | [[code](https://github.com/DrZiji/VecFloorSeg)] |\n| [CADTransformer: Panoptic Symbol Spotting Transformer for CAD Drawings](https://openaccess.thecvf.com/content/CVPR2022/papers/Fan_CADTransformer_Panoptic_Symbol_Spotting_Transformer_for_CAD_Drawings_CVPR_2022_paper.pdf)| CVPR 2022 | [[code](https://github.com/VITA-Group/CADTransformer)] |\n| [GAT-CADNet: Graph Attention Network for Panoptic Symbol Spotting in CAD Drawings](https://arxiv.org/abs/2201.00625) | CVPR 2022 | |\n| [FloorPlanCAD: A Large-Scale CAD Drawing Dataset for Panoptic Symbol Spotting](https://arxiv.org/abs/2105.07147) | ICCV 2021 | [[project](https://floorplancad.github.io/)] |\n"
  }
]