Repository: microsoft/dynamics365patternspractices Branch: main Commit: 7a860bb27b25 Files: 95 Total size: 181.6 KB Directory structure: gitextract_akttpnjf/ ├── .github/ │ ├── ISSUE_TEMPLATE/ │ │ ├── catalog-change-request.yml │ │ ├── config.yml │ │ ├── content-request.yml │ │ ├── error-report.yml │ │ ├── new-business-process-area-draft.yml │ │ ├── new-business-process-draft.yml │ │ └── new-pattern-draft.yml │ └── workflows/ │ └── auto-assign.yml ├── .gitignore ├── CODEOWNERS ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE ├── LICENSE-ASSETS ├── README.md ├── SECURITY.md ├── SUPPORT.md ├── architectures/ │ ├── custom-copilot-agent-dynamics-365-power-apps-architecture.pptx │ ├── customer-service-integrate-booking-system-appointments.vsdx │ ├── dynamics-365-azure-powered-manufacturing-sales-framework.vsdx │ ├── dynamics-365-finance-operations-process-outbound-goods-external-service-providers-reference-architecture.pptx │ ├── healthcare-writeback-dynamics-365-identifiers-fhir-server-architecture.pptx │ ├── readme.md │ ├── reference-architecture-dynamics-365-travel-hospitality.pptx │ ├── saga-pattern-with-dataverse-or-dynamics-365.ppt │ ├── sales-opportunity-health.pptx │ └── scheduled-data-exports-from-dynamics-ce-to-csv-diagram.vsdx ├── business-process-catalog/ │ └── README.md ├── graphics/ │ ├── Acquire to dispose process flow steps (AI Generated).xlsx │ ├── Acquire to dispose visio flow diagrams (AI Generated).vsdx │ ├── Administer to operate process flow data (AI Generated).xlsx │ ├── Administer to operate process flow diagrams (AI Generated).vsdx │ ├── Case to resolution process flow data (AI Generated).xlsx │ ├── Case to resolution process flow diagrams (AI Generated).vsdx │ ├── Concept to market process flow data (AI Generated).xlsx │ ├── Concept to market process flow diagrams (AI Generated).vsdx │ ├── Design to retire process flow data (AI Generated).xlsx │ ├── Design to retire process flow diagrams (AI Generated).vsdx │ ├── Forecast to plan process 
flow data (AI Generated).xlsx │ ├── Forecast to plan process flow diagrams (AI Generated).vsdx │ ├── Hire to retire process flow data (AI Generated).xlsx │ ├── Hire to retire process flow diagrams (AI Generated).vsdx │ ├── Inventory to deliver process flow data (AI Generated).xlsx │ ├── Inventory to deliver process flow diagrams (AI Generated).vsdx │ ├── Order to cash process flow data (AI Generated).xlsx │ ├── Order to cash process flow diagrams (AI Generated).vsdx │ ├── Plan to produce process flow data (AI Generated).xlsx │ ├── Plan to produce process flow diagrams (AI Generated).vsdx │ ├── Project to profit process flow diagrams (AI Generated).vsdx │ ├── Project to profit process flow steps (AI Generated).xlsx │ ├── Prospect to quote process flow diagrams (AI Generated).vsdx │ ├── Prospect to quote process flow steps (AI Generated).xlsx │ ├── README.md │ ├── Record to report process flow data (AI Generated).xlsx │ ├── Record to report process flow diagrams (AI Generated).vsdx │ ├── Service to deliver process flow data (AI Generated).xlsx │ ├── Service to deliver process flow diagrams (AI Generated).vsdx │ ├── Source to pay process flow data (AI Generated).xlsx │ └── Source to pay process flow diagrams (AI Generated).vsdx ├── sample-solutions/ │ └── README.md ├── submit-architecture/ │ └── placeholder.md ├── submit-business-processes/ │ └── placeholder.md ├── templates/ │ ├── Azure-DevOps-templates/ │ │ ├── 1_ADO_Creation_Script (Preview).py │ │ ├── 2_ADO_Page_Layout_Script_Threaded (Preview).py │ │ ├── 3_ADO_Teams_Areas_Script (Preview).py │ │ ├── 4_ADO_Backlog_Config_Script (Preview).py │ │ ├── ADO template guideline (Preview).xlsx │ │ ├── ADO template guideline.xlsx │ │ └── README.md │ ├── business-processes/ │ │ ├── Business Process Guide Graphics Template.potx │ │ ├── L1 End-to-end business process template.dotx │ │ ├── L2 Business process area template.dotx │ │ ├── L3 Business process template.dotx │ │ ├── L4 Pattern - Import Data-Entity Template.dotx │ │ 
├── L4 Pattern or Practice Template.dotx │ │ ├── README.md │ │ ├── Reference architecture template.dotx │ │ └── import-business-processes-ADO.md │ ├── implementation-projects/ │ │ └── FDD_TDD_template.docx │ └── reference-architectures.md └── workshops/ ├── Acquire to dispose workshops.docx ├── Case to resolution workshops.docx ├── Concept to market workshops.docx ├── Design to retire workshops.docx ├── Forecast to plan workshops.docx ├── Hire to retire workshops.docx ├── Inventory to deliver workshops.docx ├── Order to cash workshops.docx ├── Plan to Produce Workshops.docx ├── Project to profit workshops.docx ├── Prospect to quote workshops.docx ├── README.md ├── Record to report workshops.docx ├── Service to deliver workshops.docx └── Source to pay workshops.docx ================================================ FILE CONTENTS ================================================ ================================================ FILE: .github/ISSUE_TEMPLATE/catalog-change-request.yml ================================================ name: Business process catalog change request description: Use this issue type to register that you would like to request a change to the business process catalog. This can include adding a new row, updating an existing row, or removing a row. title: "[CATALOG]: " labels: ["catalog", "triage"] assignees: - rachel-profitt body: - type: markdown attributes: value: | Thank you for contributing to the [Dynamics 365 business process catalog](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/)! Registering your request here is the first step in contributing. Your request will be reviewed, but not all requests are approved. Approved requests will be added to the business process catalog. Find the catalog and templates at [https://github.com/microsoft/dynamics365patternspractices/templates/business-processes](https://github.com/microsoft/dynamics365patternspractices/tree/main/templates/business-processes).
If you want to contribute with architectural patterns for Dynamics 365 implementations, use the templates at [https://github.com/microsoft/dynamics365patternspractices/templates/architecture](https://github.com/microsoft/dynamics365patternspractices/tree/main/templates/architecture). We do not recommend starting work on a new article for a new business process catalog row until the catalog request is approved. You can use this request to add new business processes or new patterns and practices to the catalog. Use one issue request for each new row you want to add or update. - type: input id: contact attributes: label: Contact details description: How can we get in touch with you? placeholder: myemail@example.com validations: required: true - type: dropdown id: organization-type attributes: label: "Organization type" description: Which type of organization do you work for? options: - "Microsoft MVP" - "Partner / ISV / Independent consultant" - "Customer organization" - "Microsoft employee" validations: required: true - type: dropdown id: endtoend attributes: label: "End-to-end business process" description: Which end-to-end business process is the article related to? options: - "Acquire to dispose" - "Administer to operate" - "Case to resolution" - "Concept to market" - "Design to retire" - "Forecast to plan" - "Hire to retire" - "Inventory to deliver" - "Order to cash" - "Plan to produce" - "Procure to pay" - "Project to profit" - "Prospect to quote" - "Record to report" - "Service to cash" validations: required: true - type: input id: area-name attributes: label: "Which business process area is this article related to?" description: "Make sure you use the same name that is listed in the business process catalog." value: "Create and manage sales" validations: required: true - type: input id: process-name attributes: label: "Which business process is this article related to?"
description: "Make sure you use the same name that is listed in the business process catalog." value: "Create a sales order" validations: required: true - type: input id: pattern-name attributes: label: "Which pattern or practice is this request related to?" description: "Make sure you use the same name that is listed in the business process catalog. If this is a new pattern or practice, enter the suggested name for the pattern or practice." value: "Create a sales order in the retail point of sale" validations: required: false - type: textarea id: comments attributes: label: "Please describe the suggested change. If this is an update to an existing row in the catalog, please be sure to indicate that in the comments." description: "Write your comments" value: "Comments go here" validations: required: true - type: checkboxes id: terms attributes: label: Code of Conduct description: By submitting this issue, you agree to follow the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/), or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. options: - label: I agree to follow this project's Code of Conduct required: true - type: markdown attributes: value: "## Thanks!" - type: markdown attributes: value: Thank you for contributing to the [business process guidance](https://learn.microsoft.com/en-us/dynamics365/guidance/).
================================================ FILE: .github/ISSUE_TEMPLATE/config.yml ================================================ blank_issues_enabled: false contact_links: - name: Submit content requests url: https://github.com/microsoft/dynamics365patternspractices/issues/new/choose about: Choose the right template for your feedback from the list above - name: Discussions url: https://github.com/microsoft/dynamics365patternspractices/discussions about: Please ask and answer questions here. - name: Email the Microsoft Dynamics 365 Guidance team url: mailto:bizprocessguide@microsoft.com about: For additional support, please email the team. ================================================ FILE: .github/ISSUE_TEMPLATE/content-request.yml ================================================ name: Content request description: Request a new feature or article for the Dynamics 365 guidance hub title: "[CONTENT REQUEST]: " labels: ["feature", "triage"] assignees: - edupont04 body: - type: markdown attributes: value: | Thanks for submitting this content request. We triage this feedback regularly and use it to plan for future documentation improvements. Not all content requests are implemented, but we'll prioritize based on current work priorities and the tooling available. - type: input id: contact attributes: label: Contact Details description: How can we get in touch with you if we need more info? placeholder: myemail@example.com validations: required: true - type: textarea id: description attributes: label: Describe the type of article or feature on the Microsoft Learn website that you would like to see in the Dynamics 365 guidance documentation description: Also, tell us what you expected to see. placeholder: Tell us what you want to see! value: "A great new feature!"
validations: required: true - type: checkboxes id: terms attributes: label: Code of Conduct description: By submitting this issue, you agree to follow the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. options: - label: I agree to follow this project's Code of Conduct required: true ================================================ FILE: .github/ISSUE_TEMPLATE/error-report.yml ================================================ name: Error Report description: File an error report for Dynamics 365 guidance content title: "[ERROR]: " labels: ["bug", "triage"] assignees: - rachel-profitt body: - type: markdown attributes: value: | Thanks for taking the time to submit this error report! - type: input id: contact attributes: label: Contact Details description: How can we get in touch with you if we need more info? placeholder: myemail@example.com validations: required: true - type: textarea id: description attributes: label: Describe the error you see in the documentation, or the correction you suggest description: Also, tell us what you expected to see. placeholder: Tell us what you want to see! value: "Something is wrong in the content!" validations: required: true - type: input id: siteURL attributes: label: Link to the article where the error is description: Please copy and paste the link to the article or place on Microsoft Learn where the error or correction is needed. validations: required: true - type: checkboxes id: terms attributes: label: Code of Conduct description: By submitting this issue, you agree to follow the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. options: - label: I agree to follow this project's Code of Conduct required: true ================================================ FILE: .github/ISSUE_TEMPLATE/new-business-process-area-draft.yml ================================================ name: New business process area description: Use this issue type to indicate that you are beginning work on a new business process area article. The business process area should already be listed in the business process catalog. title: "[AREA]: " labels: ["area", "triage"] assignees: - rachel-profitt body: - type: markdown attributes: value: | Thanks for taking the time to contribute to the [Dynamics 365 business process guidance](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/). Registering your work here is the first step in contributing. Learn more [here](https://learn.microsoft.com/en-us/dynamics365/get-started/contribute#register-your-work). - type: input id: contact attributes: label: Contact details description: How can we get in touch with you? placeholder: myemail@example.com validations: required: true - type: dropdown id: organization-type attributes: label: "Organization type" description: Which type of organization do you work for? options: - "Partner / ISV / Independent consultant" - "Customer organization" - "Microsoft MVP" - "Microsoft employee" validations: required: true - type: dropdown id: endtoend attributes: label: "End-to-end business process" description: Which end-to-end business process is the article related to?
options: - "Acquire to dispose" - "Administer to operate" - "Case to resolution" - "Concept to market" - "Design to retire" - "Forecast to plan" - "Hire to retire" - "Inventory to deliver" - "Order to cash" - "Plan to produce" - "Procure to pay" - "Project to profit" - "Prospect to quote" - "Record to report" - "Service to cash" validations: required: true - type: input id: area-name attributes: label: "Which business process area is this article related to?" description: "Make sure to use the same name that is listed in the business process catalog." value: "Create and manage sales" validations: required: true - type: textarea id: comments attributes: label: "Enter any additional comments or information you want us to know." description: "Write the comments" value: "Comments go here" validations: required: false - type: input id: expected-date attributes: label: "Specify the date you expect the article to be completed and ready for review." description: "Please include the month, date, and year in the format mm/dd/yyyy" value: "01/24/2024" validations: required: true - type: checkboxes id: terms attributes: label: Code of Conduct description: By submitting this issue, you agree to follow the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. options: - label: I agree to follow this project's Code of Conduct required: true - type: markdown attributes: value: "## Thanks!" - type: markdown attributes: value: Thank you for contributing to the [business process guidance](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/). 
================================================ FILE: .github/ISSUE_TEMPLATE/new-business-process-draft.yml ================================================ name: New business process description: Use this issue type to indicate that you are beginning work on a new business process article. The business process should already exist in the business process catalog. title: "[BUSINESS PROCESS]: " labels: ["business process", "triage"] assignees: - rachel-profitt body: - type: markdown attributes: value: | Thanks for contributing to the business process guidance! Registering your work here is the first step in contributing. Learn more [here](https://learn.microsoft.com/en-us/dynamics365/get-started/contribute#register-your-work). - type: input id: contact attributes: label: Contact details description: How can we get in touch with you? placeholder: myemail@example.com validations: required: true - type: dropdown id: organization-type attributes: label: "Organization type" description: Which type of organization do you work for? options: - "Partner / ISV / Independent consultant" - "Customer organization" - "Microsoft MVP" - "Microsoft employee" validations: required: true - type: dropdown id: endtoend attributes: label: "End-to-end business process" description: Which end-to-end business process is the article related to? options: - "Acquire to dispose" - "Administer to operate" - "Case to resolution" - "Concept to market" - "Design to retire" - "Forecast to plan" - "Hire to retire" - "Inventory to deliver" - "Order to cash" - "Plan to produce" - "Procure to pay" - "Project to profit" - "Prospect to quote" - "Record to report" - "Service to cash" validations: required: true - type: input id: area-name attributes: label: "Which business process area is this article related to?" description: "Make sure you use the same name that is listed in the business process catalog."
value: "Create and manage sales" validations: required: true - type: input id: process-name attributes: label: "Enter the name of the business process you are starting work on" description: "Make sure you use the same name that is listed in the business process catalog." value: "Create a sales order" validations: required: true - type: textarea id: comments attributes: label: "Enter any additional comments or information you want us to know." description: "Write the comments" value: "Comments go here" validations: required: false - type: input id: expected-date attributes: label: "Specify the date you expect the article to be completed and ready for review." description: "Please include the month, date, and year in the format mm/dd/yyyy" value: "01/24/2024" validations: required: true - type: checkboxes id: terms attributes: label: Code of Conduct description: By submitting this issue, you agree to follow the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. options: - label: I agree to follow this project's Code of Conduct required: true - type: markdown attributes: value: "## Thanks!" - type: markdown attributes: value: Thank you for contributing to the [business process guidance](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/overview)! ================================================ FILE: .github/ISSUE_TEMPLATE/new-pattern-draft.yml ================================================ name: New pattern or practice description: Use this issue type to register that you have started work on a new pattern or practice article for Dynamics 365 implementations. 
title: "[PATTERN]: " labels: ["pattern", "triage"] assignees: - rachel-profitt body: - type: markdown attributes: value: | Thank you for contributing to the [Dynamics 365 business process guidance](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/)! Registering your work here is the first step in contributing. Your new article should already be listed in the business process catalog. Find the catalog and templates at [https://github.com/microsoft/dynamics365patternspractices/templates/business-processes](https://github.com/microsoft/dynamics365patternspractices/tree/main/templates/business-processes). If you want to contribute with architectural patterns for Dynamics 365 implementations, use the templates at [https://github.com/microsoft/dynamics365patternspractices/templates/architecture](https://github.com/microsoft/dynamics365patternspractices/tree/main/templates/architecture). - type: input id: contact attributes: label: Contact details description: How can we get in touch with you? placeholder: myemail@example.com validations: required: true - type: dropdown id: organization-type attributes: label: "Organization type" description: Which type of organization do you work for? options: - "Microsoft MVP" - "Partner / ISV / Independent consultant" - "Customer organization" - "Microsoft employee" validations: required: true - type: dropdown id: endtoend attributes: label: "End-to-end business process" description: Which end-to-end business process is the article related to? options: - "Acquire to dispose" - "Administer to operate" - "Case to resolution" - "Concept to market" - "Design to retire" - "Forecast to plan" - "Hire to retire" - "Inventory to deliver" - "Order to cash" - "Plan to produce" - "Procure to pay" - "Project to profit" - "Prospect to quote" - "Record to report" - "Service to cash" validations: required: true - type: input id: area-name attributes: label: "Which business process area is this article related to?"
description: "Make sure you use the same name that is listed in the business process catalog." value: "Create and manage sales" validations: required: true - type: input id: process-name attributes: label: "Which business process is this article related to?" description: "Make sure you use the same name that is listed in the business process catalog." value: "Create a sales order" validations: required: true - type: input id: pattern-name attributes: label: "Which pattern or practice is this request related to?" description: "Enter the name of the pattern or practice you are starting work on. Make sure you use the same name that is listed in the business process catalog." value: "Create a sales order in the retail point of sale" validations: required: true - type: textarea id: comments attributes: label: "Enter any additional comments or information you want us to know." description: "Write your comments" value: "Comments go here" validations: required: false - type: input id: expected-date attributes: label: "Specify the date you expect the article to be completed and ready for review." description: "Please include the month, date, and year in the format mm/dd/yyyy" value: "01/24/2024" validations: required: true - type: checkboxes id: terms attributes: label: Code of Conduct description: By submitting this issue, you agree to follow the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/), or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. options: - label: I agree to follow this project's Code of Conduct required: true - type: markdown attributes: value: "## Thanks!" - type: markdown attributes: value: Thank you for contributing to the [business process guidance](https://learn.microsoft.com/en-us/dynamics365/guidance/). 
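The auto-assign workflow that follows (.github/workflows/auto-assign.yml) routes newly opened issues by regex-matching the rendered issue body against the end-to-end process dropdown and looking the result up in an assignee map. A minimal Node.js sketch of that extraction and lookup, using a hypothetical sample issue body (the exact text GitHub renders for a dropdown may differ from this sample):

```javascript
// Hypothetical sample of how the issue form's dropdown selection might
// appear in the issue body; shaped to match the workflow's regex.
const issueBody = "### End-to-End Business Process\n[Record to report]";

// Subset of the assignee map from auto-assign.yml.
const processMap = {
  'Forecast to plan': 'riblack-microsoft',
  'Procure to pay': 'AdiVijayashankar',
  'Record to report': 'kgiardini',
  'Service to cash': 'Dean-Hardy'
};

// Same extraction logic as the workflow: capture the bracketed value on
// the line after the "End-to-End Business Process" heading.
const match = /End-to-End Business Process.*\n.*\[(.*)\]/i.exec(issueBody);
const selectedProcess = match ? match[1].trim() : null;
const assignee = selectedProcess ? processMap[selectedProcess] : undefined;

console.log(selectedProcess); // "Record to report"
console.log(assignee);        // "kgiardini"
```

Note that if the rendered body doesn't contain a bracketed value, the regex yields no match and `assignee` stays undefined, which is why the workflow guards the API call with `if (assignee)`.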
================================================ FILE: .github/workflows/auto-assign.yml ================================================ name: Automatic Issue Assignment on: issues: types: [opened] jobs: assign: runs-on: ubuntu-latest steps: - name: Assign issue based on business process uses: actions/github-script@v5 with: script: | const issueBody = context.payload.issue.body; const processMap = { //'Acquire to dispose': 'Harshad', //'Administer to operate': 'Harsh', //'Case to resolution': 'Vinoth', //'Concept to market': 'Jinal', //'Design to retire': 'Alejandra', 'Forecast to plan': 'riblack-microsoft', //'Hire to retire': 'Priyanka', //'Inventory to deliver': 'Nicole', //'Order to cash': 'Nikhil', //'Plan to produce': 'Phillip', 'Procure to pay': 'AdiVijayashankar', //'Project to profit': 'Lalitha', //'Prospect to quote': 'Kody', 'Record to report': 'kgiardini', 'Service to cash': 'Dean-Hardy' }; const match = /End-to-End Business Process.*\n.*\[(.*)\]/i.exec(issueBody); const selectedProcess = match ? match[1].trim() : null; const assignee = processMap[selectedProcess]; if (assignee) { await github.rest.issues.addAssignees({ owner: context.repo.owner, repo: context.repo.repo, issue_number: context.payload.issue.number, assignees: [assignee] }); } ================================================ FILE: .gitignore ================================================ ## Ignore Visual Studio temporary files, build results, and ## files generated by popular Visual Studio add-ons.
## ## Get latest from https://github.com/github/gitignore/blob/master/VisualStudio.gitignore # User-specific files *.rsuser *.suo *.user *.userosscache *.sln.docstates # User-specific files (MonoDevelop/Xamarin Studio) *.userprefs # Mono auto generated files mono_crash.* # Build results [Dd]ebug/ [Dd]ebugPublic/ [Rr]elease/ [Rr]eleases/ x64/ x86/ [Aa][Rr][Mm]/ [Aa][Rr][Mm]64/ bld/ [Bb]in/ [Oo]bj/ [Ll]og/ [Ll]ogs/ # Visual Studio 2015/2017 cache/options directory .vs/ # Uncomment if you have tasks that create the project's static files in wwwroot #wwwroot/ # Visual Studio 2017 auto generated files Generated\ Files/ # MSTest test Results [Tt]est[Rr]esult*/ [Bb]uild[Ll]og.* # NUnit *.VisualState.xml TestResult.xml nunit-*.xml # Build Results of an ATL Project [Dd]ebugPS/ [Rr]eleasePS/ dlldata.c # Benchmark Results BenchmarkDotNet.Artifacts/ # .NET Core project.lock.json project.fragment.lock.json artifacts/ # StyleCop StyleCopReport.xml # Files built by Visual Studio *_i.c *_p.c *_h.h *.ilk *.meta *.obj *.iobj *.pch *.pdb *.ipdb *.pgc *.pgd *.rsp *.sbr *.tlb *.tli *.tlh *.tmp *.tmp_proj *_wpftmp.csproj *.log *.vspscc *.vssscc .builds *.pidb *.svclog *.scc # Chutzpah Test files _Chutzpah* # Visual C++ cache files ipch/ *.aps *.ncb *.opendb *.opensdf *.sdf *.cachefile *.VC.db *.VC.VC.opendb # Visual Studio profiler *.psess *.vsp *.vspx *.sap # Visual Studio Trace Files *.e2e # TFS 2012 Local Workspace $tf/ # Guidance Automation Toolkit *.gpState # ReSharper is a .NET coding add-in _ReSharper*/ *.[Rr]e[Ss]harper *.DotSettings.user # TeamCity is a build add-in _TeamCity* # DotCover is a Code Coverage Tool *.dotCover # AxoCover is a Code Coverage Tool .axoCover/* !.axoCover/settings.json # Visual Studio code coverage results *.coverage *.coveragexml # NCrunch _NCrunch_* .*crunch*.local.xml nCrunchTemp_* # MightyMoose *.mm.* AutoTest.Net/ # Web workbench (sass) .sass-cache/ # Installshield output folder [Ee]xpress/ # DocProject is a documentation generator add-in 
DocProject/buildhelp/ DocProject/Help/*.HxT DocProject/Help/*.HxC DocProject/Help/*.hhc DocProject/Help/*.hhk DocProject/Help/*.hhp DocProject/Help/Html2 DocProject/Help/html # Click-Once directory publish/ # Publish Web Output *.[Pp]ublish.xml *.azurePubxml # Note: Comment the next line if you want to checkin your web deploy settings, # but database connection strings (with potential passwords) will be unencrypted *.pubxml *.publishproj # Microsoft Azure Web App publish settings. Comment the next line if you want to # checkin your Azure Web App publish settings, but sensitive information contained # in these scripts will be unencrypted PublishScripts/ # NuGet Packages *.nupkg # NuGet Symbol Packages *.snupkg # The packages folder can be ignored because of Package Restore **/[Pp]ackages/* # except build/, which is used as an MSBuild target. !**/[Pp]ackages/build/ # Uncomment if necessary however generally it will be regenerated when needed #!**/[Pp]ackages/repositories.config # NuGet v3's project.json files produces more ignorable files *.nuget.props *.nuget.targets # Microsoft Azure Build Output csx/ *.build.csdef # Microsoft Azure Emulator ecf/ rcf/ # Windows Store app package directories and files AppPackages/ BundleArtifacts/ Package.StoreAssociation.xml _pkginfo.txt *.appx *.appxbundle *.appxupload # Visual Studio cache files # files ending in .cache can be ignored *.[Cc]ache # but keep track of directories ending in .cache !?*.[Cc]ache/ # Others ClientBin/ ~$* *~ *.dbmdl *.dbproj.schemaview *.jfm *.pfx *.publishsettings orleans.codegen.cs # Including strong name files can present a security risk # (https://github.com/github/gitignore/pull/2483#issue-259490424) #*.snk # Since there are multiple workflows, uncomment next line to ignore bower_components # (https://github.com/github/gitignore/pull/1529#issuecomment-104372622) #bower_components/ # RIA/Silverlight projects Generated_Code/ # Backup & report files from converting an old project file # to a newer 
Visual Studio version. Backup files are not needed, # because we have git ;-) _UpgradeReport_Files/ Backup*/ UpgradeLog*.XML UpgradeLog*.htm ServiceFabricBackup/ *.rptproj.bak # SQL Server files *.mdf *.ldf *.ndf # Business Intelligence projects *.rdl.data *.bim.layout *.bim_*.settings *.rptproj.rsuser *- [Bb]ackup.rdl *- [Bb]ackup ([0-9]).rdl *- [Bb]ackup ([0-9][0-9]).rdl # Microsoft Fakes FakesAssemblies/ # GhostDoc plugin setting file *.GhostDoc.xml # Node.js Tools for Visual Studio .ntvs_analysis.dat node_modules/ # Visual Studio 6 build log *.plg # Visual Studio 6 workspace options file *.opt # Visual Studio 6 auto-generated workspace file (contains which files were open etc.) *.vbw # Visual Studio LightSwitch build output **/*.HTMLClient/GeneratedArtifacts **/*.DesktopClient/GeneratedArtifacts **/*.DesktopClient/ModelManifest.xml **/*.Server/GeneratedArtifacts **/*.Server/ModelManifest.xml _Pvt_Extensions # Paket dependency manager .paket/paket.exe paket-files/ # FAKE - F# Make .fake/ # CodeRush personal settings .cr/personal # Python Tools for Visual Studio (PTVS) __pycache__/ *.pyc # Cake - Uncomment if you are using it # tools/** # !tools/packages.config # Tabs Studio *.tss # Telerik's JustMock configuration file *.jmconfig # BizTalk build output *.btp.cs *.btm.cs *.odx.cs *.xsd.cs # OpenCover UI analysis results OpenCover/ # Azure Stream Analytics local run output ASALocalRun/ # MSBuild Binary and Structured Log *.binlog # NVidia Nsight GPU debugger configuration file *.nvuser # MFractors (Xamarin productivity tool) working folder .mfractor/ # Local History for Visual Studio .localhistory/ # BeatPulse healthcheck temp database healthchecksdb # Backup folder for Package Reference Convert tool in Visual Studio 2017 MigrationBackup/ # Ionide (cross platform F# VS Code tools) working folder .ionide/ ================================================ FILE: CODEOWNERS ================================================ /templates/business-processes/ @rachel-profitt 
/templates/architecture/ @edupont04 ================================================ FILE: CODE_OF_CONDUCT.md ================================================ # Microsoft Open Source Code of Conduct This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). Resources: - [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/) - [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) - Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns ================================================ FILE: CONTRIBUTING.md ================================================ # Contributing This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit [https://cla.opensource.microsoft.com](https://cla.opensource.microsoft.com). When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (such as a status check or comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. 
- If you want to contribute business process content, use the templates at [https://github.com/microsoft/dynamics365patternspractices/tree/main/templates/business-processes](https://github.com/microsoft/dynamics365patternspractices/tree/main/templates/business-processes). Submit your contribution at [https://aka.ms/D365SubmitPnP](https://aka.ms/D365SubmitPnP).
- If you want to contribute architectural guidance, use the templates at [https://github.com/microsoft/dynamics365patternspractices/tree/main/templates/architecture](https://github.com/microsoft/dynamics365patternspractices/tree/main/templates/architecture). Submit your contribution as a pull request here in this repo. Learn more in the [Get started](#get-started) section.
- If you want to contribute purely conceptual content, use the templates at [https://github.com/MicrosoftDocs/dynamics365-docs-templates](https://github.com/MicrosoftDocs/dynamics365-docs-templates). Learn more at [https://learn.microsoft.com/dynamics365/get-started/contribute](https://learn.microsoft.com/dynamics365/get-started/contribute).

## Get started

1. Fork this repo.

   To submit guidance content for Dynamics 365, you can't work directly in the repo, so the first thing you need to do is create a fork of the repo under your GitHub account. A fork is basically a copy of this repo that lets you work freely on the content without affecting the original *dynamics365patternspractices* repo. For more information, see [Fork a Repo](https://help.github.com/articles/fork-a-repo/).

2. Install GitHub Desktop (optional) and clone your forked repo.

   GitHub Desktop makes it easy to work with and collaborate on repos locally from your own desktop. For more information, see [GitHub Desktop](https://desktop.github.com/).

3. Choose the relevant template, and copy it to a relevant location on your device.

   Optionally, use Visual Studio Code and the Microsoft Learn Authoring Pack to author your Markdown files.
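If you prefer the command line over GitHub Desktop, the fork-and-clone steps above can be sketched roughly as follows. This is an illustrative sketch, not part of the official guidance: the account placeholder and the branch name are hypothetical, and you'd replace them with your own values.

```shell
# Clone your fork (replace <your-github-account> with your GitHub user name).
git clone https://github.com/<your-github-account>/dynamics365patternspractices.git
cd dynamics365patternspractices

# Track the original repo as "upstream" so you can keep your fork up to date.
git remote add upstream https://github.com/microsoft/dynamics365patternspractices.git
git fetch upstream

# Work on a topic branch (example name) rather than directly on main.
git switch -c my-guidance-contribution
```

When your content is ready, commit it to the topic branch, push the branch to your fork, and open the pull request from there.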
If you want to submit Word documents with business process content, follow the guidance at [https://learn.microsoft.com/dynamics365/get-started/contribute](https://learn.microsoft.com/dynamics365/get-started/contribute).

## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

================================================ FILE: LICENSE ================================================

Attribution 4.0 International

=======================================================================

Creative Commons Corporation ("Creative Commons") is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an "as-is" basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.

Using Creative Commons Public Licenses

Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC- licensed material, or material used under an exception or limitation to copyright. More considerations for licensors: wiki.creativecommons.org/Considerations_for_licensors Considerations for the public: By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor's permission is not necessary for any reason--for example, because of any applicable exception or limitation to copyright--then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. More_considerations for the public: wiki.creativecommons.org/Considerations_for_licensees ======================================================================= Creative Commons Attribution 4.0 International Public License By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution 4.0 International Public License ("Public License"). 
To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions. Section 1 -- Definitions. a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image. b. Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License. c. Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights. d. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements. e. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material. f. 
Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License. g. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license. h. Licensor means the individual(s) or entity(ies) granting rights under this Public License. i. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them. j. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world. k. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning. Section 2 -- Scope. a. License grant. 1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to: a. reproduce and Share the Licensed Material, in whole or in part; and b. produce, reproduce, and Share Adapted Material. 2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions. 3. Term. 
The term of this Public License is specified in Section 6(a). 4. Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a) (4) never produces Adapted Material. 5. Downstream recipients. a. Offer from the Licensor -- Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License. b. No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material. 6. No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i). b. Other rights. 1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise. 2. 
Patent and trademark rights are not licensed under this Public License. 3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties. Section 3 -- License Conditions. Your exercise of the Licensed Rights is expressly made subject to the following conditions. a. Attribution. 1. If You Share the Licensed Material (including in modified form), You must: a. retain the following if it is supplied by the Licensor with the Licensed Material: i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated); ii. a copyright notice; iii. a notice that refers to this Public License; iv. a notice that refers to the disclaimer of warranties; v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable; b. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and c. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License. 2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information. 3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable. 4. 
If You Share Adapted Material You produce, the Adapter's License You apply must not prevent recipients of the Adapted Material from complying with this Public License. Section 4 -- Sui Generis Database Rights. Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material: a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database; b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material; and c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database. For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights. Section 5 -- Disclaimer of Warranties and Limitation of Liability. a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU. b. 
TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR IN PART, THIS LIMITATION MAY NOT APPLY TO YOU. c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability. Section 6 -- Term and Termination. a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically. b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates: 1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or 2. upon express reinstatement by the Licensor. For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License. c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License. d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License. Section 7 -- Other Terms and Conditions. a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed. b. 
Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License. Section 8 -- Interpretation. a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License. b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions. c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor. d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority. ======================================================================= Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” The text of the Creative Commons public licenses is dedicated to the public domain under the CC0 Public Domain Dedication. 
Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark "Creative Commons" or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses. Creative Commons may be contacted at creativecommons.org. ================================================ FILE: LICENSE-ASSETS ================================================ Creative Commons Attribution-ShareAlike 4.0 International Public License By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-ShareAlike 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions. Section 1 – Definitions. a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. 
For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image. b. Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License. c. BY-SA Compatible License means a license listed at creativecommons.org/compatiblelicenses, approved by Creative Commons as essentially the equivalent of this Public License. d. Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights. e. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements. f. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material. g. License Elements means the license attributes listed in the name of a Creative Commons Public License. The License Elements of this Public License are Attribution and ShareAlike. h. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License. i. 
Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license. j. Licensor means the individual(s) or entity(ies) granting rights under this Public License. k. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them. l. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world. m. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning. Section 2 – Scope. a. License grant. 1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to: A. reproduce and Share the Licensed Material, in whole or in part; and B. produce, reproduce, and Share Adapted Material. 2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions. 3. Term. The term of this Public License is specified in Section 6(a). 4. Media and formats; technical modifications allowed. 
The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material. 5. Downstream recipients. A. Offer from the Licensor – Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License. B. Additional offer from the Licensor – Adapted Material. Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Adapted Material under the conditions of the Adapter’s License You apply. C. No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material. 6. No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i). b. Other rights. 1. 
Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise. 2. Patent and trademark rights are not licensed under this Public License. 3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties. Section 3 – License Conditions. Your exercise of the Licensed Rights is expressly made subject to the following conditions. a. Attribution. 1. If You Share the Licensed Material (including in modified form), You must: A. retain the following if it is supplied by the Licensor with the Licensed Material: i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated); ii. a copyright notice; iii. a notice that refers to this Public License; iv. a notice that refers to the disclaimer of warranties; v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable; B. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and C. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License. 2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. 
For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information. 3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable. b. ShareAlike. In addition to the conditions in Section 3(a), if You Share Adapted Material You produce, the following conditions also apply. 1. The Adapter’s License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-SA Compatible License. 2. You must include the text of, or the URI or hyperlink to, the Adapter's License You apply. You may satisfy this condition in any reasonable manner based on the medium, means, and context in which You Share Adapted Material. 3. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, Adapted Material that restrict exercise of the rights granted under the Adapter's License You apply. Section 4 – Sui Generis Database Rights. Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material: a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database; b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material, including for purposes of Section 3(b); and c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database. For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights. 
Section 5 – Disclaimer of Warranties and Limitation of Liability. a. Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You. b. To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You. c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability. Section 6 – Term and Termination. a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically. b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates: 1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or 2. 
upon express reinstatement by the Licensor. For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License. c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License. d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License. Section 7 – Other Terms and Conditions. a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed. b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License. Section 8 – Interpretation. a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License. b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions. c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor. d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority. 
================================================ FILE: README.md ================================================ # Dynamics 365 Patterns and Practices Welcome to the repository for patterns, practices, business process guides, and other types of guidance content for Microsoft Dynamics 365! This repo provides a way for you to actively contribute to the Dynamics 365 guidance content, and we welcome your contributions. Register your plans for a contribution as an [Issue](https://github.com/microsoft/dynamics365patternspractices/issues/new/choose), and submit the contribution as a [pull request](https://github.com/microsoft/dynamics365patternspractices/pulls). Learn more at [Contribute to Microsoft's content for Dynamics 365](https://learn.microsoft.com/en-us/dynamics365/get-started/contribute#dynamics-365-guidance-content). The *main* branch is the default branch with approved templates and other artifacts. If you have any questions, you can submit feedback as an Issue or a pull request. ## Microsoft Open Source Code of Conduct This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies. 
================================================ FILE: SECURITY.md ================================================ # Security Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/). If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below. ## Reporting Security Issues **Please do not report security vulnerabilities through public GitHub issues.** Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report). If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey). You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc). Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue: * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.) 
* Full paths of source file(s) related to the manifestation of the issue * The location of the affected source code (tag/branch/commit or direct URL) * Any special configuration required to reproduce the issue * Step-by-step instructions to reproduce the issue * Proof-of-concept or exploit code (if possible) * Impact of the issue, including how an attacker might exploit the issue This information will help us triage your report more quickly. If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs. ## Preferred Languages We prefer all communications to be in English. ## Policy Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd). ================================================ FILE: SUPPORT.md ================================================ # Support ## How to file issues and get help This project uses GitHub Issues to track bugs and feature requests. Please search the existing issues before filing new issues to avoid duplicates. For new issues, file your bug or feature request as a new Issue. For help and questions about using this project, please reach out to us on Yammer if you're external to Microsoft, or on Teams if you're internal to Microsoft. ## Microsoft Support Policy Support for this project is limited to the resources listed above. ================================================ FILE: architectures/readme.md ================================================ # Download reference architectures This folder contains downloadable reference architectures for solutions with Microsoft Dynamics 365 apps. Get an overview of the available architectures in the [Microsoft Dynamics 365 guidance hub](https://learn.microsoft.com/dynamics365/guidance/reference-architectures/). 
## Fetch a download To download an architecture, choose the file in the explorer, and then choose the **Download raw file** icon. ## Contribute Fetch the appropriate Markdown templates from the [guidance-templates](https://github.com/MicrosoftDocs/dynamics365-docs-templates/tree/main/guidance-templates) folder in the [dynamics365-docs-templates](https://github.com/MicrosoftDocs/dynamics365-docs-templates/) GitHub repo. Learn more at [Contribute to Microsoft content for Dynamics 365](https://learn.microsoft.com/dynamics365/get-started/contribute#architectures). ================================================ FILE: business-process-catalog/README.md ================================================ # Microsoft's business process catalog The term *business process* covers a wide range of structured, often sequenced, activities or tasks to achieve a predetermined organizational goal. The term can also refer to the cumulative effects of all steps progressing toward a business goal. In our articles, we illustrate this sequence of steps in flowcharts. Dynamics 365 is a suite of applications that are designed to help organizations meet the organizational goals aligned to a variety of business processes focused on specific industries. Learn more at [About business processes](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/about). ## Download the catalog Download the latest version of the catalog from [https://aka.ms/BusinessProcessCatalog](https://aka.ms/BusinessProcessCatalog). We update the catalog at least four times each year. Learn more at [Introduction to the business process catalog](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/about). Learn how to use the catalog in Azure DevOps at [Use the business process catalog as a template in Azure DevOps Services](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/about-import-catalog-devops). 
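If you want to explore the downloaded catalog programmatically before importing it anywhere, a short pandas sketch like the one below can help. The catalog ships as an Excel workbook that you would normally load with `pd.read_excel`; the column names used here (`Catalog ID`, `End-to-end process`, `Business process`) are illustrative assumptions based on how the catalog articles describe catalog IDs and process levels, so check them against the actual workbook you download.

```python
import pandas as pd

# Hypothetical stand-in for a few rows of the catalog workbook. With the
# real file you would instead do: catalog = pd.read_excel("catalog.xlsx")
catalog = pd.DataFrame(
    {
        "Catalog ID": ["10.10", "10.20", "20.10"],
        "End-to-end process": [
            "Acquire to dispose",
            "Acquire to dispose",
            "Case to resolution",
        ],
        "Business process": [
            "Plan and budget assets",
            "Acquire assets",
            "Create cases",
        ],
    }
)

# Group the business processes under their end-to-end process, mirroring
# how the catalog (and the graphics folder) are organized.
by_e2e = (
    catalog.groupby("End-to-end process")["Business process"]
    .apply(list)
    .to_dict()
)
print(by_e2e)
```

This kind of quick inspection is useful for scoping which end-to-end processes apply to your project before you import the catalog into Azure DevOps or another tool.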
================================================ FILE: graphics/README.md ================================================ --- date: 11/27/2025 --- # Dynamics 365 Patterns and Practices graphics This folder contains source files for the diagrams in the [business process guide](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/about) in the Microsoft Dynamics 365 guidance hub. Microsoft uploads new versions of the files with each update of the business process catalog. We always recommend that you download the latest version each time you start a new project. The graphics are created in Visio based on the data in the associated Excel workbook. Each end-to-end process includes one Visio file. The graphics inside each file are listed by [catalog ID](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/about#catalog-ids). We recommend that you download the graphics to help accelerate your fit-gap analysis process and to understand how the process works out of the box with Dynamics 365 applications. Keep in mind that not all variations of business processes may be documented in a given diagram. You can easily customize the graphics for your business requirements by modifying the arrows, adding steps, or removing steps as required. ================================================ FILE: sample-solutions/README.md ================================================ # Sample solutions for implementation projects with Dynamics 365 apps This folder contains sample solutions that you can choose to use in an implementation project. More details coming soon about how to upload sample solutions. ## Currently in this folder The folder currently contains the following sample solutions: |Name of subfolder |Description | |---------|---------| |**businessprocesscatalog-mavim** | Contains files that you can import into Power Automate so that you can import the business process catalog in Mavim. 
Learn more at [Import the business process catalog in Mavim using a Power Automate flow](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/about-import-catalog-mavim). | ================================================ FILE: submit-architecture/placeholder.md ================================================ --- title: Placeholder only description: Don't use this file because it doesn't do anything. author: edupont04 ms.author: edupont ms.topic: article ms.reviewer: raprofit ms.date: 06/16/2023 --- # Placeholder for contributions for Dynamics 365 guidance content I'm just a placeholder. You can submit a pull request with contributions to the Microsoft Dynamics 365 architecture and patterns content to this folder. Learn more at [Contribute to Microsoft's content for Dynamics 365](https://learn.microsoft.com/en-us/dynamics365/get-started/contribute#dynamics-365-guidance-content). ================================================ FILE: submit-business-processes/placeholder.md ================================================ --- title: Placeholder only description: Don't use this file because it doesn't do anything. author: edupont04 ms.author: edupont ms.topic: article ms.reviewer: raprofit ms.date: 06/16/2023 --- # Placeholder for contributions for Dynamics 365 guidance content I'm just a placeholder. You can submit a pull request with contributions to the Microsoft Dynamics 365 business process content to this folder. Learn more at [Contribute to Microsoft's content for Dynamics 365](https://learn.microsoft.com/en-us/dynamics365/get-started/contribute#dynamics-365-guidance-content). 
================================================ FILE: templates/Azure-DevOps-templates/1_ADO_Creation_Script (Preview).py ================================================ import pandas as pd import requests import base64 import json import time import urllib.parse import datetime import os import sys # === USER CONFIGURATION === # Fill in these variables with your Azure DevOps details and file paths. ADO_ORG_URL = "https://dev.azure.com/" # e.g. "https://dev.azure.com/Contoso" ADO_PROJECT = "" # e.g. "Business process catalog" PROCESS_NAME = "" # e.g. "Business process catalog" PAT = "" # Azure DevOps PAT with full access EXCEL_FILE = "ADO template guideline (Preview).xlsx" # Path to the Excel template file LOG_FILE = "1_ADO_Creation_Script_Log.txt" # === AUTHENTICATION SETUP === # Prepare HTTP headers for Azure DevOps REST API calls. authorization = str.encode(':' + PAT) b64_auth = base64.b64encode(authorization).decode() headers = { "Content-Type": "application/json", "Authorization": f"Basic {b64_auth}" } # Resolve relative path to the script directory and check existence base_dir = os.path.dirname(os.path.abspath(__file__)) excel_path = EXCEL_FILE if os.path.isabs(EXCEL_FILE) else os.path.join(base_dir, EXCEL_FILE) if not os.path.exists(excel_path): print(f"Error: Excel file not found: {excel_path}") print(f"Script directory: {base_dir}") print("Files in script directory:") for f in sorted(os.listdir(base_dir)): print(" ", f) sys.exit(1) def log_api_call(url, payload, response): """ Logs details of an API call to the log file. Args: url (str): The API endpoint URL. payload (dict): The request payload. response (requests.Response): The HTTP response object. 
""" with open(LOG_FILE, "a", encoding="utf-8") as f: f.write(f"\n[{datetime.datetime.now()}]\n") f.write(f"URL: {url}\n") f.write(f"Payload: {json.dumps(payload, indent=2)}\n") f.write(f"Response [{response.status_code}]: {response.text}\n") f.write("-" * 60 + "\n") def log(msg): """ Writes a message to the log file. Args: msg (str): The message to log. """ with open(LOG_FILE, "a", encoding="utf-8") as f: f.write(msg + "\n") def url_encode(name): """ URL-encodes a string for safe use in API endpoints. Args: name (str): The string to encode. Returns: str: The URL-encoded string. """ return urllib.parse.quote(name) def get_agile_process_id(): """ Retrieves the process type ID for the built-in Agile process. Returns: str: The Agile process type ID. Raises: Exception: If Agile process is not found. """ url = f"{ADO_ORG_URL}/_apis/work/processes?api-version=7.1-preview.2" resp = requests.get(url, headers=headers) resp.raise_for_status() for proc in resp.json().get("value", []): if proc["name"].lower() == "agile": return proc["typeId"] raise Exception("Agile process not found.") def get_process_id_by_name(process_name): """ Gets the process type ID for a given process name. Args: process_name (str): The name of the process. Returns: str or None: The process type ID, or None if not found. """ url = f"{ADO_ORG_URL}/_apis/work/processes?api-version=7.1-preview.2" resp = requests.get(url, headers=headers) resp.raise_for_status() for proc in resp.json().get("value", []): if proc["name"].strip().lower() == process_name.strip().lower(): return proc["typeId"] return None def create_process(process_name): """ Creates a new custom process based on Agile if it does not exist. Args: process_name (str): The name of the process to create. Returns: str: The process type ID. 
""" agile_id = get_agile_process_id() url = f"{ADO_ORG_URL}/_apis/work/processes?api-version=7.1-preview.2" payload = { "name": process_name, "description": f"Custom process based on Agile: {process_name}", "parentProcessTypeId": agile_id } resp = requests.post(url, headers=headers, json=payload) log_api_call(url, payload, resp) if resp.status_code in [200, 201]: print(f"Created process: {process_name}") return resp.json().get("typeId") elif resp.status_code == 409: print(f"Process already exists: {process_name}") return get_process_id_by_name(process_name) else: print(f"Failed to create process: {resp.status_code} - {resp.text}") raise Exception("Process creation failed.") def get_project_id_by_name(project_name): """ Gets the project ID for a given project name. Args: project_name (str): The name of the project. Returns: str or None: The project ID, or None if not found. """ url = f"{ADO_ORG_URL}/_apis/projects?api-version=7.1-preview.4" resp = requests.get(url, headers=headers) resp.raise_for_status() for proj in resp.json().get("value", []): if proj["name"].strip().lower() == project_name.strip().lower(): return proj["id"] return None def create_project(project_name, process_id): """ Creates a new Azure DevOps project using the specified process. Args: project_name (str): The name of the project. process_id (str): The process type ID to use. Returns: str or None: The project ID, or None if creation is asynchronous. 
""" url = f"{ADO_ORG_URL}/_apis/projects?api-version=7.1-preview.4" payload = { "name": project_name, "description": f"Project for process {process_id}", "capabilities": { "versioncontrol": {"sourceControlType": "Git"}, "processTemplate": {"templateTypeId": process_id} } } resp = requests.post(url, headers=headers, json=payload) log_api_call(url, payload, resp) if resp.status_code in [202]: # Project creation is async print(f"Project creation started: {project_name}") return None elif resp.status_code == 409: print(f"Project already exists: {project_name}") return get_project_id_by_name(project_name) else: print(f"Failed to create project: {resp.status_code} - {resp.text}") raise Exception("Project creation failed.") def build_reference_name(wit_name): """ Builds a reference name for a work item type by removing spaces and special characters. Args: wit_name (str): The work item type name. Returns: str: The reference name. """ safe_process = PROCESS_NAME.replace(" ", "") # Remove spaces from process name safe_name = wit_name.replace(" ", "").replace("-", "").replace("_", "") return f"{safe_process}.{safe_name}" def safe_json_value(val, default=""): """ Safely converts a value to string, handling NaN and None. Args: val: The value to convert. default: The default value if val is NaN or None. Returns: str: The safe string value. """ if pd.isna(val) or val is None: return default if isinstance(val, float) and (val != val): return default return str(val) # Clear the log file if os.path.exists(LOG_FILE): os.remove(LOG_FILE) # === PROCESS AND PROJECT CREATION === # Ensure the process and project exist before proceeding. 
process_id = get_process_id_by_name(PROCESS_NAME) if not process_id: process_id = create_process(PROCESS_NAME) project_id = get_project_id_by_name(ADO_PROJECT) if not project_id: create_project(ADO_PROJECT, process_id) print("Waiting 30 seconds for project creation to complete...") time.sleep(30) else: print(f"Project already exists: {ADO_PROJECT}") ADO_PROCESS_ID = process_id # === WORK ITEM TYPES CREATION/UPDATE === print("Starting work item type creation...") df = pd.read_excel(excel_path, sheet_name="Work item types") df.columns = df.columns.str.strip() df = df.drop_duplicates(subset=["Work item type"]) for col, default in [("Description", ""), ("Color", "0078D4"), ("Icon", "icon_gear")]: if col in df.columns: df[col] = df[col].fillna(default) # Fetch all existing work item types in one call to avoid per-item 500 errors wit_list_url = f"{ADO_ORG_URL}/_apis/work/processes/{ADO_PROCESS_ID}/workitemtypes?api-version=7.1-preview.2" wit_list_response = requests.get(wit_list_url, headers=headers) log_api_call(wit_list_url, {}, wit_list_response) existing_wits = {} if wit_list_response.status_code == 200: for wit in wit_list_response.json().get("value", []): existing_wits[wit["referenceName"]] = wit # Also index by the short name (last segment) for matching spreadsheet ref names short_name = wit["referenceName"].rsplit(".", 1)[-1] if "." 
in wit["referenceName"] else wit["referenceName"] existing_wits[short_name] = wit for idx, row in df.iterrows(): wit_name = row["Work item type"] description = row["Help text"] if "Help text" in row and pd.notna(row["Help text"]) else "" inherit_from = row.get("Inherit from", None) color = row["Color"] icon = row["Icon"] if pd.notna(row["Icon"]) and str(row["Icon"]).strip() != "" else "icon_test_case" custom_flag = str(row.get("Custom work item type", "")).strip().lower() ref_name = row.get("Reference name", build_reference_name(wit_name)).strip() # Check if work item type already exists using the pre-fetched list exists = ref_name in existing_wits # Skip standard work item types (no action needed) if custom_flag == "no": print(f"Skipped: {wit_name} (Standard work item, no changes)") continue # Disable work item types marked as disabled if custom_flag == "disabled": if exists: full_ref = existing_wits[ref_name]["referenceName"] already_disabled = existing_wits[ref_name].get("isDisabled", False) if already_disabled: print(f"Already disabled: {wit_name}") else: disable_url = f"{ADO_ORG_URL}/_apis/work/processes/{ADO_PROCESS_ID}/workitemtypes/{full_ref}?api-version=7.1-preview.2" disable_payload = {"isDisabled": True} response = requests.patch(disable_url, json=disable_payload, headers=headers) log_api_call(disable_url, disable_payload, response) if response.status_code in [200, 204]: print(f"Disabled: {wit_name}") else: print(f"ERROR disabling {wit_name}: {response.status_code} - {response.text}") else: print(f"Skipped: {wit_name} (marked disabled but not found in process)") continue # Create or update custom work item type if custom_flag == "yes": payload = { "name": wit_name, "description": description, "referenceName": ref_name, "color": color, "icon": icon } if pd.notna(inherit_from) and str(inherit_from).strip() != "": payload["inherits"] = inherit_from if exists: # Use the full reference name from ADO for the update URL full_ref = 
existing_wits[ref_name]["referenceName"] update_url = f"{ADO_ORG_URL}/_apis/work/processes/{ADO_PROCESS_ID}/workitemtypes/{full_ref}?api-version=7.1-preview.2" response = requests.patch(update_url, json=payload, headers=headers) log_api_call(update_url, payload, response) print(f"Updated: {wit_name}") else: create_url = f"{ADO_ORG_URL}/_apis/work/processes/{ADO_PROCESS_ID}/workitemtypes?api-version=7.1-preview.2" response = requests.post(create_url, json=payload, headers=headers) log_api_call(create_url, payload, response) print(f"Created: {wit_name}") print("Work item type creation complete.") # === FIELD AND PICKLIST CREATION/UPDATE === print("Starting Azure DevOps field creation script...") print(f"Reading spreadsheet: {EXCEL_FILE}") # Load fields from Excel df_fields = pd.read_excel(excel_path, sheet_name="Fields") df_fields.columns = df_fields.columns.str.strip() df_fields = df_fields.drop_duplicates(subset=["Reference name"]) df_fields = df_fields.fillna("") print(f"Loaded {len(df_fields)} fields from spreadsheet.") # Load picklists from Excel picklist_df = pd.read_excel(excel_path, sheet_name="Picklists") picklist_df.columns = picklist_df.columns.str.strip() picklist_dict = {} for col in picklist_df.columns: values = [safe_json_value(v) for v in picklist_df[col].dropna().tolist()] values = [v for v in values if v != ""] if values: picklist_dict[col] = values print(f"Loaded {len(picklist_dict)} picklists from Picklists tab.") # Get existing organization fields fields_url = f"{ADO_ORG_URL}/_apis/wit/fields?api-version=7.1-preview.2" fields_response = requests.get(fields_url, headers=headers) log_api_call(fields_url, {}, fields_response) if fields_response.status_code == 200: try: fields_json = fields_response.json() existing_fields = {field["referenceName"]: field for field in fields_json.get("value", [])} existing_fields_by_name = {field["name"]: field for field in fields_json.get("value", [])} except Exception as e: log(f"Error decoding fields JSON: {e}") 
existing_fields = {} existing_fields_by_name = {} else: log(f"Fields endpoint returned status {fields_response.status_code}: {fields_response.text}") existing_fields = {} existing_fields_by_name = {} # Get existing picklists lists_url = f"{ADO_ORG_URL}/_apis/work/processes/lists?api-version=7.1" picklist_ids = {} existing_lists_resp = requests.get(lists_url, headers=headers) log_api_call(lists_url, {}, existing_lists_resp) existing_lists = {} if existing_lists_resp.status_code == 200: try: lists_json = existing_lists_resp.json() for lst in lists_json.get("value", []): existing_lists[lst["name"]] = lst except Exception as e: log(f"Error decoding lists JSON: {e}") # Create or update picklists for label, values in picklist_dict.items(): payload = { "name": label, "type": "String", "items": values } if label in existing_lists: picklist_id = existing_lists[label]["id"] update_url = f"{ADO_ORG_URL}/_apis/work/processes/lists/{picklist_id}?api-version=7.1" print(f"Updating picklist: {label} with values: {values}") log(f"Updating picklist: {label} with values: {values}") response = requests.put(update_url, json=payload, headers=headers) log_api_call(update_url, payload, response) if response.status_code in [200, 201]: picklist_ids[label] = picklist_id log(f" Picklist updated with id: {picklist_id}") else: print(f" Failed to update picklist: {label}") log(f" Failed to update picklist: {label}") log(f" Response: {response.text}") else: print(f"Creating picklist: {label} with values: {values}") log(f"Creating picklist: {label} with values: {values}") response = requests.post(lists_url, json=payload, headers=headers) log_api_call(lists_url, payload, response) if response.status_code in [200, 201]: picklist_id = response.json().get("id") picklist_ids[label] = picklist_id log(f" Picklist created with id: {picklist_id}") else: print(f" Failed to create picklist: {label}") log(f" Failed to create picklist: {label}") log(f" Response: {response.text}") # Create or update fields 
fields_url_create = f"{ADO_ORG_URL}/_apis/wit/fields?api-version=7.1" for idx, row in df_fields.iterrows(): field_name = safe_json_value(row.get("Field name")) # Unique name in ADO (suffixed with MS BPC) reference_name = safe_json_value(row.get("Reference name")) # Unique reference name field_label = safe_json_value(row.get("Label")) # User-friendly display label field_type = safe_json_value(row.get("Field type")) custom_flag = str(safe_json_value(row.get("Custom field"))).strip().lower() description = safe_json_value(row.get("Description")) # Skip standard (OOB) fields if custom_flag == "no": print(f"Processing field: {field_label} (name: {field_name}, ref: {reference_name}) type: {field_type} Standard OOB field-skipping") log(f"Processing field: {field_label} (name: {field_name}, ref: {reference_name}) type: {field_type} Standard OOB field-skipping") continue # Handle picklist fields — match picklist by Label (matches Picklists tab column headers) if field_type in ["PicklistString", "PicklistInteger"]: picklist_id = picklist_ids.get(field_label) if not picklist_id: print(f" No picklist id found for label '{field_label}', skipping field creation.") log(f" No picklist id found for label '{field_label}', skipping field creation.") continue field_payload = { "name": field_name, "referenceName": reference_name, "type": "String", "isPicklist": True, "picklistId": picklist_id, "description": description } else: field_payload = { "name": field_name, "type": field_type, "referenceName": reference_name, "description": description } # Check for existing fields by both reference name and name ref_match = existing_fields.get(reference_name) name_match = existing_fields_by_name.get(field_name) ref_exists = ref_match is not None name_exists = name_match is not None if ref_exists and name_exists: # Both match — verify they point to the same field, then update if ref_match["referenceName"] == name_match["referenceName"]: print(f" Field '{reference_name}' (name: {field_name}) 
already exists. Updating...") log(f"Field '{reference_name}' (name: {field_name}) already exists. Updating...") log("Payload: " + json.dumps(field_payload, indent=2, allow_nan=False)) update_field_url = f"{ADO_ORG_URL}/_apis/wit/fields/{url_encode(reference_name)}?api-version=7.1" response = requests.patch(update_field_url, json=field_payload, headers=headers) log_api_call(update_field_url, field_payload, response) print(f" Field update response: {response}") else: # Reference name and name each exist but belong to different fields — conflict error_msg = (f"ERROR: Conflict for field '{field_label}' — reference name '{reference_name}' matches " f"existing field '{ref_match['name']}', but name '{field_name}' matches a different " f"existing field with ref '{name_match['referenceName']}'. Please review and correct.") print(f" {error_msg}") log(error_msg) elif ref_exists and not name_exists: # Only reference name matches — partial conflict error_msg = (f"ERROR: Partial match for field '{field_label}' — reference name '{reference_name}' already " f"exists with name '{ref_match['name']}', but the expected name '{field_name}' was not found. " f"Please review and correct the spreadsheet.") print(f" {error_msg}") log(error_msg) elif not ref_exists and name_exists: # Only name matches — partial conflict error_msg = (f"ERROR: Partial match for field '{field_label}' — name '{field_name}' already exists with " f"reference name '{name_match['referenceName']}', but the expected reference name " f"'{reference_name}' was not found. 
Please review and correct the spreadsheet.")
        print(f" {error_msg}")
        log(error_msg)
    else:
        # Neither exists — create new field
        print(f"Creating field: {field_label} (name: {field_name}, ref: {reference_name})")
        log(f"Creating field: {field_label} (name: {field_name}, ref: {reference_name})")
        log("Payload: " + json.dumps(field_payload, indent=2, allow_nan=False))
        response = requests.post(fields_url_create, json=field_payload, headers=headers)
        log_api_call(fields_url_create, field_payload, response)
        print(f" Field creation response: {response}")

print("Script finished. See log file for details:")
print(f" {LOG_FILE}")
log("Done!")


================================================
FILE: templates/Azure-DevOps-templates/2_ADO_Page_Layout_Script_Threaded (Preview).py
================================================
import requests
import pandas as pd
import json
from typing import Optional
import os
import urllib.parse
import sys
import base64
import time
import threading
from concurrent.futures import ThreadPoolExecutor, as_completed

# === CONFIGURATION ===
ADO_ORG_URL = "https://dev.azure.com/"  # e.g. "https://dev.azure.com/Contoso"
ADO_PROJECT = ""  # e.g. "Business process catalog"
PROCESS_NAME = ""  # e.g. "Business process catalog"
PAT = ""  # Azure DevOps PAT with full access
EXCEL_FILE = "ADO template guideline (Preview).xlsx"  # Path to the Excel template file
LOG_FILE = "2_ADO_Page_Layout_Script_Threaded_Log.txt"
SYSTEM_WORK_ITEM_TYPE = "Microsoft.VSTS.WorkItemTypes"

# === THREADING CONFIGURATION ===
# Maximum number of parallel threads for processing work item types
# Recommended: 5-10 to balance speed vs Azure DevOps API rate limits
MAX_WORKERS = 8

# === AUTHENTICATION SETUP ===
authorization = str.encode(':' + PAT)
b64_auth = base64.b64encode(authorization).decode()
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Basic {b64_auth}"
}

# === THREAD-SAFE LOGGING ===
log_lock = threading.Lock()

def log(msg: str):
    """Thread-safe logging to file and console."""
    with log_lock:
        with open(LOG_FILE, "a", encoding="utf-8") as f:
            f.write(msg + "\n")
        print(msg)

# === THREAD-SAFE CACHES ===
layout_cache_lock = threading.Lock()
layout_cache: dict[str, Optional[dict]] = {}
locked_layout_wits: set[str] = set()

# === Retry logic for requests ===
def make_request_with_retry(method, url, max_retries=3, retry_delay=2, **kwargs):
    """
    Makes an HTTP request with retry logic for transient errors.

    Args:
        method: HTTP method ('GET', 'POST', 'PATCH', 'PUT', 'DELETE')
        url: Request URL
        max_retries: Maximum retry attempts
        retry_delay: Initial delay in seconds (exponential backoff)
        **kwargs: Additional arguments to pass to requests (headers, json, etc.)

    Returns:
        Response object
    """
    resp = None
    for attempt in range(max_retries):
        try:
            if method.upper() == 'GET':
                resp = requests.get(url, **kwargs)
            elif method.upper() == 'POST':
                resp = requests.post(url, **kwargs)
            elif method.upper() == 'PATCH':
                resp = requests.patch(url, **kwargs)
            elif method.upper() == 'PUT':
                resp = requests.put(url, **kwargs)
            elif method.upper() == 'DELETE':
                resp = requests.delete(url, **kwargs)
            else:
                raise ValueError(f"Unsupported HTTP method: {method}")

            # Handle rate limiting and service unavailability
            if resp.status_code in (429, 503, 504):
                if attempt < max_retries - 1:
                    wait_time = retry_delay * (2 ** attempt)
                    log(f" Service unavailable (status {resp.status_code}). Retrying in {wait_time} seconds... (Attempt {attempt + 1}/{max_retries})")
                    time.sleep(wait_time)
                    continue
            return resp
        except requests.exceptions.RequestException as e:
            if attempt < max_retries - 1:
                wait_time = retry_delay * (2 ** attempt)
                log(f" Request error: {e}. Retrying in {wait_time} seconds... (Attempt {attempt + 1}/{max_retries})")
                time.sleep(wait_time)
            else:
                raise
    return resp

# === Process ID Lookup ===
def get_process_id_by_name(process_name):
    """
    Gets the process type ID for a given process name.

    Args:
        process_name (str): The name of the process.

    Returns:
        str: The process type ID.

    Raises:
        Exception: If process is not found.
    """
    url = f"{ADO_ORG_URL}/_apis/work/processes?api-version=7.1-preview.2"
    resp = requests.get(url, headers=headers)
    resp.raise_for_status()
    for proc in resp.json().get("value", []):
        if proc["name"].strip().lower() == process_name.strip().lower():
            log(f"Found process '{process_name}' with ID: {proc['typeId']}")
            return proc["typeId"]
    raise Exception(f"Process '{process_name}' not found. Please create it first using ADO_Creation_Script.py")

def build_reference_name(wit_name):
    """
    Builds a reference name for a work item type by removing spaces and special characters.

    Args:
        wit_name (str): The work item type name.

    Returns:
        str: The reference name.
    """
    safe_process = PROCESS_NAME.replace(" ", "")  # Remove spaces from process name
    safe_name = wit_name.replace(" ", "").replace("-", "").replace("_", "")
    return f"{safe_process}.{safe_name}"

def is_system_work_item_type(wit_ref_name: str) -> bool:
    """
    Determines if a work item type is an OOTB system type (locked layout).

    Args:
        wit_ref_name (str): The reference name of the work item type.

    Returns:
        bool: True if it's a system work item type, False otherwise.
    """
    return wit_ref_name.startswith("Microsoft.VSTS.WorkItemTypes.")

# === Utility helpers ===
def safe_json_value(val, default=""):
    if pd.isna(val) or val is None:
        return default
    if isinstance(val, float) and (val != val):  # NaN
        return default
    return str(val)

def parse_required(val):
    v = safe_json_value(val).strip().lower()
    if v == "yes":
        return True
    if v == "conditional":
        return False
    return False

def parse_default_value(val):
    v = safe_json_value(val)
    if v.strip().lower() == "none":
        return ""
    return v

# === Azure DevOps API helpers (Thread-Safe) ===
LOCKED_LAYOUT_MARKER = "FormLayoutInfoNotAvailableException"

def invalidate_layout_cache(wit_ref_name: str) -> None:
    """Thread-safe cache invalidation."""
    with layout_cache_lock:
        layout_cache.pop(wit_ref_name, None)

def get_layout(wit_ref_name: str, process_id: str, force_refresh: bool = False) -> Optional[dict]:
    """Get the layout for a work item type with retry logic (thread-safe)."""
    with layout_cache_lock:
        if not force_refresh:
            if wit_ref_name in locked_layout_wits:
                return None
            cached = layout_cache.get(wit_ref_name)
            if cached is not None:
                return cached
    url = f"{ADO_ORG_URL}/_apis/work/processes/{process_id}/workItemTypes/{wit_ref_name}/layout?api-version=7.1-preview.1"
    resp = make_request_with_retry('GET', url, headers=headers)
    if resp.status_code in (400, 403) and LOCKED_LAYOUT_MARKER in resp.text:
        log(f" Layout for '{wit_ref_name}' is locked (likely OOTB). Skipping layout changes.")
        with layout_cache_lock:
            locked_layout_wits.add(wit_ref_name)
            layout_cache.pop(wit_ref_name, None)
        return None
    resp.raise_for_status()
    result = resp.json()
    with layout_cache_lock:
        layout_cache[wit_ref_name] = result
    return result

def get_section_id(layout: Optional[dict], page_id: str, section_label: Optional[str] = None) -> Optional[str]:
    """
    Given a layout and a page_id, return the section id for the given section label
    (e.g., "Section 1"). If not found, return the first section id on the page.
    """
    if not layout:
        return None
    excel_to_ado_section = {
        "left": "section1",
        "middle": "section2",
        "right": "section3"
    }
    section_label_clean = section_label.strip().lower() if section_label else None
    mapped_label = excel_to_ado_section.get(section_label_clean, section_label_clean) if section_label_clean else None
    for page in layout.get("pages", []):
        if page.get("id") == page_id:
            for section in page.get("sections", []):
                api_label = section.get("id", "").strip().lower()
                if mapped_label and api_label == mapped_label:
                    return section.get("id")
            if page.get("sections"):
                return page["sections"][0]["id"]
    return None

def find_page_by_label(layout: Optional[dict], page_label: str) -> Optional[dict]:
    if not layout:
        return None
    for page in layout.get("pages", []):
        if page.get("label") == page_label:
            return page
    return None

def ensure_group_on_page(layout: Optional[dict], page_id: str, group_label: str) -> tuple[Optional[str], Optional[str]]:
    """
    Check if a group with the given label exists anywhere on the page (in any section).
    Returns (group_id, section_id) if found, otherwise (None, None).
    """
    if not layout:
        return None, None
    for page in layout.get("pages", []):
        if page.get("id") == page_id:
            for section in page.get("sections", []):
                for group in section.get("groups", []):
                    if group.get("label") == group_label:
                        return group.get("id"), section.get("id")
    return None, None

def ensure_group_in_section(layout: Optional[dict], page_id: str, section_id: str, group_label: str) -> Optional[str]:
    """
    Return group_id if a group with the given label exists in the specific section; otherwise None.
    """
    if not layout:
        return None
    for page in layout.get("pages", []):
        if page.get("id") == page_id:
            for section in page.get("sections", []):
                if section.get("id") == section_id:
                    for group in section.get("groups", []):
                        if group.get("label") == group_label:
                            return group.get("id")
    return None

def add_page_if_missing(wit_ref_name: str, process_id: str, page_label: str, order: int) -> Optional[str]:
    log(f"[add_page_if_missing] Checking for page '{page_label}' on '{wit_ref_name}'")
    layout = get_layout(wit_ref_name, process_id)
    if layout is None:
        return None
    existing = find_page_by_label(layout, page_label)
    if existing:
        log(f" Page '{page_label}' already exists on '{wit_ref_name}' (id: {existing.get('id')})")
        return existing.get("id")
    url = f"{ADO_ORG_URL}/_apis/work/processes/{process_id}/workItemTypes/{wit_ref_name}/layout/pages?api-version=7.1-preview.1"
    payload = {"label": page_label, "order": order, "visible": True, "inherited": True}
    log(f" Creating page '{page_label}' on '{wit_ref_name}' with payload: {json.dumps(payload)}")
    resp = make_request_with_retry('POST', url, headers=headers, json=payload)
    if resp.status_code in [200, 201]:
        invalidate_layout_cache(wit_ref_name)
        pid = resp.json().get("id")
        log(f" Added page '{page_label}' to '{wit_ref_name}' (id: {pid})")
        return pid
    elif resp.status_code == 409:
        layout = get_layout(wit_ref_name, process_id, force_refresh=True)
        if layout is None:
            return None
        existing = find_page_by_label(layout, page_label)
        if existing:
            return existing.get("id")
    log(f" ERROR: Failed to add page '{page_label}': {resp.status_code} - {resp.text}")
    return None

def add_group_if_missing(wit_ref_name: str, process_id: str, page_id: str, section_id: str, group_label: str) -> Optional[str]:
    """
    Ensures a group exists on the page. First checks the entire page for the group.
    If found in a different section, logs a warning and returns that group_id.
    If not found anywhere, creates it in the target section.
    """
    log(f"[add_group_if_missing] Checking for group '{group_label}' on page '{page_id}' for '{wit_ref_name}'")
    layout = get_layout(wit_ref_name, process_id)
    if layout is None:
        return None
    group_id, found_section_id = ensure_group_on_page(layout, page_id, group_label)
    if group_id:
        if found_section_id == section_id:
            log(f" Group '{group_label}' already exists in target section '{section_id}' on page '{page_id}' (id: {group_id})")
        else:
            log(f" WARNING: Group '{group_label}' already exists in section '{found_section_id}' instead of target section '{section_id}' on page '{page_id}' (id: {group_id}). Using existing group.")
        return group_id
    url = f"{ADO_ORG_URL}/_apis/work/processes/{process_id}/workItemTypes/{wit_ref_name}/layout/pages/{page_id}/sections/{section_id}/groups?api-version=7.1-preview.1"
    payload = {"label": group_label, "visible": True, "inherited": True}
    log(f" Creating group '{group_label}' in section '{section_id}' on page '{page_id}' with payload: {json.dumps(payload)}")
    resp = make_request_with_retry('POST', url, headers=headers, json=payload)
    if resp.status_code in [200, 201]:
        invalidate_layout_cache(wit_ref_name)
        layout = get_layout(wit_ref_name, process_id, force_refresh=True)
        if layout is None:
            return None
        group_id = ensure_group_in_section(layout, page_id, section_id, group_label)
        if group_id:
            log(f" Added group '{group_label}' to section '{section_id}' on page '{page_id}' (id: {group_id})")
        return group_id
    elif resp.status_code == 409:
        layout = get_layout(wit_ref_name, process_id, force_refresh=True)
        if layout is None:
            return None
        group_id, found_section_id = ensure_group_on_page(layout, page_id, group_label)
        if group_id:
            if found_section_id != section_id:
                log(f" WARNING: Group '{group_label}' was created in section '{found_section_id}' instead of target section '{section_id}' (409 conflict)")
            return group_id
    log(f" ERROR: Failed to add group '{group_label}': {resp.status_code} - {resp.text}")
    return None

def get_control_type(field_type: Optional[str], field_ref_name: str, picklist_name: Optional[str]) -> str:
    ft = (field_type or "").strip().lower()
    ref = (field_ref_name or "").strip().lower()
    pick = (picklist_name or "").strip()
    if ref in ["system.areaid", "system.area", "system.areapath", "system.iterationid", "system.iteration", "system.iterationpath"]:
        return "WorkItemClassificationControl"
    if ref in ["system.assignedto", "system.createdby", "system.changedby", "system.authorizedas", "system.owner", "system.requestedby"]:
        return "IdentityControl"
    if ref in ["system.createddate", "system.changeddate", "system.resolveddate",
               "system.closeddate"] or ft == "datetime":
        return "DateTimeControl"
    if ft == "boolean":
        return "BooleanControl"
    if ft == "html":
        return "HtmlFieldControl"
    if pick:
        if ft == "integer":
            return "PickListIntegerControl"
        return "PickListStringControl"
    if ft == "identity":
        return "IdentityControl"
    return "Field"

def add_control_if_missing(wit_ref_name: str, process_id: str, page_id: str, section_id: str, group_id: str,
                           field_ref_name: str, label: str, order: int,
                           field_type: Optional[str] = None, picklist_name: Optional[str] = None):
    log(f"[add_control_if_missing] Checking for control '{label}' ({field_ref_name}) "
        f"in group '{group_id}' on page '{page_id}' for '{wit_ref_name}'")
    if not section_id:
        log(f" ERROR: section_id is empty for page '{page_id}'. Cannot add control.")
        return
    layout = get_layout(wit_ref_name, process_id)
    if layout is None:
        log(f" Skipping control '{label}' on '{wit_ref_name}' because layout is unavailable")
        return
    found_group = False
    for page in layout.get("pages", []):
        if page.get("id") == page_id:
            for section in page.get("sections", []):
                if section.get("id") == section_id:
                    for group in section.get("groups", []):
                        if group.get("id") == group_id:
                            found_group = True
                            for control in group.get("controls", []):
                                if control.get("id") == field_ref_name:
                                    log(f" Control '{label}' ({field_ref_name}) already exists in group '{group_id}' on page '{page_id}'")
                                    return
                            break
    if not found_group:
        log(f" ERROR: group_id '{group_id}' not found in section '{section_id}' on page '{page_id}'.")
        return
    control_type = get_control_type(field_type, field_ref_name, picklist_name)
    payload = {
        "id": field_ref_name,
        "label": label,
        "order": order,
        "controlType": control_type,
        "visible": True,
        "inherited": True
    }
    enc_wit = urllib.parse.quote(wit_ref_name, safe='')
    enc_group = urllib.parse.quote(group_id, safe='')
    url = (f"{ADO_ORG_URL}/_apis/work/processes/{process_id}/workItemTypes/{enc_wit}"
           f"/layout/groups/{enc_group}/controls"
           f"?api-version=7.1-preview.1")
    log(f" Creating control '{label}' ({field_ref_name}) in group '{group_id}' on page '{page_id}' "
        f"in section '{section_id}'")
    resp = make_request_with_retry('POST', url, headers=headers, json=payload)
    if resp.status_code in [200, 201]:
        invalidate_layout_cache(wit_ref_name)
        log(f" Added control '{label}' ({field_ref_name}) to group '{group_id}' on page '{page_id}' "
            f"in section '{section_id}' for '{wit_ref_name}'")
    elif resp.status_code == 409:
        log(f" Control '{label}' ({field_ref_name}) already exists in group '{group_id}' on page '{page_id}' "
            f"in section '{section_id}' for '{wit_ref_name}' (409 conflict)")
    else:
        body = resp.text
        log(f" ERROR: Failed to add control '{label}': {resp.status_code} - {body}")

def process_work_item_type(wit_row, process_id: str, field_labels: list, reference_names: dict,
                           field_name_map: dict, field_types: dict, picklist_names: dict,
                           required_flags: dict, default_values: dict, field_layout_map: dict,
                           existing_fields: dict) -> dict:
    """
    Process a single work item type - adds fields and updates layout.
    This function is designed to run in a separate thread.
    Returns:
        dict with 'wit_name', 'status', 'fields_added', 'errors'
    """
    result = {
        'wit_name': '',
        'status': 'success',
        'fields_added': 0,
        'errors': []
    }
    try:
        custom_type_flag = safe_json_value(wit_row.get("Custom work item type")).strip().lower()
        wit_name_raw = safe_json_value(wit_row.get("Work item type")).strip()
        wit_ref_name_excel = safe_json_value(wit_row.get("Reference name")).strip()
        result['wit_name'] = wit_name_raw

        # Build reference name using consistent logic
        if custom_type_flag == "yes":
            wit_ref_name = build_reference_name(wit_name_raw)
        else:
            wit_ref_name = wit_ref_name_excel if wit_ref_name_excel else f"{SYSTEM_WORK_ITEM_TYPE}.{wit_name_raw}"

        log(f"[Thread] Processing work item type: {wit_name_raw} (Reference: {wit_ref_name})")

        # Check if this is a system work item type (OOTB with locked layout)
        is_system_wit = is_system_work_item_type(wit_ref_name)
        if is_system_wit:
            log(f" Work item type '{wit_ref_name}' is a system (OOTB) type with locked layout. Fields will be added but layout updates will be skipped.")
            with layout_cache_lock:
                locked_layout_wits.add(wit_ref_name)

        # Get existing fields on the WIT
        encoded_wit_name = urllib.parse.quote(wit_name_raw, safe='')
        wit_fields_url = f"{ADO_ORG_URL}/{ADO_PROJECT}/_apis/wit/workitemtypes/{encoded_wit_name}/fields?api-version=7.0"
        wit_fields_resp = requests.get(wit_fields_url, headers=headers)
        wit_existing_fields = set()
        if wit_fields_resp.status_code == 200:
            wit_fields_json = wit_fields_resp.json()
            wit_existing_fields = set(f["referenceName"] for f in wit_fields_json.get("value", []))
        elif wit_fields_resp.status_code == 404:
            log(f" No fields found for work item type '{wit_ref_name}' in project '{ADO_PROJECT}' (404). Continuing with additions.")
        else:
            log(f" Failed to get fields for work item type '{wit_ref_name}': {wit_fields_resp.status_code}")
            result['status'] = 'error'
            result['errors'].append(f"Failed to get fields: {wit_fields_resp.status_code}")
            return result

        wit_field_flags = {label: safe_json_value(wit_row.get(label)).strip().upper() for label in field_labels}
        if all(flag != "X" for flag in wit_field_flags.values()):
            log(f" Skipping '{wit_name_raw}' — no fields flagged for addition.")
            result['status'] = 'skipped'
            return result

        # Loop through fields to add (using Label to match WIT sheet columns)
        for field_label in field_labels:
            if wit_field_flags.get(field_label) != "X":
                continue
            field_layout_info = field_layout_map.get(field_label)
            if not field_layout_info:
                log(f" WARNING: No layout metadata for field '{field_label}'. Skipping layout update.")
                continue
            ref_name = reference_names.get(field_label)
            field_name = field_name_map.get(field_label)  # MS BPC-suffixed name
            field_type = safe_json_value(field_types.get(field_label)).strip()
            picklist_name = safe_json_value(picklist_names.get(field_label)) if picklist_names else ""
            required = parse_required(required_flags.get(field_label)) if required_flags else False
            default_value = parse_default_value(default_values.get(field_label)) if default_values else ""

            # Ensure field exists at org level
            if ref_name not in existing_fields:
                log(f" ERROR: Field '{field_label}' (name: {field_name}, ref: {ref_name}) does not exist at organization level. Cannot add to WIT '{wit_ref_name}'.")
                result['errors'].append(f"Field '{field_label}' not at org level")
                continue

            # Add field to WIT if not present
            if ref_name in wit_existing_fields:
                log(f" Field '{field_label}' (name: {field_name}, ref: {ref_name}) already exists on WIT '{wit_ref_name}'. Skipping field addition.")
            else:
                payload = {
                    "referenceName": ref_name,
                    "required": required,
                    "visible": True
                }
                if field_type.lower() == "identity":
                    payload["allowGroups"] = True
                if default_value:
                    payload["defaultValue"] = default_value
                add_field_url = f"{ADO_ORG_URL}/_apis/work/processes/{process_id}/workItemTypes/{wit_ref_name}/fields?api-version=7.0"
                log(f" Adding field '{field_label}' (name: {field_name}, ref: {ref_name}) to WIT '{wit_ref_name}'")
                add_resp = make_request_with_retry('POST', add_field_url, headers=headers, json=payload)
                if add_resp.status_code in [200, 201]:
                    log(f" Successfully added field '{field_label}' (name: {field_name}, ref: {ref_name}) to '{wit_ref_name}'.")
                    wit_existing_fields.add(ref_name)
                    result['fields_added'] += 1
                else:
                    log(f" ERROR: Failed to add field '{field_label}' (name: {field_name}, ref: {ref_name}) to '{wit_ref_name}': {add_resp.status_code} - {add_resp.text}")
                    result['errors'].append(f"Failed to add field '{field_label}'")
                    continue

            # === Layout update section ===
            with layout_cache_lock:
                if wit_ref_name in locked_layout_wits:
                    log(f" Skipping layout updates for '{wit_ref_name}' because its layout is locked.")
                    continue
            if field_type.lower() == "html":
                log(f" SKIPPED: HTML field '{field_label}' ({ref_name}) cannot be added to the form layout via API.")
                continue
            page_name = safe_json_value(field_layout_info["Page name"])
            group_sequence = int(field_layout_info["Group sequence"]) if not pd.isna(field_layout_info["Group sequence"]) else 1
            group_name = safe_json_value(field_layout_info["Group name"])
            field_sequence = int(field_layout_info["Field sequence"]) if not pd.isna(field_layout_info["Field sequence"]) else 1
            section_label_raw = safe_json_value(field_layout_info["Group location"])

            # 1) Ensure page exists
            layout = get_layout(wit_ref_name, process_id)
            if layout is None:
                with layout_cache_lock:
                    locked_layout_wits.add(wit_ref_name)
                continue
            page = find_page_by_label(layout, page_name)
            page_id = page.get("id") if page else None
            if not page_id:
                page_id = add_page_if_missing(wit_ref_name, process_id, page_name, group_sequence)
            if not page_id:
                log(f" ERROR: Could not resolve or create page '{page_name}' for WIT '{wit_ref_name}'.")
                result['errors'].append(f"Could not create page '{page_name}'")
                continue

            # 2) Resolve section id
            layout = get_layout(wit_ref_name, process_id)
            if layout is None:
                with layout_cache_lock:
                    locked_layout_wits.add(wit_ref_name)
                continue
            section_id = get_section_id(layout, page_id, section_label=section_label_raw)
            if not section_id:
                log(f" ERROR: Could not resolve section for label '{section_label_raw}' on page '{page_name}'.")
                result['errors'].append(f"Could not resolve section '{section_label_raw}'")
                continue

            # 3) Check if group exists
            existing_group = ensure_group_on_page(layout, page_id, group_name)
            if existing_group:
                group_id, actual_section_id = existing_group
                if actual_section_id and actual_section_id != section_id:
                    log(f" WARNING: Group '{group_name}' exists in section '{actual_section_id}' instead of target section '{section_id}'.")
                    section_id = actual_section_id
                elif not actual_section_id:
                    group_id = add_group_if_missing(wit_ref_name, process_id, page_id, section_id, group_name)
                    if not group_id:
                        log(f" ERROR: Could not create group '{group_name}'.")
                        result['errors'].append(f"Could not create group '{group_name}'")
                        continue
            else:
                group_id = add_group_if_missing(wit_ref_name, process_id, page_id, section_id, group_name)
                if not group_id:
                    log(f" ERROR: Could not create group '{group_name}'.")
                    result['errors'].append(f"Could not create group '{group_name}'")
                    continue

            # 4) Add control to the group — use Label for form display
            add_control_if_missing(
                wit_ref_name, process_id, page_id, section_id, group_id,
                ref_name, field_label, field_sequence,
                field_type=field_type, picklist_name=picklist_name
            )

        log(f"[Thread] Finished processing WIT '{wit_name_raw}'")
    except Exception as e:
        result['status'] = 'error'
        result['errors'].append(str(e))
        log(f"[Thread] ERROR processing WIT: {e}")
    return result

# Resolve relative path to the script directory and check existence
base_dir = os.path.dirname(os.path.abspath(__file__))
excel_path = EXCEL_FILE if os.path.isabs(EXCEL_FILE) else os.path.join(base_dir, EXCEL_FILE)
if not os.path.exists(excel_path):
    print(f"Error: Excel file not found: {excel_path}")
    print(f"Script directory: {base_dir}")
    print("Files in script directory:")
    for f in sorted(os.listdir(base_dir)):
        print(" ", f)
    sys.exit(1)

# === Main flow ===
def main():
    start_time = time.time()
    log("=" * 60)
    log("Starting MULTITHREADED Azure DevOps script")
    log(f"Max parallel workers: {MAX_WORKERS}")
    log("=" * 60)
    log(f"Looking up process: {PROCESS_NAME}")
    process_id = get_process_id_by_name(PROCESS_NAME)
    log(f"Found process ID: {process_id}")
    log(f"Reading spreadsheet: {EXCEL_FILE}")

    # Read Excel sheets
    wit_df = pd.read_excel(excel_path, sheet_name="Work item types")
    wit_df.columns = wit_df.columns.str.strip()
    df = pd.read_excel(excel_path, sheet_name="Fields")
    df.columns = df.columns.str.strip()

    # Sort Fields sheet
    sort_columns = []
    if "Page name" in df.columns:
        sort_columns.append("Page name")
    if "Group location" in df.columns:
        sort_columns.append("Group location")
    if "Group sequence" in df.columns:
        sort_columns.append("Group sequence")
    if "Field sequence" in df.columns:
        sort_columns.append("Field sequence")
    if sort_columns:
        df = df.sort_values(sort_columns)
        log(f"Sorted Fields sheet by: {', '.join(sort_columns)}")

    # Get existing organization fields
    fields_url = f"{ADO_ORG_URL}/_apis/wit/fields?api-version=7.0"
    fields_response = requests.get(fields_url, headers=headers)
    if fields_response.status_code == 200:
        fields_json = fields_response.json()
        existing_fields = {field["referenceName"]: field for field in fields_json.get("value", [])}
        log(f"Retrieved {len(existing_fields)} organization fields.")
    else:
        log(f"Failed to get organization fields: {fields_response.status_code} - {fields_response.text}")
        existing_fields = {}

    # Get all picklists
    picklists_url = f"{ADO_ORG_URL}/_apis/work/processes/lists?api-version=7.0"
    picklists_response = requests.get(picklists_url, headers=headers)
    if picklists_response.status_code == 200:
        log(f"Retrieved {len(picklists_response.json().get('value', []))} picklists.")
    else:
        log(f"Failed to get picklists: {picklists_response.status_code} - {picklists_response.text}")

    # Build lookups from Fields sheet, keyed by Label (matches WIT sheet column headers)
    field_labels = df["Label"].tolist()
    reference_names = df.set_index("Label")["Reference name"].to_dict()
    field_name_map = df.set_index("Label")["Field name"].to_dict()  # Label -> Field name (MS BPC-suffixed)
    field_types = df.set_index("Label")["Field type"].to_dict()
    picklist_names = df.set_index("Label")["Picklist name"].to_dict() if "Picklist name" in df.columns else {}
    required_flags = df.set_index("Label")["Required"].to_dict() if "Required" in df.columns else {}
    default_values = df.set_index("Label")["Default value"].to_dict() if "Default value" in df.columns else {}

    if df["Label"].duplicated().any():
        dupes = df[df["Label"].duplicated() == True]["Label"].unique()
        dupes_str = ", ".join(str(name) for name in dupes)
        log(f" WARNING: Duplicate layout rows found for labels: {dupes_str}. Using the first occurrence of each.")
    layout_rows = df.drop_duplicates(subset="Label", keep="first")
    field_layout_map = layout_rows.set_index("Label").to_dict(orient="index")

    # Clear caches
    with layout_cache_lock:
        layout_cache.clear()
        locked_layout_wits.clear()

    # Prepare work item types for parallel processing
    wit_rows = [row for _, row in wit_df.iterrows()]
    total_wits = len(wit_rows)
    log(f"Processing {total_wits} work item types with {MAX_WORKERS} parallel workers...")

    # Process work item types in parallel
    results = []
    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as executor:
        # Submit all tasks
        future_to_wit = {
            executor.submit(
                process_work_item_type,
                wit_row, process_id, field_labels, reference_names,
                field_name_map, field_types, picklist_names,
                required_flags, default_values, field_layout_map, existing_fields
            ): wit_row
            for wit_row in wit_rows
        }

        # Collect results as they complete
        completed = 0
        for future in as_completed(future_to_wit):
            completed += 1
            try:
                result = future.result()
                results.append(result)
                log(f"Progress: {completed}/{total_wits} work item types processed")
            except Exception as e:
                log(f"ERROR: Thread raised exception: {e}")
                results.append({'wit_name': 'Unknown', 'status': 'error', 'errors': [str(e)]})

    # Summary
    elapsed_time = time.time() - start_time
    log("=" * 60)
    log("SUMMARY")
    log("=" * 60)
    successful = sum(1 for r in results if r['status'] == 'success')
    skipped = sum(1 for r in results if r['status'] == 'skipped')
    failed = sum(1 for r in results if r['status'] == 'error')
    total_fields_added = sum(r.get('fields_added', 0) for r in results)
    log(f"Total work item types: {total_wits}")
    log(f" Successful: {successful}")
    log(f" Skipped: {skipped}")
    log(f" Failed: {failed}")
    log(f"Total fields added: {total_fields_added}")
    log(f"Elapsed time: {elapsed_time:.2f} seconds ({elapsed_time/60:.2f} minutes)")
    log("=" * 60)
    if failed > 0:
        log("Failed work item types:")
        for r in results:
            if r['status'] == 'error':
                log(f" - {r['wit_name']}: {', '.join(r['errors'])}")
    log("Script finished. See log file for details:")
    log(f" {LOG_FILE}")

if __name__ == "__main__":
    # Ensure log file is empty at start
    if os.path.exists(LOG_FILE):
        os.remove(LOG_FILE)
    main()


================================================
FILE: templates/Azure-DevOps-templates/3_ADO_Teams_Areas_Script (Preview).py
================================================
import os
import sys
import base64
import urllib.parse
from collections import defaultdict

import pandas as pd
import requests
from requests.auth import HTTPBasicAuth

# === CONFIGURATION ===
ADO_ORG_URL = "https://dev.azure.com/"  # e.g. "https://dev.azure.com/Contoso"
ADO_PROJECT = ""  # e.g. "Business process catalog"
PAT = ""  # Azure DevOps PAT with full access
EXCEL_FILE = "ADO template guideline (Preview).xlsx"  # Path to the Excel template file
LOG_FILE = "3_ADO_Teams_Areas_Log.txt"
Sprints_TEAM_NAME = "Sprints"
HEADERS_JSON = {"Content-Type": "application/json"}

def log(msg: str):
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(msg + "\n")
    print(msg)

def resolve_excel_path() -> str:
    base_dir = os.path.dirname(os.path.abspath(__file__))
    return EXCEL_FILE if os.path.isabs(EXCEL_FILE) else os.path.join(base_dir, EXCEL_FILE)

# ---------------------------------------------------------------------------
# Part 1: Create Teams (logic from Script 4)
# ---------------------------------------------------------------------------
def get_project_id() -> str:
    proj_url = f"{ADO_ORG_URL}/_apis/projects/{ADO_PROJECT}?api-version=7.1"
    resp = requests.get(proj_url, headers=HEADERS_JSON, auth=HTTPBasicAuth('', PAT))
    if resp.status_code != 200:
        log(f"ERROR: Could not fetch project ID for '{ADO_PROJECT}': {resp.status_code} - {resp.text}")
        sys.exit(1)
    project_id = resp.json().get("id")
    if not project_id:
        log(f"ERROR: Project ID not found in response for '{ADO_PROJECT}'")
        sys.exit(1)
    return project_id

def create_teams_from_excel(excel_path: str, project_id: str) -> list[str]:
    try:
        df = pd.read_excel(excel_path, sheet_name="Area paths")
    except Exception as e:
        log(f"ERROR: Unable to read 'Area paths' sheet for team extraction: {e}")
        sys.exit(1)
    df = df.iloc[:, 1:6]
    df.columns = ["L1", "L2", "L3", "L4", "Teams"]
    if "Teams" not in df.columns:
        log("ERROR: 'Teams' column not found in Area paths sheet")
        sys.exit(1)
    team_names = [str(x).strip() for x in df["Teams"].dropna().unique() if str(x).strip()]
    if Sprints_TEAM_NAME not in team_names:
        team_names.append(Sprints_TEAM_NAME)
    created_or_existing = []
    for team_name in team_names:
        log(f"Processing team: {team_name}")
        url = f"{ADO_ORG_URL}/_apis/projects/{project_id}/teams?api-version=7.1"
        payload = {"name": team_name}
        resp = requests.post(url, headers=HEADERS_JSON, auth=HTTPBasicAuth('', PAT), json=payload)
        if resp.status_code in [200, 201]:
            log(f" ✔ Successfully created team '{team_name}'.")
            created_or_existing.append(team_name)
        elif resp.status_code == 409:
            log(f" ✔ Team '{team_name}' already exists (409 conflict). Skipping create.")
            created_or_existing.append(team_name)
        else:
            log(f" ✖ ERROR: Failed to create team '{team_name}': {resp.status_code} - {resp.text}")
    return created_or_existing

# ---------------------------------------------------------------------------
# Part 2: Create Area Paths and Assign Teams (logic from Script 5)
# ---------------------------------------------------------------------------
BASE_URL = f"https://dev.azure.com/{ADO_ORG_URL.split('/')[-1]}/{ADO_PROJECT}/_apis/wit/classificationnodes/Areas"
AUTH_HEADER = {
    "Authorization": "Basic " + base64.b64encode(f':{PAT}'.encode()).decode(),
    "Content-Type": "application/json",
}

def create_area(path_list: list[str]):
    parent = "/".join(path_list[:-1])
    new_node = path_list[-1]
    if parent == "":
        url = f"{BASE_URL}?api-version=7.1"
    else:
        url = f"{BASE_URL}/{parent}?api-version=7.1"
    body = {"name": new_node}
    r = requests.post(url, headers=AUTH_HEADER, json=body)
    if r.status_code in [200, 201]:
        log(f" ✔ Created or exists: {parent}/{new_node}")
    elif r.status_code == 409:
        log(f" ✔ Already exists: {parent}/{new_node}")
    else:
        log(f" ✖ Error creating area '{parent}/{new_node}': {r.status_code} {r.text}")

def _format_area_path(relative_path: str) -> str:
    relative = relative_path.replace("/", "\\")
    return f"{ADO_PROJECT}\\{relative}" if relative else ADO_PROJECT

def _build_team_payload(default_path: str, all_paths: set[str]) -> dict:
    default_formatted = _format_area_path(default_path)
    values = [{"value": default_formatted, "includeChildren": False}]
    for path in sorted(all_paths):
        if path == default_path:
            continue
        values.append({"value": _format_area_path(path), "includeChildren": False})
    return {"defaultValue": default_formatted, "values": values}

def set_team_area(team_name: str, default_path: str, paths: set[str]) -> None:
    encoded_team = urllib.parse.quote(team_name)
    org = ADO_ORG_URL.split("/")[-1]
    url = f"https://dev.azure.com/{org}/{ADO_PROJECT}/{encoded_team}/_apis/work/teamsettings/teamfieldvalues?api-version=7.1"
    payload = _build_team_payload(default_path, paths)
    response = requests.patch(url, json=payload, headers={"Content-Type": "application/json"}, auth=HTTPBasicAuth("", PAT))
    if response.status_code in [200, 204]:
        log(f" ✔ Assigned Team '{team_name}' → {payload['defaultValue']} with {len(payload['values'])} entries")
    else:
        log(f" ✖ FAILED to assign team '{team_name}': {response.status_code} - {response.text}")

def create_areas_and_assign_teams_from_excel(excel_path: str, allowed_teams: set[str]):
    try:
        df = pd.read_excel(excel_path, sheet_name="Area paths")
    except Exception as e:
        log(f"ERROR: Unable to read 'Area paths' sheet: {e}")
        return
    # Expect columns B..F => L1, L2, L3, L4, Teams
    df = df.iloc[:, 1:6]
    df.columns = ["L1", "L2", "L3", "L4", "Teams"]
    current_L1 = None
    current_L2 = None
    current_L3 = None
    team_assignments: dict[str, dict[str, set[str]]] = defaultdict(lambda: {"default": None, "paths": set()})

    def track_team(team_name: str | None, path_list: list[str],
default_value: str | None = None): if not team_name or not path_list: return team_name = str(team_name).strip() if team_name == "": return if team_name not in allowed_teams and team_name != Sprints_TEAM_NAME: log(f" • Skipping team assignment for unknown team '{team_name}'") return entry = team_assignments[team_name] if entry["default"] is None: entry["default"] = default_value if default_value else path_list[0] entry["paths"].add("/".join(path_list)) # Create areas and gather team-area relations for _, row in df.iterrows(): L1, L2, L3, L4 = row["L1"], row["L2"], row["L3"], row["L4"] team = row["Teams"] if pd.notna(row["Teams"]) else None if pd.notna(L1): current_L1 = str(L1).strip() path = [current_L1] create_area(path) track_team(team, path, current_L1) current_L2 = None current_L3 = None continue if pd.notna(L2) and current_L1: current_L2 = str(L2).strip() path = [current_L1, current_L2] create_area(path) track_team(team, path, current_L2) current_L3 = None continue if pd.notna(L3) and current_L2: current_L3 = str(L3).strip() path = [current_L1, current_L2, current_L3] create_area(path) track_team(team, path, current_L3) continue if pd.notna(L4) and current_L3: child_L4 = str(L4).strip() path = [current_L1, current_L2, current_L3, child_L4] create_area(path) track_team(team, path, child_L4) continue # Assign teams to area paths for team_name, data in team_assignments.items(): default_path = data["default"] paths = data["paths"] if not default_path or not paths: continue if default_path not in paths: paths.add(default_path) set_team_area(team_name, default_path, paths) # Ensure Sprints team gets access to every area path if Sprints_TEAM_NAME in allowed_teams: all_paths = set() for entry in team_assignments.values(): all_paths.update(entry["paths"]) if all_paths: default_for_sprints = next(iter(sorted(all_paths))) set_team_area(Sprints_TEAM_NAME, default_for_sprints, all_paths) else: log("WARNING: No area paths collected; unable to assign Sprints team") # 
--------------------------------------------------------------------------- # Orchestration # --------------------------------------------------------------------------- def main(): # Reset log if os.path.exists(LOG_FILE): os.remove(LOG_FILE) excel_path = resolve_excel_path() if not os.path.exists(excel_path): log(f"Error: Excel file not found: {excel_path}") sys.exit(1) # 1) Create Teams from Area Paths sheet project_id = get_project_id() teams = create_teams_from_excel(excel_path, project_id) allowed_teams = set(teams) # 2) Create Areas and Assign Teams create_areas_and_assign_teams_from_excel(excel_path, allowed_teams) if __name__ == "__main__": main() ================================================ FILE: templates/Azure-DevOps-templates/4_ADO_Backlog_Config_Script (Preview).py ================================================ """ Script 4: ADO Backlog Configuration (Private Preview) Configures backlog levels, WIT-to-backlog mappings, iteration paths, and team settings for the Microsoft Business Process Catalog ADO template. Run AFTER Script 1 (creation), Script 2 (page layout), and Script 3 (teams/areas). Reads from these Excel sheets: - "Backlogs" → Backlog level definitions (name, type, color, default WIT, rename from) - "Work item types" → WIT-to-backlog mapping (Backlog name column) - "Iteration paths" → Hierarchical iteration path definitions - "Teams" → Team settings (bug behavior, include sub areas, backlog iteration) - "Area paths" → Area path team assignments (for include sub areas update) """ import os import sys import base64 import json import time import urllib.parse from collections import defaultdict import pandas as pd import requests from requests.auth import HTTPBasicAuth # === USER CONFIGURATION === ADO_ORG_URL = "https://dev.azure.com/" # e.g. "https://dev.azure.com/Contoso" ADO_PROJECT = "" # e.g. "Business process catalog" PROCESS_NAME = "" # e.g. 
"Business process catalog" PAT = "" # Azure DevOps PAT with full access EXCEL_FILE = "ADO template guideline (Preview).xlsx" # Path to the Excel template file LOG_FILE = "4_ADO_Backlog_Config_Log.txt" # === AUTHENTICATION SETUP === authorization = str.encode(':' + PAT) b64_auth = base64.b64encode(authorization).decode() HEADERS = { "Content-Type": "application/json", "Authorization": f"Basic {b64_auth}" } # --------------------------------------------------------------------------- # Utilities # --------------------------------------------------------------------------- def log(msg: str): with open(LOG_FILE, "a", encoding="utf-8") as f: f.write(msg + "\n") print(msg) def resolve_excel_path() -> str: base_dir = os.path.dirname(os.path.abspath(__file__)) return EXCEL_FILE if os.path.isabs(EXCEL_FILE) else os.path.join(base_dir, EXCEL_FILE) def make_request_with_retry(method: str, url: str, max_retries: int = 3, **kwargs): """Make an HTTP request with retry logic for transient errors (429, 503).""" for attempt in range(max_retries): resp = requests.request(method, url, **kwargs) if resp.status_code == 429: retry_after = int(resp.headers.get("Retry-After", 5)) log(f" Rate limited (429). Retrying in {retry_after}s... (Attempt {attempt+1}/{max_retries})") time.sleep(retry_after) continue if resp.status_code == 503: wait = 2 ** attempt log(f" Service unavailable (503). Retrying in {wait}s... 
(Attempt {attempt+1}/{max_retries})") time.sleep(wait) continue return resp return resp def get_process_id() -> str: """Retrieve the process ID by name.""" url = f"{ADO_ORG_URL}/_apis/work/processes?api-version=7.1-preview.2" resp = make_request_with_retry("GET", url, headers=HEADERS) if resp.status_code != 200: log(f"ERROR: Failed to list processes: {resp.status_code} - {resp.text}") sys.exit(1) for proc in resp.json().get("value", []): if proc["name"] == PROCESS_NAME: return proc["typeId"] log(f"ERROR: Process '{PROCESS_NAME}' not found.") sys.exit(1) # --------------------------------------------------------------------------- # Part 1: Configure Backlog Levels (Behaviors) # --------------------------------------------------------------------------- # Map from spreadsheet "Backlog type" to the parent behavior ref name for new portfolio backlogs PORTFOLIO_PARENT_BEHAVIOR = "System.PortfolioBacklogBehavior" # Map from spreadsheet "Rename from" values to known behavior display names and ref names KNOWN_BEHAVIOR_RENAMES = { "Epic": "Epics", # ADO default name is "Epics" (plural) "Feature": "Features", # ADO default name is "Features" (plural) } # Known behavior reference names for fallback lookup when old name was already changed KNOWN_BEHAVIOR_REFS = { "Epic": "Microsoft.VSTS.Agile.EpicBacklogBehavior", "Feature": "Microsoft.VSTS.Agile.FeatureBacklogBehavior", } def get_existing_behaviors(process_id: str) -> list[dict]: """Fetch all behaviors for the process.""" url = f"{ADO_ORG_URL}/_apis/work/processes/{process_id}/behaviors?api-version=7.1-preview.2" resp = make_request_with_retry("GET", url, headers=HEADERS) if resp.status_code != 200: log(f"ERROR: Failed to get behaviors: {resp.status_code} - {resp.text}") return [] return resp.json().get("value", []) def configure_backlog_levels(process_id: str, backlogs_df: pd.DataFrame) -> dict: """ Configure backlog levels by renaming existing behaviors and creating new ones. 
Returns a dict mapping backlog_name -> behavior_refName for use in WIT assignment. """ log("\n" + "=" * 60) log("PART 1: Configure Backlog Levels (Behaviors)") log("=" * 60) existing = get_existing_behaviors(process_id) behavior_by_name = {b["name"]: b for b in existing} behavior_by_ref = {b["referenceName"]: b for b in existing} log(f"Found {len(existing)} existing behaviors: {[b['name'] for b in existing]}") backlog_to_behavior_ref = {} # backlog_name -> behavior referenceName for _, row in backlogs_df.iterrows(): backlog_name = str(row["Backlog name"]).strip() backlog_type = str(row["Backlog type"]).strip() color = str(row["Color"]).strip().lstrip("#") if pd.notna(row["Color"]) else None rename_from = str(row["Rename from"]).strip() if pd.notna(row["Rename from"]) else None log(f"\nProcessing backlog level: '{backlog_name}' (type: {backlog_type}, rename_from: {rename_from})") # Check if a behavior with the target name already exists (idempotency) if backlog_name in behavior_by_name: ref = behavior_by_name[backlog_name]["referenceName"] log(f" Behavior '{backlog_name}' already exists (ref: {ref}). 
Skipping.") backlog_to_behavior_ref[backlog_name] = ref continue # RENAME: Find the existing behavior by its old name and rename it if rename_from and rename_from not in ("(new)", "(rename)"): # Try exact match first, then the known plural form old_display = KNOWN_BEHAVIOR_RENAMES.get(rename_from, rename_from) source_behavior = behavior_by_name.get(old_display) if not source_behavior: # Try exact rename_from value source_behavior = behavior_by_name.get(rename_from) # Also try lookup by known referenceName (in case behavior was previously renamed) if not source_behavior: known_ref = KNOWN_BEHAVIOR_REFS.get(rename_from) if known_ref: source_behavior = behavior_by_ref.get(known_ref) if source_behavior: old_display = source_behavior["name"] # Use its current name for logging if source_behavior: ref = source_behavior["referenceName"] payload = {"name": backlog_name} if color: payload["color"] = color.lstrip("#") url = f"{ADO_ORG_URL}/_apis/work/processes/{process_id}/behaviors/{ref}?api-version=7.1-preview.2" resp = make_request_with_retry("PUT", url, headers=HEADERS, json=payload) if resp.status_code in [200, 204]: log(f" Renamed behavior '{old_display}' → '{backlog_name}' (ref: {ref})") backlog_to_behavior_ref[backlog_name] = ref # Update local cache behavior_by_name[backlog_name] = source_behavior behavior_by_name[backlog_name]["name"] = backlog_name if old_display in behavior_by_name and old_display != backlog_name: del behavior_by_name[old_display] else: log(f" ERROR renaming '{old_display}' → '{backlog_name}': {resp.status_code} - {resp.text}") continue else: log(f" WARNING: Could not find behavior named '{old_display}' or '{rename_from}' to rename. 
Will try to create instead.") # RENAME by type: For "(rename)" entries, match by backlog type if rename_from == "(rename)": # Match by the backlog type category type_to_ref = { "Requirements backlog": "System.RequirementBacklogBehavior", "Iteration backlog": "System.TaskBacklogBehavior", } target_ref = type_to_ref.get(backlog_type) if target_ref and target_ref in behavior_by_ref: source_behavior = behavior_by_ref[target_ref] current_name = source_behavior["name"] if current_name == backlog_name: log(f" Behavior already named '{backlog_name}' (ref: {target_ref}). Skipping.") backlog_to_behavior_ref[backlog_name] = target_ref continue payload = {"name": backlog_name} if color: payload["color"] = color.lstrip("#") url = f"{ADO_ORG_URL}/_apis/work/processes/{process_id}/behaviors/{target_ref}?api-version=7.1-preview.2" resp = make_request_with_retry("PUT", url, headers=HEADERS, json=payload) if resp.status_code in [200, 204]: log(f" Renamed behavior '{current_name}' → '{backlog_name}' (ref: {target_ref})") backlog_to_behavior_ref[backlog_name] = target_ref else: log(f" ERROR renaming '{current_name}' → '{backlog_name}': {resp.status_code} - {resp.text}") continue else: log(f" WARNING: Could not find behavior for type '{backlog_type}' to rename.") # CREATE NEW: For "(new)" entries, create a new portfolio behavior if rename_from == "(new)" or rename_from is None: payload = { "name": backlog_name, "inherits": PORTFOLIO_PARENT_BEHAVIOR, } if color: payload["color"] = color.lstrip("#") url = f"{ADO_ORG_URL}/_apis/work/processes/{process_id}/behaviors?api-version=7.1-preview.2" resp = make_request_with_retry("POST", url, headers=HEADERS, json=payload) if resp.status_code in [200, 201]: new_ref = resp.json().get("referenceName", "unknown") log(f" Created new behavior '{backlog_name}' (ref: {new_ref})") backlog_to_behavior_ref[backlog_name] = new_ref # Update local cache behavior_by_name[backlog_name] = resp.json() behavior_by_ref[new_ref] = resp.json() elif resp.status_code 
== 409: log(f" Behavior '{backlog_name}' already exists (409). Fetching ref name...") refreshed = get_existing_behaviors(process_id) for b in refreshed: if b["name"] == backlog_name: backlog_to_behavior_ref[backlog_name] = b["referenceName"] break else: log(f" ERROR creating behavior '{backlog_name}': {resp.status_code} - {resp.text}") continue log(f"\nBacklog level configuration complete. Mappings: {json.dumps(backlog_to_behavior_ref, indent=2)}") return backlog_to_behavior_ref # --------------------------------------------------------------------------- # Part 2: Assign WITs to Backlog Levels # --------------------------------------------------------------------------- def get_all_wit_refs(process_id: str) -> dict: """Fetch all WIT reference names from the process. Returns dict: short_name -> full_ref_name.""" url = f"{ADO_ORG_URL}/_apis/work/processes/{process_id}/workitemtypes?api-version=7.1-preview.2" resp = make_request_with_retry("GET", url, headers=HEADERS) if resp.status_code != 200: log(f"ERROR: Failed to list WITs: {resp.status_code}") return {} result = {} for wit in resp.json().get("value", []): ref = wit["referenceName"] name = wit["name"] result[name] = ref # Also index by lowercase for case-insensitive matching result[name.lower()] = ref return result def assign_wits_to_backlogs(process_id: str, wit_df: pd.DataFrame, backlog_to_behavior_ref: dict): """Assign each WIT to its backlog level behavior based on the spreadsheet.""" log("\n" + "=" * 60) log("PART 2: Assign Work Item Types to Backlog Levels") log("=" * 60) wit_refs = get_all_wit_refs(process_id) log(f"Found {len(wit_refs)} work item types in process.") # Build the default WIT lookup from Backlogs sheet (passed via backlog_to_behavior_ref context) # We need the backlogs_df for default WIT info — it's passed indirectly via the global scope assigned_count = 0 skipped_count = 0 error_count = 0 for _, row in wit_df.iterrows(): wit_name = str(row["Work item type"]).strip() backlog_name = 
str(row.get("Backlog name", "")).strip() custom_flag = str(row.get("Custom work item type", "")).strip().lower() # Skip WITs with no backlog or "No associated backlog" if not backlog_name or backlog_name == "No associated backlog" or pd.isna(row.get("Backlog name")): continue # Skip disabled WITs if custom_flag == "disabled": log(f" Skipping '{wit_name}' — disabled WIT.") skipped_count += 1 continue # Find the behavior ref for this backlog behavior_ref = backlog_to_behavior_ref.get(backlog_name) if not behavior_ref: log(f" WARNING: No behavior found for backlog '{backlog_name}' (WIT: {wit_name}). Skipping.") skipped_count += 1 continue # Find the WIT ref name (try exact, then case-insensitive) wit_ref = wit_refs.get(wit_name) or wit_refs.get(wit_name.lower()) if not wit_ref: log(f" WARNING: WIT '{wit_name}' not found in process. Skipping.") skipped_count += 1 continue # Check current behavior assignment check_url = f"{ADO_ORG_URL}/_apis/work/processes/{process_id}/workitemtypesbehaviors/{wit_ref}/behaviors?api-version=7.1-preview.1" check_resp = make_request_with_retry("GET", check_url, headers=HEADERS) current_behaviors = [] if check_resp.status_code == 200: current_behaviors = check_resp.json().get("value", []) already_assigned = False for cb in current_behaviors: if cb.get("behavior", {}).get("id") == behavior_ref: already_assigned = True break if already_assigned: log(f" '{wit_name}' already assigned to '{backlog_name}'. 
Skipping.") skipped_count += 1 continue # Remove existing behavior assignments before adding new one for cb in current_behaviors: old_ref = cb.get("behavior", {}).get("id") if old_ref: del_url = f"{ADO_ORG_URL}/_apis/work/processes/{process_id}/workitemtypesbehaviors/{wit_ref}/behaviors/{old_ref}?api-version=7.1-preview.1" del_resp = make_request_with_retry("DELETE", del_url, headers=HEADERS) if del_resp.status_code in [200, 204]: log(f" Removed old behavior '{old_ref}' from '{wit_name}'.") else: log(f" WARNING: Could not remove old behavior '{old_ref}' from '{wit_name}': {del_resp.status_code}") # Assign to new behavior payload = { "behavior": {"id": behavior_ref}, "isDefault": False } add_url = f"{ADO_ORG_URL}/_apis/work/processes/{process_id}/workitemtypesbehaviors/{wit_ref}/behaviors?api-version=7.1-preview.1" resp = make_request_with_retry("POST", add_url, headers=HEADERS, json=payload) if resp.status_code in [200, 201]: log(f" Assigned '{wit_name}' → '{backlog_name}' (behavior: {behavior_ref})") assigned_count += 1 elif resp.status_code == 409: log(f" '{wit_name}' already assigned (409). Skipping.") skipped_count += 1 else: log(f" ERROR assigning '{wit_name}' → '{backlog_name}': {resp.status_code} - {resp.text}") error_count += 1 log(f"\nWIT assignment complete. 
Assigned: {assigned_count}, Skipped: {skipped_count}, Errors: {error_count}") # Now set default WITs for each backlog level log("\nSetting default work item types for backlog levels...") set_default_wits(process_id, backlog_to_behavior_ref, wit_refs) def set_default_wits(process_id: str, backlog_to_behavior_ref: dict, wit_refs: dict): """Set the default WIT for each backlog level using the Backlogs sheet.""" # Re-read the Backlogs sheet for default WIT info excel_path = resolve_excel_path() backlogs_df = pd.read_excel(excel_path, sheet_name="Backlogs") backlogs_df.columns = backlogs_df.columns.str.strip() for _, row in backlogs_df.iterrows(): backlog_name = str(row["Backlog name"]).strip() default_wit_name = str(row["Default work item type"]).strip() behavior_ref = backlog_to_behavior_ref.get(backlog_name) if not behavior_ref: continue default_wit_ref = wit_refs.get(default_wit_name) or wit_refs.get(default_wit_name.lower()) if not default_wit_ref: log(f" WARNING: Default WIT '{default_wit_name}' for backlog '{backlog_name}' not found in process.") continue # Check if this WIT is already assigned and set isDefault check_url = f"{ADO_ORG_URL}/_apis/work/processes/{process_id}/workitemtypesbehaviors/{default_wit_ref}/behaviors?api-version=7.1-preview.1" check_resp = make_request_with_retry("GET", check_url, headers=HEADERS) if check_resp.status_code != 200: log(f" WARNING: Could not check behaviors for '{default_wit_name}': {check_resp.status_code}") continue current = check_resp.json().get("value", []) already_default = False for cb in current: if cb.get("behavior", {}).get("id") == behavior_ref and cb.get("isDefault"): already_default = True break if already_default: log(f" '{default_wit_name}' is already the default for '{backlog_name}'. 
Skipping.") continue # Remove and re-add with isDefault=True for cb in current: if cb.get("behavior", {}).get("id") == behavior_ref: del_url = f"{ADO_ORG_URL}/_apis/work/processes/{process_id}/workitemtypesbehaviors/{default_wit_ref}/behaviors/{behavior_ref}?api-version=7.1-preview.1" make_request_with_retry("DELETE", del_url, headers=HEADERS) break payload = { "behavior": {"id": behavior_ref}, "isDefault": True } add_url = f"{ADO_ORG_URL}/_apis/work/processes/{process_id}/workitemtypesbehaviors/{default_wit_ref}/behaviors?api-version=7.1-preview.1" resp = make_request_with_retry("POST", add_url, headers=HEADERS, json=payload) if resp.status_code in [200, 201]: log(f" Set '{default_wit_name}' as default for '{backlog_name}'") else: log(f" ERROR setting default '{default_wit_name}' for '{backlog_name}': {resp.status_code} - {resp.text}") # --------------------------------------------------------------------------- # Part 3: Create Iteration Paths # --------------------------------------------------------------------------- def create_iteration_paths(iterations_df: pd.DataFrame) -> dict: """ Create hierarchical iteration paths from the Iteration paths sheet. Returns a dict of iteration_path -> identifier (GUID) for team assignment. """ log("\n" + "=" * 60) log("PART 3: Create Iteration Paths") log("=" * 60) encoded_project = urllib.parse.quote(ADO_PROJECT) base_url = f"{ADO_ORG_URL}/{encoded_project}/_apis/wit/classificationnodes/Iterations" # Parse hierarchical structure (same pattern as area paths in Script 3) levels = [c for c in iterations_df.columns if c.startswith("Level")] if not levels: log("WARNING: No 'Level' columns found in Iteration paths sheet. 
Skipping.") return {} log(f"Found {len(levels)} levels in Iteration paths sheet: {levels}") current_parents = {} # level_index -> current parent name created_paths = [] for _, row in iterations_df.iterrows(): for i, level_col in enumerate(levels): val = row[level_col] if pd.isna(val): continue node_name = str(val).strip() if not node_name: continue # Build the path components path_parts = [] for j in range(i): parent = current_parents.get(j) if parent: path_parts.append(parent) path_parts.append(node_name) # Update current parent tracking current_parents[i] = node_name # Clear children levels for j in range(i + 1, len(levels)): current_parents.pop(j, None) # Create the iteration node parent_path = "/".join(path_parts[:-1]) if parent_path: url = f"{base_url}/{urllib.parse.quote(parent_path, safe='/')}?api-version=7.1" else: url = f"{base_url}?api-version=7.1" payload = {"name": node_name} resp = make_request_with_retry("POST", url, headers=HEADERS, json=payload) full_path = "/".join(path_parts) if resp.status_code in [200, 201]: log(f" Created iteration: {full_path}") created_paths.append(full_path) elif resp.status_code == 409: log(f" Already exists: {full_path}") created_paths.append(full_path) else: log(f" ERROR creating iteration '{full_path}': {resp.status_code} - {resp.text}") # Fetch all iteration nodes with GUIDs for team assignment log("\nFetching iteration node identifiers...") iteration_map = {} fetch_url = f"{base_url}?$depth=10&api-version=7.1" resp = make_request_with_retry("GET", fetch_url, headers=HEADERS) if resp.status_code == 200: _collect_iteration_ids(resp.json(), "", iteration_map) log(f" Collected {len(iteration_map)} iteration node identifiers.") else: log(f" WARNING: Could not fetch iteration tree: {resp.status_code}") log(f"\nIteration path creation complete. 
Created/verified: {len(created_paths)}") return iteration_map def _collect_iteration_ids(node: dict, parent_path: str, result: dict): """Recursively collect iteration node paths and their identifiers.""" name = node.get("name", "") current_path = f"{parent_path}/{name}" if parent_path else name identifier = node.get("identifier") if identifier: result[current_path] = identifier # Also store by name only for simple lookups result[name] = identifier for child in node.get("children", []): _collect_iteration_ids(child, current_path, result) # --------------------------------------------------------------------------- # Part 4: Configure Team Settings # --------------------------------------------------------------------------- def configure_team_settings(teams_df: pd.DataFrame, iteration_map: dict): """ Configure team settings: bug behavior, backlog iteration, iterations, and include sub areas. """ log("\n" + "=" * 60) log("PART 4: Configure Team Settings") log("=" * 60) encoded_project = urllib.parse.quote(ADO_PROJECT) # Get the root iteration identifier (project root) root_iteration_id = iteration_map.get(ADO_PROJECT) # Collect all iteration node IDs (non-root) for assigning all iterations to teams all_iteration_ids = [] for path, guid in iteration_map.items(): if path != ADO_PROJECT and guid != root_iteration_id: all_iteration_ids.append({"id": guid, "path": path}) success_count = 0 error_count = 0 for _, row in teams_df.iterrows(): team_name = str(row["Teams"]).strip() bug_behavior = str(row.get("Bug behavior", "asRequirements")).strip() include_sub_areas = str(row.get("Include sub areas", "Yes")).strip().lower() in ("yes", "true", "1") backlog_iteration_value = str(row.get("Backlog iteration", "@currentIteration")).strip() encoded_team = urllib.parse.quote(team_name) log(f"\nConfiguring team: '{team_name}'") # --- 4a: Update team settings (bug behavior, backlog iteration) --- settings_url = 
f"{ADO_ORG_URL}/{encoded_project}/{encoded_team}/_apis/work/teamsettings?api-version=7.1" # First GET current settings to check if team exists get_resp = make_request_with_retry("GET", settings_url, headers=HEADERS) if get_resp.status_code == 404: log(f" Team '{team_name}' not found (404). Skipping.") error_count += 1 continue elif get_resp.status_code != 200: log(f" ERROR getting settings for '{team_name}': {get_resp.status_code} - {get_resp.text}") error_count += 1 continue current_settings = get_resp.json() # Build PATCH payload for team settings settings_payload = {} # Bug behavior current_bug = current_settings.get("bugsBehavior", "") if current_bug != bug_behavior: settings_payload["bugsBehavior"] = bug_behavior # Backlog iteration — set to root iteration (all iterations visible) if backlog_iteration_value.lower() == "@currentiteration": # Use the root iteration node as the backlog iteration if root_iteration_id: current_backlog_iter = current_settings.get("backlogIteration", {}).get("id", "") if current_backlog_iter != root_iteration_id: settings_payload["backlogIteration"] = root_iteration_id # Set default iteration macro current_macro = current_settings.get("defaultIterationMacro", "") if current_macro != "@CurrentIteration": settings_payload["defaultIterationMacro"] = "@CurrentIteration" else: # Use a specific iteration path iter_id = iteration_map.get(backlog_iteration_value) if iter_id: settings_payload["backlogIteration"] = iter_id if settings_payload: patch_resp = make_request_with_retry("PATCH", settings_url, headers=HEADERS, json=settings_payload) if patch_resp.status_code in [200, 204]: log(f" Updated team settings: {list(settings_payload.keys())}") else: log(f" ERROR updating settings: {patch_resp.status_code} - {patch_resp.text}") error_count += 1 else: log(f" Team settings already configured correctly.") # --- 4b: Add iterations to team --- iterations_url = 
f"{ADO_ORG_URL}/{encoded_project}/{encoded_team}/_apis/work/teamsettings/iterations?api-version=7.1" # Get current team iterations iter_resp = make_request_with_retry("GET", iterations_url, headers=HEADERS) current_iter_ids = set() if iter_resp.status_code == 200: for it in iter_resp.json().get("value", []): current_iter_ids.add(it.get("id", "")) added = 0 for iter_info in all_iteration_ids: if iter_info["id"] in current_iter_ids: continue add_payload = {"id": iter_info["id"]} add_resp = make_request_with_retry("POST", iterations_url, headers=HEADERS, json=add_payload) if add_resp.status_code in [200, 201]: added += 1 elif add_resp.status_code == 409: pass # Already exists else: log(f" WARNING: Could not add iteration '{iter_info['path']}' to team: {add_resp.status_code}") if added > 0: log(f" Added {added} iterations to team.") else: log(f" All iterations already assigned.") # --- 4c: Update area paths to include sub areas --- if include_sub_areas: _update_team_area_include_children(encoded_project, encoded_team, team_name) success_count += 1 log(f"\nTeam settings configuration complete. 
Success: {success_count}, Errors: {error_count}") def _update_team_area_include_children(encoded_project: str, encoded_team: str, team_name: str): """Update team area paths to include children.""" areas_url = f"{ADO_ORG_URL}/{encoded_project}/{encoded_team}/_apis/work/teamsettings/teamfieldvalues?api-version=7.1" resp = make_request_with_retry("GET", areas_url, headers=HEADERS) if resp.status_code != 200: log(f" WARNING: Could not get area settings for '{team_name}': {resp.status_code}") return current = resp.json() default_value = current.get("defaultValue", "") values = current.get("values", []) # Check if any area needs includeChildren updated needs_update = False updated_values = [] for v in values: new_v = {"value": v["value"], "includeChildren": True} if not v.get("includeChildren", False): needs_update = True updated_values.append(new_v) if not needs_update: log(f" Area paths already include children.") return patch_payload = { "defaultValue": default_value, "values": updated_values } patch_resp = make_request_with_retry("PATCH", areas_url, headers=HEADERS, json=patch_payload) if patch_resp.status_code in [200, 204]: log(f" Updated area paths to include children.") else: log(f" WARNING: Could not update area children: {patch_resp.status_code} - {patch_resp.text}") # --------------------------------------------------------------------------- # Orchestration # --------------------------------------------------------------------------- def main(): # Reset log if os.path.exists(LOG_FILE): os.remove(LOG_FILE) start_time = time.time() log(f"ADO Backlog Configuration Script started at {time.strftime('%Y-%m-%d %H:%M:%S')}") log(f"Organization: {ADO_ORG_URL}") log(f"Project: {ADO_PROJECT}") log(f"Process: {PROCESS_NAME}") excel_path = resolve_excel_path() if not os.path.exists(excel_path): log(f"ERROR: Excel file not found: {excel_path}") sys.exit(1) # Get process ID process_id = get_process_id() log(f"Process ID: {process_id}") # Read spreadsheet sheets try: 
        backlogs_df = pd.read_excel(excel_path, sheet_name="Backlogs")
        backlogs_df.columns = backlogs_df.columns.str.strip()
        log(f"Read {len(backlogs_df)} rows from 'Backlogs' sheet.")
    except Exception as e:
        log(f"ERROR: Could not read 'Backlogs' sheet: {e}")
        sys.exit(1)

    try:
        wit_df = pd.read_excel(excel_path, sheet_name="Work item types")
        wit_df.columns = wit_df.columns.str.strip()
        log(f"Read {len(wit_df)} rows from 'Work item types' sheet.")
    except Exception as e:
        log(f"ERROR: Could not read 'Work item types' sheet: {e}")
        sys.exit(1)

    try:
        iterations_df = pd.read_excel(excel_path, sheet_name="Iteration paths")
        iterations_df.columns = iterations_df.columns.str.strip()
        log(f"Read {len(iterations_df)} rows from 'Iteration paths' sheet.")
    except Exception as e:
        log(f"WARNING: Could not read 'Iteration paths' sheet: {e}. Skipping iteration creation.")
        iterations_df = None

    try:
        teams_df = pd.read_excel(excel_path, sheet_name="Teams")
        teams_df.columns = teams_df.columns.str.strip()
        log(f"Read {len(teams_df)} rows from 'Teams' sheet.")
    except Exception as e:
        log(f"WARNING: Could not read 'Teams' sheet: {e}. Skipping team settings.")
        teams_df = None

    # Part 1: Configure backlog levels
    backlog_to_behavior_ref = configure_backlog_levels(process_id, backlogs_df)

    # Part 2: Assign WITs to backlog levels
    assign_wits_to_backlogs(process_id, wit_df, backlog_to_behavior_ref)

    # Part 3: Create iteration paths
    iteration_map = {}
    if iterations_df is not None and not iterations_df.empty:
        iteration_map = create_iteration_paths(iterations_df)
    else:
        log("\nSkipping iteration path creation (no data).")
        # Still fetch existing iterations for team settings
        encoded_project = urllib.parse.quote(ADO_PROJECT)
        fetch_url = f"{ADO_ORG_URL}/{encoded_project}/_apis/wit/classificationnodes/Iterations?$depth=10&api-version=7.1"
        resp = make_request_with_retry("GET", fetch_url, headers=HEADERS)
        if resp.status_code == 200:
            _collect_iteration_ids(resp.json(), "", iteration_map)

    # Part 4: Configure team settings
    if teams_df is not None and not teams_df.empty:
        configure_team_settings(teams_df, iteration_map)
    else:
        log("\nSkipping team settings configuration (no data).")

    # Summary
    elapsed = time.time() - start_time
    log("\n" + "=" * 60)
    log("SUMMARY")
    log("=" * 60)
    log(f"Elapsed time: {elapsed:.1f} seconds ({elapsed/60:.2f} minutes)")
    log("Script finished. See log file for details:")
    log(f"  {LOG_FILE}")
    log("=" * 60)


if __name__ == "__main__":
    main()


================================================
FILE: templates/Azure-DevOps-templates/README.md
================================================
# Azure DevOps template for the Microsoft business process catalog (Preview)

This folder contains four Python scripts and one Excel template that define an Azure DevOps process, project, work item types, fields, area paths, and much more.

To learn how to use the Python scripts, visit the Dynamics 365 guidance hub. Each script has a set of instructions and steps that must be performed manually before running the next script. The high-level steps are:

1. [Automate Azure DevOps project, process, work item types, fields, and picklists from Excel with Python](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/about-configure-azure-devops-project-processes)
2. [Automate Azure DevOps page layout creation with Python](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/about-configure-azure-devops-page-layout)
3. [Automate the creation of Azure DevOps teams and area paths with Python scripts](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/about-configure-azure-devops-teams-area-paths)
4. [Azure DevOps backlog configuration for the Microsoft Business Process Catalog](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/about-configure-azure-devops-backlog)

For guidance on troubleshooting common issues with the Python scripts, see [Troubleshooting the Azure DevOps Python Scripts (Preview)](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/about-configure-azure-devops-troubleshooting).

Once you have configured and set up your Azure DevOps project, you can import the business process catalog into it. Download the latest version of the business process catalog from [https://aka.ms/businessprocesscatalog](https://aka.ms/businessprocesscatalog). To learn how to import the catalog, see [Import the catalog into Azure DevOps](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/about-import-catalog-devops).
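The scripts read each worksheet of the Excel template with pandas, treating some sheets as required (the script stops if they are missing) and others as optional (that configuration step is skipped). A minimal sketch of that pattern, assuming a hypothetical `read_sheet` helper that is not part of the shipped scripts:

```python
import sys

import pandas as pd


def read_sheet(excel_path, sheet_name, required=True):
    """Read one worksheet and trim whitespace from its column headers.

    Exits on a missing required sheet; returns None for a missing
    optional one so the caller can skip that configuration step.
    """
    try:
        df = pd.read_excel(excel_path, sheet_name=sheet_name)
    except Exception as e:
        if required:
            print(f"ERROR: Could not read '{sheet_name}' sheet: {e}")
            sys.exit(1)
        print(f"WARNING: Could not read '{sheet_name}' sheet: {e}. Skipping.")
        return None
    df.columns = df.columns.str.strip()
    print(f"Read {len(df)} rows from '{sheet_name}' sheet.")
    return df
```

With a helper like this, a missing optional sheet (for example, `read_sheet(path, "Teams", required=False)`) simply yields `None`, and the script can fall back to its "no data" branch.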
================================================
FILE: templates/business-processes/README.md
================================================
If you have landed on this page looking for the business process catalog, you can find the catalog on the Microsoft Download Center at [https://aka.ms/businessprocesscatalog](https://aka.ms/businessprocesscatalog).


================================================
FILE: templates/business-processes/import-business-processes-ADO.md
================================================
---
date: 11/14/2023
author: rachel-profitt
---

# Import the business process catalog into Azure DevOps

This article has been replaced by an article in the [Dynamics 365 guidance hub](https://learn.microsoft.com/en-us/dynamics365/guidance/). Learn more at [Import the business process catalog into Azure DevOps](https://learn.microsoft.com/en-us/dynamics365/guidance/business-processes/about-import-catalog-devops).


================================================
FILE: templates/reference-architectures.md
================================================
# Reference architectures and design patterns

We welcome contributions of architectural guidance, including solution ideas and design patterns. If you have a best practice or reference implementation, submit your proposal either to [the Azure team](https://learn.microsoft.com/contribute/architecture-center/aac-contribute) or to us in [Dynamics 365](https://learn.microsoft.com/dynamics365/get-started/contribute#dynamics-365-guidance-content).

Fetch the appropriate Markdown templates from the [guidance-templates](https://github.com/MicrosoftDocs/dynamics365-docs-templates/tree/main/guidance-templates) folder in the [dynamics365-docs-templates](https://github.com/MicrosoftDocs/dynamics365-docs-templates/) GitHub repo. Learn more at [Contribute to Microsoft content for Dynamics 365](https://learn.microsoft.com/dynamics365/get-started/contribute#dynamics-365-guidance-content).
================================================
FILE: workshops/README.md
================================================
# Workshop templates

The business process catalog includes a comprehensive set of workshop templates designed to streamline collaboration and decision-making across key business areas. These templates are structured to enhance engagement and ensure precise alignment with business goals. Below, you'll find the core structure of the workshops, a list of processes that include workshops in this release, and an overview of the three workshop types: Storyboard design, Storyline design review, and Deep-dive design.

## Workshop template structure

Each workshop template includes:

- A clear agenda to guide participants through the session's objectives.
- Predefined tools and resources tailored to facilitate effective collaboration.
- Comprehensive instructions to ensure consistency and alignment with best practices.

The templates are designed to be flexible and scalable, accommodating the unique needs of sellers, technical sellers, solution architects, and business analysts or functional consultants.

## Processes featuring workshops

The following end-to-end processes include Word document templates for the business process catalog:

- Acquire to dispose
- Design to retire
- Forecast to plan
- Inventory to deliver
- Hire to retire
- Order to cash
- Prospect to quote (Public Preview)
- Source to pay

## Workshop types

Three workshop types are included for each end-to-end process. Each workshop template is tailored to specific stages of business process design and refinement.

- **Storyboard design workshops**

  These workshops are highly visual and focus on mapping out the entire business process. Participants collaborate to create a high-level overview of the business processes, scenarios, and objectives, identifying key milestones and potential bottlenecks. The objective is to establish a shared vision and roadmap. We recommend running these workshops early in the pre-sales stage of an engagement to help the team better understand the customer's business needs and create a demo plan.

  Each Storyboard design workshop includes the following components:

  - A storyboard graphic in the Visio file that is available for the end-to-end business process in the GitHub repository. The Visio files can be downloaded at https://aka.ms/businessprocessflow.
  - A Word document template for the workshop. The Word document template files can be downloaded at https://aka.ms/businessprocessworkshops.
  - Work items in the Azure DevOps template: one Workshop type work item for the overall workshop, including the details of the workshop from the template, and Task work items that are children of the parent Workshop work item.

- **Storyline design review workshops**

  In these sessions, the primary scenario is demonstrated to the customer in Dynamics 365, and teams delve into the specifics of the business process in Dynamics 365. The goal is to review and refine the storyline behind the process and conduct a fit-to-standard analysis, ensuring all steps are aligned with strategic objectives. Feedback loops are built into the session to address gaps or inconsistencies.

  Each Storyline design review workshop includes the following components:

  - A Word document template for the workshop. The Word document template files can be downloaded at https://aka.ms/businessprocessworkshops.
  - Work items in the Azure DevOps template: one Workshop type work item for the overall workshop, including the details of the workshop from the template, and Task work items that are children of the parent Workshop work item.

- **Deep-dive design workshops**

  These workshops are designed for thorough exploration of intricate process details. The focus is on addressing complex challenges, testing assumptions, and finalizing designs. They are particularly useful for processes requiring cross-functional alignment and technical input. These workshops are intended to be run by the implementation team in the Implement phase of a project. Each level two business process area in the catalog includes at least one Deep-dive design workshop, and some business process areas include multiple workshops.

  Each Deep-dive design workshop includes the following components:

  - A Word document template for the workshop. The Word document template files can be downloaded at https://aka.ms/businessprocessworkshops.
  - Work items in the Azure DevOps template: one Workshop type work item for the overall workshop, including the details of the workshop from the template. These workshops don't include detailed tasks. However, the catalog includes Configuration deliverables that make up much of the work functional consultants need to do for the process to work according to the customer's business requirements.

## Using the workshop templates in Azure DevOps

While there is no single prescribed way to use the templates and work items provided in the business process catalog, the following tips can help ensure good management and governance of the process:

- Document each business requirement using Requirement type work items.
- Document Risks, Issues, Actions, and Decisions (RAID) log items using the related work item types.
- Create Task type work items to track specific tasks that need to be completed or followed up on by the project team.
- Link work items to the lowest possible level of the business process catalog, typically level four scenarios. However, if a business requirement is more general, it may be appropriate to link it to a higher-level business process or area.
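If you script the work item tips above rather than entering items by hand, Azure DevOps creates work items through its REST API with a JSON-patch body, and a child is linked to its parent with the `System.LinkTypes.Hierarchy-Reverse` relation (the link points from child to parent). A minimal sketch of building such a payload; the function name and the organization/project placeholders in the comment are illustrative, not part of the catalog templates:

```python
def build_work_item_patch(title, parent_url=None):
    """Build a JSON-patch body that creates a work item titled `title`
    and, optionally, links it as a child of an existing parent work item."""
    ops = [{"op": "add", "path": "/fields/System.Title", "value": title}]
    if parent_url:
        ops.append({
            "op": "add",
            "path": "/relations/-",
            "value": {
                # Hierarchy-Reverse on the new item points at its parent.
                "rel": "System.LinkTypes.Hierarchy-Reverse",
                "url": parent_url,
            },
        })
    return ops


# Sending the request (placeholder org/project; requires a PAT in the headers):
# url = "https://dev.azure.com/{org}/{project}/_apis/wit/workitems/$Requirement?api-version=7.1"
# requests.post(url,
#               headers={"Content-Type": "application/json-patch+json", **auth_headers},
#               json=build_work_item_patch("Capture requirement", parent_url))
```

For example, passing the URL of a Workshop work item as `parent_url` when creating a Task reproduces the parent/child structure the workshop templates describe.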