Repository: moqui/moqui-framework
Branch: master
Commit: d12a86e1ac73
Files: 336
Total size: 4.8 MB
Directory structure:
gitextract_s15y4lew/
├── .github/
│   └── workflows/
│       └── gradle-wrapper-validation.yml
├── .gitignore
├── .travis.yml
├── .whitesource
├── AUTHORS
├── LICENSE.md
├── MoquiInit.properties
├── Procfile
├── Procfile.README
├── README.md
├── ReleaseNotes.md
├── SECURITY.md
├── addons.xml
├── build.gradle
├── build.xml
├── docker/
│   ├── README.md
│   ├── build-compose-up.sh
│   ├── clean.sh
│   ├── compose-down.sh
│   ├── compose-up.sh
│   ├── elasticsearch/
│   │   ├── data/
│   │   │   └── README
│   │   └── moquiconfig/
│   │       ├── elasticsearch.yml
│   │       └── log4j2.properties
│   ├── kibana/
│   │   └── kibana.yml
│   ├── moqui-acme-postgres.yml
│   ├── moqui-cluster1-compose.yml
│   ├── moqui-mysql-compose.yml
│   ├── moqui-postgres-compose.yml
│   ├── moqui-run.sh
│   ├── mysql-compose.yml
│   ├── nginx/
│   │   └── my_proxy.conf
│   ├── opensearch/
│   │   └── data/
│   │       └── nodes/
│   │           └── README
│   ├── postgres-compose.yml
│   ├── postgres_backup.sh
│   └── simple/
│       ├── Dockerfile
│       └── docker-build.sh
├── framework/
│   ├── build.gradle
│   ├── data/
│   │   ├── CommonL10nData.xml
│   │   ├── CurrencyData.xml
│   │   ├── EntityTypeData.xml
│   │   ├── GeoCountryData.xml
│   │   ├── MoquiSetupData.xml
│   │   ├── SecurityTypeData.xml
│   │   └── UnitData.xml
│   ├── entity/
│   │   ├── BasicEntities.xml
│   │   ├── EntityEntities.xml
│   │   ├── OlapEntities.xml
│   │   ├── ResourceEntities.xml
│   │   ├── Screen.eecas.xml
│   │   ├── ScreenEntities.xml
│   │   ├── SecurityEntities.xml
│   │   ├── ServerEntities.xml
│   │   ├── ServiceEntities.xml
│   │   └── TestEntities.xml
│   ├── lib/
│   │   └── btm-4.0.1.jar
│   ├── screen/
│   │   ├── AddedEmailAuthcFactor.xml
│   │   ├── EmailAuthcFactorSent.xml
│   │   ├── NotificationEmail.xml
│   │   ├── PasswordReset.xml
│   │   ├── ScreenRenderEmail.xml
│   │   └── SingleUseCode.xml
│   ├── service/
│   │   └── org/
│   │       └── moqui/
│   │           ├── EmailServices.xml
│   │           ├── EntityServices.xml
│   │           ├── SmsServices.xml
│   │           ├── impl/
│   │           │   ├── BasicServices.xml
│   │           │   ├── ElFinderServices.xml
│   │           │   ├── EmailServices.xml
│   │           │   ├── EntityServices.xml
│   │           │   ├── EntitySyncServices.xml
│   │           │   ├── GoogleServices.xml
│   │           │   ├── InstanceServices.xml
│   │           │   ├── PrintServices.xml
│   │           │   ├── ScreenServices.xml
│   │           │   ├── ServerServices.xml
│   │           │   ├── ServiceServices.xml
│   │           │   ├── SystemMessageServices.xml
│   │           │   ├── UserServices.xml
│   │           │   └── WikiServices.xml
│   │           └── search/
│   │               ├── ElasticSearchServices.xml
│   │               └── SearchServices.xml
│   ├── src/
│   │   ├── main/
│   │   │   ├── groovy/
│   │   │   │   └── org/
│   │   │   │       └── moqui/
│   │   │   │           └── impl/
│   │   │   │               ├── actions/
│   │   │   │               │   └── XmlAction.java
│   │   │   │               ├── context/
│   │   │   │               │   ├── ArtifactExecutionFacadeImpl.groovy
│   │   │   │               │   ├── ArtifactExecutionInfoImpl.java
│   │   │   │               │   ├── CacheFacadeImpl.groovy
│   │   │   │               │   ├── ContextJavaUtil.java
│   │   │   │               │   ├── ElasticFacadeImpl.groovy
│   │   │   │               │   ├── ExecutionContextFactoryImpl.groovy
│   │   │   │               │   ├── ExecutionContextImpl.java
│   │   │   │               │   ├── L10nFacadeImpl.java
│   │   │   │               │   ├── LoggerFacadeImpl.groovy
│   │   │   │               │   ├── MessageFacadeImpl.groovy
│   │   │   │               │   ├── NotificationMessageImpl.groovy
│   │   │   │               │   ├── ResourceFacadeImpl.groovy
│   │   │   │               │   ├── TransactionCache.groovy
│   │   │   │               │   ├── TransactionFacadeImpl.groovy
│   │   │   │               │   ├── TransactionInternalBitronix.groovy
│   │   │   │               │   ├── UserFacadeImpl.groovy
│   │   │   │               │   ├── WebFacadeImpl.groovy
│   │   │   │               │   ├── reference/
│   │   │   │               │   │   ├── BaseResourceReference.java
│   │   │   │               │   │   ├── ComponentResourceReference.java
│   │   │   │               │   │   ├── ContentResourceReference.groovy
│   │   │   │               │   │   ├── DbResourceReference.groovy
│   │   │   │               │   │   └── WrapperResourceReference.groovy
│   │   │   │               │   ├── renderer/
│   │   │   │               │   │   ├── FtlMarkdownTemplateRenderer.groovy
│   │   │   │               │   │   ├── FtlTemplateRenderer.java
│   │   │   │               │   │   ├── GStringTemplateRenderer.groovy
│   │   │   │               │   │   ├── MarkdownTemplateRenderer.groovy
│   │   │   │               │   │   └── NoTemplateRenderer.groovy
│   │   │   │               │   └── runner/
│   │   │   │               │       ├── GroovyScriptRunner.groovy
│   │   │   │               │       ├── JavaxScriptRunner.groovy
│   │   │   │               │       └── XmlActionsScriptRunner.groovy
│   │   │   │               ├── entity/
│   │   │   │               │   ├── AggregationUtil.java
│   │   │   │               │   ├── EntityCache.groovy
│   │   │   │               │   ├── EntityConditionFactoryImpl.groovy
│   │   │   │               │   ├── EntityDataDocument.groovy
│   │   │   │               │   ├── EntityDataFeed.groovy
│   │   │   │               │   ├── EntityDataLoaderImpl.groovy
│   │   │   │               │   ├── EntityDataWriterImpl.groovy
│   │   │   │               │   ├── EntityDatasourceFactoryImpl.groovy
│   │   │   │               │   ├── EntityDbMeta.groovy
│   │   │   │               │   ├── EntityDefinition.groovy
│   │   │   │               │   ├── EntityDynamicViewImpl.groovy
│   │   │   │               │   ├── EntityEcaRule.groovy
│   │   │   │               │   ├── EntityFacadeImpl.groovy
│   │   │   │               │   ├── EntityFindBase.groovy
│   │   │   │               │   ├── EntityFindBuilder.java
│   │   │   │               │   ├── EntityFindImpl.java
│   │   │   │               │   ├── EntityJavaUtil.java
│   │   │   │               │   ├── EntityListImpl.java
│   │   │   │               │   ├── EntityListIteratorImpl.java
│   │   │   │               │   ├── EntityListIteratorWrapper.java
│   │   │   │               │   ├── EntityQueryBuilder.java
│   │   │   │               │   ├── EntitySqlException.groovy
│   │   │   │               │   ├── EntityValueBase.java
│   │   │   │               │   ├── EntityValueImpl.java
│   │   │   │               │   ├── FieldInfo.java
│   │   │   │               │   ├── condition/
│   │   │   │               │   │   ├── BasicJoinCondition.java
│   │   │   │               │   │   ├── ConditionAlias.java
│   │   │   │               │   │   ├── ConditionField.java
│   │   │   │               │   │   ├── DateCondition.groovy
│   │   │   │               │   │   ├── EntityConditionImplBase.java
│   │   │   │               │   │   ├── FieldToFieldCondition.groovy
│   │   │   │               │   │   ├── FieldValueCondition.java
│   │   │   │               │   │   ├── ListCondition.java
│   │   │   │               │   │   ├── TrueCondition.java
│   │   │   │               │   │   └── WhereCondition.groovy
│   │   │   │               │   └── elastic/
│   │   │   │               │       ├── ElasticDatasourceFactory.groovy
│   │   │   │               │       ├── ElasticEntityFind.java
│   │   │   │               │       ├── ElasticEntityListIterator.java
│   │   │   │               │       ├── ElasticEntityValue.java
│   │   │   │               │       └── ElasticSynchronization.groovy
│   │   │   │               ├── screen/
│   │   │   │               │   ├── ScreenDefinition.groovy
│   │   │   │               │   ├── ScreenFacadeImpl.groovy
│   │   │   │               │   ├── ScreenForm.groovy
│   │   │   │               │   ├── ScreenRenderImpl.groovy
│   │   │   │               │   ├── ScreenSection.groovy
│   │   │   │               │   ├── ScreenTestImpl.groovy
│   │   │   │               │   ├── ScreenTree.groovy
│   │   │   │               │   ├── ScreenUrlInfo.groovy
│   │   │   │               │   ├── ScreenWidgetRender.java
│   │   │   │               │   ├── ScreenWidgetRenderFtl.groovy
│   │   │   │               │   ├── ScreenWidgets.groovy
│   │   │   │               │   └── WebFacadeStub.groovy
│   │   │   │               ├── service/
│   │   │   │               │   ├── EmailEcaRule.groovy
│   │   │   │               │   ├── ParameterInfo.java
│   │   │   │               │   ├── RestApi.groovy
│   │   │   │               │   ├── ScheduledJobRunner.groovy
│   │   │   │               │   ├── ServiceCallAsyncImpl.groovy
│   │   │   │               │   ├── ServiceCallImpl.java
│   │   │   │               │   ├── ServiceCallJobImpl.groovy
│   │   │   │               │   ├── ServiceCallSpecialImpl.groovy
│   │   │   │               │   ├── ServiceCallSyncImpl.java
│   │   │   │               │   ├── ServiceDefinition.java
│   │   │   │               │   ├── ServiceEcaRule.groovy
│   │   │   │               │   ├── ServiceFacadeImpl.groovy
│   │   │   │               │   ├── ServiceJsonRpcDispatcher.groovy
│   │   │   │               │   ├── ServiceRunner.groovy
│   │   │   │               │   └── runner/
│   │   │   │               │       ├── EntityAutoServiceRunner.groovy
│   │   │   │               │       ├── InlineServiceRunner.java
│   │   │   │               │       ├── JavaServiceRunner.groovy
│   │   │   │               │       ├── RemoteJsonRpcServiceRunner.groovy
│   │   │   │               │       ├── RemoteRestServiceRunner.groovy
│   │   │   │               │       └── ScriptServiceRunner.java
│   │   │   │               ├── tools/
│   │   │   │               │   ├── H2ServerToolFactory.groovy
│   │   │   │               │   ├── JCSCacheToolFactory.groovy
│   │   │   │               │   ├── JackrabbitRunToolFactory.groovy
│   │   │   │               │   ├── MCacheToolFactory.java
│   │   │   │               │   └── SubEthaSmtpToolFactory.groovy
│   │   │   │               ├── util/
│   │   │   │               │   ├── EdiHandler.groovy
│   │   │   │               │   ├── ElFinderConnector.groovy
│   │   │   │               │   ├── ElasticSearchLogger.groovy
│   │   │   │               │   ├── JdbcExtractor.java
│   │   │   │               │   ├── MoquiShiroRealm.groovy
│   │   │   │               │   ├── RestSchemaUtil.groovy
│   │   │   │               │   ├── SimpleSgmlReader.groovy
│   │   │   │               │   └── SimpleSigner.java
│   │   │   │               └── webapp/
│   │   │   │                   ├── ElasticRequestLogFilter.groovy
│   │   │   │                   ├── GroovyShellEndpoint.groovy
│   │   │   │                   ├── MoquiAbstractEndpoint.groovy
│   │   │   │                   ├── MoquiAuthFilter.groovy
│   │   │   │                   ├── MoquiContextListener.groovy
│   │   │   │                   ├── MoquiFopServlet.groovy
│   │   │   │                   ├── MoquiServlet.groovy
│   │   │   │                   ├── MoquiSessionListener.groovy
│   │   │   │                   ├── NotificationEndpoint.groovy
│   │   │   │                   ├── NotificationWebSocketListener.groovy
│   │   │   │                   └── ScreenResourceNotFoundException.groovy
│   │   │   ├── java/
│   │   │   │   └── org/
│   │   │   │       └── moqui/
│   │   │   │           ├── BaseArtifactException.java
│   │   │   │           ├── BaseException.java
│   │   │   │           ├── Moqui.java
│   │   │   │           ├── context/
│   │   │   │           │   ├── ArtifactAuthorizationException.java
│   │   │   │           │   ├── ArtifactExecutionFacade.java
│   │   │   │           │   ├── ArtifactExecutionInfo.java
│   │   │   │           │   ├── ArtifactTarpitException.java
│   │   │   │           │   ├── AuthenticationRequiredException.java
│   │   │   │           │   ├── CacheFacade.java
│   │   │   │           │   ├── ElasticFacade.java
│   │   │   │           │   ├── ExecutionContext.java
│   │   │   │           │   ├── ExecutionContextFactory.java
│   │   │   │           │   ├── L10nFacade.java
│   │   │   │           │   ├── LogEventSubscriber.java
│   │   │   │           │   ├── LoggerFacade.java
│   │   │   │           │   ├── MessageFacade.java
│   │   │   │           │   ├── MessageFacadeException.java
│   │   │   │           │   ├── MoquiLog4jAppender.java
│   │   │   │           │   ├── NotificationMessage.java
│   │   │   │           │   ├── NotificationMessageListener.java
│   │   │   │           │   ├── PasswordChangeRequiredException.java
│   │   │   │           │   ├── ResourceFacade.java
│   │   │   │           │   ├── ScriptRunner.java
│   │   │   │           │   ├── SecondFactorRequiredException.java
│   │   │   │           │   ├── TemplateRenderer.java
│   │   │   │           │   ├── ToolFactory.java
│   │   │   │           │   ├── TransactionException.java
│   │   │   │           │   ├── TransactionFacade.java
│   │   │   │           │   ├── TransactionInternal.java
│   │   │   │           │   ├── UserFacade.java
│   │   │   │           │   ├── ValidationError.java
│   │   │   │           │   ├── WebFacade.java
│   │   │   │           │   └── WebMediaTypeException.java
│   │   │   │           ├── entity/
│   │   │   │           │   ├── EntityCondition.java
│   │   │   │           │   ├── EntityConditionFactory.java
│   │   │   │           │   ├── EntityDataLoader.java
│   │   │   │           │   ├── EntityDataWriter.java
│   │   │   │           │   ├── EntityDatasourceFactory.java
│   │   │   │           │   ├── EntityDynamicView.java
│   │   │   │           │   ├── EntityException.java
│   │   │   │           │   ├── EntityFacade.java
│   │   │   │           │   ├── EntityFind.java
│   │   │   │           │   ├── EntityList.java
│   │   │   │           │   ├── EntityListIterator.java
│   │   │   │           │   ├── EntityNotFoundException.java
│   │   │   │           │   ├── EntityValue.java
│   │   │   │           │   └── EntityValueNotFoundException.java
│   │   │   │           ├── etl/
│   │   │   │           │   ├── FlatXmlExtractor.java
│   │   │   │           │   └── SimpleEtl.java
│   │   │   │           ├── jcache/
│   │   │   │           │   ├── MCache.java
│   │   │   │           │   ├── MCacheConfiguration.java
│   │   │   │           │   ├── MCacheManager.java
│   │   │   │           │   ├── MEntry.java
│   │   │   │           │   └── MStats.java
│   │   │   │           ├── resource/
│   │   │   │           │   ├── ClasspathResourceReference.java
│   │   │   │           │   ├── ResourceReference.java
│   │   │   │           │   └── UrlResourceReference.java
│   │   │   │           ├── screen/
│   │   │   │           │   ├── ScreenFacade.java
│   │   │   │           │   ├── ScreenRender.java
│   │   │   │           │   └── ScreenTest.java
│   │   │   │           ├── service/
│   │   │   │           │   ├── ServiceCall.java
│   │   │   │           │   ├── ServiceCallAsync.java
│   │   │   │           │   ├── ServiceCallJob.java
│   │   │   │           │   ├── ServiceCallSpecial.java
│   │   │   │           │   ├── ServiceCallSync.java
│   │   │   │           │   ├── ServiceCallback.java
│   │   │   │           │   ├── ServiceException.java
│   │   │   │           │   └── ServiceFacade.java
│   │   │   │           └── util/
│   │   │   │               ├── CollectionUtilities.java
│   │   │   │               ├── ContextBinding.java
│   │   │   │               ├── ContextStack.java
│   │   │   │               ├── LiteStringMap.java
│   │   │   │               ├── MClassLoader.java
│   │   │   │               ├── MNode.java
│   │   │   │               ├── ObjectUtilities.java
│   │   │   │               ├── RestClient.java
│   │   │   │               ├── SimpleTopic.java
│   │   │   │               ├── StringUtilities.java
│   │   │   │               ├── SystemBinding.java
│   │   │   │               └── WebUtilities.java
│   │   │   ├── resources/
│   │   │   │   ├── META-INF/
│   │   │   │   │   ├── jakarta.mime.types
│   │   │   │   │   └── services/
│   │   │   │   │       └── org.moqui.context.ExecutionContextFactory
│   │   │   │   ├── MoquiDefaultConf.xml
│   │   │   │   ├── bitronix-default-config.properties
│   │   │   │   ├── cache.ccf
│   │   │   │   ├── log4j2.xml
│   │   │   │   ├── org/
│   │   │   │   │   └── moqui/
│   │   │   │   │       └── impl/
│   │   │   │   │           ├── pollEmailServer.groovy
│   │   │   │   │           ├── sendEmailMessage.groovy
│   │   │   │   │           └── sendEmailTemplate.groovy
│   │   │   │   └── shiro.ini
│   │   │   └── webapp/
│   │   │       └── WEB-INF/
│   │   │           └── web.xml
│   │   ├── start/
│   │   │   └── java/
│   │   │       └── MoquiStart.java
│   │   └── test/
│   │       └── groovy/
│   │           ├── CacheFacadeTests.groovy
│   │           ├── ConcurrentExecution.groovy
│   │           ├── EntityCrud.groovy
│   │           ├── EntityFindTests.groovy
│   │           ├── EntityNoSqlCrud.groovy
│   │           ├── L10nFacadeTests.groovy
│   │           ├── MessageFacadeTests.groovy
│   │           ├── MoquiSuite.groovy
│   │           ├── ResourceFacadeTests.groovy
│   │           ├── ServiceCrudImplicit.groovy
│   │           ├── ServiceFacadeTests.groovy
│   │           ├── SubSelectTests.groovy
│   │           ├── SystemScreenRenderTests.groovy
│   │           ├── TimezoneTest.groovy
│   │           ├── ToolsRestApiTests.groovy
│   │           ├── ToolsScreenRenderTests.groovy
│   │           ├── TransactionFacadeTests.groovy
│   │           └── UserFacadeTests.groovy
│   ├── template/
│   │   └── XmlActions.groovy.ftl
│   └── xsd/
│       ├── common-types-3.xsd
│       ├── email-eca-3.xsd
│       ├── entity-definition-3.xsd
│       ├── entity-eca-3.xsd
│       ├── framework-catalog.xml
│       ├── moqui-conf-3.xsd
│       ├── rest-api-3.xsd
│       ├── service-definition-3.xsd
│       ├── service-eca-3.xsd
│       ├── xml-actions-3.xsd
│       ├── xml-form-3.xsd
│       └── xml-screen-3.xsd
├── gradle/
│   └── wrapper/
│       ├── gradle-wrapper.jar
│       └── gradle-wrapper.properties
├── gradle.properties
├── gradlew
├── gradlew.bat
└── settings.gradle
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/workflows/gradle-wrapper-validation.yml
================================================
name: "Validate Gradle Wrapper"
on: [push, pull_request]
jobs:
  validation:
    name: "Validation"
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: gradle/wrapper-validation-action@v1
================================================
FILE: .gitignore
================================================
# gradle/build files
build
.gradle
/framework/dependencies
# build generated files
/moqui*.war
/Save*.zip
# runtime directory (separate repository so ignore directory entirely)
/runtime
/execwartmp
/docker/runtime
/docker/db
/docker/elasticsearch/data/nodes
/docker/opensearch/data/nodes/*
/docker/opensearch/data/*.conf
!/docker/opensearch/data/nodes/README
/docker/acme.sh
/docker/nginx/conf.d
/docker/nginx/vhost.d
/docker/nginx/html
## Do not want to accidentally commit production certificates https://www.theregister.com/2024/07/25/data_from_deleted_github_repos/
/docker/certs
!/docker/certs/moqui1.local.*
!/docker/certs/moqui2.local.*
!/docker/certs/moqui.local.*
!/docker/certs/README
# IntelliJ IDEA files
.idea
out
*.ipr
*.iws
*.iml
# Eclipse files (and some general ones also used by Eclipse)
.metadata
bin
tmp
*.tmp
*.bak
*.swp
*~.nib
local.properties
.settings
.loadpath
# NetBeans files
nbproject/private
nbbuild
.nb-gradle
nbdist
nbactions.xml
nb-configuration.xml
# VSCode files
.vscode
# Emacs files
.projectile
# Version managers (sdkman, mise, asdf)
mise.toml
.tool-versions
# OSX auto files
.DS_Store
.AppleDouble
.LSOverride
._*
# Windows auto files
Thumbs.db
ehthumbs.db
Desktop.ini
# Linux auto files
*~
================================================
FILE: .travis.yml
================================================
language: groovy
jdk:
- openjdk11
install: true
env:
- TERM=dumb
script:
- ./gradlew getRuntime
- ./gradlew load
- ./gradlew test --info
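# Note (not part of the original file): the same CI steps can be reproduced
# locally from the repository root with the Gradle wrapper, e.g.:
#   ./gradlew getRuntime && ./gradlew load && ./gradlew test --info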
cache:
  directories:
    - $HOME/.gradle/caches
    - $HOME/.gradle/wrapper
================================================
FILE: .whitesource
================================================
{
  "generalSettings": {
    "shouldScanRepo": true
  },
  "checkRunSettings": {
    "vulnerableCheckRunConclusionLevel": "failure"
  },
  "issueSettings": {
    "minSeverityLevel": "LOW"
  }
}
================================================
FILE: AUTHORS
================================================
Moqui Framework (http://github.com/moqui/moqui)
This software is in the public domain under CC0 1.0 Universal plus a
Grant of Patent License.
To the extent possible under law, the author(s) have dedicated all
copyright and related and neighboring rights to this software to the
public domain worldwide. This software is distributed without any
warranty.
You should have received a copy of the CC0 Public Domain Dedication
along with this software (see the LICENSE.md file). If not, see
<http://creativecommons.org/publicdomain/zero/1.0/>.
===========================================================================
Copyright Waiver
I dedicate any and all copyright interest in this software to the
public domain. I make this dedication for the benefit of the public at
large and to the detriment of my heirs and successors. I intend this
dedication to be an overt act of relinquishment in perpetuity of all
present and future rights to this software under copyright law.
To the best of my knowledge and belief, my contributions are either
originally authored by me or are derived from prior works which I have
verified are also in the public domain and are not subject to claims
of copyright by other parties.
To the best of my knowledge and belief, no individual, business,
organization, government, or other entity has any copyright interest
in my contributions, and I affirm that I will not make contributions
that are otherwise encumbered.
Signed by git commit adding my legal name and git username:
Written in 2010-2022 by David E. Jones - jonesde
Written in 2021-2026 by D. Michael Jones - acetousk
Written in 2014-2015 by Solomon Bessire - sbessire
Written in 2014-2015 by Jacopo Cappellato - jacopoc
Written in 2014-2015 by Abdullah Shaikh - abdullahs
Written in 2014-2015 by Yao Chunlin - chunlinyao
Written in 2014-2015 by Jimmy Shen - shendepu
Written in 2014-2015 by Dony Zulkarnaen - donniexyz
Written in 2015 by Sam Hamilton - samhamilton
Written in 2015 by Leonardo Carvalho - CarvalhoLeonardo
Written in 2015 by Swapnil M Mane - swapnilmmane
Written in 2015 by Anton Akhiar - akhiar
Written in 2015-2023 by Jens Hardings - jenshp
Written in 2016 by Shifeng Zhang - zhangshifeng
Written in 2016 by Scott Gray - lektran
Written in 2016 by Mark Haney - mphaney
Written in 2016 by Qiushi Yan - yanqiushi
Written in 2017 by Oleg Andrieiev - oandreyev
Written in 2018 by Zhang Wei - zhangwei1979
Written in 2018 by Nirendra Singh - nirendra10695
Written in 2018-2023 by Ayman Abi Abdallah - aabiabdallah
Written in 2019 by Daniel Taylor - danieltaylor-nz
Written in 2020 by Jacob Barnes - Tellan
Written in 2020 by Amir Anjomshoaa - amiranjom
Written in 2021 by Deepak Dixit - dixitdeepak
Written in 2021 by Taher Alkhateeb - pythys
Written in 2022 by Zhang Wei - hellozhangwei
Written in 2023 by Rohit Pawar - rohitpawar2811
===========================================================================
Grant of Patent License
I hereby grant to recipients of software a perpetual, worldwide,
non-exclusive, no-charge, royalty-free, irrevocable (except as stated in
this section) patent license to make, have made, use, offer to sell, sell,
import, and otherwise transfer the Work, where such license applies only to
those patent claims licensable by me that are necessarily infringed by my
Contribution(s) alone or by combination of my Contribution(s) with the
Work to which such Contribution(s) was submitted. If any entity institutes
patent litigation against me or any other entity (including a cross-claim
or counterclaim in a lawsuit) alleging that my Contribution, or the Work to
which I have contributed, constitutes direct or contributory patent
infringement, then any patent licenses granted to that entity under this
Agreement for that Contribution or Work shall terminate as of the date such
litigation is filed.
Signed by git commit adding my legal name and git username:
Written in 2010-2022 by David E. Jones - jonesde
Written in 2021-2021 by D. Michael Jones - acetousk
Written in 2014-2015 by Solomon Bessire - sbessire
Written in 2014-2015 by Jacopo Cappellato - jacopoc
Written in 2014-2015 by Yao Chunlin - chunlinyao
Written in 2015 by Dony Zulkarnaen - donniexyz
Written in 2015 by Swapnil M Mane - swapnilmmane
Written in 2015 by Jimmy Shen - shendepu
Written in 2015-2016 by Sam Hamilton - samhamilton
Written in 2015 by Leonardo Carvalho - CarvalhoLeonardo
Written in 2015 by Anton Akhiar - akhiar
Written in 2015-2023 by Jens Hardings - jenshp
Written in 2016 by Shifeng Zhang - zhangshifeng
Written in 2016 by Scott Gray - lektran
Written in 2016 by Mark Haney - mphaney
Written in 2016 by Qiushi Yan - yanqiushi
Written in 2017 by Oleg Andrieiev - oandreyev
Written in 2018 by Zhang Wei - zhangwei1979
Written in 2018 by Nirendra Singh - nirendra10695
Written in 2018-2023 by Ayman Abi Abdallah - aabiabdallah
Written in 2019 by Daniel Taylor - danieltaylor-nz
Written in 2020 by Jacob Barnes - Tellan
Written in 2020 by Amir Anjomshoaa - amiranjom
Written in 2021 by Deepak Dixit - dixitdeepak
Written in 2021 by Taher Alkhateeb - pythys
Written in 2022 by Zhang Wei - hellozhangwei
Written in 2023 by Rohit Pawar - rohitpawar2811
===========================================================================
================================================
FILE: LICENSE.md
================================================
Because of a lack of patent licensing in CC0 1.0 this software includes a
separate Grant of Patent License adapted from Apache License 2.0.
===========================================================================
Creative Commons Legal Code
CC0 1.0 Universal
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN
ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS
PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM
THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED
HEREUNDER.
Statement of Purpose
The laws of most jurisdictions throughout the world automatically confer
exclusive Copyright and Related Rights (defined below) upon the creator
and subsequent owner(s) (each and all, an "owner") of an original work of
authorship and/or a database (each, a "Work").
Certain owners wish to permanently relinquish those rights to a Work for
the purpose of contributing to a commons of creative, cultural and
scientific works ("Commons") that the public can reliably and without fear
of later claims of infringement build upon, modify, incorporate in other
works, reuse and redistribute as freely as possible in any form whatsoever
and for any purposes, including without limitation commercial purposes.
These owners may contribute to the Commons to promote the ideal of a free
culture and the further production of creative, cultural and scientific
works, or to gain reputation or greater distribution for their Work in
part through the use and efforts of others.
For these and/or other purposes and motivations, and without any
expectation of additional consideration or compensation, the person
associating CC0 with a Work (the "Affirmer"), to the extent that he or she
is an owner of Copyright and Related Rights in the Work, voluntarily
elects to apply CC0 to the Work and publicly distribute the Work under its
terms, with knowledge of his or her Copyright and Related Rights in the
Work and the meaning and intended legal effect of CC0 on those rights.
1. Copyright and Related Rights. A Work made available under CC0 may be
protected by copyright and related or neighboring rights ("Copyright and
Related Rights"). Copyright and Related Rights include, but are not
limited to, the following:
i. the right to reproduce, adapt, distribute, perform, display,
communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or performer(s);
iii. publicity and privacy rights pertaining to a person's image or
likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a Work,
subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and reuse of data
in a Work;
vi. database rights (such as those arising under Directive 96/9/EC of the
European Parliament and of the Council of 11 March 1996 on the legal
protection of databases, and under any national implementation
thereof, including any amended or successor version of such
directive); and
vii. other similar, equivalent or corresponding rights throughout the
world based on applicable law or treaty, and any national
implementations thereof.
2. Waiver. To the greatest extent permitted by, but not in contravention
of, applicable law, Affirmer hereby overtly, fully, permanently,
irrevocably and unconditionally waives, abandons, and surrenders all of
Affirmer's Copyright and Related Rights and associated claims and causes
of action, whether now known or unknown (including existing as well as
future claims and causes of action), in the Work (i) in all territories
worldwide, (ii) for the maximum duration provided by applicable law or
treaty (including future time extensions), (iii) in any current or future
medium and for any number of copies, and (iv) for any purpose whatsoever,
including without limitation commercial, advertising or promotional
purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each
member of the public at large and to the detriment of Affirmer's heirs and
successors, fully intending that such Waiver shall not be subject to
revocation, rescission, cancellation, termination, or any other legal or
equitable action to disrupt the quiet enjoyment of the Work by the public
as contemplated by Affirmer's express Statement of Purpose.
3. Public License Fallback. Should any part of the Waiver for any reason
be judged legally invalid or ineffective under applicable law, then the
Waiver shall be preserved to the maximum extent permitted taking into
account Affirmer's express Statement of Purpose. In addition, to the
extent the Waiver is so judged Affirmer hereby grants to each affected
person a royalty-free, non transferable, non sublicensable, non exclusive,
irrevocable and unconditional license to exercise Affirmer's Copyright and
Related Rights in the Work (i) in all territories worldwide, (ii) for the
maximum duration provided by applicable law or treaty (including future
time extensions), (iii) in any current or future medium and for any number
of copies, and (iv) for any purpose whatsoever, including without
limitation commercial, advertising or promotional purposes (the
"License"). The License shall be deemed effective as of the date CC0 was
applied by Affirmer to the Work. Should any part of the License for any
reason be judged legally invalid or ineffective under applicable law, such
partial invalidity or ineffectiveness shall not invalidate the remainder
of the License, and in such case Affirmer hereby affirms that he or she
will not (i) exercise any of his or her remaining Copyright and Related
Rights in the Work or (ii) assert any associated claims and causes of
action with respect to the Work, in either case contrary to Affirmer's
express Statement of Purpose.
4. Limitations and Disclaimers.
a. No trademark or patent rights held by Affirmer are waived, abandoned,
surrendered, licensed or otherwise affected by this document.
b. Affirmer offers the Work as-is and makes no representations or
warranties of any kind concerning the Work, express, implied,
statutory or otherwise, including without limitation warranties of
title, merchantability, fitness for a particular purpose, non
infringement, or the absence of latent or other defects, accuracy, or
the present or absence of errors, whether or not discoverable, all to
the greatest extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of other persons
that may apply to the Work or any use thereof, including without
limitation any person's Copyright and Related Rights in the Work.
Further, Affirmer disclaims responsibility for obtaining any necessary
consents, permissions or other rights required for any use of the
Work.
d. Affirmer understands and acknowledges that Creative Commons is not a
party to this document and has no duty or obligation with respect to
this CC0 or use of the Work.
===========================================================================
Grant of Patent License
"License" shall mean the terms and conditions for use, reproduction, and
distribution.
"Licensor" shall mean the original copyright owner or entity authorized by
the original copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other
entities that control, are controlled by, or are under common control with
that entity. For the purposes of this definition, "control" means (i) the
power, direct or indirect, to cause the direction or management of such
entity, whether by contract or otherwise, or (ii) ownership of fifty
percent (50%) or more of the outstanding shares, or (iii) beneficial
ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising
permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation source,
and configuration files.
"Object" form shall mean any form resulting from mechanical transformation
or translation of a Source form, including but not limited to compiled
object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form,
made available under the License, as indicated by a copyright notice that
is included in or attached to the work.
"Derivative Works" shall mean any work, whether in Source or Object form,
that is based on (or derived from) the Work and for which the editorial
revisions, annotations, elaborations, or other modifications represent, as
a whole, an original work of authorship. For the purposes of this License,
Derivative Works shall not include works that remain separable from, or
merely link (or bind by name) to the interfaces of, the Work and
Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original
version of the Work and any modifications or additions to that Work or
Derivative Works thereof, that is intentionally submitted to Licensor for
inclusion in the Work by the copyright owner or by an individual or Legal
Entity authorized to submit on behalf of the copyright owner. For the
purposes of this definition, "submitted" means any form of electronic,
verbal, or written communication sent to the Licensor or its
representatives, including but not limited to communication on electronic
mailing lists, source code control systems, and issue tracking systems that
are managed by, or on behalf of, the Licensor for the purpose of discussing
and improving the Work, but excluding communication that is conspicuously
marked or otherwise designated in writing by the copyright owner as "Not a
Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on
behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
Each Contributor hereby grants to You a perpetual, worldwide,
non-exclusive, no-charge, royalty-free, irrevocable (except as stated in
this section) patent license to make, have made, use, offer to sell, sell,
import, and otherwise transfer the Work, where such license applies only to
those patent claims licensable by such Contributor that are necessarily
infringed by their Contribution(s) alone or by combination of their
Contribution(s) with the Work to which such Contribution(s) was submitted.
If You institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work or a
Contribution incorporated within the Work constitutes direct or
contributory patent infringement, then any patent licenses granted to You
under this License for that Work shall terminate as of the date such
litigation is filed.
================================================
FILE: MoquiInit.properties
================================================
# No copyright or license for configuration file, details here are not
# considered a creative work.
# This file is used for the base settings when deploying moqui.war as a
# webapp in a servlet container or running the WAR file as an executable
# JAR with java -jar.
# NOTE: configure here before running "gradle build", this file is added
# to the war file.
# You can override these settings with command-line arguments like:
# -Dmoqui.runtime=runtime
# -Dmoqui.conf=conf/MoquiProductionConf.xml
# The location of the runtime directory for Moqui to use.
# If empty it will come from the "moqui.runtime" system property.
#
# The default property below assumes the application server is started in a
# directory that is a sibling to a "moqui" directory that contains a "runtime"
# directory.
moqui.runtime=../moqui/runtime
# NOTE: if there is a "runtime" directory in the war file (in the root of the
# webapp) that will be used instead of this setting to make it easier to
# include the runtime in a deployed war without knowing where it will be
# exploded in the file system.
# The Moqui Conf XML file to use for runtime settings.
# This property is relative to the runtime location.
moqui.conf=conf/MoquiProductionConf.xml
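The override precedence described above can be sketched with a tiny hypothetical helper (illustration only, not the actual MoquiStart code): a value passed with -D wins over the default baked into this file.

```java
public class MoquiPropResolution {
    /** Hypothetical sketch: a -Dmoqui.conf system property overrides the packaged default. */
    static String resolve(String systemPropValue, String packagedDefault) {
        return (systemPropValue != null && !systemPropValue.isEmpty()) ? systemPropValue : packagedDefault;
    }

    public static void main(String[] args) {
        // No -Dmoqui.conf given: fall back to the value from MoquiInit.properties
        System.out.println(resolve(System.getProperty("moqui.conf"), "conf/MoquiProductionConf.xml"));
    }
}
```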
================================================
FILE: Procfile
================================================
web: java -cp . MoquiStart port=5000 conf=conf/MoquiProductionConf.xml
================================================
FILE: Procfile.README
================================================
No memory or other JVM options are specified here so that the standard JAVA_TOOL_OPTIONS env var may be used (command-line args override JAVA_TOOL_OPTIONS)
For example: export JAVA_TOOL_OPTIONS="-Xmx1024m -Xms1024m"
Note that on Java 21, if no max heap size is specified it defaults to 1/4 of system memory
The port specified here is the default for the AWS ElasticBeanstalk Java SE image
see: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-se-procfile.html
================================================
FILE: README.md
================================================
## Welcome to Moqui Framework
[License](https://github.com/moqui/moqui-framework/blob/master/LICENSE.md)
[Build Status](https://travis-ci.org/moqui/moqui-framework)
[Releases](https://github.com/moqui/moqui-framework/releases)
[Commits](https://github.com/moqui/moqui-framework/commits/master)
[Releases](https://github.com/moqui/moqui-framework/releases)
[Release v4.0.0](https://github.com/moqui/moqui-framework/releases/tag/v4.0.0)
[Moqui Forum](https://forum.moqui.org)
[Google Group](https://groups.google.com/d/forum/moqui)
[LinkedIn Group](https://www.linkedin.com/groups/4640689)
[Gitter Chat](https://gitter.im/moqui/moqui-framework?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[Stack Overflow](http://stackoverflow.com/questions/tagged/moqui)
For information about community infrastructure for code, discussions, support, etc see the Community Guide:
For details about running and deploying Moqui see:
Note that a runtime directory is required for Moqui Framework to run, but is not included in the source repository. The
Gradle get component, load, and run tasks will automatically add the default runtime (from the moqui-runtime repository).
For information about the current and near future status of Moqui Framework
see the [ReleaseNotes.md](https://github.com/moqui/moqui-framework/blob/master/ReleaseNotes.md) file.
For an overview of features see:
Get started with Moqui development quickly using the Tutorial at:
For comprehensive documentation of Moqui Framework see the wiki based documentation on moqui.org (*running on Moqui HiveMind*):
================================================
FILE: ReleaseNotes.md
================================================
# Moqui Framework Release Notes
## Release 4.0.0 - 27 Feb 2026
Moqui Framework v4.0.0 is a major new release with extensive changes, some of
which are breaking. All users are advised to upgrade to benefit from the new
features, security fixes, library upgrades, performance improvements, and so on.
### Major Changes
#### Java Upgrade to Version 21 (Incompatible Change)
Moqui Framework now requires Java 21. This provides improved performance,
long-term support, and access to modern JVM features, while removing legacy
APIs. All custom code and components must be validated against Java 21 to ensure
compatibility.
As part of this work:
- Removed deprecated finalize() methods, no longer applicable on JDK 21.
- Made many code improvements to comply with JDK 21.
#### Groovy upgrade to version 5 (Incompatible Change)
Groovy 5, in combination with the newer JDK 21, is stricter with @CompileStatic:
illegal bytecode was being generated when accessing fields from inner classes,
and the affected code has been corrected.
Another change is that Groovysh is removed. Therefore, the terminal interface
was rewritten from scratch using a different architecture based on
`groovy.lang.GroovyShell`. This led to both Screen changes (in runtime) and
backend changes.
#### Change EntityValue API (Breaking Change)
`EntityValue.getEntityName()` is renamed to `EntityValue.resolveEntityName()` and
`EntityValue.getEntityNamePretty()` to `EntityValue.resolveEntityNamePretty()`.
Groovy 4+ changed the way property-to-method mapping happens, as
[documented](https://groovy-lang.org/style-guide.html#_getters_and_setters).
This introduced a bug when querying an entity that has a field named
`entityName`: the query returns an object of type `org.moqui.entity.EntityValue`,
and because that class has a method called `getEntityName()`, under the Groovy 4+
behavior that method is called when accessing a property named `entityName`
instead of returning the field value. Sample code:
```groovy
def someMember = ec.entity
.find('moqui.entity.view.DbViewEntityMember')
.condition(...)
.one()
someMember.entityName // BUG returns .getEntityName(), not .get('entityName')
```
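The conflict can be mimicked in plain Java with a hypothetical value class (illustration only, not the real EntityValue) that exposes both a JavaBean getter and a map-style accessor; Groovy property access resolves to the getter, so the two silently diverge:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical stand-in for an EntityValue-like class, for illustration only. */
public class FieldVsGetter {
    private final Map<String, Object> fields = new HashMap<>();
    private final String entityName; // the entity's own name

    public FieldVsGetter(String entityName) { this.entityName = entityName; }

    // What Groovy's someMember.entityName resolves to under Groovy 4+:
    public String getEntityName() { return entityName; }
    // What the caller actually wanted:
    public Object get(String fieldName) { return fields.get(fieldName); }
    public void put(String fieldName, Object value) { fields.put(fieldName, value); }

    public static void main(String[] args) {
        FieldVsGetter member = new FieldVsGetter("moqui.entity.view.DbViewEntityMember");
        member.put("entityName", "mantle.party.Party"); // a *field* that happens to be named entityName
        System.out.println(member.getEntityName());   // the entity definition's name
        System.out.println(member.get("entityName")); // the stored field value
    }
}
```

Renaming the getter to resolveEntityName() removes the collision, so property access on a field named `entityName` behaves as expected again.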
#### Upgrade to Jetty version 12.1 and EE 11
This is a major migration. It bumps Jetty to version 12.1 and the servlet
related packages (websocket, webapp, proxy) to Jakarta EE 11.
The upgrade broke the existing custom Moqui class loaders and required
significant refactoring of class loaders and webapp structure (e.g.
WebAppContext, session handling, etc.).
Impact on developers:
Any custom work for Jetty should be upgraded to new versions compatible with
Jetty 12.1 and Jakarta EE 11.
#### Upgrade all javax libraries to jakarta
All libraries, including commons-fileupload, xml.bind-api, activation, mail,
websocket, servlets (6.1), and others, have been migrated to their Jakarta
equivalents. As part of this exercise many deprecated, outdated, or unused
dependencies were removed. This change required refactoring critical Moqui
facades and core APIs to comply with the switch to Jakarta.
Any custom work for older javax libraries should be upgraded where applicable
to use the Jakarta equivalents.
#### Integration with the New Bitronix Fork (Incompatible Change)
Moqui Framework now depends on the actively maintained Bitronix fork at:
https://github.com/moqui/bitronix
The current integrated version is 4.0.0-BETA1, with stabilization ongoing.
This fork includes:
- Major modernization and cleanup
- Jakarta namespace migration
- JMS namespace migration
- Important bug fixes and stability improvements
Legacy Bitronix artifacts are no longer supported; deployments must remove old
Bitronix dependencies.
#### Migration From javax.transaction to jakarta.transaction (BREAKING CHANGE)
Moqui has migrated all transaction-related imports and internal APIs from
javax.transaction.* to jakarta.transaction.*, following changes in the new
Bitronix fork.
Impact on developers:
- Any code referencing javax.transaction.* must update imports to
jakarta.transaction.*.
- Affects transaction facade usage, user transactions, and service-layer
transaction management.
- If using the transaction API directly, expect compilation failures until
  imports are updated. This does not impact projects that depend purely on
  Moqui facades without accessing the underlying APIs.
This aligns Moqui with the Jakarta EE namespace changes and the newer Bitronix
transaction manager.
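As a rough migration aid, the import rewrite can be automated. The following is a hypothetical helper sketch (not a tool shipped with Moqui) that rewrites only the transaction package, since other javax packages such as javax.sql did not move in this step:

```java
/** Hypothetical helper sketch: rewrites javax.transaction imports to jakarta.transaction. */
public class JakartaImportMigrator {
    static String migrate(String source) {
        // Only the transaction API moved here; leave other javax packages untouched
        return source.replaceAll("\\bjavax\\.transaction\\.", "jakarta.transaction.");
    }

    public static void main(String[] args) {
        String before = "import javax.transaction.UserTransaction;\n"
                + "import javax.sql.DataSource;";
        System.out.println(migrate(before));
    }
}
```

In practice a recursive search-and-replace over the source tree achieves the same thing; the point is that the rename is mechanical for code that only imports the API.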
#### Upgrade Infrastructure
- Postgres to version 18.1
- MySQL to version 9.5
- Remove docker compose "version" obsolete key
- Upgrade opensearch to 3.4.0
- Upgrade java in docker to eclipse-temurin:21
- Switch jwilder/nginx-proxy to nginxproxy/nginx-proxy
These upgrades require careful planning when migrating to Moqui v4. It is
recommended to delete the ElasticSearch/OpenSearch data and reindex, and to
switch from ElasticSearch to OpenSearch. Also ensure an upgrade path for your
chosen database.
In newer versions of Docker Compose the "version" key is obsolete; ensure your
installed Docker is up to date so the compose files work without the "version"
setting.
#### Gradle Wrapper Updated to 9.2 (BREAKING CHANGE)
The framework now builds using Gradle 9.2, bringing:
- Faster builds
- Stricter validation and deprecation cleanup
Changes included:
- Refactored property assignments and function calls to satisfy newer Gradle immutability rules.
- Replaced deprecated exec {} blocks with Groovy execute() usage (Windows support still being refined).
- Updated and corrected dependency declarations, including replacing deprecated modules and fixing invalid version strings.
- Numerous misc. updates required by Gradle 9.x API changes.
- Unified dependencyUpdates settings
This upgrade required significant modifications to component build scripts.
Given the upgrades to Gradle, Java, and Bitronix, the following community components were updated to comply with the new requirements:
- HiveMind
- PopCommerce
- PopRestStore
- example
- mantle-braintree
- mantle-usl
- moqui-camel
- moqui-cups
- moqui-fop
- moqui-hazelcast
- moqui-image
- moqui-orientdb
- moqui-poi
- moqui-runtime
- moqui-sftp
- moqui-sso
- moqui-wikitext
- start
### New Features
- Upgrade groovy to version 5
- Upgrade to JDK21 by default
- Upgrade to Apache Shiro 2, no longer using INI factory, but rather INI environment classes
- Upgrade to Jetty 12.1 and Jakarta EE 11
- Upgrade docker infrastructure including opensearch, mysql, postgres to latest
- Upgrade all dependencies to their latest versions
- Switch from Thread.getId() to Thread.threadId() to work on both virtual and platform threads
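The last bullet can be illustrated directly; Thread.threadId() is final and defined for both platform and virtual threads, unlike the deprecated and overridable Thread.getId(). This demo requires Java 21 to run:

```java
public class ThreadIdDemo {
    public static void main(String[] args) throws Exception {
        // Thread.threadId() (since Java 19, final) replaces the deprecated,
        // overridable Thread.getId(); it is safe for both thread kinds.
        Thread platform = Thread.ofPlatform().start(() -> {});
        Thread virtual = Thread.ofVirtual().start(() -> {});
        platform.join();
        virtual.join();
        // Every thread, platform or virtual, gets a unique id
        System.out.println(platform.threadId() != virtual.threadId()); // true
    }
}
```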
## Release 3.9.9 - 25 Feb 2026
Moqui Framework 3.9.9 is a minor new feature and bug fix release, and mostly a
maintenance release preceding the Moqui Framework 4.0.0 release.
For a complete list see the commit log:
https://github.com/moqui/moqui-framework/compare/v3.0.0...v3.9.9
## Release 3.1.0 - Canceled release
## Release 3.0.0 - 31 May 2022
Moqui Framework 3.0.0 is a major new feature and bug fix release with some changes that are not backward compatible.
Java 11 is now the minimum Java version required. For development and deployment make sure Java 11 is installed
(such as openjdk-11-jdk or adoptopenjdk-11-openj9 on Linux), active (on Linux use 'sudo update-alternatives --config java'),
and that JAVA_HOME is set to the Java 11 JDK install path (for openjdk-11-jdk on Linux: /usr/lib/jvm/java-11-openjdk-amd64).
In this release the old moqui-elasticsearch component with embedded ElasticSearch is no longer supported. Instead, the new
ElasticFacade is included in the framework as a client to an external OpenSearch or ElasticSearch instance which can be
installed in runtime/opensearch or runtime/elasticsearch and automatically started/stopped in a separate process by the
MoquiStart class (executable WAR, not when WAR file dropped into Servlet container).
For search the recommended versions for this release are OpenSearch 1.3.1 (https://opensearch.org/) or ElasticSearch 7.10.2
(for ElasticSearch this is the last version released under the Apache 2.0 license).
Now that JavaScript/CSS minify and certain other issues with tools have been resolved, Gradle 7+ is supported.
This is a brief summary of the changes since the last release, for a complete list see the commit log:
https://github.com/moqui/moqui-framework/compare/v2.1.3...v3.0.0
### Non Backward Compatible Changes
- Java 11 is now required, updated from Java 8
- Updated Spock to 2.1 and with that update now using JUnit Platform and JUnit 5 (Jupiter); with this update old JUnit 4
test annotations and such are supported, but JUnit 4 TestSuite implementations need to be updated to use the new
JUnit Platform and Jupiter annotations
- Library updates have been done that conflict with ElasticSearch making it impossible to run embedded
- XMLRPC support had been partly removed years ago, is now completely removed
- CUPS4J library no longer included in moqui-framework, use the moqui-cups component to add this functionality
- Network printing services (org.moqui.impl.PrintServices) are now mostly placeholders that return error messages if used, CUPS4J
library and services that depend on it are now in the moqui-cups tool component
- H2 has non-backward compatible changes, including VALUE now being a reserved word; the Moqui Conf XML file now supports
per-database entity and field name substitution to handle this and similar future issues; the main issue this cannot
solve is with older H2 database files that have columns named VALUE, these may need to be changed to THE_VALUE using
an older version of H2 before updating (this is less common as H2 databases are not generally retained long-term)
### New Features
- Recommended Gradle version is 7+ with updates to support the latest versions of Gradle
- Updated Jetty to version 10 (which requires Java 11 or later)
- MFA support for login and update password in screens and REST API with factors including authc code by email and SMS,
TOTP code (via authenticator app), backup codes; can set a flag on UserGroup to require second factor for all users in
the group, and if any user has any additional factor enabled then a second factor will be required
- Various security updates including vulnerabilities in 3rd party libraries (including Log4j, Jackson, Shiro, Jetty),
and some in Moqui itself including XSS vulnerabilities in certain error cases and other framework generated
messages/responses based on testing with OWASP Zap and two commercial third party reviews (done by larger Moqui users)
- Optimization for startup-add-missing to get metadata for all tables and columns instead of per entity for much faster startup
when enabled; default for runtime-add-missing is now 'false' and startup-add-missing is now 'true' for all DBs including H2
- View Entity find improvements
- correlated sub-select using SQL LATERAL (mysql8, postgres, db2) or APPLY (mssql and oracle; not yet implemented)
- extend the member-entity.@sub-select attribute with a non-lateral option for where LATERAL is not wanted; lateral is used by
default as it is best for how sub-select is commonly used in view entities
- entity find SQL improvements for view entities where a member entity links to another member-entity with a function on a join field
- support entity-condition in view-entity used as a sub-select, was being ignored before
- Improvements to DataDocument generation for a DataFeed to handle very large database tables to feed to ES or elsewhere,
including chunking and excluding service parameters from the per ExecutionContext instance service call history
- DataFeed and DataDocument support for manual delete of documents and automatic delete on primary entity record delete
- Scheduled screen render to send regular reports to users by email (simple email with CSV or XLSX attachment) using
saved finds on any form-list based screen
- For entity field encryption default to PBEWithHmacSHA256AndAES_128 instead of PBEWithMD5AndDES, and add configuration
options for old field encrypt settings (algo, key, etc) to support changing settings, with a service to re-encrypt all
encrypted fields on all records, or can re-encrypt only when data is touched (as long as all old settings are retained,
the framework will attempt decrypt with each)
- Groovy Shell screen added to the Tools app (with special permission), an interactive Groovy Console for testing in
various environments and for fixing certain production issues
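For reference, the new default algorithm (PBEWithHmacSHA256AndAES_128) mentioned above is available in the JDK out of the box. The following is a minimal stand-alone round-trip sketch (illustration only, not Moqui's actual EntityFacade encryption code):

```java
import java.nio.charset.StandardCharsets;
import java.security.AlgorithmParameters;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

/** Illustrative encrypt/decrypt round trip with the JDK's built-in PBES2 cipher. */
public class PbeRoundTrip {
    static final String ALGO = "PBEWithHmacSHA256AndAES_128";

    static String roundTrip(String plain, char[] password) throws Exception {
        SecretKey key = SecretKeyFactory.getInstance(ALGO).generateSecret(new PBEKeySpec(password));
        Cipher enc = Cipher.getInstance(ALGO);
        enc.init(Cipher.ENCRYPT_MODE, key); // salt, iteration count, and IV are generated
        byte[] cipherText = enc.doFinal(plain.getBytes(StandardCharsets.UTF_8));
        AlgorithmParameters params = enc.getParameters(); // must be kept to decrypt
        Cipher dec = Cipher.getInstance(ALGO);
        dec.init(Cipher.DECRYPT_MODE, key, params);
        return new String(dec.doFinal(cipherText), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("secret field value", "change-me".toCharArray()));
    }
}
```

Note the generated AlgorithmParameters (salt, iteration count, IV) must be stored alongside the ciphertext to decrypt later; this is why a settings change requires re-encrypting, or retaining the old settings for fallback decrypt as described above.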
### Bug Fixes
- H2 embedded shutdown hook removal updated, no more Bitronix errors on shutdown from H2 already having been terminated
## Release 2.1.3 - 07 Dec 2019
Moqui Framework 2.1.3 is a patch level new feature and bug fix release.
There are only minor changes and fixes in this release. For a complete list of changes see:
https://github.com/moqui/moqui-framework/compare/v2.1.2...v2.1.3
This is the last release where the moqui-elasticsearch component for embedded ElasticSearch will be supported. It is
being replaced by the new ElasticFacade included in this release.
### New Features
- Java 11 now supported with some additional libraries (like javax.activation) included by default; some code changes
to address deprecations in the Java 11 API but more needed to resolve all for better future compatibility
(in other words expect deprecation warnings when building with Java 11)
- Built-in ElasticSearch client in the new ElasticFacade that uses pooled HTTP connections with the Moqui RestClient
for the ElasticSearch JSON REST API; this is most easily used with Groovy where you can use the inline Map and List
syntax to build what becomes the JSON body for search and other requests; after this release it will replace the old
moqui-elasticsearch component, now included in the framework because the large ES jar files are no longer required
- RestClient improvements to support an externally managed RequestFactory to maintain an HttpClient across requests
for connection pooling, managing cookies, etc
- Support for binary render modes for screen with new ScreenWidgetRender interface and screen-facade.screen-output
element in the Moqui Conf XML file; this was initially implemented to support an xlsx render mode implemented in
the new moqui-poi tool component
- Screen rendering to an XLSX file with one sheet per form-list, enabled with the form-list.@show-xlsx-button attribute;
the XLSX button will only show if the moqui-poi tool component is in place
- Support for binary rendered screen attachments to emails, and reusable emailScreenAsync transition and EmailScreenSection
to easily add a form to screens to send the screen render as an attachment to an outgoing email, rendered in the background
- WikiServices to upload and delete attachments, and delete wiki pages; improvements to clone wiki page
## Release 2.1.2 - 23 July 2019
Moqui Framework 2.1.2 is a patch level new feature and bug fix release.
There are only minor changes and fixes in this release. For a complete list of changes see:
https://github.com/moqui/moqui-framework/compare/v2.1.1...v2.1.2
### New Features
- Service include for refactoring, etc with new services.service-include element
- RestClient now supports retry on timeout for call() and 429 (velocity) return for callFuture()
- The general worker thread pool now checks for an active ExecutionContext after each run to make sure it is destroyed
- CORS preflight OPTIONS request and CORS actual request handling in MoquiServlet
- headers configured using cors-preflight and cors-actual types in webapp.response-header elements with default headers in MoquiDefaultConf.xml
- allowed origins configured with the webapp.@allow-origins attribute which defaults to the value of the 'webapp_allow_origins'
property or env var for production configuration; defaults to empty which means only same origin is allowed
- Docker and instance management monitoring and configuration option improvements, Postgres support for database instances
- Entity field currency-amount now has 4 decimal digits in the DB and currency-precise has 5 decimal digits for more currency flexibility
- Added minRetryTime to ServiceJob to avoid immediate and excessive retries
- New Gradle tasks for managing git tags
- Support for read only clone datasource configuration and use (if available) in entity finds
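The CORS allow-origins behavior above can be pictured with a hypothetical check (the comma-separated list format and '*' wildcard here are illustrative assumptions, not the documented attribute syntax); an empty configuration allows only the same origin:

```java
import java.util.Arrays;
import java.util.List;

public class CorsOriginCheck {
    /** Hypothetical sketch of an allow-origins check; empty config means same-origin only. */
    static boolean originAllowed(String requestOrigin, String serverOrigin, String allowOriginsConf) {
        if (allowOriginsConf == null || allowOriginsConf.isEmpty())
            return requestOrigin.equals(serverOrigin); // default: only same origin allowed
        List<String> allowed = Arrays.asList(allowOriginsConf.split("\\s*,\\s*"));
        return allowed.contains("*") || allowed.contains(requestOrigin);
    }

    public static void main(String[] args) {
        System.out.println(originAllowed("https://evil.example", "https://shop.example", ""));  // false
        System.out.println(originAllowed("https://app.example", "https://shop.example", "https://app.example")); // true
    }
}
```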
### Bug Fixes
- Issue with DataFeed Runnable not destroying the ExecutionContext causing errors to bleed over
- Fix double content type header in RestClient in certain scenarios
## Release 2.1.1 - 29 Nov 2018
Moqui Framework 2.1.1 is a patch level new feature and bug fix release.
While this release has new features perhaps significant enough to warrant a 2.2.0 version bump, it is mostly refinements and
improvements to existing functionality, to address design limitations, and generally to make things easier and cleaner.
There are various bug fixes and security improvements in this release. There are no known backward compatibility issues since the
last release but there are minor cases where default behavior has changed (see detailed notes).
### New Features
- Various library updates (see framework/build.gradle for details)
- Updated to Gradle 4 along with changes to gradle files that require Gradle 4.0 or later
- The gradle addRuntime task now creates version.json files for framework/runtime and for each component, shown on the System app dashboard
- New gradle gitCheckoutAll task to bulk checkout branches with option to create
- New default/example Procfile, include in moqui-plus-runtime.war
##### Web Facade and HTTP
- RestClient improvements for background requests with a Future, retry on 429 for velocity limited APIs, multipart requests, etc
- In user preferences support override by Java system property (or env var if default-property declared in Moqui Conf XML)
- Add WebFacade.getRequestBodyText() method, use to get body text more easily and now necessary as WebFacade reads the body for all
requests with a text content type instead of just application/json or text/json types as before
- Add email support for notifications with basic default template, enabled only per user for a specific NotificationTopic
- Add NotificationTopic for web (screen) critical errors
- Invalidate session before login (with attributes copy to new session) to mitigate session fixation attacks
- Add more secure defaults for Strict-Transport-Security, Content-Security-Policy, and X-Frame-Options
##### XML Screen and Form
- Support for Vue component based XML Screens using a .js file and a .vuet file that gets merged into the Vue component as the
template (template can be inline in the .js file); for an example see the DynamicExampleItems.xml screen in the example component
- XML Screen and WebFacade response headers now configurable with webapp.response-header element in Moqui Conf XML
- Add moqui-conf.screen-facade.screen and screen.subscreens-item elements that override screen.subscreens.subscreens-item elements
within a screen definition so that application root screens can be added under webroot and apps in a MoquiConf.xml file in a
component or in the active Moqui Conf XML file instead of using database records
- Add support for 'no sub-path' subscreens to extend or override screens, transitions, and resources under the parent screen by
looking first in each no sub-path subscreen for a given screen path and if not found then look under the parent screen; for
example this is used in the moqui-org component for the moqui.org web-site so that /index.html is found in the moqui-org
component and so that /Login resolves to the Login.xml screen in the moqui-org component instead of the default one under webroot
- Add screen path alias support configured with ScreenPathAlias entity records
- Now uses URLDecoder for all screen path segments to match use of URLEncoder as default for URL encoding in output
- In XML Screen transition both service-call and actions are now allowed, service-call runs first
- Changed Markdown rendering from Pegdown to flexmark-java to support CommonMark 0.28, some aspects of GitHub Flavored Markdown,
and automatic table of contents
- Add form-single.@pass-through-parameters attribute to create hidden inputs for current request parameters
- Moved validate-* attributes from XML Form field element to sub-field elements so that in form-list different validation can be
done for header, first-/second-/last-row, and default-/conditional-field; as part of this the automatic validate settings from
transition.service-call are now set on the sub-field instead of the field element
##### Service Facade
- Add seca.@id and eeca.@id attributes to specify optional IDs that can be used to override or disable SECAs and EECAs
- SystemMessage improvements for security, HTTP receive endpoint, processing/etc timeouts, etc
- Service semaphore concurrency improvements, support for semaphore-name which defaults to prior behavior of service name
##### Entity Facade
- Add eeca.set-results attribute to set results of actions in the fields for rules run before entity operation
- Add entity.relationship.key-value element for constants on join conditions
- Authorization based entity find filters are now applied after view entities are trimmed so constraints are only added for
entities actually used in the query
- EntityDataLoader now supports a create only mode (used in the improved Data Import screen in the Tools app, usable directly)
- Add mysql8 database conf for new MySQL 8 JDBC driver
### Bug Fixes
- Serious bug in MoquiAuthFilter where it did not destroy ExecutionContext leaving it in place for the next request using that
thread; also changed MoquiServlet to better protect against existing ExecutionContext for thread; also changed WebFacade init
from HTTP request to remove current user if it doesn't match user authenticated in session with Shiro, or if no user is
authenticated in session
- MNode merge methods did not properly clear node by name internal cache when adding child nodes causing new children to show up
in full child node list but not when getting first or all children by node name if they had been accessed by name before the merge
- Fix RestClient path and parameter encoding
- Fix RestClient basic authentication realm issue, now custom builds Authorization request header
- Fix issue in update#Password service with reset password when UserAccount has a resetPassword but no currentPassword
- Disable default geo IP lookup for Visit records because the freegeoip service has been discontinued
- Fix DataFeed trigger false positives for PK fields on related entities included in DataDocument definitions
- Fix transaction response type screen-last in vuet/vapps mode, history wasn't being maintained server side
## Release 2.1.0 - 22 Oct 2017
Moqui Framework 2.1.0 is a minor new feature and bug fix release.
Most of the effort in the Moqui Ecosystem since the last release has been on the business artifact and application levels. Most of
the framework changes have been for improved user interfaces but there have also been various lower level refinements and
enhancements.
This release has a few bug fixes from the 2.0.0 release and has new features like DbResource and WikiPage version management,
a simple tool for ETL, DataDocument based dynamic view entities, and various XML Screen and Form widget options and usability
improvements. This release was originally planned to be a patch level release primarily for bug fixes, but very soon after the 2.0.0
release work started on the Vue based client rendering (SPA) functionality and various other new features that, due to business deals,
progressed quickly.
The default moqui-runtime now has support for hybrid static/dynamic XML Screen rendering based on Vue JS. There are various changes
for better server side handling but most changes are in moqui-runtime. See the moqui-runtime release notes for more details.
Some of these changes may be useful for other client rendering purposes, i.e. for other client-side tools and frameworks.
### Non Backward Compatible Changes
- New compile dependency on Log4J2 and not just SLF4J
- DataDocument JSON generation no longer automatically adds all primary key fields of the primary entity to allow for aggregation
by function in DataDocument based queries (where DataDocument is used to create a dynamic view entity); for ElasticSearch indexing
a unique ID is required so all primary key fields of the primary entity should be defined
- The DataDocumentField, DataDocumentCondition, and DataDocumentLink entities now have an artificial/sequenced secondary key instead
of using another field (fieldPath, fieldNameAlias, label); existing tables may work with some things but reloading seed data will
fail if you have any DataDocument records in place; these are typically seed data records so the easiest way to update/migrate
is to drop the tables for DataDocumentField/Link/Condition entities and then reload seed data as normal for a code update
- If using moqui-elasticsearch the index approach has changed to one index per DataDocument to prep for ES6 and improve the
performance and index types by field name; to update an existing instance it is best to start with an empty ES instance or at
least delete old indexes and re-index based on data feeds
- The default Dockerfile now runs the web server on port 80 instead of 8080 within the container
### New Features
- Various library updates
- SLF4J MDC now used to track moqui_userId and moqui_visitorId for logging
- New ExecutionContextFactory.registerLogEventSubscriber() method to register for Log4J2 LogEvent processing, initially used in the
moqui-elasticsearch component to send log messages to ElasticSearch for use in the new LogViewer screen in the System app
- Improved Docker Compose samples with HTTPS and PostgreSQL, new file for Kibana behind transparent proxy servlet in Moqui
- Added MoquiAuthFilter that can be used to require authorization and specified permission for arbitrary paths such as servlets;
this is used along with the Jetty ProxyServlet$Transparent to provide secure access to server-only accessible tools like
ElasticSearch (on /elastic) and Kibana (on /kibana) in the moqui-elasticsearch component
- Multi service calls now pass results from previous calls to subsequent calls if parameter names match, and return results
- Service jobs may now have a lastRunTime parameter passed by the job scheduler; lastRunTime on lock and passed to service is now
the last run time without an error
- view-entity now supports member-entity with entity-condition and no key-map for more flexible join expressions
- TransactionCache now handles more situations like using EntityListIterator.next() calls and not just getCompleteList(), and
deletes through the tx cache are more cleanly handled for records created through the tx cache
- ResourceReference support for versions in supported implementations (initially DbResourceReference)
- ResourceFacade locations now support a version suffix following a hash
- Improved wiki services to track version along with underlying ResourceReference
- New SimpleEtl class plus support for extract and load through EntityFacade
- Various improvements in send#EmailTemplate, email view tracking with transparent pixel image
- Improvements for form-list aggregations and show-total now supports avg, count, min, max, first, and last in addition to sum
- Improved SQLException handling with more useful messages and error codes from database
- Added view-entity.member-relationship element as a simpler alternative to member-entity using existing relationships
- DataDocumentField now has a functionName attribute for functions on fields in a DataDocument based query
- Any DataDocument can now be treated as an entity using the name pattern DataDocument.${dataDocumentId}
- Sub-select (sub-query) is now supported for view-entity by a simple flag on member-entity (or member-relationship); this changes
the query structure so the member entity is joined in a select clause with any conditions for fields on that member entity put
in its where clause instead of the where clause for the top-level select; any fields selected are selected in the sub-select as
are any fields used for the join ON conditions; the first example of this is the InvoicePaymentApplicationSummary view-entity in
mantle-usl which also uses alias.@function and alias.complex-alias to use concat_ws for combined name aliases
- Sub-select also now supported for view-entity members of other view entities; this provides much more flexibility for functions
and complex-aliases in the sub-select queries; there are also examples of this in mantle-usl
- Now uses Jackson Databind for JSON serialization and deserialization; date/time values are in millis since epoch
### Bug Fixes
- Improved exception (throwable) handling for service jobs, now handled like other errors and don't break the scheduler
- Fixed field.@hide attribute not working with runtime conditions, now evaluated each time a form-list is rendered
- Fixed long-standing issue with distinct counts and limited selected fields, now uses a distinct sub-select under a count select
- Fixed long-standing issue where view-entity aliased fields were not decrypted
- Fixed issue with XML entity data loading using sub-elements for related entities and under those sub-elements for field data
- Fixed regression in EntityFind where cache was used even if forUpdate was set
- Fixed concurrency issue with screen history (symptom was NPE on iterator.next() call)
## Release 2.0.0 - 24 Nov 2016
Moqui Framework 2.0.0 is a major new feature and bug fix release, with various non backward compatible API and other changes.
This is the first release since 1.0.0 with significant, non backward compatible changes to the framework API. Various deprecated
methods have been removed. The Cache Facade now uses the standard javax.cache interfaces and the Service Facade now uses standard
java.util.concurrent interfaces for async and scheduled services. Ehcache and Quartz Scheduler have been replaced by direct,
efficient interface implementations.
This release includes significant improvements in configuration and, with the new ToolFactory functionality, is more modular with
more internals exposed through interfaces and extendable through components. Larger and less universally used tools are now in
separate components, including Apache Camel, Apache FOP, ElasticSearch, JBoss KIE and Drools, and OrientDB.
Multi-server instances are far better supported by using Hazelcast for distributed entity cache invalidation, notifications,
caching, background service execution, and for web session replication. The moqui-hazelcast component is pre-configured to enable
all of this functionality in its MoquiConf.xml file. To use it, add the component and add a hazelcast.xml file to the classpath with
settings for your cluster (auto-discovery details, etc).
Moqui now scales up better with performance improvements, concurrency fixes, and Hazelcast support (through interfaces, so other
distributed system libraries like Apache Ignite could also be used). Moqui also now scales down better with improved memory
efficiency, and with more modular tools much smaller runtime footprints are possible.
The multi-tenant functionality has been removed and replaced with the multi-instance approach. There is now a Dockerfile included
with the recommended approach to run Moqui in Docker containers and Docker Compose files for various scenarios including an
automatic reverse proxy using nginx-proxy. There are now service interfaces and screens in the System application for managing
multiple Moqui instances from a master instance. Instances with their own database can be automatically provisioned using
configurable services, with initial support for Docker containers and MySQL databases. Provisioning services will be added over time
to support other instance hosts and databases, and you can write your own for whatever infrastructure you prefer to use.
To support WebSocket and a more recent Servlet API the embedded servlet container is now Jetty 9 instead of Winstone. When running
behind a proxy such as nginx or httpd, running in the embedded mode (executable WAR file) is now adequate for production use.
If you are upgrading from an earlier version of Moqui Framework please read all notes about Non Backward Compatible Changes. Code,
configuration, and database meta data changes may be necessary depending on which features of the framework you are using.
In this version Moqui Framework starts and runs faster, uses less memory, is more flexible, configuration is easier, and there are
new and better ways to deploy and manage multiple instances. A decent machine ($1800 USD Linux workstation, i7-6800K 6 core CPU)
generated around 350 screens per second with an average response time under 200ms. This was running Moqui and MySQL on the same
machine with a JMeter script running on a separate machine doing a 23 step order to ship/bill process that included 2 reports
(one MySQL based, one ElasticSearch based) and all the GL posting, etc. The load simulated entering and shipping (by internal users)
around 1000 orders/minute which would support thousands of concurrent internal or ecommerce users. On larger server hardware and
with some lower level tuning (this was on stock/default Linux, Java 8, and MySQL 5.7 settings) a single machine could handle
significantly more traffic.
With the latest framework code and the new Hazelcast plugin Moqui supports high performance clusters to handle massive loads. The
most significant limit is now database performance as we need a transactional SQL database for this sort of business process
(with locking on inventory reservations and issuances, GL posting, etc as currently implemented in Mantle USL).
Enjoy!
### Non Backward Compatible Changes
- Java JDK 8 now required (Java 7 no longer supported)
- Now requires Servlet Container supporting the Servlet 3.1 specification
- No longer using Winstone embedded web server, now using Jetty 9
- Multi-Tenant Functionality Removed
- ExecutionContext.getTenant() and getTenantId() removed
- UserFacade.loginUser() third parameter (tenantId) removed
- CacheFacade.getCache() with second parameter for tenantId removed
- EntityFacade no longer per-tenant, getTenantId() removed
- TransactionInternal and EntityDatasourceFactory methods no longer have tenantId parameter
- Removed tenantcommon entity group and moqui.tenant entities
- Removed tenant related MoquiStart command line options
- Removed tenant related Moqui Conf XML, XML Screen, etc attributes
- Entity Definitions
- XSDs updated for these changes, though old attributes still supported
- changed entity.@package-name to entity.@package
- changed entity.@group-name to entity.@group
- changed relationship.@related-entity-name to relationship.@related
- changed key-map.@related-field-name to key-map.@related
- UserField no longer supported (UserField and UserFieldValue entities)
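The attribute renames above amount to a mechanical update of entity XML files, for example (entity and field names are illustrative):

```xml
<!-- before (old attributes still supported, but removed from the XSD): -->
<entity entity-name="Example" package-name="moqui.example" group-name="transactional">
    <relationship type="one" related-entity-name="moqui.basic.StatusItem">
        <key-map field-name="statusId" related-field-name="statusId"/>
    </relationship>
</entity>

<!-- after: -->
<entity entity-name="Example" package="moqui.example" group="transactional">
    <relationship type="one" related="moqui.basic.StatusItem">
        <key-map field-name="statusId" related="statusId"/>
    </relationship>
</entity>
```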
- XML Screen and Form
- field.@entry-name attribute replaced by field.@from attribute (more meaningful, matches attribute used on set element); the old
entry-name attribute is still supported, but removed from XSD
- Service Job Scheduling
- Quartz Scheduler has been removed, use new ServiceJob instead with more relevant options, much cleaner and more manageable
- Removed ServiceFacade.getScheduler() method
- Removed ServiceCallSchedule interface, implementation, and ServiceFacade.schedule() factory method
- Removed ServiceQuartzJob class (impl of Job interface)
- Removed EntityJobStore class (impl of JobStore interface); this was a huge and complicated class to handle the various
complexities of Quartz and was never fully working, with some remaining issues in testing
- Removed HistorySchedulerListener and HistoryTriggerListener classes
- Removed all entities in the moqui.service.scheduler and moqui.service.quartz packages
- Removed quartz.properties and quartz_data.xml configuration files
- Removed Scheduler screens from System app in tools component
- For all of these artifacts see moqui-framework commit #d42ede0 and moqui-runtime commit #6a9c61e
- Externalized Tools
- ElasticSearch (and Apache Lucene)
- libraries, classes and all related services, screens, etc are now in the moqui-elasticsearch component
- System/DataDocument screens now in moqui-elasticsearch component and added to tools/System app through SubscreensItem record
- all ElasticSearch services in org.moqui.impl.EntityServices moved to org.moqui.search.SearchServices including:
index#DataDocuments, put#DataDocumentMappings, index#DataFeedDocuments, search#DataDocuments, search#CountBySource
- Moved index#WikiSpacePages service from org.moqui.impl.WikiServices to org.moqui.search.SearchServices
- ElasticSearch dependent REST API methods moved to the 'elasticsearch' REST API in the moqui-elasticsearch component
- Apache FOP is now in the moqui-fop tool component; everything in the framework, including the now poorly named MoquiFopServlet,
uses generic interfaces, but XML-FO files will not transform to PDF/etc without this component in place
- OrientDB and Entity Facade interface implementations are now in the moqui-orientdb component, see its README.md for usage
- Apache Camel along with the CamelServiceRunner and MoquiServiceEndpoint are now in the moqui-camel component which has a
MoquiConf.xml file so no additional configuration is needed
- JBoss KIE and Drools are now in tool component moqui-kie, an optional component for mantle-usl; has MoquiConf to add ToolFactory
- Atomikos TM moved to moqui-atomikos tool component
- ExecutionContext and ExecutionContextFactory
- Removed initComponent(), destroyComponent() methods; were never well supported (runtime component init/destroy caused issues)
- Removed getCamelContext() from ExecutionContextFactory and ExecutionContext, use getTool("Camel", CamelContext.class)
- Removed getElasticSearchClient() from ExecutionContextFactory and ExecutionContext, use getTool("ElasticSearch", Client.class)
- Removed getKieContainer, getKieSession, and getStatelessKieSession methods from ExecutionContextFactory and ExecutionContext,
use getTool("KIE", KieToolFactory.class) and use the corresponding methods there
- See new feature notes under Tool Factory
- Caching
- Ehcache has been removed
- The org.moqui.context.Cache interface is replaced by javax.cache.Cache
- Configuration options for caches changed (moqui-conf.cache-list.cache)
- NotificationMessage
- NotificationMessage, NotificationMessageListener interfaces have various changes for more features and to better support
serialized messages for notification through a distributed topic
- Async Services
- Now uses more standard java.util.concurrent interfaces
- Removed ServiceCallAsync.maxRetry() - was never supported
- Removed ServiceCallAsync.persist() - was never supported well, used to simply call through Quartz Scheduler when set
- Removed persist option from XML Actions service-call.@async attribute
- Async services were never called through Quartz Scheduler (only scheduled services were)
- ServiceCallAsync.callWaiter() replaced by callFuture()
- Removed ServiceCallAsync.resultReceiver()
- Removed ServiceResultReceiver interface - use callFuture() instead
- Removed ServiceResultWaiter class - use callFuture() instead
- See related new features below
- Service parameter.subtype element removed, use the much more flexible nested parameter element
- JCR and Apache Jackrabbit
- The repository.@type, @location, and @conf-location attributes have been removed and the repository.parameter sub-element
added for use with the javax.jcr.RepositoryFactory interface
- See new configuration examples in MoquiDefaultConf.xml under the repository-list element
- OWASP ESAPI and AntiSamy
- ESAPI removed, now using simple StringEscapeUtils from commons-lang
- AntiSamy replaced by Jsoup.clean()
- Removed ServiceSemaphore entity, now using ServiceParameterSemaphore
- Deprecated methods
- These methods were deprecated (by methods with shorter names) long ago and with other API changes now removing them
- Removed getLocalizedMessage() and formatValue() from L10nFacade
- Removed renderTemplateInCurrentContext(), runScriptInCurrentContext(), evaluateCondition(), evaluateContextField(), and
evaluateStringExpand() from ResourceFacade
- Removed EntityFacade.makeFind()
- ArtifactHit and ArtifactHitBin now use same artifact type enum as ArtifactAuthz, for efficiency and consistency; configuration of
artifact-stats by sub-type no longer supported, had little value and caused performance overhead
- Removed ArtifactAuthzRecord/Cond entities and support for them; this was never all that useful and is replaced by the
ArtifactAuthzFilter and EntityFilter entities
- The ContextStack class has moved to the org.moqui.util package
- Replaced Apache HttpComponents client with jetty-client to get support for HTTP/2, cleaner API, better async support, etc
- When updating to this version recommend stopping all instances in a cluster before starting any instance with the new version
### New Features
- Now using Jetty embedded for the executable WAR instead of Winstone
- using Jetty 9 which requires Java 8
- now internally using Servlet API 3.1.0
- Many library updates, cleanup of classes found in multiple jar files (ElasticSearch JarHell checks pass; nice in general)
- Configuration
- Added default-property element to set Java System properties from the configuration file
- Added Groovy string expansion to various configuration attributes
- looks for named fields in Java System properties and environment variables
- used in default-property.@value and all xa-properties attributes
- replaces the old explicit check for ${moqui.runtime}, which was a simple replacement hack
- because these are Groovy expressions the typical dots used in property names cannot be used in these strings, use an
underscore instead of a dot, ie ${moqui_runtime} instead of ${moqui.runtime}; if a property name contains underscores and
no value is found with the literal name it replaces underscores with dots and looks again
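For example, a datasource's xa-properties URL might use this expansion as follows; the surrounding elements are abbreviated from typical Moqui Conf XML, so treat exact attribute names here as illustrative:

```xml
<datasource group-name="transactional" database-conf-name="h2">
    <inline-jdbc>
        <!-- ${moqui_runtime} is resolved from the moqui.runtime system property;
             underscores stand in for the dots in the property name -->
        <xa-properties url="jdbc:h2:${moqui_runtime}/db/h2/MoquiDEFAULT" user="sa" password="sa"/>
    </inline-jdbc>
</datasource>
```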
- Deployment and Docker
- The MoquiStart class can now run from an expanded WAR file, i.e. from a directory with the contents of a Moqui executable WAR
- On startup DataSource (database) connections are retried 5 times, every 5 seconds, for situations where init of separate
containers is triggered at the same time like with Docker Compose
- Added a MySQLConf.xml file where settings can come from Java system properties or system environment variables
- The various webapp.@http* attributes can now be set as system properties or environment variables
- Added a Dockerfile and docker-build.sh script to build a Docker image from moqui-plus-runtime.war or moqui.war and runtime
- Added sample Docker Compose files for moqui+mysql, and for moqui, mysql, and nginx-proxy for reverse proxy that supports
virtual hosts for multiple Docker containers running Moqui
- Added script to run a Docker Compose file after copying configuration and data persistence runtime directories if needed
- Multi-Instance Management
- New services (InstanceServices.xml) and screens in the System app for Moqui instance management
- This replaces the removed multi-tenant functionality
- Initially supports Docker for the instance hosting environment via Docker REST API
- Initially supports MySQL for instance databases (one DB per instance, just like in the past)
- Tool Factory
- Added org.moqui.context.ToolFactory interface used to initialize, destroy, and get instances of tools
- Added tools.tool-factory element in Moqui Conf XML file; has default tools in MoquiDefaultConf.xml and can be populated or
modified in component and/or runtime conf XML files
- Use new ExecutionContextFactory.getToolFactory(), ExecutionContextFactory.getTool(), and ExecutionContext.getTool() methods
to interact with tools
- See non backward compatible change notes for ExecutionContextFactory
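A minimal sketch of the configuration and lookup; MyToolFactory and the "MyTool" name are hypothetical, and the element/attribute details follow the notes above but may differ:

```xml
<tools>
    <!-- MyToolFactory is a hypothetical class implementing org.moqui.context.ToolFactory -->
    <tool-factory class="org.example.MyToolFactory"/>
</tools>
```

An instance would then be retrieved with something like `ec.getTool("MyTool", MyTool.class)`, where the tool name is whatever the factory reports (assumed here).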
- WebSocket Support
- Now looks for javax.websocket.server.ServerContainer in ServletContext during init, available from ECFI.getServerContainer()
- If a ServerContainer is found, adds endpoints defined in the webapp.endpoint element in the Moqui Conf XML file
- Added MoquiAbstractEndpoint, extend this when implementing an Endpoint so that Moqui objects such as ExecutionContext/Factory
are available, UserFacade initialized from handshake (HTTP upgrade) request, etc
- Added NotificationEndpoint which listens for NotificationMessage through ECFI and sends them over WebSocket to notify user
- NotificationMessage
- Notifications can now be configured to send through a topic interface for distributed topics (implemented in the
moqui-hazelcast component); this handles the scenario where a notification is generated on one server but a user is connected
(by WebSocket, etc) to another
- Various additional fields for display in the JavaScript NotificationClient including type, title and link templates, etc
- Caching
- CacheFacade now supports separate local and distributed caches both using the javax.cache interfaces
- Added new MCache class for faster local-only caches
- implements the javax.cache.Cache interface
- supports expire by create, access, update
- supports custom expire on get
- supports max entries, eviction done in separate thread
- Support for distributed caches such as Hazelcast
- New interfaces to plugin entity distributed cache invalidation through a SimpleTopic interface, supported in moqui-hazelcast
- Set many entities to cache=never, avoid overhead of cache where read/write ratio doesn't justify it or cache could cause issues
- Async Services
- ServiceCallAsync now using standard java.util.concurrent interfaces
- Use callFuture() to get a Future object instead of callWaiter()
- Can now get Runnable or Callable objects to run a service through an ExecutorService of your choice
- Services can now be called local or distributed
- Added ServiceCallAsync.distribute() method
- Added distribute option to XML Actions service-call.@async attribute
- Distributed executor is configurable, supported in moqui-hazelcast
- Distributed services allow offloading service execution to worker nodes
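The shift to java.util.concurrent can be illustrated with plain JDK classes; this sketch shows the Future-based pattern that callFuture() follows, using only standard-library types rather than Moqui's actual API:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncPattern {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // analogous to ServiceCallAsync.callFuture(): submit work, get a Future back
        Callable<String> service = () -> "result";
        Future<String> future = pool.submit(service);
        // Future.get() blocks until the result is available, replacing the old
        // ServiceResultWaiter/resultReceiver style of waiting for an async service
        System.out.println(future.get());
        pool.shutdown();
    }
}
```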
- Service Jobs
- Configure ad-hoc (explicitly executed) or scheduled jobs using the new ServiceJob and related entities
- Tracks execution in ServiceJobRun records
- Can send NotificationMessage, success or error, to configured topic
- Run service job through ServiceCallJob interface, ec.service.job()
- Replacement for Quartz Scheduler scheduled services
- Added SubEtha SMTP server which receives email messages and calls EMECA rules, an alternative to polling IMAP and POP3 servers
- Hazelcast Integration (moqui-hazelcast component)
- These features are only enabled with this tool component in place
- Added default Hazelcast web session replication config
- Hazelcast can be used for distributed entity cache, web session replication, distributed execution, and OrientDB clustering
- Implemented distributed entity cache invalidate using a Hazelcast Topic, enabled in Moqui Conf XML file with the
@distributed-cache-invalidate attribute on the entity-facade element
- XSL-FO rendering now supports a generic ToolFactory to create a org.xml.sax.ContentHandler object, with an implementation
using Apache FOP now in the moqui-fop component
- JCR and Apache Jackrabbit
- JCR support (for content:// locations in the ResourceFacade) now uses javax.jcr interfaces only, no dependencies on Jackrabbit
- JCR repository configuration now supports other JCR implementations by using RepositoryFactory parameters
- Added ADMIN_PASSWORD permission for administrative password change (in UserServices.update#Password service)
- Added UserServices.enable#UserAccount service to enable disabled account
- Added support for error screens rendered depending on type of error
- configured in the webapp.error-screen element in Moqui Conf XML file
- if error screen render fails sends original error response
- this is custom content that avoids sending an error response
- A component may now have a MoquiConf.xml file that overrides the default configuration file (MoquiDefaultConf.xml from the
classpath) but is overridden by the runtime configuration file; the MoquiConf.xml file in each component is merged into the main
conf based on the component dependency order (logged on startup)
- Added ExecutionContext.runAsync method to run a closure in a worker thread with an ExecutionContext like the current (user, etc)
- Added configuration for worker thread pool parameters, used for local async services, EC.runAsync, etc
- Transaction Facade
- The write-through transaction cache now supports a read only mode
- Added service.@no-tx-cache attribute which flushes and disables write through transaction cache for the rest of the transaction
- Added flushAndDisableTransactionCache() method to flush/disable the write through cache like service.@no-tx-cache
- Entity Facade
- In view-entity.alias.complex-alias the expression attribute is now expanded so context fields may be inserted or other Groovy
expressions evaluated using dollar-sign curly-brace (${}) syntax
- Added view-entity.alias.case element with when and else sub-elements that contain complex-alias elements; these can be used for
CASE, CASE WHEN, etc SQL expressions
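A sketch of the case element described above; the field names and the when element's condition attribute name are assumptions, not confirmed syntax:

```xml
<alias entity-alias="ITM" name="effectiveAmount">
    <case>
        <!-- the expression attribute name is assumed -->
        <when expression="quantity > 0">
            <complex-alias operator="*">
                <complex-alias-field entity-alias="ITM" field="quantity"/>
                <complex-alias-field entity-alias="ITM" field="unitAmount"/>
            </complex-alias>
        </when>
        <else>
            <complex-alias operator="+">
                <complex-alias-field entity-alias="ITM" field="unitAmount"/>
            </complex-alias>
        </else>
    </case>
</alias>
```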
- EntityFind.searchFormMap() now has a defaultParameters argument, used when no conditions added from the input fields Map
- EntityDataWriter now supports export with a entity master definition name, applied only to entities exported that have a master
definition with the given master name
- XML Screen and Form
- screen path URLs that don't exist are now by default disabled instead of throwing an exception
- form-list now supports @header-dialog to put header-field widgets in a dialog instead of in the header
- form-list now supports @select-columns to allow users to select which fields are displayed in which columns, or not displayed
- added search-form-inputs.default-parameters element whose attributes are used as defaultParameters in searchFormMap()
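For example (entity and parameter names are illustrative):

```xml
<entity-find entity-name="moqui.example.Example" list="exampleList">
    <search-form-inputs default-order-by="exampleName">
        <!-- these attributes are used as defaultParameters in searchFormMap(),
             i.e. only when the user submits no search conditions of their own -->
        <default-parameters statusId="EXST_IN_DESIGN"/>
    </search-form-inputs>
</entity-find>
```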
- ArtifactAuthzFailure records are only created when a user tries to use an artifact, not when simply checking to see if use is
permitted (such as in menus, links, etc)
- significant macro cleanups and improvements
- csv render macros now improved to support more screen elements, more intelligently handle links (only include anchor/text), etc
- text render macros now use fixed width output (number of characters) along with new field attributes to specify print settings
- added field.@aggregate attribute for use in form-list with options to aggregate field values across multiple results or
display fields in a sub-list under a row with the common fields for the group of rows
- added form-single.@owner-form attribute to skip HTML form element and add the HTML form attribute to fields so they are owned
by a different form elsewhere in the web page
- The /status path is now a transition instead of a screen and returns JSON with more server status information
- XML Actions now statically import all the old StupidUtilities methods so the 'StupidUtilities.' prefix is no longer needed and should not be used
- StupidUtilities and StupidJavaUtilities reorganized into the new ObjectUtilities, CollectionUtilities, and StringUtilities
classes in the moqui.util package (in the moqui-util project)
### Bug Fixes
- Fixed issues with clean shutdown running with the embedded Servlet container and with gradle test
- Fixed issue with REST and other requests using various HTTP request methods that were not handled, MoquiServlet now uses the
HttpServlet.service() method instead of the various do*() methods
- Fixed issue with REST and other JSON request body parameters where single entry lists were unwrapped to just the entry
- Fixed NPE in EntityFind.oneMaster() when the master value isn't found, returns null with no error; fixes moqui-runtime issue #18
- Fixed ElFinder rm (moqui-runtime GitHub issue #23), response for upload
- Screen sub-content directories treated as not found so directory entries not listed (GitHub moqui-framework issue #47)
- In entity cache auto clear for list of view-entity fixed mapping of member entity fields to view entity alias, and partial match
when only some view entity fields are on a member entity
- Cache clear fix for view-entity list cache, fixes adding a permission on the fly
- Fixed issue with Entity/DataEdit screens in the Tools application where the parameter and form field name 'entityName' conflicted
with certain entities that have a field named entityName
- Concurrency Issues
- Fixed concurrent update errors in EntityCache RA (reverse association) using Collections.synchronizedList()
- Fixed per-entity DataFeed info rebuild to avoid multiple runs and rebuild before adding to cache in use to avoid partial data
- Fixed attribute and child node wrapper caching in FtlNodeWrapper where in certain cases a false null would be returned
## Release 1.6.2 - 26 Mar 2016
Moqui Framework 1.6.2 is a minor new feature and bug fix release.
This release is all about performance improvements, bug fixes, library
updates and cleanups. There are a number of minor new features like better
multi-tenant handling (and security), optionally loading data on startup if
the DB is empty, more flexible handling of the runtime Moqui Conf XML location,
improved database support and transaction management, and so on.
### Non Backward Compatible Changes
- Entity field types are somewhat more strict for database operations; this
is partly for performance reasons and partly to avoid database errors
that happen only on certain databases (ie some allow passing a String for
a Timestamp, others don't; now you have to use a Timestamp or other date
object); use EntityValue.setString or similar methods to do data
conversions higher up
- Removed the TenantCurrency, TenantLocale, TenantTimeZone, and
TenantCountry entities; they aren't generally used and better not to have
business settings in these restricted technical config entities
### New Features
- Many performance improvements based on profiling; cached entities finds
around 6x faster, non cached around 3x; screen rendering also faster
- Added JDBC Connection stash by tenant, entity group, and transaction,
can be disabled with transaction-facade.@use-connection-stash=false in
the Moqui Conf XML file
- Many code cleanups and more CompileStatic with XML handling using new
MNode class instead of Groovy Node; UserFacadeImpl and
TransactionFacadeImpl much cleaner with internal classes for state
- Added tools.@empty-db-load attribute with data file types to load on
startup (through webapp ContextListener init only) if the database is
empty (no records for moqui.basic.Enumeration)
- If the moqui.conf property (system property, command line, or in
MoquiInit.properties) starts with a forward slash ('/') it is now
considered an absolute path instead of relative to the runtime directory
allowing a conf file outside the runtime directory (an alternative
to using ../)
- UserAccount.userId and various other ID fields changed from id-long to id
as userId is only an internal/sequenced ID now, and for various others
the 40 char length changed years ago is more than adequate; existing
columns can be updated for the shorter length, but don't have to be
- Changes to run tests without example component in place (now a component
separate from moqui-runtime), using the moqui.test and other entities
- Added run-jackrabbit option to run Apache Jackrabbit locally when Moqui
starts and stop it when Moqui stops, with conf/etc in runtime/jackrabbit
- Added SubscreensDefault entity and supporting code to override default
subscreens by tenant and/or condition with database records
- Now using the VERSION_2_3_23 version for FreeMarker instead of a
previous release compatibility version
- Added methods to L10nFacade that accept a Locale when something other
than the current user's locale is needed
- Added TransactionFacade runUseOrBegin() and runRequireNew() methods to
run code (in a Groovy Closure) in a transaction
- ArtifactHit/Bin persistence now done in a worker thread instead of an async
service; uses new eci.runInWorkerThread() method, which may be added to the
ExecutionContext interface in the future
- Added XML Form text-line.depends-on element so autocomplete fields can
get data on the client from other form fields and clear on change
- Improved encode/decode handling for URL path segments and parameters
- Service parameters with allow-html=safe are now accepted even with
filtered elements and attributes, non-error messages are generated and
the clean HTML from AntiSamy is used
- Now using PegDown for Markdown processing instead of Markdown4J
- Multi Tenant
- Entity find and CrUD operations for entities in the tenantcommon group
are restricted to the DEFAULT instance, protects REST API and so on
regardless of admin permissions a tenant admin might assign
- Added tenants allowed on SubscreensItem entity and subscreens-item
element, makes more sense to filter apps by tenant than in screen
- Improvements to tenant provisioning services, new MySQL provisioning,
and enable/disable tenant services along with enable check on switch
- Added ALL_TENANTS option for scheduled services, set on system
maintenance services in quartz_data.xml by default; runs the service
for each known tenant (by moqui.tenant.Tenant records)
- Entity Facade
- DB meta data (create tables, etc) and primary sequenced ID queries now
use a separate thread to run in a different transaction instead of
suspend/resume as some databases have issues with that, especially
nested which happens when service and framework code suspends
- Service Facade
- Added separateThread option to sync service call as an alternative to
requireNewTransaction which does a suspend/resume, runs service in a
separate thread and waits for the service to complete
- Added service.@semaphore-parameter attribute which creates a distinct
semaphore per value of that parameter
- Services called with a ServiceResultWaiter now get messages passed
through from the service job in the current MessageFacade (through
the MessageFacadeException), better handling for other Throwable
- Async service calls now run through lighter weight worker thread pool
if persist not set (if persist set still through Quartz Scheduler)
- Dynamic (SPA) browser features
- Added screen element, used when rendering a screen, to support macros at
the screen level, such as code for components and services in Angular 2
- Added support for render mode extension (like .html, .js, etc) to
last screen name in screen path (or URL), uses the specified
render-mode and doesn't try to render additional subscreens
- Added automatic actions.json transition for all screens, runs actions
and returns results as JSON for use in client-side template rendering
- Added support for .json extension to transitions, will run the
transition and if the response goes to another screen returns path to
that screen in a list and parameters for it, along with
messages/errors/etc for client side routing between screens
### Bug Fixes
- DB operations for sequenced IDs, service semaphores, and DB meta data are
now run in a separate thread instead of tx suspend/resume as some
databases have issues with suspend/resume, especially multiple
outstanding suspended transactions
- Fixed issue with conditional default subscreen URL caching
- Internal login from login/api key and async/scheduled services now checks
for disabled accounts, expired passwords, etc just like normal login
- Fixed issue with entity lists in TransactionCache, were not cloned so
new/updated records changed lists that calling code might use
- Fixed issue with cached entity lists not getting cleared when a record is
updated that wasn't in a list already in the cache but that matches its
condition
- Fixed issue with cached view-entity lists not getting cleared on new or
updated records; fixes issues with new authz, tarpits and much more not
applied immediately
- Fixed issue with cached view-entity one results not getting cleared when
a member entity is updated (was never implemented)
- Entities in the tenantcommon group no longer available for find and CrUD
operations outside the DEFAULT instance (protect tenant data)
- Fixed issue with find one when using a Map as a condition that may
contain non-PK fields and having an artifact authz filter applied, was
getting non-PK fields and constraining query when it shouldn't
(inconsistent with previous behavior)
- Fixed ElasticSearch automatic mappings where sub-object mappings always
had just the first property
- Fixed issues with Entity DataFeed where cached DataDocument mappings per
entity were not consistent and no feed was done for creates
- Fixed safe HTML service parameters (allow-html=safe); there was an issue
loading antisamy-esapi.xml through ESAPI so now using AntiSamy directly
- Fixed issues with DbResource reference move and other operations
- Fixed issues with ResourceReference operations and wiki page updates
## Release 1.6.1 - 24 Jan 2016
Moqui Framework 1.6.1 is a minor new feature and bug fix release.
This is the first release after the repository reorganization in Moqui
Ecosystem. The runtime directory is now in a separate repository. The
framework build now gets JAR files from Bintray JCenter instead of having
them in the framework/lib directory. Overall the result is a small
foundation with additional libraries, components, etc added as needed using
Gradle tasks.
### Build Changes
- Gradle tasks to help handle runtime directory in a separate repository
from Moqui Framework
- Added component management features as Gradle tasks
- Available components are configured in addons.xml
- The repositories components come from are configured in addons.xml
- Get component from current or release archive (getCurrent, getRelease)
- Get component from git repositories (getGit)
- When getting a component, automatically gets all components it depends
on (must be configured in addons.xml so it knows where to get them)
- Do a git pull for moqui, runtime, and all components
- Most JAR files removed, framework build now uses Bintray JCenter
- JAR files are downloaded as needed on build
- For convenience in IDEs, copy JAR files to the framework/dependencies
directory using: gradle framework:copyDependencies; note that this is not
necessary in IntelliJ IDEA (it imports dependencies when creating a new
project based on the Gradle files; use the refresh button in the Gradle
tool window to update after updating moqui)
- If your component builds source or runs Spock tests, changes will be
needed; see the runtime/base-component/example/build.gradle file
### New Features
- The makeCondition(Map) methods now support _comp entry for comparison
operator, _join entry for join operator, and _list entry for a list of
conditions that will be combined with other fields/values in the Map
- In FieldValueCondition, if the value is a collection and the operator is
EQUALS it is changed to IN, or if NOT_EQUAL then to NOT_IN
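As a sketch of what a condition Map using these special entries might look like (plain Java, with hypothetical field names and operator values; only the `_comp`, `_join`, and `_list` entry names come from the notes above):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class ConditionMapExample {
    public static void main(String[] args) {
        Map<String, Object> cond = new LinkedHashMap<>();
        cond.put("statusId", "OrderPlaced");  // regular field/value entry
        cond.put("_comp", "equals");          // comparison operator applied to the field entries
        cond.put("_join", "or");              // join operator combining the resulting conditions
        cond.put("_list", Arrays.asList(      // extra conditions combined with the field/value entries
                Collections.singletonMap("orderTypeId", "Sales")));
        System.out.println(cond.keySet());
    }
}
```

Such a Map would then be passed to a makeCondition(Map) method; the exact accepted operator values are defined by the framework.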
### Bug Fixes
- Fixed issue with EntityFindBase.condition() where condition breakdown
set ignore-case to true
- Fixed issue with from/thru date where conversion from String was ignored
- Fixed MySQL date-time type for milliseconds; improved example conf for XA
- If there are errors in screen actions, the error message is displayed
instead of rendering the widgets (which usually just results in more errors)
## Long Term To Do List - aka Informal Road Map
- Support local printers, scales, etc in web-based apps using https://qz.io/
- PDF, Office, etc document indexing for wiki attachments (using Apache Tika)
- Wiki page version history with full content history diff, etc; store just differences, lib for that?
- https://code.google.com/archive/p/java-diff-utils/
- compile group: 'com.googlecode.java-diff-utils', name: 'diffutils', version: '1.3.0'
- https://bitbucket.org/cowwoc/google-diff-match-patch/
- compile group: 'org.bitbucket.cowwoc', name: 'diff-match-patch', version: '1.1'
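The core of the content-diff idea above can be sketched without any library: a minimal LCS-based line diff in plain Java (an illustration only, not what java-diff-utils or diff-match-patch actually do internally):

```java
import java.util.*;

public class LineDiff {
    // Minimal LCS-based line diff: '-' for removed, '+' for added, two spaces for unchanged
    static List<String> diff(List<String> a, List<String> b) {
        int[][] lcs = new int[a.size() + 1][b.size() + 1];
        for (int i = a.size() - 1; i >= 0; i--)
            for (int j = b.size() - 1; j >= 0; j--)
                lcs[i][j] = a.get(i).equals(b.get(j))
                        ? lcs[i + 1][j + 1] + 1 : Math.max(lcs[i + 1][j], lcs[i][j + 1]);
        List<String> out = new ArrayList<>();
        int i = 0, j = 0;
        while (i < a.size() && j < b.size()) {
            if (a.get(i).equals(b.get(j))) { out.add("  " + a.get(i)); i++; j++; }
            else if (lcs[i + 1][j] >= lcs[i][j + 1]) out.add("- " + a.get(i++));
            else out.add("+ " + b.get(j++));
        }
        while (i < a.size()) out.add("- " + a.get(i++));
        while (j < b.size()) out.add("+ " + b.get(j++));
        return out;
    }
    public static void main(String[] args) {
        diff(Arrays.asList("intro", "old text", "end"), Arrays.asList("intro", "new text", "end"))
                .forEach(System.out::println);
    }
}
```

Storing only such diffs (rather than full page copies) is the storage saving the wiki version history item is after.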
- Option for transition to only mount if all response URLs for screen paths exist
- Saved form-list Finds
- Save settings for a user or group to share (i.e. associate with userId or userGroupId). Allow for any group a user is in.
- allow different aggregate/show-total/etc options in select-columns, more complex but makes sense?
- add form-list presets in xml file, like saved finds but perhaps more options? allow different aggregate settings in presets?
- form-list data prep, more self-contained
- X form-list.entity-find element support instead of form-list.@list attribute
- _ form-list.service-call
- _ also more general form-list.actions element?
- form-single.entity-find-one element support, maybe form-single.actions too
- Instance Provisioning and Management
- embedded and gradle docker client (for docker host or docker swarm)
- direct through Docker API
- https://docs.docker.com/engine/reference/commandline/dockerd/#bind-docker-to-another-host-port-or-a-unix-socket
- https://docs.docker.com/engine/security/https/
- https://docs.docker.com/engine/reference/api/docker_remote_api/
- Support incremental (add/subtract) updates in EntityValue.update() or a variation of it; deterministic DB style
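The deterministic DB-style update mentioned above amounts to letting the database compute the new value instead of read-modify-write. A sketch of the SQL shape (table and column names hypothetical):

```java
public class IncrementalUpdateSql {
    // Add/subtract update: the DB computes the new value, avoiding lost updates
    // from concurrent read-modify-write cycles
    static String incrementSql(String table, String field, String pkField) {
        return "UPDATE " + table + " SET " + field + " = " + field + " + ? WHERE " + pkField + " = ?";
    }
    public static void main(String[] args) {
        System.out.println(incrementSql("ASSET", "QUANTITY_ON_HAND", "ASSET_ID"));
    }
}
```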
- Support seek for faster pagination like jOOQ: https://blog.jooq.org/2013/10/26/faster-sql-paging-with-jooq-using-the-seek-method/
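The seek method replaces OFFSET with a filter on the last-seen sort key, so the database can use an index range scan instead of skipping rows. An in-memory sketch of the idea (the SQL in the comment is illustrative, not Moqui's generated SQL):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class SeekPagination {
    // Seek/keyset pagination, e.g. in SQL:
    //   SELECT * FROM ORDER_HEADER WHERE ORDER_ID > ? ORDER BY ORDER_ID LIMIT 20
    static List<Integer> nextPage(List<Integer> sortedIds, int lastSeen, int pageSize) {
        return sortedIds.stream()
                .filter(id -> id > lastSeen)  // seek past the last key of the previous page
                .limit(pageSize)
                .collect(Collectors.toList());
    }
    public static void main(String[] args) {
        List<Integer> ids = IntStream.rangeClosed(1, 10).boxed().collect(Collectors.toList());
        System.out.println(nextPage(ids, 4, 3));  // page of 3 after key 4
    }
}
```

Unlike OFFSET, cost does not grow with page depth, though it requires a unique (or tie-broken) sort key.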
- Improved Distributed Datasource Support
- Put all framework, mantle entities in the 4 new groups: transactional, nontransactional, configuration, analytical
- Review warnings about view-entities that have members in multiple groups (which may be in different databases)
- Test with transactional in H2 and nontransactional, configuration, analytical in OrientDB
- Known changes needed
- Check distributed foreign keys in create, update, delete (make sure records exist or don't exist in other databases)
- Add augment-member to view-entity that can be in a separate database
- Make it easier to define view-entity so that caller can treat it mostly as a normal join-based view
- Augment query results with optionally cached values from records in a separate database
- For conditions on fields from augment-member do a pre-query to get set of PKs, use them in an IN condition on
the main query (only support simple AND scenario, error otherwise); sort of like a sub-select
- How to handle order by fields on augment-member? Might require separate query and some sort of fancy sorting...
- Some sort of EntityDynamicView handling without joins possible? Maybe augment member methods?
- DataDocument support across multiple databases, doing something other than one big dynamic join...
- Possibly useful
- Consider meta-data management features such as versioning and more complete history for nontransactional and
configuration, preferably using some sort of more efficient underlying features in the datasource
(like Jackrabbit/Oak; any support for this in OrientDB? ElasticSearch keeps version number for concurrency, but no history)
- Write EntityFacade interface for ElasticSearch to use like OrientDB?
- Support persistence through EntityFacade as nested documents, ie specify that detail/etc entities be included in parent/master document
- SimpleFind interface as an alternative to EntityFind for datasources that don't support joins, etc (like OrientDB)
and maybe add support for the internal record ID that can be used for faster graph traversal, etc
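The augment-member pre-query idea above (pre-query the separate database for matching PKs, then apply them as an IN condition on the main query) can be sketched in plain Java with hypothetical data; real implementations would run two entity finds against different datasources:

```java
import java.util.*;
import java.util.stream.Collectors;

public class AugmentMemberQuery {
    public static void main(String[] args) {
        // Hypothetical: the augment member lives in a separate database, keyed by partyId
        Map<String, String> auxPartyClass = Map.of("P1", "Customer", "P2", "Supplier", "P3", "Customer");
        List<Map<String, String>> mainOrders = List.of(
                Map.of("orderId", "O1", "partyId", "P1"),
                Map.of("orderId", "O2", "partyId", "P2"),
                Map.of("orderId", "O3", "partyId", "P3"));
        // Step 1: pre-query the separate database for PKs matching the augment-member condition
        Set<String> partyIds = auxPartyClass.entrySet().stream()
                .filter(e -> "Customer".equals(e.getValue()))
                .map(Map.Entry::getKey).collect(Collectors.toSet());
        // Step 2: use those PKs in an IN condition on the main query (simple AND scenario only)
        List<String> orderIds = mainOrders.stream()
                .filter(o -> partyIds.contains(o.get("partyId")))
                .map(o -> o.get("orderId")).sorted().collect(Collectors.toList());
        System.out.println(orderIds);
    }
}
```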
- Try Caffeine JCache at https://github.com/ben-manes/caffeine
- do in moqui-caffeine tool component
- add multiple threads to SpeedTest.xml?
- WebSocket Notifications
- Increment message, event, task count labels in header?
- DataDocument add flag if new or updated
- if new increment count with JS
- Side note: DataDocument add info about what was updated somehow?
- User Notification
- Add Moqui Conf XML elements to configure NotificationMessageListener classes
- Listener to send Email with XML Screen to layout (and try out using JSON documents as nested Maps from a screen)
- where to configure the email and screen to use? use EmailTemplate/emailTemplateId, but where to specify?
- for notifications from DataFeeds can add DataFeed.emailTemplateId (or not, what about toAddresses, etc?)
- maybe have a more general way to configure details of topics, including emailTemplateId and screenLocation...
- Hazelcast based improvements
- configuration for 'microservice' deployments, partitioning services to run on particular servers in a cluster and
not others (partition groups or other partition feature?)
- can use for reliable WAN service calls like needed for EntitySync?
- ie to a remote cluster
- different from commercial only WAN replication feature
- would be nice for reliable message queue
- Quartz Scheduler
- can use Hazelcast for scheduled service execution in a cluster, perhaps something on top of, underneath, or instead of Quartz Scheduler?
- consider using Hazelcast as a Quartz JobStore, ie: https://github.com/FlavioF/quartz-scheduler-hazelcast-jobstore
- DB (or ElasticSearch?) MapStore for persisted (backed up) Hazelcast maps
- use MapStore and MapLoader interfaces
- see http://docs.hazelcast.org/docs/latest/manual/html-single/index.html#loading-and-storing-persistent-data
- https://github.com/mozilla-metrics/bagheera-elasticsearch
- older, useful only as a reference for implementing something like this in Moqui
- best to implement something using the EntityFacade for easier configuration, etc
- see JDBC, etc samples: https://github.com/hazelcast/hazelcast-code-samples/tree/master/distributed-map/mapstore/src/main/java
- Persisted Queue for Async Services, etc
- use QueueStore interface
- see http://docs.hazelcast.org/docs/latest/manual/html-single/index.html#queueing-with-persistent-datastore
- use DB?
- XML Screens
- Screen section-iterate pagination
- Screen form automatic client JS validation for more service in-parameters
for: number-range, text-length, text-letters, time-range, credit-card.@types
- Dynamic Screens (database-driven: DynamicScreen* entities)
- Entity Facade
- LiquiBase integration (entity.change-set element?)
- Add view log, like current change audit log (AuditLogView?)
- Improve entity cache auto-clear performance using ehcache search
http://ehcache.org/generated/2.9.0/html/ehc-all/#page/Ehcache_Documentation_Set%2Fto-srch_searching_a_cache.html%23
- Artifact Execution Facade
- Call ArtifactExecutionFacade.push() (to track, check authz, etc) for
other types of artifacts (if/as determined to be helpful), including:
Component, Webapp, Screen Section, Screen Form, Screen Form Field,
Template, Script, Entity Field
- For record-level authz automatically add constraints to queries if
the query follows an adequate pattern and authz requires it, or fail
authz if can't add constraint
- Tools Screens
- Auto Screen
- Editable data grid, created by form-list, for detail and assoc related entities
- Entity
- Entity model internal check (relationship, view-link.key-map, ?)
- Database meta-data check/report against entity definitions; NOTE: use LiquiBase for this
- Script Run (or groovy shell?)
- Service
- Configure and run chain of services (dynamic wizard)
- Artifact Info screens (with in/out references for all)
- Screen tree and graph browse screen
- Entity usage/reference section
- Service usage/reference section on ServiceDetail screen
- Screen to install a component (upload and register, load data from it; require special permission for this, not enabled on the demo server)
- Data Document and Feed
- API (or service?) push outstanding data changes (registration/connection, time trigger; tie to SystemMessage)
- API (or service?) receive/persist data change messages - going reverse of generation for DataDocuments... should be interesting
- Consumer System Registry
- feed transport (for each: supports confirmation?)
- WebSocket (use Notification system, based on notificationName (and userId?))
- Service to send email from DataFeed (ie receive#DataFeed implementation), use XML Screen for email content
- don't do this directly, do through NotificationMessage, ie the next item... or maybe not, too many parameters for
email from too many places related to a DataDocument, may not be flexible enough and may be quite messy
- Service (receive#DataFeed impl) to send documents as User NotificationMessages (one message per DataDocument); this
is probably the best way to tie a feed to WebSocket notifications for data updates
- Use the dataFeedId as the NotificationMessage topic
- Use this in HiveMind to send notifications of project, task, and wiki changes (maybe?)
- Integration
- OData V4 (http://www.odata.org) compliant entity auto REST API
- like current but use OData URL structure, query parameters, etc
- mount on /odata4 as alternative to existing /rest
- generate EDMX for all entities (and exported services?)
- use Apache Olingo (http://olingo.apache.org)
- see: https://templth.wordpress.com/2015/04/27/implementing-an-odata-service-with-olingo/
- also add an ElasticSearch interface? https://templth.wordpress.com/2015/04/03/handling-odata-queries-with-elasticsearch/
- Generate minimal Data Document based on changes (per TX possible, runs async so not really; from existing doc, like current ES doc)
- Update database from Data Document
- Data Document UI
- show/edit field, rel alias, condition, link
- special form for add (edit?) field with 5 drop-downs for relationships, one for field, all updated based on
master entity and previous selections
- Data Document REST interface
- get single by dataDocumentId and PK values for primary entity
- search through ElasticSearch for those with associated feed/index
- json-schema, RAML, Swagger API defs
- generic service for sending Data Document to REST (or other?) end point
- Service REST API
- allow mapping DataDocument operations as well
- Add attribute for resource/method like screen for anonymous and no authz access
- OAuth2 Support
- Simple OAuth2 for authentication only
- https://tools.ietf.org/html/draft-ietf-oauth-v2-27#section-4.4
- use current api key functionality, or expand for limiting tokens to a particular client by registered client ID
- Use Apache Oltu, see https://cwiki.apache.org/confluence/display/OLTU/OAuth+2.0+Authorization+Server
- Spec at http://tools.ietf.org/html/rfc6749
- http://oltu.apache.org/apidocs/oauth2/reference/org/apache/oltu/oauth2/as/request/package-summary.html
- http://search.maven.org/#search|ga|1|org.apache.oltu
- https://stormpath.com/blog/build-api-restify-stormpath/
- https://github.com/PROCERGS/login-cidadao/blob/master/app/Resources/doc/en/examplejava.md
- https://github.com/swagger-api/swagger-ui/issues/807
- Add authz and token transitions in rest.xml
- Support in Service REST API (and entity/master?)
- Add examples of auth and service calls using OAuth2
- Add OAuth2 details in Swagger and RAML files
- More?
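For the authentication-only case, the client credentials grant in RFC 6749 section 4.4 is just a form-encoded POST to the token endpoint with HTTP Basic client authentication. A sketch of building that request (client ID/secret values are hypothetical; this only constructs the body and header, it does not call a server):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ClientCredentialsRequest {
    public static void main(String[] args) {
        String clientId = "example-client", clientSecret = "example-secret";
        // RFC 6749 section 4.4.2: token request body for the client credentials grant
        String body = "grant_type=client_credentials&scope=api";
        // RFC 6749 section 2.3.1: client authentication via HTTP Basic
        String authHeader = "Basic " + Base64.getEncoder()
                .encodeToString((clientId + ":" + clientSecret).getBytes(StandardCharsets.UTF_8));
        System.out.println(body);
        System.out.println(authHeader);
    }
}
```

The response is a JSON document with access_token, token_type, and expires_in, which maps naturally onto the existing api key functionality mentioned above.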
- AS2 Client and Server
- use OpenAS2 (http://openas2.sourceforge.net, https://github.com/OpenAS2/OpenAs2App)?
- tie into SystemMessage for send/receive (with AS2 service for send, code to receive SystemMessage from AS2 server)
- Email verification by random code on registration and email change
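Generating such a verification code is straightforward with SecureRandom; a minimal sketch (alphabet and length are arbitrary choices, not anything specified by Moqui):

```java
import java.security.SecureRandom;

public class VerifyCode {
    // Alphabet omits easily-confused characters (I, L, O, 0, 1)
    static String randomCode(int length) {
        String chars = "ABCDEFGHJKMNPQRSTUVWXYZ23456789";
        SecureRandom rng = new SecureRandom();
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) sb.append(chars.charAt(rng.nextInt(chars.length())));
        return sb.toString();
    }
    public static void main(String[] args) {
        System.out.println(randomCode(8));
    }
}
```

The code would be stored with an expiration timestamp and compared on the verification link or form submission.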
- Login through Google, Facebook, etc
- OpenID, SAML, OAuth, ...
- https://developers.facebook.com/docs/facebook-login/login-flow-for-web/v2.0
- Workflow that manages activity flow with screens and services attached to
activities, and tasks based on them taking users to defined or automatic
screen; see BonitaSoft.com Open Source BPM for similar concept; generally
workflow without requiring implementation of an entire app once the
workflow itself is defined
================================================
FILE: SECURITY.md
================================================
# Security Policy
## Supported Versions
The primary supported version for each repository is the latest commit in the master (primary) branch.
Moqui Ecosystem projects are maintained by volunteer contributors, primarily people who use and work with the code as part of their employment or professional services. There are periodic community releases but most distributions and releases involve custom code and are managed, internally or publicly, by third parties. Community releases are checkpoint releases, not maintained release branches, and are best for evaluation rather than production use.
Moqui uses a 'continuous release' approach for managing code repositories. Aside from new (work-in-progress) and archived repositories, the master branch in each repository is considered production ready. Rather than running a centrally dictated release schedule and process, the focus is on keeping master branches in a production ready state so that users may use whatever release process and frequency they prefer.
For most use cases we recommend using code directly from the master branch in each repository. For stabilization and periodic updates (instead of continuous) we recommend using a fork for each git repository with an 'upstream' remote pointing to the Moqui Ecosystem repository for easy upstream updates.
## Reporting a Vulnerability
To report security issues that should not be disclosed publicly before they are fixed, please use the private **[moqui-board@googlegroups.com](mailto:moqui-board@googlegroups.com)** mailing list. This is set up so that anyone can send messages to it, but only members of the group can read the messages.
## Issues and Pull Requests
For more information on submitting issues and pull requests please see the [Issue and Pull Request Guide](https://moqui.org/m/docs/moqui/Issue+and+Pull+Request+Guide) on moqui.org.
================================================
FILE: addons.xml
================================================
================================================
FILE: build.gradle
================================================
/*
* This software is in the public domain under CC0 1.0 Universal plus a
* Grant of Patent License.
*
* To the extent possible under law, the author(s) have dedicated all
* copyright and related and neighboring rights to this software to the
* public domain worldwide. This software is distributed without any
* warranty.
*
* You should have received a copy of the CC0 Public Domain Dedication
* along with this software (see the LICENSE.md file). If not, see
* .
*/
plugins {
id 'com.github.ben-manes.versions' version '0.53.0'
id 'org.ajoberstar.grgit' version '5.3.3'
}
// Filters dependencyUpdates to report only stable (official) releases
// Use `./gradlew dependencyUpdates` to check which packages are upgradable in all components
dependencyUpdates.resolutionStrategy {
componentSelection { rules ->
rules.all { ComponentSelection selection ->
boolean rejected = ['alpha', 'beta', 'rc', 'cr', 'm', 'b'].any { qualifier ->
selection.candidate.version ==~ /(?i).*[.-]${qualifier}[.\d-].*/
}
if (rejected) selection.reject('Release candidate')
}
}
}
// Run headless so GradleWorkerMain does not steal focus (mostly a macOS annoyance)
allprojects {
tasks.withType(JavaForkOptions) {
jvmArgs '-Djava.awt.headless=true'
}
repositories {
mavenCentral()
}
}
import groovy.util.Node
import groovy.xml.XmlParser
import groovy.xml.XmlSlurper
import org.ajoberstar.grgit.*
defaultTasks 'build'
def openSearchVersion = '3.4.0'
def elasticSearchVersion = '7.10.2'
def tomcatHome = '../apache-tomcat'
// no longer include version in war file name: def getWarName() { 'moqui-' + childProjects.framework.version + '.war' }
def getWarName() { 'moqui.war' }
def plusRuntimeName = 'moqui-plus-runtime.war'
def execTempDir = 'execwartmp'
def moquiRuntime = 'runtime'
def moquiConfDev = 'conf/MoquiDevConf.xml'
def moquiConfProduction = 'conf/MoquiProductionConf.xml'
def allCleanTasks = getTasksByName('clean', true)
def allBuildTasks = getTasksByName('build', true)
def allTestTasks = getTasksByName('test', true)
allTestTasks.each { it.systemProperties << System.properties.subMap(getDefaultPropertyKeys()) }
// kill the build -> check -> test dependency, only run tests explicitly and not always on build
getTasksByName('check', true).each { it.dependsOn.clear() }
Set getComponentTestTasks() {
Set testTasks = new LinkedHashSet()
for (Project subProject in getSubprojects())
if (subProject.getPath().startsWith(':runtime:component:')) testTasks.addAll(subProject.getTasksByName('test', false))
return testTasks
}
def getDefaultPropertyKeys() {
def defaultProperties = []
Node confXml = new XmlParser().parse(file('framework/src/main/resources/MoquiDefaultConf.xml'))
for (Node defaultProperty in confXml.'default-property') { defaultProperties << defaultProperty.'@name' }
defaultProperties
}
// ========== clean tasks ==========
task clean(type: Delete) { delete file(warName); delete file(execTempDir); delete file('wartemp'); cleanVersionDetailFiles() }
task cleanTempDir(type: Delete) { delete file(execTempDir) }
task cleanDb { doLast {
if (!file(moquiRuntime).exists()) return
delete files(file(moquiRuntime+'/db/derby').listFiles()) - files(moquiRuntime+'/db/derby/derby.properties')
delete file(moquiRuntime+'/db/h2')
delete file(moquiRuntime+'/db/orientdb/databases')
delete fileTree(dir: moquiRuntime+'/txlog', include: '*')
cleanElasticSearch(moquiRuntime)
} }
task cleanLog(type: Delete) { delete fileTree(dir: moquiRuntime+'/log', include: '*') }
task cleanSessions(type: Delete) { delete fileTree(dir: moquiRuntime+'/sessions', include: '*') }
task cleanLoadSave(type: Delete) { delete file('SaveH2.zip'); delete file('SaveDEFAULT.zip')
delete file('SaveTransactional.zip'); delete file('SaveAnalytical.zip'); delete file('SaveOrientDb.zip')
delete file('SaveElasticSearch.zip'); delete file('SaveOpenSearch.zip') }
task cleanPlusRuntime(type: Delete) { delete file(plusRuntimeName) }
task cleanOther(type: Delete) { delete fileTree(dir: '.', includes: ['**/.nbattrs', '**/*~', '**/.#*', '**/.DS_Store', '**/*.rej', '**/*.orig']) }
task cleanAll { dependsOn clean, allCleanTasks, cleanDb, cleanLog, cleanSessions, cleanLoadSave, cleanPlusRuntime }
// ========== ElasticSearch tasks (for install in runtime/elasticsearch) ==========
def cleanElasticSearch(String moquiRuntime) {
File osDir = file(moquiRuntime + '/opensearch')
String workDir = moquiRuntime + (osDir.exists() ? '/opensearch' : '/elasticsearch')
if (file(workDir+'/bin').exists()) {
def pidFile = file(workDir+'/pid')
if (pidFile.exists()) {
String pid = pidFile.getText()
logger.lifecycle("${osDir.exists() ? 'OpenSearch' : 'ElasticSearch'} running with pid ${pid}, stopping before deleting data then restarting")
['kill', pid].execute(null, file(workDir)).waitFor()
['tail', "--pid=${pid}", '-f', '/dev/null'].execute(null, file(workDir)).waitFor()
delete file(workDir+'/data')
if (file(workDir+'/logs').exists()) delete files(file(workDir+'/logs').listFiles())
if (pidFile.exists()) delete pidFile
startSearch(moquiRuntime)
} else {
logger.lifecycle("Found ${osDir.exists() ? 'OpenSearch' : 'ElasticSearch'} in ${workDir}/bin directory but no pid, deleting data without stop/start; WARNING if ${osDir.exists() ? 'OpenSearch' : 'ElasticSearch'} is running this will cause problems!")
delete file(workDir+'/data')
if (file(workDir+'/logs').exists()) delete files(file(workDir+'/logs').listFiles())
}
} else {
delete file(workDir+'/data')
if (file(workDir+'/logs').exists()) delete files(file(workDir+'/logs').listFiles())
}
}
task downloadOpenSearch { doLast {
// NOTE: works with Linux and macOS
// TODO: Windows support...
String distType = "tar.gz"
// https://artifacts.opensearch.org/releases/core/opensearch/1.3.1/opensearch-min-1.3.1-linux-x64.tar.gz
String esUrl = "https://artifacts.opensearch.org/releases/core/opensearch/${openSearchVersion}/opensearch-min-${openSearchVersion}-linux-x64.${distType}"
String targetDirPath = moquiRuntime + '/opensearch'
String esExtraDirPath = targetDirPath + '/opensearch-' + openSearchVersion
File targetDir = file(targetDirPath)
if (targetDir.exists()) { logger.lifecycle("Found directory at ${targetDirPath}, deleting"); delete targetDir }
File zipFile = file("${moquiRuntime}/opensearch-min-${openSearchVersion}-linux-x64.${distType}")
if (!zipFile.exists()) {
logger.lifecycle("Downloading OpenSearch from ${esUrl}")
ant.get(src: esUrl, dest: zipFile)
} else {
logger.lifecycle("Found OpenSearch archive at ${zipFile.getPath()}, using that instead of downloading")
}
// the eachFile closure removes the first path from each file, moving everything up a directory, which also requires delete of the extra dirs
copy { from distType == "zip" ? zipTree(zipFile) : tarTree(zipFile); into targetDir; eachFile {
def pathList = it.getRelativePath().getSegments() as List
if (pathList[0] == ".") pathList = pathList.tail()
it.setPath(pathList.tail().join("/"))
return it
} }
// make sure there is a logs directory, OpenSearch (just like ES) has start error without it
File esLogsDir = file(targetDirPath + '/logs')
if (!esLogsDir.exists()) esLogsDir.mkdir()
File extraDir = file(esExtraDirPath)
if (extraDir.exists()) delete extraDir
delete zipFile
}}
task downloadElasticSearch { doLast {
String suffix
String distType
String osName = System.getProperty("os.name").toLowerCase()
if (osName.startsWith("windows")) {
suffix = "windows-x86_64.zip"
distType = "zip"
} else if (osName.startsWith("mac")) {
suffix = "darwin-x86_64.tar.gz"
distType = "tar.gz"
} else {
suffix = "linux-x86_64.tar.gz"
distType = "tar.gz"
}
String esUrl = "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-${elasticSearchVersion}-no-jdk-${suffix}"
String targetDirPath = moquiRuntime + '/elasticsearch'
String esExtraDirPath = targetDirPath + '/elasticsearch-' + elasticSearchVersion
File targetDir = file(targetDirPath)
if (targetDir.exists()) { logger.lifecycle("Found directory at ${targetDirPath}, deleting"); delete targetDir }
File zipFile = file("${targetDirPath}-${elasticSearchVersion}.${distType}")
if (!zipFile.exists()) {
logger.lifecycle("Downloading ElasticSearch from ${esUrl}")
ant.get(src: esUrl, dest: zipFile)
} else {
logger.lifecycle("Found ElasticSearch archive at ${zipFile.getPath()}, using that instead of downloading")
}
// the eachFile closure removes the first path from each file, moving everything up a directory, which also requires delete of the extra dirs
copy { from distType == "zip"? zipTree(zipFile) : tarTree(zipFile); into targetDir; eachFile {
def pathList = it.getRelativePath().getSegments() as List
if (pathList[0] == ".") pathList = pathList.tail()
it.setPath(pathList.tail().join("/"))
return it
} }
// make sure there is a logs directory, ES start error without it
File esLogsDir = file(targetDirPath + '/logs')
if (!esLogsDir.exists()) esLogsDir.mkdir()
File extraDir = file(esExtraDirPath)
if (extraDir.exists()) delete extraDir
delete zipFile
}}
/* startElasticSearch old approach, with ES 7.10.2 and OpenSearch never exits, gradle just sits there doing nothing (though same command in terminal does exit)
task startElasticSearch(type:Exec) {
File osDir = file(moquiRuntime + '/opensearch')
workingDir moquiRuntime + (osDir.exists() ? '/opensearch' : '/elasticsearch')
commandLine (osDir.exists() ? ['./bin/opensearch', '-d', '-p', 'pid'] : ['./bin/elasticsearch', '-d', '-p', 'pid'])
ignoreExitValue true
onlyIf { (file(moquiRuntime + '/elasticsearch/bin').exists() || file(moquiRuntime + '/opensearch/bin').exists())
&& !file(moquiRuntime + '/elasticsearch/pid').exists() && !file(moquiRuntime + '/opensearch/pid').exists() }
doFirst {
logger.lifecycle("Starting ${osDir.exists() ? 'OpenSearch' : 'ElasticSearch'} installed in runtime/${osDir.exists() ? 'opensearch' : 'elasticsearch'}")
}
}
*/
void startSearch(String moquiRuntime) {
File osDir = file(moquiRuntime + '/opensearch')
String workDir = moquiRuntime + (osDir.exists() ? '/opensearch' : '/elasticsearch')
def pidFile = file(workDir + '/pid')
def binFile = file(workDir + '/bin')
if (binFile.exists() && !pidFile.exists()) {
logger.lifecycle("Starting ${osDir.exists() ? 'OpenSearch' : 'ElasticSearch'} installed in ${workDir}")
ProcessBuilder pb = new ProcessBuilder((osDir.exists() ? './bin/opensearch' : './bin/elasticsearch'), '-d', '-p', 'pid')
pb.directory(file(workDir))
// NOTE: no-arg redirectOutput()/redirectError() are just getters; inheritIO() sets the redirects
pb.inheritIO()
logger.lifecycle("Starting process with command ${pb.command()} in ${pb.directory().path}")
try {
Process proc = pb.start()
// logger.lifecycle("ran start waiting...")
int result = proc.waitFor()
logger.lifecycle("Process finished with ${result}")
} catch (Exception e) {
logger.lifecycle("Error starting ${osDir.exists() ? 'OpenSearch' : 'ElasticSearch'}", e)
}
} else {
if (pidFile.exists()) logger.lifecycle("Not Starting ${osDir.exists() ? 'OpenSearch' : 'ElasticSearch'} installed in ${workDir}, pid file already exists")
if (!binFile.exists()) logger.lifecycle("Not Starting ${osDir.exists() ? 'OpenSearch' : 'ElasticSearch'}, no ${workDir}/bin directory found")
}
}
task startElasticSearch { doLast {
startSearch(moquiRuntime)
} }
void stopSearch(String moquiRuntime) {
File osDir = file(moquiRuntime + '/opensearch')
String workDir = moquiRuntime + (osDir.exists() ? '/opensearch' : '/elasticsearch')
def pidFile = file(workDir + '/pid')
def binFile = file(workDir + '/bin')
if (pidFile.exists() && binFile.exists()) {
String pid = pidFile.getText()
logger.lifecycle("Stopping ${osDir.exists() ? 'OpenSearch' : 'ElasticSearch'} installed in ${workDir} with pid ${pid}")
["kill", pid].execute(null, file(workDir)).waitFor()
if (pidFile.exists()) delete pidFile
} else {
if (!pidFile.exists()) logger.lifecycle("Not Stopping ${osDir.exists() ? 'OpenSearch' : 'ElasticSearch'} installed in ${workDir}, no pid file found")
if (!binFile.exists()) logger.lifecycle("Not Stopping ${osDir.exists() ? 'OpenSearch' : 'ElasticSearch'}, no ${workDir}/bin directory found")
}
}
task stopElasticSearch { doLast {
stopSearch(moquiRuntime)
} }
// ========== JDBC driver download tasks ==========
task getPostgresJdbc {
description = "Download the latest PostgreSQL JDBC driver to runtime/lib"
dependsOn 'getRuntime'
doLast {
def libDir = file(moquiRuntime + '/lib')
if (!libDir.exists()) libDir.mkdirs()
// Remove existing Postgres JAR files
fileTree(dir: libDir, include: 'postgres*.jar').each { it.delete() }
// Get the latest version from Maven repository
def metadataUrl = 'https://repo1.maven.org/maven2/org/postgresql/postgresql/maven-metadata.xml'
def metadataFile = file("${buildDir}/postgresql-maven-metadata.xml")
ant.get(src: metadataUrl, dest: metadataFile)
def metadata = new XmlSlurper().parse(metadataFile)
def latestVersion = metadata.versioning.latest.text()
metadataFile.delete()
// Download the latest version
def downloadUrl = "https://repo1.maven.org/maven2/org/postgresql/postgresql/${latestVersion}/postgresql-${latestVersion}.jar"
def jarFile = file("${libDir}/postgresql-${latestVersion}.jar")
logger.lifecycle("Downloading PostgreSQL JDBC driver ${latestVersion} from ${downloadUrl}")
ant.get(src: downloadUrl, dest: jarFile)
logger.lifecycle("Downloaded PostgreSQL JDBC driver to ${jarFile}")
}
}
task getMySqlJdbc {
description = "Download the latest MySQL JDBC driver to runtime/lib"
dependsOn 'getRuntime'
doLast {
def libDir = file(moquiRuntime + '/lib')
if (!libDir.exists()) libDir.mkdirs()
// Remove existing MySQL connector JAR files
fileTree(dir: libDir, include: 'mysql-connector*.jar').each { it.delete() }
// Get the latest version from Maven repository
def metadataUrl = 'https://repo1.maven.org/maven2/com/mysql/mysql-connector-j/maven-metadata.xml'
def metadataFile = file("${buildDir}/mysql-connector-j-maven-metadata.xml")
ant.get(src: metadataUrl, dest: metadataFile)
def metadata = new XmlSlurper().parse(metadataFile)
def latestVersion = metadata.versioning.latest.text()
metadataFile.delete()
// Download the latest version
def downloadUrl = "https://repo1.maven.org/maven2/com/mysql/mysql-connector-j/${latestVersion}/mysql-connector-j-${latestVersion}.jar"
def jarFile = file("${libDir}/mysql-connector-j-${latestVersion}.jar")
logger.lifecycle("Downloading MySQL JDBC driver ${latestVersion} from ${downloadUrl}")
ant.get(src: downloadUrl, dest: jarFile)
logger.lifecycle("Downloaded MySQL JDBC driver to ${jarFile}")
}
}
// ========== development tasks ==========
task setupIntellij {
description = "Adds all XML catalog items to intellij to enable autocomplete"
doLast {
def ideaDir = "${rootDir}/.idea"
def parser = new XmlSlurper()
parser.setFeature("http://apache.org/xml/features/disallow-doctype-decl", false)
parser.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false)
def catalogEntries = parser.parse(file("${rootDir}/framework/xsd/framework-catalog.xml"))
.system
.list()
.stream()
.map { [url: it.@systemId, location: "\$PROJECT_DIR\$/framework/xsd/${it.@uri}"] }
.collect(java.util.stream.Collectors.toList())
mkdir ideaDir
def rawXml
def miscFile = file("${ideaDir}/misc.xml")
if (!miscFile.exists()) {
def builder = new groovy.xml.StreamingMarkupBuilder()
builder.encoding = 'UTF-8'
rawXml = builder.bind {
project(version: '4') {
component(name: 'ExternalStorageConfigurationManager', enabled: true)
component(name: 'ProjectResources') {
catalogEntries.each { resource(url: it.url, location: it.location) }
}
}
}
} else {
def projectNode = parser.parse(miscFile)
def resourcesNode = projectNode.children().find { it.@name == 'ProjectResources' }
if (resourcesNode.size() == 0) {
projectNode.appendNode {
component(name: 'ProjectResources') {
catalogEntries.each { resource(url: it.url, location: it.location) }
}
}
} else {
catalogEntries.each { cat ->
def existingEntry = resourcesNode.children().find { it.@url == cat.url }
if (existingEntry.size() > 0) {
existingEntry.replaceNode { resource(url: cat.url, location: cat.location) }
} else {
resourcesNode.appendNode { resource(url: cat.url, location: cat.location) }
}
}
}
rawXml = projectNode
}
def misc = groovy.xml.XmlUtil.serialize(rawXml)
miscFile.write(misc)
}
}
task setupVscode {
description = "Configures VS Code settings with runtime directory exclusions"
doLast {
def settingsFile = file("${rootDir}/.vscode/settings.json")
mkdir settingsFile.parentFile
def settings = (settingsFile.exists() && settingsFile.length() > 0) ?
new groovy.json.JsonSlurper().parseText(settingsFile.text) : [:]
// Disable gitignore for search (so we can manually exclude build/runtime dirs)
settings['search.useIgnoreFiles'] = false
// Add search.exclude if missing
if (!settings['search.exclude']) settings['search.exclude'] = [:]
// Add exclusion patterns
settings['search.exclude']['**/build'] = true
settings['search.exclude']['runtime/{log,sessions,txlog,db,elasticsearch,opensearch}'] = true
// Write formatted JSON
settingsFile.text = groovy.json.JsonOutput.prettyPrint(groovy.json.JsonOutput.toJson(settings))
logger.lifecycle("VS Code settings updated at ${settingsFile}")
}
}
// ========== test task ==========
// NOTE1: to run startElasticSearch before the first test task add it as a dependency to all test tasks
// NOTE2: to run stopElasticSearch after the last test task make all test tasks finalizedBy stopElasticSearch
getTasksByName('test', true).each {
if (it.path != ':test') {
// logger.lifecycle("Adding dependencies for test task ${it.getPath()}")
it.dependsOn(startElasticSearch)
it.finalizedBy(stopElasticSearch)
}
}
// ========== check/update tasks ==========
task getRuntime {
description = "If the runtime directory does not exist get it using settings in myaddons.xml or addons.xml; also check default components in myaddons.xml (addons.@default) and download any missing"
doLast { checkRuntimeDirAndDefaults(project.hasProperty('locationType') ? locationType : null) }
}
task checkRuntime { doLast {
if (!file('runtime').exists()) throw new GradleException("Required 'runtime' directory not found. Use 'gradle getRuntime' or 'gradle getComponent' or manually clone the moqui-runtime repository. This must be done in a separate Gradle run before a build so Gradle can find and run build tasks.")
} }
task gitPullAll {
description = "Do a git pull to update moqui, runtime, and each installed component (for each where a .git directory is found)"
doLast {
// framework and runtime
if (file(".git").exists()) { doGitPullWithStatus(file('.').path) }
if (file("runtime/.git").exists()) { doGitPullWithStatus(file('runtime').path) }
// all directories under runtime/component
for (File compDir in file('runtime/component').listFiles().findAll { it.isDirectory() && it.listFiles().find { it.name == '.git' } }) {
doGitPullWithStatus(compDir.path)
}
}
}
def doGitPullWithStatus(def gitDir) {
try {
def curGrgit = Grgit.open(dir: gitDir)
logger.lifecycle("\nPulling ${gitDir} (branch:${curGrgit.branch.current()?.name}, tracking:${curGrgit.branch.current()?.trackingBranch?.name})")
def beforeHead = curGrgit.head()
curGrgit.pull()
def afterHead = curGrgit.head()
if (beforeHead == afterHead) {
logger.lifecycle("Already up-to-date.")
} else {
List commits = curGrgit.log { range(beforeHead, afterHead) }
for (Commit commit in commits) logger.lifecycle("- ${commit.getAbbreviatedId(7)} by ${commit.committer?.name}: ${commit.shortMessage}")
}
} catch (Throwable t) {
logger.error(t.message)
}
}
task gitCheckoutAll {
description = "Do a git checkout on moqui, runtime, and each installed component (for each where a .git directory is found); use -Pbranch= (required) to specify a branch, use -Pcreate=true to create branches with the given name"
doLast {
if (!project.hasProperty('branch')) throw new InvalidUserDataException("No branch property specified (use -Pbranch=...)")
String curBranch = branch
String curTag = (project.hasProperty('tag') ? tag : null) ?: curBranch
boolean createBranch = false
if (project.hasProperty('create') && create == 'true') createBranch = true
List gitDirectories = []
if (file(".git").exists()) gitDirectories.add(file('.').path)
if (file("runtime/.git").exists()) gitDirectories.add(file('runtime').path)
for (File compDir in file('runtime/component').listFiles().findAll { it.isDirectory() && it.listFiles().find { it.name == '.git' } })
gitDirectories.add(compDir.path)
for (String gitDir in gitDirectories) {
def curGrgit = Grgit.open(dir: gitDir)
def branchList = curGrgit.branch.list(mode: org.ajoberstar.grgit.operation.BranchListOp.Mode.ALL)
def tagList = curGrgit.tag.list()
def targetBranch = branchList.find({ it.name == curBranch })
def targetTag = tagList.find({ it.name == curTag })
if (targetBranch == null && targetTag == null) {
def originBranch = branchList.find({ it.name == 'origin/' + curBranch })
if (originBranch != null) {
logger.lifecycle("In ${gitDir} branch ${curBranch} not found but found ${originBranch.name}, creating local branch tracking that branch")
targetBranch = curGrgit.branch.add(name: curBranch, startPoint: originBranch, mode: org.ajoberstar.grgit.operation.BranchAddOp.Mode.TRACK)
}
}
if (createBranch || targetBranch != null || targetTag != null) {
if (targetTag != null) {
if (createBranch && curBranch != curTag) {
logger.lifecycle("== Git checkout ${gitDir} tag ${curTag} and create branch ${curBranch}")
try { curGrgit.checkout(branch: curBranch, createBranch: true, startPoint: targetTag) }
catch (Exception e) { logger.lifecycle("Checkout error", e) }
} else {
logger.lifecycle("== Git checkout ${gitDir} tag ${curTag}")
try { curGrgit.checkout(branch: curTag, createBranch: false) }
catch (Exception e) { logger.lifecycle("Checkout error", e) }
}
} else {
logger.lifecycle("== Git checkout ${gitDir} branch ${curBranch} create ${createBranch}")
try { curGrgit.checkout(branch: curBranch, createBranch: createBranch) }
catch (Exception e) { logger.lifecycle("Checkout error", e) }
}
} else {
logger.lifecycle("* No branch or tag '${curBranch}' in ${gitDir}\nBranches: ${branchList.collect({it.name})}\nTags: ${tagList.collect({it.name})}")
}
logger.lifecycle("")
}
}
}
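// Example invocations for gitCheckoutAll (sketches, assuming the Gradle wrapper; branch/tag names are illustrative):
//   ./gradlew gitCheckoutAll -Pbranch=master
//   ./gradlew gitCheckoutAll -Pbranch=feature-x -Pcreate=true
//   ./gradlew gitCheckoutAll -Pbranch=release-3.0 -Ptag=v3.0.0 -Pcreate=true   (checkout tag v3.0.0, create branch release-3.0)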
task gitStatusAll {
description = "Do a git status to check moqui, runtime, and each installed component (for each where a .git directory is found)"
doLast {
List gitDirectories = []
if (file(".git").exists()) gitDirectories.add(file('.').path)
if (file("runtime/.git").exists()) gitDirectories.add(file('runtime').path)
for (File compDir in file('runtime/component').listFiles().findAll { it.isDirectory() && it.listFiles().find { it.name == '.git' } })
gitDirectories.add(compDir.path)
for (String gitDir in gitDirectories) {
def curGrgit = Grgit.open(dir: gitDir)
logger.lifecycle("\nGit status for ${gitDir} (branch:${curGrgit.branch.current()?.name}, tracking:${curGrgit.branch.current()?.trackingBranch?.name})")
try {
if (curGrgit.remote.list().find({ it.name == 'upstream'})) {
def upstreamAhead = curGrgit.log { range curGrgit.resolve.toCommit('refs/remotes/upstream/master'), curGrgit.resolve.toCommit('refs/remotes/origin/master') }
if (upstreamAhead) logger.lifecycle("- origin/master ${upstreamAhead.size()} commits ahead of upstream/master")
}
} catch (Exception e) {
logger.error("Error finding commits ahead of upstream", e)
}
try {
def masterLatest = curGrgit.resolve.toCommit('refs/remotes/origin/master')
if (masterLatest == null) {
logger.error("No origin/master branch exists, can't determine unpushed commits")
} else {
def unpushed = curGrgit.log { range masterLatest, curGrgit.resolve.toCommit('HEAD') }
if (unpushed) logger.lifecycle("--- ${unpushed.size()} commits unpushed (ahead of origin/master)")
for (Commit commit in unpushed) logger.lifecycle(" - ${commit.getAbbreviatedId(8)} - ${commit.shortMessage}")
}
} catch (Exception e) {
logger.error("Error finding unpushed commits", e)
}
def curStatus = curGrgit.status()
if (curStatus.isClean()) logger.lifecycle("* nothing to commit, working directory clean")
if (curStatus.staged.added || curStatus.staged.modified || curStatus.staged.removed) logger.lifecycle("--- Changes to be committed:")
for (String fn in curStatus.staged.added) logger.lifecycle(" added: ${fn}")
for (String fn in curStatus.staged.modified) logger.lifecycle(" modified: ${fn}")
for (String fn in curStatus.staged.removed) logger.lifecycle(" removed: ${fn}")
if (curStatus.unstaged.added || curStatus.unstaged.modified || curStatus.unstaged.removed) logger.lifecycle("--- Changes not staged for commit:")
for (String fn in curStatus.unstaged.added) logger.lifecycle(" added: ${fn}")
for (String fn in curStatus.unstaged.modified) logger.lifecycle(" modified: ${fn}")
for (String fn in curStatus.unstaged.removed) logger.lifecycle(" removed: ${fn}")
}
}
}
task gitUpstreamAll {
description = "Do a git pull upstream:master for moqui, runtime, and each installed component (for each where a .git directory is found and has a remote called upstream)"
doLast {
String remoteName = project.hasProperty('remote') ? remote : 'upstream'
List gitDirectories = []
if (file(".git").exists()) gitDirectories.add(file('.').path)
if (file("runtime/.git").exists()) gitDirectories.add(file('runtime').path)
for (File compDir in file('runtime/component').listFiles().findAll { it.isDirectory() && it.listFiles().find { it.name == '.git' } })
gitDirectories.add(compDir.path)
for (String gitDir in gitDirectories) {
def curGrgit = Grgit.open(dir: gitDir)
if (curGrgit.remote.list().find({ it.name == remoteName})) {
logger.lifecycle("\nGit merge ${remoteName} for ${gitDir}")
curGrgit.pull(remote: remoteName, branch: 'master')
} else {
logger.lifecycle("\nNo ${remoteName} remote for ${gitDir}")
}
}
}
}
task gitTagAll {
description = "Do a git add or remove tag on the currently checked out commit in moqui, runtime, and each installed component"
doLast {
def tagName = (project.hasProperty('tag')) ? tag : null;
def tagMessage = (project.hasProperty('message')) ? message : null;
boolean removeTags = (project.hasProperty('remove') && remove == 'true')
boolean pushTags = (project.hasProperty('push') && push == 'true')
// the tag property is optional when only pushing existing tag changes to the remote
if (tagName == null && !pushTags)
throw new InvalidUserDataException("No tag property specified (use -Ptag=...) and no push property specified (use -Ppush=true); at least one is required")
List gitDirectories = []
if (file(".git").exists()) gitDirectories.add(file('.').path)
if (file("runtime/.git").exists()) gitDirectories.add(file('runtime').path)
for (File compDir in file('runtime/component').listFiles().findAll { it.isDirectory() && it.listFiles().find { it.name == '.git' } })
gitDirectories.add(compDir.path)
def frameworkDir = gitDirectories.first()
for (String gitDir in gitDirectories) {
def relativePath = "."+gitDir.minus(frameworkDir)
def curGrgit = Grgit.open(dir: gitDir)
def branchName = curGrgit.branch.current().name
def commit = curGrgit.log(maxCommits: 1).find()
if (tagName != null) {
def tagList = curGrgit.tag.list()
def targetTag = tagList.find({ it.name == tagName })
if (targetTag == null) {
if (removeTags) {
logger.lifecycle("== Git tag '${tagName}' not found in ${branchName} of ${relativePath} ... skipping")
} else {
curGrgit.tag.add(name: tagName, message: tagMessage ?: "Tagging version ${tagName}")
logger.lifecycle("== Git tagging commit ${commit.abbreviatedId} - '${commit.shortMessage}' by '${commit.author.name}' in ${branchName} of ${relativePath}")
}
} else {
if (removeTags) {
curGrgit.tag.remove(names: [tagName])
logger.lifecycle("== Git removing tag '${tagName}' in ${branchName} of ${relativePath}")
} else {
logger.lifecycle("== Git tag '${tagName}' already exists in ${branchName} of ${relativePath}, skipping...")
}
}
}
if (pushTags) {
if (removeTags) {
curGrgit.push(refsOrSpecs: [':refs/tags/'+tagName])
} else {
curGrgit.push(tags: true)
}
logger.lifecycle("== Git pushing tag changes to remote of ${relativePath}")
}
}
}
}
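// Example invocations for gitTagAll (sketches; tag names are illustrative):
//   ./gradlew gitTagAll -Ptag=v3.1.0 -Pmessage="Release 3.1.0"     (tag current commit in each repo)
//   ./gradlew gitTagAll -Ptag=v3.1.0 -Ppush=true                   (tag and push tags to each remote)
//   ./gradlew gitTagAll -Ptag=v3.1.0 -Premove=true -Ppush=true     (remove tag locally and on each remote)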
task gitDiffTagsAll {
description = "Do a git diff between two tags in the currently checked out branch in moqui, runtime, and each installed component"
doLast {
if (!project.hasProperty('taga') || taga == null)
throw new InvalidUserDataException("No taga property specified (use -Ptaga=...)")
// If tagb is not passed, we assume HEAD
def tagb = (project.hasProperty('tagb') && tagb != null) ? tagb : "HEAD";
logger.lifecycle("== Git diffing tags ${taga} and ${tagb}")
List gitDirectories = []
if (file(".git").exists()) gitDirectories.add(file('.').path)
if (file("runtime/.git").exists()) gitDirectories.add(file('runtime').path)
for (File compDir in file('runtime/component').listFiles().findAll { it.isDirectory() && it.listFiles().find { it.name == '.git' } })
gitDirectories.add(compDir.path)
def frameworkDir = gitDirectories.first()
for (String gitDir in gitDirectories) {
def relativePath = "."+gitDir.minus(frameworkDir)
def grgit = Grgit.open(dir: gitDir)
def tagList = grgit.tag.list()
def tagaCommit = tagList.find({ it.name == taga })
def tagbCommit = tagList.find({ it.name == tagb })
logger.lifecycle("${relativePath}")
if ((taga == "HEAD" || tagaCommit != null) && (tagb == "HEAD" || tagbCommit != null)) {
grgit.log {
range taga, tagb
}.each {
logger.lifecycle(" ${it.abbreviatedId} - ${it.shortMessage}")
}
} else {
logger.lifecycle(" one or both tags not found, skipping")
}
}
}
}
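// Example invocations for gitDiffTagsAll (sketches; tag names are illustrative):
//   ./gradlew gitDiffTagsAll -Ptaga=v3.0.0                  (log commits from v3.0.0 to HEAD)
//   ./gradlew gitDiffTagsAll -Ptaga=v3.0.0 -Ptagb=v3.1.0    (log commits between the two tags)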
task gitMergeAll {
description = "Do a git merge of a branch (-Pbranch=) or tag (-Ptag=) into the currently checked out branch in moqui, runtime, and each installed component; optional -Pmode= and -Pmessage= for the merge, -Ppush=true to push after merging"
doLast {
def branchName = (project.hasProperty('branch')) ? branch : null;
def tagName = (project.hasProperty('tag')) ? tag : null;
def mergeMode = (project.hasProperty('mode')) ? mode : null;
def mergeMessage = (project.hasProperty('message')) ? message : null;
def pushMerge = (project.hasProperty('push')) ? push : null;
List gitDirectories = []
if (file(".git").exists()) gitDirectories.add(file('.').path)
if (file("runtime/.git").exists()) gitDirectories.add(file('runtime').path)
for (File compDir in file('runtime/component').listFiles().findAll { it.isDirectory() && it.listFiles().find { it.name == '.git' } })
gitDirectories.add(compDir.path)
def frameworkDir = gitDirectories.first()
for (String gitDir in gitDirectories) {
def relativePath = "."+gitDir.minus(frameworkDir)
logger.lifecycle("${relativePath}")
def grgit = Grgit.open(dir: gitDir)
def currentBranch = grgit.branch.current()?.name;
if (branchName == currentBranch)
continue
def doMerge = false;
if (branchName && grgit.branch.list().find({ it.name == branchName }) != null) {
doMerge = true;
}
if (tagName && grgit.tag.list().find({ it.name == tagName }) != null) {
doMerge = true;
}
if (doMerge) {
grgit.merge(head: branchName ?: tagName, mode: mergeMode, message: mergeMessage)
logger.lifecycle(" Merging ${branchName ?: tagName} into ${currentBranch}")
}
if (pushMerge) {
grgit.push();
logger.lifecycle(" Pushing merge")
}
}
}
}
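// Example invocations for gitMergeAll (sketches; branch/tag names are illustrative):
//   ./gradlew gitMergeAll -Pbranch=release-3.0                               (merge release-3.0 into the current branch)
//   ./gradlew gitMergeAll -Ptag=v3.1.0 -Pmessage="merge v3.1.0" -Ppush=true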
// ========== run tasks ==========
task run(type: JavaExec) {
dependsOn checkRuntime, allBuildTasks, cleanTempDir
workingDir = '.'; jvmArgs = ['-server', '-XX:-OmitStackTraceInFastThrow']
systemProperties = ['moqui.conf':moquiConfDev, 'moqui.runtime':moquiRuntime]
// NOTE: this is a hack, using -jar instead of a class name, and then the first argument is the name of the jar file
mainClass = '-jar'; args = [warName]
}
task runProduction(type: JavaExec) {
dependsOn checkRuntime, allBuildTasks, cleanTempDir
workingDir = '.'; jvmArgs = ['-server', '-Xms1024M']
systemProperties = ['moqui.conf':moquiConfProduction, 'moqui.runtime':moquiRuntime]
mainClass = '-jar'; args = [warName]
}
task load(type: JavaExec) {
description = "Run Moqui to load data; to specify data types use something like: gradle load -Ptypes=seed,seed-initial,install"
dependsOn checkRuntime, allBuildTasks
systemProperties = ['moqui.conf':moquiConfDev, 'moqui.runtime':moquiRuntime]
workingDir = '.'; jvmArgs = ['-server']; mainClass = '-jar'
args = [warName, 'load', (project.properties.containsKey('types') ? "types=${types}" : "types=all")]
}
task loadSeed(type: JavaExec) {
dependsOn checkRuntime, allBuildTasks
systemProperties = ['moqui.conf':moquiConfProduction, 'moqui.runtime':moquiRuntime]
workingDir = '.'; jvmArgs = ['-server']; mainClass = '-jar'
args = [warName, 'load', (project.properties.containsKey('types') ? "types=${types}" : "types=seed")]
}
task loadSeedInitial(type: JavaExec) {
dependsOn checkRuntime, allBuildTasks
systemProperties = ['moqui.conf':moquiConfProduction, 'moqui.runtime':moquiRuntime]
workingDir = '.'; jvmArgs = ['-server']; mainClass = '-jar'
args = [warName, 'load', (project.properties.containsKey('types') ? "types=${types}" : "types=seed,seed-initial")]
}
task loadProduction(type: JavaExec) {
dependsOn checkRuntime, allBuildTasks
systemProperties = ['moqui.conf':moquiConfProduction, 'moqui.runtime':moquiRuntime]
workingDir = '.'; jvmArgs = ['-server']; mainClass = '-jar'
args = [warName, 'load', (project.properties.containsKey('types') ? "types=${types}" : "types=seed,seed-initial,install")]
}
task saveDb {
description = "Zip embedded database (Derby, H2), OrientDB, and OpenSearch/ElasticSearch data directories to Save*.zip files; used by loadSave, restored by reloadSave"
doLast {
if (file(moquiRuntime+'/db/derby/moqui').exists())
ant.zip(destfile: 'SaveDerby.zip') { fileset(dir: moquiRuntime+'/db/derby/moqui') { include(name: '**/*') } }
if (file(moquiRuntime+'/db/h2').exists())
ant.zip(destfile: 'SaveH2.zip') { fileset(dir: moquiRuntime+'/db/h2') { include(name: '**/*') } }
if (file(moquiRuntime+'/db/orientdb/databases').exists())
ant.zip(destfile: 'SaveOrientDb.zip') { fileset(dir: moquiRuntime+'/db/orientdb/databases') { include(name: '**/*') } }
File osDir = file(moquiRuntime + '/opensearch')
String workDir = moquiRuntime + (osDir.exists() ? '/opensearch' : '/elasticsearch')
if (file(workDir+'/data').exists()) {
if (file(workDir+'/bin').exists()) {
def pidFile = file(workDir+'/pid')
if (pidFile.exists()) {
String pid = pidFile.getText()
logger.lifecycle("${osDir.exists() ? 'OpenSearch' : 'ElasticSearch'} running with pid ${pid}, stopping before saving data then restarting")
['kill', pid].execute(null, file(workDir)).waitFor()
['tail', "--pid=${pid}", '-f', '/dev/null'].execute(null, file(workDir)).waitFor()
if (pidFile.exists()) delete pidFile
ant.zip(destfile: (osDir.exists() ? 'SaveOpenSearch.zip' : 'SaveElasticSearch.zip')) { fileset(dir: workDir+'/data') { include(name: '**/*') } }
startSearch(moquiRuntime)
} else {
logger.lifecycle("Found ${osDir.exists() ? 'OpenSearch' : 'ElasticSearch'} ${workDir}/bin directory but no pid file, saving data without stop/start; WARNING if it is running this will cause problems!")
ant.zip(destfile: (osDir.exists() ? 'SaveOpenSearch.zip' : 'SaveElasticSearch.zip')) { fileset(dir: workDir+'/data') { include(name: '**/*') } }
}
} else {
ant.zip(destfile: (osDir.exists() ? 'SaveOpenSearch.zip' : 'SaveElasticSearch.zip')) { fileset(dir: workDir+'/data') { include(name: '**/*') } }
}
}
} }
task loadSave {
description = "Clean all, build and load, then save database (H2, Derby), OrientDB, and OpenSearch/ElasticSearch files; to be used before reloadSave"
dependsOn cleanAll, load, saveDb
}
task reloadSave {
description = "After a loadSave clean database (H2, Derby), OrientDB, and OpenSearch/ElasticSearch files and restore from the saved zip files"
dependsOn cleanTempDir, cleanDb, cleanLog, cleanSessions
dependsOn allBuildTasks
doLast {
if (file('SaveDerby.zip').exists()) copy { from zipTree('SaveDerby.zip'); into file(moquiRuntime+'/db/derby/moqui') }
if (file('SaveH2.zip').exists()) copy { from zipTree('SaveH2.zip'); into file(moquiRuntime+'/db/h2') }
if (file('SaveOrientDb.zip').exists()) copy { from zipTree('SaveOrientDb.zip'); into file(moquiRuntime+'/db/orientdb/databases') }
if (file('SaveElasticSearch.zip').exists()) {
String esDir = moquiRuntime+'/elasticsearch'
if (file(esDir+'/bin').exists()) {
def pidFile = file(esDir+'/pid')
if (pidFile.exists()) {
String pid = pidFile.getText()
logger.lifecycle("ElasticSearch running with pid ${pid}, stopping before restoring data then restarting")
['kill', pid].execute(null, file(esDir)).waitFor()
['tail', "--pid=${pid}", '-f', '/dev/null'].execute(null, file(esDir)).waitFor()
copy { from zipTree('SaveElasticSearch.zip'); into file(moquiRuntime+'/elasticsearch/data') }
if (pidFile.exists()) delete pidFile
['./bin/elasticsearch', '-d', '-p', 'pid'].execute(null, file(esDir)).waitFor()
} else {
logger.lifecycle("Found ElasticSearch ${esDir}/bin directory but no pid file, restoring data without stop/start; WARNING if ElasticSearch is running this will cause problems!")
copy { from zipTree('SaveElasticSearch.zip'); into file(moquiRuntime+'/elasticsearch/data') }
}
} else {
copy { from zipTree('SaveElasticSearch.zip'); into file(moquiRuntime+'/elasticsearch/data') }
}
}
if (file('SaveOpenSearch.zip').exists()) {
String esDir = moquiRuntime+'/opensearch'
if (file(esDir+'/bin').exists()) {
def pidFile = file(esDir+'/pid')
if (pidFile.exists()) {
String pid = pidFile.getText()
logger.lifecycle("OpenSearch running with pid ${pid}, stopping before restoring data then restarting")
['kill', pid].execute(null, file(esDir)).waitFor()
['tail', "--pid=${pid}", '-f', '/dev/null'].execute(null, file(esDir)).waitFor()
copy { from zipTree('SaveOpenSearch.zip'); into file(moquiRuntime+'/opensearch/data') }
if (pidFile.exists()) delete pidFile
['./bin/opensearch', '-d', '-p', 'pid'].execute(null, file(esDir)).waitFor()
} else {
logger.lifecycle("Found OpenSearch ${esDir}/bin directory but no pid file, restoring data without stop/start; WARNING if OpenSearch is running this will cause problems!")
copy { from zipTree('SaveOpenSearch.zip'); into file(moquiRuntime+'/opensearch/data') }
}
} else {
copy { from zipTree('SaveOpenSearch.zip'); into file(moquiRuntime+'/opensearch/data') }
}
}
}
}
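// Typical save/restore workflow (a sketch):
//   ./gradlew loadSave      (clean, build, load, then snapshot db and search data to Save*.zip files)
//   ./gradlew reloadSave    (restore a fresh db and search data from the Save*.zip files)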
// ========== deploy tasks ==========
task deployTomcat { doLast {
// remove runtime directory, may have been added for logs/etc
delete file(tomcatHome + '/runtime')
// remove ROOT directory and war to avoid conflicts
delete file(tomcatHome + '/webapps/ROOT')
delete file(tomcatHome + '/webapps/ROOT.war')
// copy the war file to ROOT.war
copy { from file(warName); into file(tomcatHome + '/webapps'); rename(warName, 'ROOT.war') }
} }
task plusRuntimeWarTemp {
dependsOn checkRuntime, allBuildTasks
doLast {
File wartempFile = file('wartemp')
if (wartempFile.exists()) delete wartempFile
// make version detail files
makeVersionDetailFiles()
// unzip the "moqui-${version}.war" file to the wartemp directory
copy { from zipTree(warName); into wartempFile }
// copy runtime directory (with a few exceptions) into a runtime directory in the war
copy {
from fileTree(dir: '.', include: moquiRuntime+'/**',
excludes: ['**/*.jar', '**/build', moquiRuntime+'/classes/**', moquiRuntime+'/lib/**', moquiRuntime+'/log/**', moquiRuntime+'/sessions/**'])
into wartempFile
}
// copy the jar files from runtime/lib
copy { from fileTree(dir: moquiRuntime+'/lib', include: '**/*.jar').files into 'wartemp/WEB-INF/lib' }
// copy the classpath resource files from runtime/classes
copy { from fileTree(dir: moquiRuntime+'/classes', include: '**/*') into 'wartemp/WEB-INF/classes' }
// copy the jar files from components
copy { from fileTree(dir: moquiRuntime+'/base-component', include: '**/*.jar').files into 'wartemp/WEB-INF/lib' }
copy {
from fileTree(dir: moquiRuntime+'/component', include: '**/*.jar', exclude: '**/librepo/*.jar').files
into 'wartemp/WEB-INF/lib'
duplicatesStrategy DuplicatesStrategy.WARN
}
copy {
from fileTree(dir: moquiRuntime+'/ecomponent', include: '**/*.jar', exclude: '**/librepo/*.jar').files
into 'wartemp/WEB-INF/lib'
duplicatesStrategy DuplicatesStrategy.WARN
}
// add MoquiInit.properties fresh copy, just in case it was changed
copy { from file('MoquiInit.properties') into 'wartemp/WEB-INF/classes' }
// add Procfile to root
copy { from file('Procfile') into 'wartemp' }
// special case: copy elasticsearch plugin/module jars (needed for ES installed in runtime/elasticsearch)
if (file(moquiRuntime+'/elasticsearch').exists())
copy { from fileTree(dir: '.', include: moquiRuntime+'/elasticsearch/**/*.jar') into wartempFile }
// special case: copy opensearch plugin/module jars (needed for OpenSearch installed in runtime/opensearch)
if (file(moquiRuntime+'/opensearch').exists())
copy { from fileTree(dir: '.', include: moquiRuntime+'/opensearch/**/*.jar') into wartempFile }
// special case: copy jackrabbit standalone jar (if exists)
copy { from fileTree(dir: moquiRuntime + '/jackrabbit', include: 'jackrabbit-standalone-*.jar').files; into 'wartemp/' + moquiRuntime + '/jackrabbit' }
// clean up version detail files
cleanVersionDetailFiles()
}
}
task addRuntime(type: Zip) {
description = "Create moqui-plus-runtime.war file from the moqui.war file and the runtime directory embedded in it"
dependsOn checkRuntime, allBuildTasks, plusRuntimeWarTemp
archiveFileName = plusRuntimeName
destinationDirectory = file('.')
from file('wartemp')
doFirst { if (file(plusRuntimeName).exists()) delete file(plusRuntimeName) }
doLast { delete file('wartemp') }
}
// don't use this task directly, use addRuntimeTomcat which calls this
task deployTomcatRuntime { doLast {
delete file(tomcatHome + '/runtime'); delete file(tomcatHome + '/webapps/ROOT'); delete file(tomcatHome + '/webapps/ROOT.war')
copy { from file(plusRuntimeName); into file(tomcatHome + '/webapps'); rename(plusRuntimeName, 'ROOT.war') }
} }
task addRuntimeTomcat {
dependsOn addRuntime
dependsOn deployTomcatRuntime
}
// ========== component tasks ==========
task getDefaults {
description = "Get a component using specified location type, also check/get all components it depends on; requires component property; locationType property optional (defaults to git if there is a .git directory, otherwise to current)"
doLast {
String curLocationType = file('.git').exists() ? 'git' : 'current'
if (project.hasProperty('locationType')) curLocationType = locationType
getComponentTop(curLocationType)
}
}
task getComponent {
description = "Get a component using specified location type, also check/get all components it depends on; requires component property; locationType property optional (defaults to git if there is a .git directory, otherwise to current)"
doLast {
String curLocationType = file('.git').exists() ? 'git' : 'current'
if (project.hasProperty('locationType')) curLocationType = locationType
getComponentTop(curLocationType)
}
}
task createComponent {
description = "Create a new component. Set new component name with -Pcomponent=new_component_name (based on the moqui start component here: https://github.com/moqui/start)"
doLast {
String curLocationType = file('.git').exists() ? 'git' : 'current'
if (project.hasProperty('locationType')) curLocationType = locationType
if (project.hasProperty('component')) {
checkRuntimeDirAndDefaults(curLocationType)
Set compsChecked = new TreeSet()
def startComponentName = 'start'
File componentDir = getComponent(startComponentName, curLocationType, parseAddons(), parseMyaddons(), compsChecked)
if (componentDir?.exists()) {
logger.lifecycle("Got component start, dependent components checked: ${compsChecked}")
def newComponent = file("runtime/component/${component}")
def renameSuccessful = componentDir.renameTo(newComponent)
if (!renameSuccessful) {
logger.error("Failed to rename component start to ${component}. Try removing the existing component directory first or giving this program write permissions.")
} else {
logger.lifecycle("Renamed component start to ${component}")
}
print "Updated file: "
newComponent.eachFileRecurse(groovy.io.FileType.FILES) { file ->
try {
// If file name is startComponentName.* rename to component.*
if (file.name.startsWith(startComponentName)) {
String newFileName = (file.name - startComponentName)
newFileName = component + newFileName
File newFile = new File(file.parent, newFileName)
file.renameTo(newFile)
file = newFile
print "${file.path - newComponent.path - '/'}, "
}
String content = file.text
if (content.contains(startComponentName)) {
content = content.replaceAll(startComponentName, component)
file.text = content
print "${file.path - newComponent.path - '/'}, "
}
} catch (IOException e) {
println "Error processing file ${file.path}: ${e.message}"
}
}
print "\n\n"
println "Select rest api (r), screens (s), or both (B):"
def componentInput = System.in.newReader().readLine()
if (componentInput == 'r') {
new File(newComponent, 'screen').deleteDir()
new File(newComponent, 'template').deleteDir()
new File(newComponent, 'data/AppSeedData.xml').delete()
new File(newComponent, 'MoquiConf.xml').delete()
// recreate MoquiConf.xml with a minimal skeleton (no screen/webapp config needed for a rest-only component);
// NOTE: reconstructed content, the original literal was garbled
def moquiConf = new File(newComponent, 'MoquiConf.xml')
moquiConf.append("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" +
"<moqui-conf xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n" +
"        xsi:noNamespaceSchemaLocation=\"http://moqui.org/xsd/moqui-conf-3.xsd\">\n" +
"</moqui-conf>\n")
println "Selected rest api, so deleted screen, template, and AppSeedData.xml\n"
} else if (componentInput == 's') {
new File(newComponent, "services/${component}.rest.xml").delete()
new File(newComponent, 'data/ApiSeedData.xml').delete()
println "Selected screens, so deleted the rest api file and ApiSeedData.xml\n"
} else if (componentInput == 'b' || componentInput == 'B' || componentInput == '') {
println "Selected both rest api and screens\n"
} else {
println "Invalid input. Try again"
newComponent.deleteDir()
return
}
println "Are you going to write code or tests in Groovy or Java? [y/N]"
def codeInput = System.in.newReader().readLine()
if (codeInput == 'y' || codeInput == 'Y') {
println "Keeping src folder\n"
} else if (codeInput == 'n' || codeInput == 'N' || codeInput == '') {
new File(newComponent, 'src').deleteDir()
new File(newComponent, 'build.gradle').delete()
println "Selected no, so deleted src and build.gradle\n"
} else {
println "Invalid input. Try again"
newComponent.deleteDir()
return
}
println "Set up a git repository? [Y/n]"
def gitInput = System.in.newReader().readLine()
if (gitInput == 'y' || gitInput == 'Y' || gitInput == '') {
new File(newComponent, '.git').deleteDir()
// Setup git repository
def grgit = Grgit.init(dir: newComponent.path)
grgit.add(patterns: ['.'])
// Can't get signing to work easily. If signing works well then might as well commit
// grgit.commit(message: 'Initial commit')
println "Selected yes, so git is initialized\n"
println "To setup the git remote origin, type the git remote url or enter to skip"
def remoteUrl = System.in.newReader().readLine()
if (remoteUrl != '') {
grgit.remote.add(name: 'origin', url: remoteUrl)
println "Run the following to push the git repository:\ncd runtime/component/${component} && git commit -m 'Initial commit' && git push && cd ../../.."
} else {
println "Run the following to push the git repository:\ncd runtime/component/${component} && git commit -m 'Initial commit' && git remote add origin git@github.com:yourgroup/${component} && git push && cd ../../.."
}
} else if (gitInput == 'n' || gitInput == 'N') {
new File(newComponent, '.git').deleteDir()
println "Selected no, so git is not initialized\n"
println "Run the following to push the git repository:\ncd runtime/component/${component} && git commit -m 'Initial commit' && git remote add origin git@github.com:yourgroup/${component} && git push && cd ../../.."
} else {
println "Invalid input. Try again"
newComponent.deleteDir()
return
}
println "Add to myaddons.xml? [Y/n]"
def myaddonsInput = System.in.newReader().readLine()
if (myaddonsInput == 'y' || myaddonsInput == 'Y' || myaddonsInput == '') {
def myaddonsFile = file('myaddons.xml')
if (myaddonsFile.exists()){
// Remove the closing </addons> tag line so new component entries can be appended before re-closing
// Read the lines from the file
def lines = myaddonsFile.readLines()
// Filter out the lines that contain the closing tag
def filteredLines = lines.findAll { !it.contains("</addons>") }
// Write the filtered lines back to the file
myaddonsFile.text = filteredLines.join('\n')
} else {
println "myaddons.xml not found. Creating one\nEnter repository github (g), github-ssh (GS), bitbucket (b), or bitbucket-ssh (bs)"
def repositoryInput = System.in.newReader().readLine()
// map the short repository code entered above to a repository name defined in addons.xml
def repositoryName = ['g':'github', 'gs':'github-ssh', 'b':'bitbucket', 'bs':'bitbucket-ssh'].get(repositoryInput.toLowerCase()) ?: 'github'
myaddonsFile.append("<addons default-repository=\"${repositoryName}\">")
}
println "Enter the component git repository group"
def groupInput = System.in.newReader().readLine()
println "Enter the component git repository name"
def nameInput = System.in.newReader().readLine()
// get git branch
def grgit = Grgit.open(dir: newComponent.path)
def branch = grgit.branch.current().name
myaddonsFile.append("\n    <component name=\"${nameInput}\" group=\"${groupInput}\" branch=\"${branch}\"/>")
myaddonsFile.append("\n</addons>")
} else if (myaddonsInput == 'n' || myaddonsInput == 'N') {
println "Selected no, so component not added to myaddons.xml\n"
} else {
println "Invalid input. Try again"
newComponent.deleteDir()
return
}
}
} else {
throw new InvalidUserDataException("No component property specified")
}
}
}
task getCurrent {
description = "Get the current archive for a component, also check each component it depends on and if not present get its current archive; requires component property"
doLast { getComponentTop('current') }
}
task getRelease {
description = "Get the release archive for a component, also check each component it depends on and if not present get its configured release archive; requires component property"
doLast { getComponentTop('release') }
}
task getBinary {
description = "Get the binary release archive for a component, also check each component it depends on and if not present get its configured release archive; requires component property"
doLast { getComponentTop('binary') }
}
task getGit {
description = "Clone the git repository for a component, also check each component it depends on and if not present clone its git repository; requires component property"
doLast { getComponentTop('git') }
}
task getDepends {
description = "Check/Get all dependencies for all components in runtime/component; locationType property optional (defaults to git if there is a .git directory, otherwise to current)"
doLast {
String curLocationType = file('.git').exists() ? 'git' : 'current'
if (project.hasProperty('locationType')) curLocationType = locationType
checkAllComponentDependencies(curLocationType)
}
}
task getComponentSet {
description = "Gets all components in the specified componentSet using the specified location type, also check/get all components each depends on; requires -PcomponentSet property; -PlocationType property optional (defaults to git if there is a .git directory, otherwise to current)"
doLast {
String curLocationType = file('.git').exists() ? 'git' : 'current'
if (project.hasProperty('locationType')) curLocationType = locationType
if (!project.hasProperty('componentSet')) throw new InvalidUserDataException("No componentSet property specified")
checkRuntimeDirAndDefaults(curLocationType)
Set compsChecked = new TreeSet()
loadComponentSet((String) componentSet, curLocationType, parseAddons(), parseMyaddons(), compsChecked)
logger.lifecycle("Got component-set ${componentSet}, got or checked components: ${compsChecked}")
}
}
task zipComponents {
description = "Create a .zip archive file for each component in runtime/component"
dependsOn allBuildTasks
doLast { for (File compDir in findComponentDirs()) createComponentZip(compDir) }
}
task zipComponent {
description = "Create a .zip archive file for a single component in runtime/component; requires component property"
dependsOn allBuildTasks
doLast {
if (!project.hasProperty('component')) throw new InvalidUserDataException("No component property specified")
createComponentZip(file('runtime/component/' + component))
}
}
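// Example invocations for the get/zip tasks above (component and set names are examples only):
//   ./gradlew getCurrent -Pcomponent=example-component
//   ./gradlew getGit -Pcomponent=example-component
//   ./gradlew getComponentSet -PcomponentSet=demo -PlocationType=git
//   ./gradlew zipComponent -Pcomponent=example-component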
// ========== utility methods ==========
def createComponentZip(File compDir) {
File compXmlFile = file("${compDir.path}/component.xml")
if (!compXmlFile.exists()) {
logger.lifecycle("No component.xml file found at ${compXmlFile.path}, not creating component zip")
return
}
Node compXml = new XmlParser().parse(compXmlFile)
File zipFile = file("${compDir.parentFile.path}/${compXml.'@name'}${compXml.'@version' ? '-' + compXml.'@version' : ''}.zip")
if (zipFile.exists()) { logger.lifecycle("Deleting existing component zip file: ${zipFile.name}"); zipFile.delete() }
// exclude build, src, librepo, build.gradle, defaultexcludes (which includes .git)
ant.zip(destfile: zipFile.path) { fileset(dir: compDir.parentFile.path, includes: "${compDir.name}/**", defaultexcludes: 'yes',
excludes: "${compDir.name}/build/**,${compDir.name}/src/**,${compDir.name}/librepo/**,${compDir.name}/build.gradle") }
logger.lifecycle("Created component zip file: ${zipFile.name}")
}
def checkRuntimeDirAndDefaults(String locType) {
Node addons = parseAddons()
Node myaddons = parseMyaddons()
if (!locType) locType = file('.git').exists() ? 'git' : 'current'
File runtimeDir = file('runtime')
if (!runtimeDir.exists()) {
Node runtimeNode = myaddons != null && myaddons.runtime ? (Node) myaddons.runtime[0] : null
if (runtimeNode == null) runtimeNode = addons != null && addons.runtime ? (Node) addons.runtime[0] : null
if (runtimeNode == null) throw new InvalidUserDataException("The runtime directory does not exist and no runtime element found in myaddons.xml or addons.xml")
downloadComponent("runtime", locType, runtimeNode, addons, myaddons)
}
// look for @default in myaddons.xml only
if (myaddons?.'@default') {
String defaultComps = myaddons.'@default'
Set compsChecked = new TreeSet()
Set defaultCompsDownloaded = new TreeSet()
for (String compName in defaultComps.split(',')) {
compName = compName.trim()
if (!compName) continue
File componentDir = file("runtime/component/${compName}")
if (componentDir.exists()) continue
getComponent(compName, locType, addons, myaddons, compsChecked)
defaultCompsDownloaded.add(compName)
}
if (defaultCompsDownloaded)
logger.lifecycle("Got default components ${defaultCompsDownloaded}, dependent components checked: ${compsChecked}")
}
}
def loadComponentSet(String setName, String curLocationType, Node addons, Node myaddons, Set compsChecked) {
Node setNode = null
if (myaddons) setNode = myaddons.'component-set'.find({ it."@name" == setName })
if (setNode == null) setNode = addons.'component-set'.find({ it."@name" == setName })
if (setNode == null) throw new InvalidUserDataException("Could not find component-set with name ${setName}")
String components = setNode.'@components'
if (components) for (String compName in components.split(","))
getComponent(compName, curLocationType, addons, myaddons, compsChecked)
String sets = setNode.'@sets'
if (sets) for (String subsetName in sets.split(","))
loadComponentSet(subsetName, curLocationType, addons, myaddons, compsChecked)
}
Collection findComponentDirs() {
file('runtime/component').listFiles().findAll({ it.isDirectory() && it.listFiles().find({ it.name == 'component.xml' }) })
}
Node parseAddons() { new XmlParser().parse(file('addons.xml')) }
Node parseMyaddons() { if (file('myaddons.xml').exists()) { new XmlParser().parse(file('myaddons.xml')) } else { null } }
Node parseComponent(project) { new XmlParser().parse(project.file('component.xml')) }
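// addons.xml and myaddons.xml (parsed above) share the same structure; illustrative sketch only, attribute details may differ:
//   <addons default-repository="github" default="example-component">
//       <repository name="github"><location type="git" url="..."/></repository>
//       <component name="example-component" branch="master"/>
//       <component-set name="demo" components="example-component"/>
//   </addons>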
def getComponentTop(String locationType) {
if (project.hasProperty('component')) {
checkRuntimeDirAndDefaults(locationType)
Set compsChecked = new TreeSet()
File componentDir = getComponent(component, locationType, parseAddons(), parseMyaddons(), compsChecked)
if (componentDir?.exists()) logger.lifecycle("Got component ${component}, dependent components checked: ${compsChecked}")
} else {
throw new InvalidUserDataException("No component property specified")
}
}
File getComponent(String compName, String type, Node addons, Node myaddons, Set compsChecked) {
// get the component
Node component = myaddons != null ? (Node) myaddons.component.find({ it."@name" == compName }) : null
if (component == null) component = (Node) addons.component.find({ it."@name" == compName })
if (component == null) throw new InvalidUserDataException("Component ${compName} not found in myaddons.xml or addons.xml")
if (component.'@skip-get' == 'true') { logger.lifecycle("Skipping get component ${compName} (skip-get=true)"); return null }
File componentDir = downloadComponent("runtime/component/${compName}", type, component, addons, myaddons)
checkComponentDependencies(compName, type, addons, myaddons, compsChecked)
return componentDir
}
File downloadComponent(String targetDirPath, String type, Node component, Node addons, Node myaddons) {
String compName = component.'@name'
String branch = component.'@branch'
// fall back to 'current' (branch-based) if release/binary requested but version is empty
if (type in ['release', 'binary'] && !component.'@version') type = 'current'
String repositoryName = (component.'@repository' ?: myaddons?.'@default-repository' ?: addons.'@default-repository' ?: 'github')
Node repository = myaddons != null ? (Node) myaddons.repository.find({ it."@name" == repositoryName }) : null
if (repository == null) repository = (Node) addons.repository.find({ it."@name" == repositoryName })
if (repository == null) throw new InvalidUserDataException("Repository ${repositoryName} not found in myaddons.xml or addons.xml")
Node location = (Node) repository.location.find({ it."@type" == type })
if (location == null) throw new InvalidUserDataException("Location for type ${type} not found in repository ${repositoryName}")
String url = Eval.me('component', component, '"""' + location.'@url' + '"""')
logger.lifecycle("Getting ${compName} (type ${type}) from ${url} at ${branch} to ${targetDirPath}")
File targetDir = file(targetDirPath)
if (targetDir.exists()) { logger.lifecycle("Component ${compName} already exists at ${targetDir}"); return targetDir }
if (type in ['current', 'release', 'binary']) {
File zipFile = file("${targetDirPath}.zip")
ant.get(src: url, dest: zipFile)
// the eachFile closure removes the first path from each file, moving everything up a directory
copy { from zipTree(zipFile); into targetDir; eachFile { it.setPath((it.getRelativePath().getSegments() as List).tail().join("/")); return it } }
delete zipFile
// delete the empty directories left over from zip expansion with first path removed
String archiveDirName = compName + '-'
if (type == 'current') { archiveDirName += component.'@branch' } else { archiveDirName += component.'@version' }
// logger.lifecycle("Deleting dir ${targetDirPath}/${archiveDirName}")
delete file("${targetDirPath}/${archiveDirName}")
} else if (type == 'git') {
Grgit.clone(dir: targetDir, uri: url, refToCheckout: branch)
}
logger.lifecycle("Downloaded ${compName} to ${targetDirPath}")
return targetDir
}
def checkComponentDependencies(String compName, String type, Node addons, Node myaddons, Set compsChecked) {
File componentDir = file("runtime/component/${compName}")
if (!componentDir.exists()) return
compsChecked.add(compName)
File compXmlFile = file("${componentDir.path}/component.xml")
if (!compXmlFile.exists()) return
Node compXml = new XmlParser().parse(compXmlFile)
for (Node dependsOn in compXml.'depends-on') {
String depCompName = dependsOn.'@name'
if (file("runtime/component/${depCompName}").exists()) {
if (!compsChecked.contains(depCompName)) checkComponentDependencies(depCompName, type, addons, myaddons, compsChecked)
} else {
getComponent(depCompName, type, addons, myaddons, compsChecked)
}
}
}
def checkAllComponentDependencies(String type) {
Node addons = parseAddons()
Node myaddons = parseMyaddons()
Set compsChecked = new TreeSet()
for (File compDir in findComponentDirs()) {
checkComponentDependencies(compDir.name, type, addons, myaddons, compsChecked)
}
logger.lifecycle("Dependent components checked: ${compsChecked}")
}
def makeVersionDetailFiles() {
if (file(".git").exists()) {
def topVersionMap = [framework:getVersionDetailMap(file('.'))]
if (file("runtime/.git").exists()) topVersionMap.runtime = getVersionDetailMap(file('runtime'))
file('runtime/version.json').write(groovy.json.JsonOutput.toJson(topVersionMap), "UTF-8")
}
for (File compDir in file('runtime/component').listFiles().findAll { it.isDirectory() && it.listFiles().find { it.name == '.git' } }) {
def versionMap = getVersionDetailMap(compDir)
if (versionMap == null) continue
file(compDir.path + '/version.json').write(groovy.json.JsonOutput.toJson(versionMap), "UTF-8")
}
}
Map getVersionDetailMap(File gitDir) {
def curGrgit = Grgit.open(dir: gitDir)
if (curGrgit == null) return null
String trackingName = curGrgit.branch.current()?.trackingBranch?.name
String trackingUrl = ""
int trackingNameSlash = trackingName ? trackingName.indexOf('/') : -1
if (trackingNameSlash > 0) {
String remoteName = trackingName.substring(0, trackingNameSlash)
def trackingRemote = curGrgit.remote.list().find({ it.name == remoteName })
if (trackingRemote != null) trackingUrl = trackingRemote.url
}
try {
String headId = curGrgit.head()?.id
// tags come in order of oldest first so want to find last in case multiple tags refer to HEAD commit
def headTag = curGrgit.tag.list().reverse().find({ it.commit.id == headId })
return [branch:curGrgit.branch.current()?.name, tracking:trackingName, url:trackingUrl, head:headId?.take(10), tag:headTag?.name]
} catch (Throwable t) {
logger.lifecycle("Error getting git info for directory ${gitDir?.path}", t)
return null
}
}
def cleanVersionDetailFiles() {
def runtimeVersionFile = file("runtime/version.json")
if (runtimeVersionFile.exists()) runtimeVersionFile.delete()
for (File compDir in file('runtime/component').listFiles().findAll { it.isDirectory() }) {
File versionDetailFile = file(compDir.path + '/version.json')
if (versionDetailFile.exists()) versionDetailFile.delete()
}
}
// ========== combined tasks ==========
task cleanPullLoad { dependsOn cleanAll, gitPullAll, load }
task cleanPullTest { dependsOn cleanAll, gitPullAll, load, allTestTasks }
task cleanPullCompTest { dependsOn cleanAll, gitPullAll, load, getComponentTestTasks() }
task compTest { dependsOn getComponentTestTasks() }
================================================
FILE: build.xml
================================================
================================================
FILE: docker/README.md
================================================
# Moqui On Docker
This directory contains everything needed to deploy Moqui on Docker.
## Prerequisites
- Docker.
- Docker Compose Plugin.
- Java 21.
- Convenience scripts require bash.
## Deployment Instructions
To deploy Moqui on Docker with all necessary services, follow the steps below:
- Choose a Docker Compose file in `docker/`. For example, to deploy Moqui on
Postgres you can choose `moqui-postgres-compose.yml`.
- Find a suitable JDBC driver for the target database, download its jar file,
and place it in `runtime/lib`.
- Generate the Moqui war file: `./gradlew build`
- Go into the docker folder: `cd docker`
- Build and run the chosen compose file, e.g. `./build-compose-up.sh moqui-postgres-compose.yml`
This last step builds the "moqui" image and deploys all services. You can
confirm by accessing the system at http://localhost
For a more secure and complete deployment, it is recommended to carefully review
the compose files and adjust them as needed, including changing credentials and
other settings such as host names and Let's Encrypt configuration.
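As an illustration of such adjustments, credentials and host names can be layered on top of a chosen compose file with an override file. The service and environment variable names below are assumptions for illustration only; verify them against the compose file you use.

```yaml
# override.yml -- a sketch only; service and environment variable names
# must be checked against the actual compose file
services:
  moqui-server:
    environment:
      - VIRTUAL_HOST=moqui.example.com
      - LETSENCRYPT_HOST=moqui.example.com
  moqui-database:
    environment:
      - POSTGRES_PASSWORD=change-me
```

Apply it by passing both files: `docker compose -f moqui-postgres-compose.yml -f override.yml -p moqui up -d`.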
## Compose Files
There are multiple compose files offered, providing different services:
- `moqui-acme-postgres.yml`: Moqui, nginx, and Postgres; automatically issues SSL
certificates from Let's Encrypt. Requires configuring variables including
`VIRTUAL_HOST` and `LETSENCRYPT_HOST`, and the Postgres JDBC driver.
- `moqui-postgres-compose.yml`: Standard deployment of Moqui with Postgres and nginx.
- `moqui-mysql-compose.yml`: Standard deployment of Moqui with MySQL and nginx.
- `mysql-compose.yml`: Deploys all services with MySQL except Moqui itself. Useful
when deploying Moqui elsewhere, such as on a servlet container.
- `postgres-compose.yml`: Same as `mysql-compose.yml` but with Postgres instead of MySQL.
- `moqui-cluster1-compose.yml`: Moqui with MySQL, designed to be deployed in a
Hazelcast cluster for horizontal scaling. Requires preparing Moqui with the
Hazelcast component and the MySQL driver.
## Helper Scripts
- `build-compose-up.sh`: Given a compose file, builds the "moqui" image and deploys
all services in the chosen yml file.
- `clean.sh`: Cleans the artifacts generated by a deployment, including the
database, opensearch data, and runtime directories.
- `compose-down.sh`: Tears down all services of a given yml file.
- `compose-up.sh`: Deploys all services of a given yml file. If the yml file
references the "moqui" image and it has not been built, this script will fail;
use `build-compose-up.sh` instead.
- `postgres_backup.sh`: Convenience script to create a database dump. The
credentials may need adjusting.
## Moqui Image
The actual "moqui" image is built from the Dockerfile found in
`docker/simple/Dockerfile`; all compose files depend on a "moqui" image
built from this file.
Note: The deployment and tear-down scripts can accept a container image name to
override the default name. For example, to use a hardened JDK image, a command
like the following can be used:
`./build-compose-up.sh moqui-acme-postgres.yml .. eclipse-temurin:21-jdk-ubi10-minimal`
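The override works through the scripts' positional arguments. A minimal sketch of the argument defaulting used by `compose-up.sh` and `build-compose-up.sh` (the default values mirror those scripts):

```shell
#! /bin/bash
# $1 = compose file, $2 = moqui home (defaults to the parent directory),
# $3 = runtime image override (defaults to the standard JDK image)
COMP_FILE="${1}"
MOQUI_HOME="${2:-..}"
RUNTIME_IMAGE="${3:-eclipse-temurin:21-jdk}"
echo "compose file: $COMP_FILE, home: $MOQUI_HOME, image: $RUNTIME_IMAGE"
```

So omitting the third argument falls back to the default image, which is why the image name appears as the last argument in the example above.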
================================================
FILE: docker/build-compose-up.sh
================================================
#! /bin/bash
if [[ ! $1 ]]; then
echo "Usage: ./build-compose-up.sh <compose file> [<moqui home>] [<runtime image>]"
exit 1
fi
COMP_FILE="${1}"
MOQUI_HOME="${2:-..}"
NAME_TAG=moqui
RUNTIME_IMAGE="${3:-eclipse-temurin:21-jdk}"
if [ -f simple/docker-build.sh ]; then
cd simple
./docker-build.sh ../.. $NAME_TAG $RUNTIME_IMAGE
# shellcheck disable=SC2103
cd ..
fi
if [ -f compose-up.sh ]; then
./compose-up.sh $COMP_FILE $MOQUI_HOME $RUNTIME_IMAGE
fi
================================================
FILE: docker/clean.sh
================================================
#! /bin/bash
search_name=opensearch
if [ -d runtime/opensearch/bin ]; then search_name=opensearch;
elif [ -d runtime/elasticsearch/bin ]; then search_name=elasticsearch;
fi
rm -Rf runtime/
rm -Rf runtime1/
rm -Rf runtime2/
rm -Rf db/
rm -Rf $search_name/data/nodes
rm -Rf $search_name/data/*.conf
rm $search_name/logs/*.log
docker rm moqui-server
docker rm moqui-database
docker rm nginx-proxy
================================================
FILE: docker/compose-down.sh
================================================
#! /bin/bash
if [[ ! $1 ]]; then
echo "Usage: ./compose-down.sh <compose file>"
exit 1
fi
COMP_FILE="${1}"
# set the project name to 'moqui', network will be called 'moqui_default'
docker compose -f $COMP_FILE -p moqui down
================================================
FILE: docker/compose-up.sh
================================================
#! /bin/bash
if [[ ! $1 ]]; then
echo "Usage: ./compose-up.sh <compose file> [<moqui home>] [<runtime image>]"
exit 1
fi
COMP_FILE="${1}"
MOQUI_HOME="${2:-..}"
NAME_TAG=moqui
RUNTIME_IMAGE="${3:-eclipse-temurin:21-jdk}"
# Note: If you don't have access to your conf directory while running this:
# This will make it so that your docker/conf directory no longer has your configuration files in it.
# This is because when docker-compose provisions a volume on the host it applies the host's data before the image's data.
# - change docker compose's moqui-server conf volume path from ./runtime/conf to conf
# - add a top level volumes: tag with conf: below
# - remove the next block of if statements from this file and you should be good to go
search_name=opensearch
if [ -d runtime/opensearch/bin ]; then search_name=opensearch;
elif [ -d runtime/elasticsearch/bin ]; then search_name=elasticsearch;
fi
if [ ! -e runtime ]; then mkdir runtime; fi
if [ ! -e runtime/conf ]; then cp -R $MOQUI_HOME/runtime/conf runtime/; fi
if [ ! -e runtime/lib ]; then cp -R $MOQUI_HOME/runtime/lib runtime/; fi
if [ ! -e runtime/classes ]; then cp -R $MOQUI_HOME/runtime/classes runtime/; fi
if [ ! -e runtime/log ]; then cp -R $MOQUI_HOME/runtime/log runtime/; fi
if [ ! -e runtime/txlog ]; then cp -R $MOQUI_HOME/runtime/txlog runtime/; fi
if [ ! -e runtime/db ]; then cp -R $MOQUI_HOME/runtime/db runtime/; fi
if [ ! -e runtime/$search_name ]; then cp -R $MOQUI_HOME/runtime/$search_name runtime/; fi
# set the project name to 'moqui', network will be called 'moqui_default'
docker compose -f $COMP_FILE -p moqui up -d
================================================
FILE: docker/elasticsearch/data/README
================================================
This directory must exist for mapping otherwise created as root in container and elasticsearch cannot access it.
================================================
FILE: docker/elasticsearch/moquiconfig/elasticsearch.yml
================================================
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
#
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
cluster.name: MoquiElasticSearch
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
# NOTE: for cluster use auto generated
# node.name: MoquiLocal
# Add custom attributes to the node:
#
#node.attr.rack: r1
node.master: false
node.data: false
node.ingest: false
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
# By default use _local_ and _site_ for localhost or any local network including docker container virtual network
network.host:
- _site_
- _local_
# transport.type: local
# discovery.type: single-node
discovery.type: zen
discovery.zen.minimum_master_nodes: 1
# use unicast discovery to find external elasticsearch server, multicast doesn't seem to work with docker bridge network
discovery.zen.ping.unicast.hosts: elasticsearch
transport.host: 0.0.0.0
transport.tcp.port: 9300
# CORS settings for local testing only
# http.cors.enabled: true
# http.cors.allow-origin: '*'
# Set a port for HTTP (9200 is the default, or with no port specified looks at subsequent ports to find one open):
http.port: 9200
# For more information, see the documentation at:
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
#
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
#
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
#node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
# ---------------------------------- Script ------------------------------------
# see: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-scripting-security.html
================================================
FILE: docker/elasticsearch/moquiconfig/log4j2.properties
================================================
status = info
# log action execution errors for easier debugging
logger.action.name = org.elasticsearch.action
logger.action.level = debug
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.10000m%n
appender.rolling.filePattern = ${sys:es.logs}-%d{yyyy-MM-dd}.log
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.rolling.ref = rolling
appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs}_deprecation.log
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.10000m%n
appender.deprecation_rolling.filePattern = ${sys:es.logs}_deprecation-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 1GB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 4
logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = warn
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.additivity = false
appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs}_index_search_slowlog.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.10000m%n
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs}_index_search_slowlog-%d{yyyy-MM-dd}.log
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.time.interval = 1
appender.index_search_slowlog_rolling.policies.time.modulate = true
logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false
appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs}_index_indexing_slowlog.log
appender.index_indexing_slowlog_rolling.layout.type = PatternLayout
appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.10000m%n
appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs}_index_indexing_slowlog-%d{yyyy-MM-dd}.log
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.time.interval = 1
appender.index_indexing_slowlog_rolling.policies.time.modulate = true
logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false
================================================
FILE: docker/kibana/kibana.yml
================================================
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "kibana"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
server.basePath: "/kibana"
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://moqui-server:9200"
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "discover"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000
# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid
# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"
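As a minimal illustration of the commented defaults above, a `kibana.yml` for an Elasticsearch backend protected by basic auth and a self-signed certificate might look like this (hostnames and credentials are placeholders, not values from this repository, and key names vary by Kibana version):

```yaml
# Hypothetical minimal kibana.yml; adjust hosts and credentials for your deployment
server.host: "0.0.0.0"
elasticsearch.hosts: ["https://elasticsearch:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "changeme"
# 'certificate' skips hostname verification for self-signed certs; prefer 'full' in production
elasticsearch.ssl.verificationMode: certificate
```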
================================================
FILE: docker/moqui-acme-postgres.yml
================================================
# A Docker Compose application with Moqui, Postgres, OpenSearch, OpenSearch Dashboards, and virtual hosting through
# nginx-proxy supporting multiple moqui instances on different hostnames.
# Run with something like this for detached mode:
# $ docker compose -f moqui-acme-postgres.yml -p moqui up -d
# Or to copy runtime directories for mounted volumes, set default settings, etc use something like this:
# $ ./compose-up.sh moqui-acme-postgres.yml
# This sets the project/app name to 'moqui' and the network will be 'moqui_default', to be used by external moqui containers
# Test locally by adding the virtual host to /etc/hosts or with something like:
# $ curl -H "Host: moqui.local" localhost/Login
# To run an additional instance of moqui run something like this (but with
# many more arguments for volume mapping, db setup, etc):
# $ docker run -e VIRTUAL_HOST=moqui2.local --name moqui2_local --network moqui_default moqui
# To import data from the docker host using port 5432 mapped for 127.0.0.1 only use something like this:
# $ psql -h 127.0.0.1 -p 5432 -U moqui -W moqui < pg-dump.sql
services:
nginx-proxy:
# For documentation on SSL and other settings see:
# https://github.com/nginxproxy/nginx-proxy
image: nginxproxy/nginx-proxy
container_name: nginx-proxy
restart: always
ports:
- 80:80
- 443:443
labels:
com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- /etc/localtime:/etc/localtime:ro
      # note: .crt, .key, and .dhparam.pem files start with the domain name in VIRTUAL_HOST (i.e. 'moqui.local.*') or use the CERT_NAME env var
- ./certs:/etc/nginx/certs
- ./nginx/conf.d:/etc/nginx/conf.d
- ./nginx/vhost.d:/etc/nginx/vhost.d
- ./nginx/html:/usr/share/nginx/html
environment:
# change this for the default host to use when accessing directly by IP, etc
- DEFAULT_HOST=moqui.local
# use SSL_POLICY to disable TLSv1.0, etc in nginx-proxy
- SSL_POLICY=AWS-TLS-1-1-2017-01
networks:
- proxy-tier
acme-companion:
image: nginxproxy/acme-companion
container_name: acme-companion
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- /etc/localtime:/etc/localtime:ro
- ./certs:/etc/nginx/certs
- ./nginx/conf.d:/etc/nginx/conf.d
- ./nginx/vhost.d:/etc/nginx/vhost.d
- ./nginx/html:/usr/share/nginx/html
- ./acme.sh:/etc/acme.sh
networks:
- proxy-tier
environment:
# TODO: For production change this to your email
- DEFAULT_EMAIL=mail@yourdomain.tld
# TODO: For production change this to false
- LETSENCRYPT_TEST=true
depends_on:
- nginx-proxy
moqui-server:
image: moqui
container_name: moqui-server
command: conf=conf/MoquiProductionConf.xml port=80 no-run-es
restart: always
links:
- moqui-database
- moqui-search
volumes:
- /etc/localtime:/etc/localtime:ro
- ./runtime/conf:/opt/moqui/runtime/conf
- ./runtime/lib:/opt/moqui/runtime/lib
- ./runtime/classes:/opt/moqui/runtime/classes
- ./runtime/ecomponent:/opt/moqui/runtime/ecomponent
- ./runtime/log:/opt/moqui/runtime/log
- ./runtime/txlog:/opt/moqui/runtime/txlog
- ./runtime/sessions:/opt/moqui/runtime/sessions
- ./runtime/db:/opt/moqui/runtime/db
- ./runtime/opensearch:/opt/moqui/runtime/opensearch
environment:
- "JAVA_TOOL_OPTIONS=-Xms1024m -Xmx1024m"
- instance_purpose=production
- entity_ds_db_conf=postgres
- entity_ds_host=moqui-database
- entity_ds_port=5432
- entity_ds_database=moqui
- entity_ds_schema=public
- entity_ds_user=moqui
- entity_ds_password='MOQUI_CHANGE_ME!!!'
- entity_ds_crypt_pass='DEFAULT_CHANGE_ME!!!'
# configuration for ElasticFacade.ElasticClient, make sure the old moqui-elasticsearch component is NOT included in the Moqui build
- elasticsearch_url=https://moqui-search:9200
# prefix for index names, use something distinct and not 'moqui_' or 'mantle_' which match the beginning of OOTB index names
- elasticsearch_index_prefix=default_
- elasticsearch_user=admin
- elasticsearch_password=MoquiElasticChangeMe@2026
# CHANGE ME - note that VIRTUAL_HOST is for nginx-proxy so it picks up this container as one it should reverse proxy
      # this can be a comma-separated list of hosts like 'example.com,www.example.com'
- VIRTUAL_HOST=moqui.local
- LETSENCRYPT_HOST=moqui.local
# moqui will accept traffic from other hosts but these are the values used for URL writing when specified:
# - webapp_http_host=moqui.local
- webapp_http_port=80
- webapp_https_port=443
- webapp_https_enabled=true
# nginx-proxy populates X-Real-IP with remote_addr by default, better option for outer proxy than X-Forwarded-For which defaults to proxy_add_x_forwarded_for
- webapp_client_ip_header=X-Real-IP
- default_locale=en_US
- default_time_zone=UTC
networks:
- proxy-tier
- default
moqui-database:
image: postgres:18.1
container_name: moqui-database
restart: always
ports:
# change this as needed to bind to any address or even comment to not expose port outside containers
- 127.0.0.1:5432:5432
volumes:
- /etc/localtime:/etc/localtime:ro
# edit these as needed to map configuration and data storage
- ./db/postgres:/var/lib/postgresql
environment:
- POSTGRES_DB=moqui
- POSTGRES_DB_SCHEMA=public
- POSTGRES_USER=moqui
- POSTGRES_PASSWORD='MOQUI_CHANGE_ME!!!'
# PGDATA, POSTGRES_INITDB_ARGS
networks:
default:
moqui-search:
image: opensearchproject/opensearch:3.4.0
container_name: moqui-search
restart: always
ports:
# change this as needed to bind to any address or even comment to not expose port outside containers
- 127.0.0.1:9200:9200
- 127.0.0.1:9300:9300
volumes:
- /etc/localtime:/etc/localtime:ro
# edit these as needed to map configuration and data storage
- ./opensearch/data/nodes:/usr/share/opensearch/data/nodes
# - ./opensearch/config/opensearch.yml:/usr/share/opensearch/config/opensearch.yml
# - ./opensearch/logs:/usr/share/opensearch/logs
environment:
- "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
- OPENSEARCH_INITIAL_ADMIN_PASSWORD=MoquiElasticChangeMe@2026
- discovery.type=single-node
- network.host=_site_
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
networks:
proxy-tier:
opensearch-dashboards:
image: opensearchproject/opensearch-dashboards:3.4.0
container_name: opensearch-dashboards
volumes:
- /etc/localtime:/etc/localtime:ro
links:
- moqui-search
ports:
- 127.0.0.1:5601:5601
environment:
OPENSEARCH_HOSTS: '["https://moqui-search:9200"]'
networks:
default:
proxy-tier:
networks:
proxy-tier:
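The additional-instance `docker run` shown in the header comments can also be expressed as a compose service alongside the ones above; a sketch, where the service name, hostnames, and database name are illustrative assumptions following the patterns in this file:

```yaml
  # Hypothetical second moqui instance behind the same nginx-proxy
  moqui-server2:
    image: moqui
    container_name: moqui-server2
    command: conf=conf/MoquiProductionConf.xml port=80 no-run-es
    restart: always
    environment:
      - VIRTUAL_HOST=moqui2.local
      - LETSENCRYPT_HOST=moqui2.local
      - entity_ds_db_conf=postgres
      - entity_ds_host=moqui-database
      - entity_ds_database=moqui2
    networks:
      - proxy-tier
      - default
```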
================================================
FILE: docker/moqui-cluster1-compose.yml
================================================
# NOTE: OpenSearch/ElasticSearch use an odd user and directory setup for externally mapped data, etc directories, see:
# https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
# Make sure vm.max_map_count=262144 is set in /etc/sysctl.conf on host (on live system run 'sudo sysctl -w vm.max_map_count=262144')
# A Docker Compose application with 2 moqui instances, MySQL, OpenSearch, OpenSearch Dashboards, and virtual hosting through
# nginx-proxy supporting multiple moqui instances on different hostnames.
# The 'moqui' image should be prepared with the MySQL JDBC jar and the moqui-hazelcast component.
# This does virtual hosting instead of load balancing so that each moqui instance can be accessed consistently (moqui1.local, moqui2.local).
# Run with something like this for detached mode:
# $ docker compose -f moqui-cluster1-compose.yml -p moqui up -d
# Or to copy runtime directories for mounted volumes, set default settings, etc use something like this:
# $ ./compose-up.sh moqui-cluster1-compose.yml
# This sets the project/app name to 'moqui' and the network will be 'moqui_default', to be used by external moqui containers
# Test locally by adding the virtual host to /etc/hosts or with something like:
# $ curl -H "Host: moqui1.local" localhost/Login
services:
nginx-proxy:
# For documentation on SSL and other settings see:
# https://github.com/nginx-proxy/nginx-proxy
image: nginxproxy/nginx-proxy
container_name: nginx-proxy
restart: always
ports:
- 80:80
- 443:443
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
# note: .crt, .key, and .dhparam.pem files start with the domain name in VIRTUAL_HOST (ie 'moqui.local.*') or use CERT_NAME env var
- ./certs:/etc/nginx/certs
- ./nginx/my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf
environment:
# change this for the default host to use when accessing directly by IP, etc
- DEFAULT_HOST=moqui1.local
# use SSL_POLICY to disable TLSv1.0, etc in nginx-proxy
- SSL_POLICY=AWS-TLS-1-1-2017-01
moqui-server1:
image: moqui
container_name: moqui-server1
command: conf=conf/MoquiProductionConf.xml port=80
restart: always
links:
- moqui-database
- moqui-search
volumes:
- ./runtime/conf:/opt/moqui/runtime/conf
- ./runtime/lib:/opt/moqui/runtime/lib
- ./runtime/classes:/opt/moqui/runtime/classes
- ./runtime/ecomponent:/opt/moqui/runtime/ecomponent
- ./runtime/log:/opt/moqui/runtime/log
- ./runtime/txlog:/opt/moqui/runtime/txlog
- ./runtime/sessions:/opt/moqui/runtime/sessions
- ./runtime/db:/opt/moqui/runtime/db
- ./runtime/opensearch:/opt/moqui/runtime/opensearch
environment:
- "JAVA_TOOL_OPTIONS=-Xms1024m -Xmx1024m"
- instance_purpose=production
- entity_ds_db_conf=mysql8
- entity_ds_host=moqui-database
- entity_ds_port=3306
- entity_ds_database=moqui
# NOTE: using root user because for TX recovery MySQL requires the 'XA_RECOVER_ADMIN' and in version 8 that must be granted explicitly
- entity_ds_user=root
- entity_ds_password=moquiroot
- entity_ds_crypt_pass='DEFAULT_CHANGE_ME!!!'
# configuration for ElasticFacade.ElasticClient, make sure the old moqui-elasticsearch component is NOT included in the Moqui build
- elasticsearch_url=https://moqui-search:9200
# prefix for index names, use something distinct and not 'moqui_' or 'mantle_' which match the beginning of OOTB index names
- elasticsearch_index_prefix=default_
- elasticsearch_user=admin
- elasticsearch_password=MoquiElasticChangeMe@2026
      # settings for the OpenSearch Dashboards (Kibana) proxy
- kibana_host=opensearch-dashboards
# CHANGE ME - note that VIRTUAL_HOST is for nginx-proxy so it picks up this container as one it should reverse proxy
      # this can be a comma-separated list of hosts like 'example.com,www.example.com'
- VIRTUAL_HOST=moqui1.local
# moqui will accept traffic from other hosts but these are the values used for URL writing when specified:
- webapp_http_host=moqui1.local
- webapp_http_port=80
- webapp_https_port=443
- webapp_https_enabled=true
# nginx-proxy populates X-Real-IP with remote_addr by default, better option for outer proxy than X-Forwarded-For which defaults to proxy_add_x_forwarded_for
- webapp_client_ip_header=X-Real-IP
- default_locale=en_US
- default_time_zone=UTC
# hazelcast multicast setup
- hazelcast_multicast_enabled=true
- hazelcast_multicast_group=224.2.2.3
- hazelcast_multicast_port=54327
- hazelcast_group_name=test
- hazelcast_group_password=test-pass
moqui-server2:
image: moqui
container_name: moqui-server2
command: conf=conf/MoquiProductionConf.xml port=80
restart: always
links:
- moqui-database
- moqui-search
volumes:
- ./runtime/conf:/opt/moqui/runtime/conf
- ./runtime/lib:/opt/moqui/runtime/lib
- ./runtime/classes:/opt/moqui/runtime/classes
- ./runtime/ecomponent:/opt/moqui/runtime/ecomponent
- ./runtime/log:/opt/moqui/runtime/log
- ./runtime/txlog:/opt/moqui/runtime/txlog
- ./runtime/sessions:/opt/moqui/runtime/sessions
# this one isn't needed when not using H2/etc: - ./runtime/db:/opt/moqui/runtime/db
environment:
- "JAVA_TOOL_OPTIONS=-Xms1024m -Xmx1024m"
- instance_purpose=production
- entity_ds_db_conf=mysql8
- entity_ds_host=moqui-database
- entity_ds_port=3306
- entity_ds_database=moqui
# NOTE: using root user because for TX recovery MySQL requires the 'XA_RECOVER_ADMIN' and in version 8 that must be granted explicitly
- entity_ds_user=root
- entity_ds_password=moquiroot
- entity_ds_crypt_pass='DEFAULT_CHANGE_ME!!!'
# configuration for ElasticFacade.ElasticClient, make sure the old moqui-elasticsearch component is NOT included in the Moqui build
- elasticsearch_url=https://moqui-search:9200
# prefix for index names, use something distinct and not 'moqui_' or 'mantle_' which match the beginning of OOTB index names
- elasticsearch_index_prefix=default_
- elasticsearch_user=admin
- elasticsearch_password=MoquiElasticChangeMe@2026
      # settings for the OpenSearch Dashboards (Kibana) proxy
- kibana_host=opensearch-dashboards
# CHANGE ME - note that VIRTUAL_HOST is for nginx-proxy so it picks up this container as one it should reverse proxy
      # this can be a comma-separated list of hosts like 'example.com,www.example.com'
- VIRTUAL_HOST=moqui2.local
# moqui will accept traffic from other hosts but these are the values used for URL writing when specified:
- webapp_http_host=moqui2.local
- webapp_http_port=80
- webapp_https_port=443
- webapp_https_enabled=true
# nginx-proxy populates X-Real-IP with remote_addr by default, better option for outer proxy than X-Forwarded-For which defaults to proxy_add_x_forwarded_for
- webapp_client_ip_header=X-Real-IP
- default_locale=en_US
- default_time_zone=UTC
# hazelcast multicast setup
- hazelcast_multicast_enabled=true
- hazelcast_multicast_group=224.2.2.3
- hazelcast_multicast_port=54327
- hazelcast_group_name=test
- hazelcast_group_password=test-pass
moqui-database:
image: mysql:9.5
container_name: moqui-database
restart: always
ports:
# change this as needed to bind to any address or even comment to not expose port outside containers
- 127.0.0.1:3306:3306
volumes:
# edit these as needed to map configuration and data storage
- ./db/mysql/data:/var/lib/mysql
# - /my/mysql/conf.d:/etc/mysql/conf.d
environment:
- MYSQL_ROOT_PASSWORD=moquiroot
- MYSQL_DATABASE=moqui
- MYSQL_USER=moqui
- MYSQL_PASSWORD=moqui
moqui-search:
image: opensearchproject/opensearch:3.4.0
container_name: moqui-search
restart: always
ports:
# change this as needed to bind to any address or even comment to not expose port outside containers
- 127.0.0.1:9200:9200
- 127.0.0.1:9300:9300
volumes:
# edit these as needed to map configuration and data storage
- ./opensearch/data:/usr/share/opensearch/data
# - ./opensearch/config/opensearch.yml:/usr/share/opensearch/config/opensearch.yml
# - ./opensearch/logs:/usr/share/opensearch/logs
environment:
- "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
- OPENSEARCH_INITIAL_ADMIN_PASSWORD=MoquiElasticChangeMe@2026
- discovery.type=single-node
- network.host=_site_
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
opensearch-dashboards:
image: opensearchproject/opensearch-dashboards:3.4.0
container_name: opensearch-dashboards
links:
- moqui-search
ports:
- 5601:5601
environment:
OPENSEARCH_HOSTS: '["https://moqui-search:9200"]'
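The root-user note in the moqui-server environment above stems from MySQL 8+ requiring the XA_RECOVER_ADMIN privilege for XA transaction recovery. To avoid connecting as root, that privilege could instead be granted to the application user; a sketch, to be run as root against the moqui-database container:

```sql
-- Hypothetical grant so the 'moqui' user can perform XA recovery (MySQL 8+)
GRANT XA_RECOVER_ADMIN ON *.* TO 'moqui'@'%';
FLUSH PRIVILEGES;
```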
================================================
FILE: docker/moqui-mysql-compose.yml
================================================
# A Docker Compose application with Moqui, MySQL, OpenSearch, OpenSearch Dashboards, and virtual hosting through
# nginx-proxy supporting multiple moqui instances on different hostnames.
# Run with something like this for detached mode:
# $ docker compose -f moqui-mysql-compose.yml -p moqui up -d
# Or to copy runtime directories for mounted volumes, set default settings, etc use something like this:
# $ ./compose-up.sh moqui-mysql-compose.yml
# This sets the project/app name to 'moqui' and the network will be 'moqui_default', to be used by external moqui containers
# Test locally by adding the virtual host to /etc/hosts or with something like:
# $ curl -H "Host: moqui.local" localhost/Login
# To run an additional instance of moqui run something like this (but with
# many more arguments for volume mapping, db setup, etc):
# $ docker run -e VIRTUAL_HOST=moqui2.local --name moqui2_local --network moqui_default moqui
# To import data from the docker host using port 3306 mapped for 127.0.0.1 only use something like this:
# $ mysql -p -u root -h 127.0.0.1 moqui < mysql-export.sql
services:
nginx-proxy:
# For documentation on SSL and other settings see:
# https://github.com/nginxproxy/nginx-proxy
image: nginxproxy/nginx-proxy
container_name: nginx-proxy
restart: always
ports:
- 80:80
- 443:443
labels:
com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- /etc/localtime:/etc/localtime:ro
# note: .crt, .key, and .dhparam.pem files start with the domain name in VIRTUAL_HOST (ie 'moqui.local.*') or use CERT_NAME env var
- ./certs:/etc/nginx/certs
- ./nginx/conf.d:/etc/nginx/conf.d
- ./nginx/vhost.d:/etc/nginx/vhost.d
- ./nginx/html:/usr/share/nginx/html
environment:
# change this for the default host to use when accessing directly by IP, etc
- DEFAULT_HOST=moqui.local
# use SSL_POLICY to disable TLSv1.0, etc in nginx-proxy
- SSL_POLICY=AWS-TLS-1-1-2017-01
moqui-server:
image: moqui
container_name: moqui-server
command: conf=conf/MoquiProductionConf.xml port=80
restart: always
links:
- moqui-database
- moqui-search
volumes:
- ./runtime/conf:/opt/moqui/runtime/conf
- ./runtime/lib:/opt/moqui/runtime/lib
- ./runtime/classes:/opt/moqui/runtime/classes
- ./runtime/ecomponent:/opt/moqui/runtime/ecomponent
- ./runtime/log:/opt/moqui/runtime/log
- ./runtime/txlog:/opt/moqui/runtime/txlog
- ./runtime/sessions:/opt/moqui/runtime/sessions
- ./runtime/db:/opt/moqui/runtime/db
- ./runtime/opensearch:/opt/moqui/runtime/opensearch
environment:
- "JAVA_TOOL_OPTIONS=-Xms1024m -Xmx1024m"
- instance_purpose=production
- entity_ds_db_conf=mysql8
- entity_ds_host=moqui-database
- entity_ds_port=3306
- entity_ds_database=moqui
# NOTE: using root user because for TX recovery MySQL requires the 'XA_RECOVER_ADMIN' and in version 8 that must be granted explicitly
- entity_ds_user=root
- entity_ds_password=moquiroot
- entity_ds_crypt_pass='DEFAULT_CHANGE_ME!!!'
# configuration for ElasticFacade.ElasticClient, make sure the old moqui-elasticsearch component is NOT included in the Moqui build
- elasticsearch_url=https://moqui-search:9200
# prefix for index names, use something distinct and not 'moqui_' or 'mantle_' which match the beginning of OOTB index names
- elasticsearch_index_prefix=default_
- elasticsearch_user=admin
- elasticsearch_password=MoquiElasticChangeMe@2026
# CHANGE ME - note that VIRTUAL_HOST is for nginx-proxy so it picks up this container as one it should reverse proxy
      # this can be a comma-separated list of hosts like 'example.com,www.example.com'
- VIRTUAL_HOST=moqui.local
# moqui will accept traffic from other hosts but these are the values used for URL writing when specified:
# - webapp_http_host=moqui.local
- webapp_http_port=80
- webapp_https_port=443
- webapp_https_enabled=true
# nginx-proxy populates X-Real-IP with remote_addr by default, better option for outer proxy than X-Forwarded-For which defaults to proxy_add_x_forwarded_for
- webapp_client_ip_header=X-Real-IP
- default_locale=en_US
- default_time_zone=UTC
moqui-database:
image: mysql:9.5
container_name: moqui-database
restart: always
ports:
# change this as needed to bind to any address or even comment to not expose port outside containers
- 127.0.0.1:3306:3306
volumes:
# edit these as needed to map configuration and data storage
- ./db/mysql/data:/var/lib/mysql
# - /my/mysql/conf.d:/etc/mysql/conf.d
environment:
- MYSQL_ROOT_PASSWORD=moquiroot
- MYSQL_DATABASE=moqui
- MYSQL_USER=moqui
- MYSQL_PASSWORD=moqui
moqui-search:
image: opensearchproject/opensearch:3.4.0
container_name: moqui-search
restart: always
ports:
# change this as needed to bind to any address or even comment to not expose port outside containers
- 127.0.0.1:9200:9200
- 127.0.0.1:9300:9300
volumes:
# edit these as needed to map configuration and data storage
- ./opensearch/data:/usr/share/opensearch/data
# - ./opensearch/config/opensearch.yml:/usr/share/opensearch/config/opensearch.yml
# - ./opensearch/logs:/usr/share/opensearch/logs
environment:
- "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
- OPENSEARCH_INITIAL_ADMIN_PASSWORD=MoquiElasticChangeMe@2026
- discovery.type=single-node
- network.host=_site_
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
opensearch-dashboards:
image: opensearchproject/opensearch-dashboards:3.4.0
container_name: opensearch-dashboards
links:
- moqui-search
ports:
- 5601:5601
environment:
OPENSEARCH_HOSTS: '["https://moqui-search:9200"]'
================================================
FILE: docker/moqui-postgres-compose.yml
================================================
# A Docker Compose application with Moqui, Postgres, OpenSearch, OpenSearch Dashboards, and virtual hosting through
# nginx-proxy supporting multiple moqui instances on different hostnames.
# Run with something like this for detached mode:
# $ docker compose -f moqui-postgres-compose.yml -p moqui up -d
# Or to copy runtime directories for mounted volumes, set default settings, etc use something like this:
# $ ./compose-up.sh moqui-postgres-compose.yml
# This sets the project/app name to 'moqui' and the network will be 'moqui_default', to be used by external moqui containers
# Test locally by adding the virtual host to /etc/hosts or with something like:
# $ curl -H "Host: moqui.local" localhost/Login
# To run an additional instance of moqui run something like this (but with
# many more arguments for volume mapping, db setup, etc):
# $ docker run -e VIRTUAL_HOST=moqui2.local --name moqui2_local --network moqui_default moqui
# To import data from the docker host using port 5432 mapped for 127.0.0.1 only use something like this:
# $ psql -h 127.0.0.1 -p 5432 -U moqui -W moqui < pg-dump.sql
services:
nginx-proxy:
# For documentation on SSL and other settings see:
# https://github.com/nginx-proxy/nginx-proxy
image: nginxproxy/nginx-proxy
container_name: nginx-proxy
restart: always
ports:
- 80:80
- 443:443
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
# note: .crt, .key, and .dhparam.pem files start with the domain name in VIRTUAL_HOST (ie 'moqui.local.*') or use CERT_NAME env var
- ./certs:/etc/nginx/certs
- ./nginx/my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf
environment:
# change this for the default host to use when accessing directly by IP, etc
- DEFAULT_HOST=moqui.local
# use SSL_POLICY to disable TLSv1.0, etc in nginx-proxy
- SSL_POLICY=AWS-TLS-1-1-2017-01
moqui-server:
image: moqui
container_name: moqui-server
command: conf=conf/MoquiProductionConf.xml port=80
restart: always
links:
- moqui-database
- moqui-search
volumes:
- ./runtime/conf:/opt/moqui/runtime/conf
- ./runtime/lib:/opt/moqui/runtime/lib
- ./runtime/classes:/opt/moqui/runtime/classes
- ./runtime/ecomponent:/opt/moqui/runtime/ecomponent
- ./runtime/log:/opt/moqui/runtime/log
- ./runtime/txlog:/opt/moqui/runtime/txlog
- ./runtime/sessions:/opt/moqui/runtime/sessions
- ./runtime/db:/opt/moqui/runtime/db
- ./runtime/opensearch:/opt/moqui/runtime/opensearch
environment:
- "JAVA_TOOL_OPTIONS=-Xms1024m -Xmx1024m"
- instance_purpose=production
- entity_ds_db_conf=postgres
- entity_ds_host=moqui-database
- entity_ds_port=5432
- entity_ds_database=moqui
- entity_ds_schema=public
- entity_ds_user=moqui
- entity_ds_password=moqui
- entity_ds_crypt_pass='DEFAULT_CHANGE_ME!!!'
# configuration for ElasticFacade.ElasticClient, make sure the old moqui-elasticsearch component is NOT included in the Moqui build
- elasticsearch_url=https://moqui-search:9200
# prefix for index names, use something distinct and not 'moqui_' or 'mantle_' which match the beginning of OOTB index names
- elasticsearch_index_prefix=default_
- elasticsearch_user=admin
- elasticsearch_password=MoquiElasticChangeMe@2026
# CHANGE ME - note that VIRTUAL_HOST is for nginx-proxy so it picks up this container as one it should reverse proxy
      # this can be a comma-separated list of hosts like 'example.com,www.example.com'
- VIRTUAL_HOST=moqui.local
# moqui will accept traffic from other hosts but these are the values used for URL writing when specified:
# - webapp_http_host=moqui.local
- webapp_http_port=80
- webapp_https_port=443
- webapp_https_enabled=true
# nginx-proxy populates X-Real-IP with remote_addr by default, better option for outer proxy than X-Forwarded-For which defaults to proxy_add_x_forwarded_for
- webapp_client_ip_header=X-Real-IP
- default_locale=en_US
- default_time_zone=UTC
moqui-database:
image: postgres:18.1
container_name: moqui-database
restart: always
ports:
# change this as needed to bind to any address or even comment to not expose port outside containers
- 127.0.0.1:5432:5432
volumes:
# edit these as needed to map configuration and data storage
- ./db/postgres:/var/lib/postgresql
environment:
- POSTGRES_DB=moqui
- POSTGRES_DB_SCHEMA=public
- POSTGRES_USER=moqui
- POSTGRES_PASSWORD=moqui
# PGDATA, POSTGRES_INITDB_ARGS
moqui-search:
image: opensearchproject/opensearch:3.4.0
container_name: moqui-search
restart: always
ports:
# change this as needed to bind to any address or even comment to not expose port outside containers
- 127.0.0.1:9200:9200
- 127.0.0.1:9300:9300
volumes:
# edit these as needed to map configuration and data storage
- ./opensearch/data:/usr/share/opensearch/data
# - ./opensearch/config/opensearch.yml:/usr/share/opensearch/config/opensearch.yml
# - ./opensearch/logs:/usr/share/opensearch/logs
environment:
- "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
- OPENSEARCH_INITIAL_ADMIN_PASSWORD=MoquiElasticChangeMe@2026
- discovery.type=single-node
- network.host=_site_
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
opensearch-dashboards:
image: opensearchproject/opensearch-dashboards:3.4.0
container_name: opensearch-dashboards
links:
- moqui-search
ports:
- 5601:5601
environment:
OPENSEARCH_HOSTS: '["https://moqui-search:9200"]'
================================================
FILE: docker/moqui-run.sh
================================================
#! /bin/bash
echo "Usage: moqui-run.sh [<moqui home>] [<name:tag>] [<runtime image>]"
echo
MOQUI_HOME="${1:-..}"
NAME_TAG="${2:-moqui}"
RUNTIME_IMAGE="${3:-eclipse-temurin:21-jdk}"
search_name=opensearch
if [ -d "$MOQUI_HOME/runtime/opensearch/bin" ]; then search_name=opensearch;
elif [ -d "$MOQUI_HOME/runtime/elasticsearch/bin" ]; then search_name=elasticsearch;
fi
if [ -f simple/docker-build.sh ]; then
cd simple
./docker-build.sh ../.. $NAME_TAG $RUNTIME_IMAGE
# shellcheck disable=SC2103
cd ..
fi
if [ ! -e runtime ]; then mkdir runtime; fi
if [ ! -e runtime/conf ]; then cp -R "$MOQUI_HOME/runtime/conf" runtime/; fi
if [ ! -e runtime/lib ]; then cp -R "$MOQUI_HOME/runtime/lib" runtime/; fi
if [ ! -e runtime/classes ]; then cp -R "$MOQUI_HOME/runtime/classes" runtime/; fi
if [ ! -e runtime/log ]; then cp -R "$MOQUI_HOME/runtime/log" runtime/; fi
if [ ! -e runtime/txlog ]; then cp -R "$MOQUI_HOME/runtime/txlog" runtime/; fi
if [ ! -e runtime/db ]; then cp -R "$MOQUI_HOME/runtime/db" runtime/; fi
if [ ! -e "runtime/$search_name" ]; then cp -R "$MOQUI_HOME/runtime/$search_name" runtime/; fi
docker run --rm -p 80:80 -v "$PWD/runtime/conf:/opt/moqui/runtime/conf" -v "$PWD/runtime/lib:/opt/moqui/runtime/lib" \
  -v "$PWD/runtime/classes:/opt/moqui/runtime/classes" -v "$PWD/runtime/log:/opt/moqui/runtime/log" \
  -v "$PWD/runtime/txlog:/opt/moqui/runtime/txlog" -v "$PWD/runtime/db:/opt/moqui/runtime/db" \
  -v "$PWD/runtime/$search_name:/opt/moqui/runtime/$search_name" \
  --name moqui-server "$NAME_TAG"
# docker run -d -p 80:80 $NAME_TAG
# docker run --rm -p 80:80 $NAME_TAG
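The search-directory detection at the top of the script can be exercised standalone; a sketch using a temporary directory in place of a real Moqui home:

```shell
# Recreate the runtime layout moqui-run.sh inspects (temp dir is illustrative only)
MOQUI_HOME=$(mktemp -d)
mkdir -p "$MOQUI_HOME/runtime/opensearch/bin"
# Same detection logic as the script: prefer opensearch, fall back to elasticsearch
search_name=opensearch
if [ -d "$MOQUI_HOME/runtime/opensearch/bin" ]; then search_name=opensearch;
elif [ -d "$MOQUI_HOME/runtime/elasticsearch/bin" ]; then search_name=elasticsearch;
fi
echo "$search_name"
```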
================================================
FILE: docker/mysql-compose.yml
================================================
# A Docker Compose application with Moqui, MySQL, OpenSearch, OpenSearch Dashboards, and virtual hosting through
# nginx-proxy supporting multiple moqui instances on different hostnames.
# Run with something like this for detached mode:
# $ docker compose -f mysql-compose.yml -p moqui up -d
# Or to copy runtime directories for mounted volumes, set default settings, etc use something like this:
# $ ./compose-up.sh mysql-compose.yml
# This sets the project/app name to 'moqui' and the network will be 'moqui_default', to be used by external moqui containers
# Test locally by adding the virtual host to /etc/hosts or with something like:
# $ curl -H "Host: moqui.local" localhost/Login
# To run an additional instance of moqui run something like this (but with
# many more arguments for volume mapping, db setup, etc):
# $ docker run -e VIRTUAL_HOST=moqui2.local --name moqui2_local --network moqui_default moqui
# To import data from the docker host using port 3306 mapped for 127.0.0.1 only use something like this:
# $ mysql -p -u root -h 127.0.0.1 moqui < mysql-export.sql
services:
nginx-proxy:
# For documentation on SSL and other settings see:
# https://github.com/nginx-proxy/nginx-proxy
image: nginxproxy/nginx-proxy
container_name: nginx-proxy
restart: always
ports:
- 80:80
- 443:443
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
# note: .crt, .key, and .dhparam.pem files start with the domain name in VIRTUAL_HOST (ie 'moqui.local.*') or use CERT_NAME env var
- ./certs:/etc/nginx/certs
- ./nginx/my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf
environment:
# use SSL_POLICY to disable TLSv1.0, etc in nginx-proxy
- SSL_POLICY=AWS-TLS-1-1-2017-01
moqui-database:
image: mysql:9.5
container_name: moqui-database
restart: always
# expose the port for use outside other containers, needed for external management (like Moqui Instance Management)
ports:
- 127.0.0.1:3306:3306
# edit these as needed to map configuration and data storage
volumes:
- ./db/mysql/data:/var/lib/mysql
# - /my/mysql/conf.d:/etc/mysql/conf.d
environment:
- MYSQL_ROOT_PASSWORD=moquiroot
- MYSQL_DATABASE=moqui
- MYSQL_USER=moqui
- MYSQL_PASSWORD=moqui
moqui-search:
image: opensearchproject/opensearch:3.4.0
container_name: moqui-search
restart: always
ports:
# change this as needed to bind to any address or even comment to not expose port outside containers
- 127.0.0.1:9200:9200
- 127.0.0.1:9300:9300
volumes:
# edit these as needed to map configuration and data storage
- ./opensearch/data:/usr/share/opensearch/data
# - ./opensearch/config/opensearch.yml:/usr/share/opensearch/config/opensearch.yml
# - ./opensearch/logs:/usr/share/opensearch/logs
environment:
- "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
- OPENSEARCH_INITIAL_ADMIN_PASSWORD=MoquiElasticChangeMe@2026
- discovery.type=single-node
- network.host=_site_
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
opensearch-dashboards:
image: opensearchproject/opensearch-dashboards:3.4.0
container_name: opensearch-dashboards
links:
- moqui-search
ports:
- 5601:5601
environment:
OPENSEARCH_HOSTS: '["https://moqui-search:9200"]'
================================================
FILE: docker/nginx/my_proxy.conf
================================================
client_max_body_size 20M;
proxy_connect_timeout 3600s;
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
# NOTE: this always sets X-Forwarded-For to the remote_addr instead of appending it.
# The default behavior in nginx-proxy is to use $proxy_add_x_forwarded_for which
# appends the current client IP to any values in an existing X-Forwarded-For header.
# If nginx-proxy is used and there is another reverse proxy in front of it
# (such as CloudFlare, AWS CloudFront, etc) this needs to be changed back to
# $proxy_add_x_forwarded_for or it will always pick up the other reverse
# proxy's IP address instead of the client IP address!
# In other words, only the first proxy a client hits should set X-Forwarded-For
# this way; all others should append.
proxy_set_header X-Forwarded-For $remote_addr;
underscores_in_headers on;
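The set-vs-append distinction in the comments above can be illustrated with a small shell sketch (the function name `append_xff` is illustrative, not part of nginx) mimicking what nginx's `$proxy_add_x_forwarded_for` does:

```shell
# append_xff mimics nginx's $proxy_add_x_forwarded_for: append the client
# address to an existing X-Forwarded-For value, or start a new one
append_xff() {
  existing="$1"; remote="$2"
  if [ -n "$existing" ]; then
    printf '%s, %s\n' "$existing" "$remote"
  else
    printf '%s\n' "$remote"
  fi
}
append_xff "" "203.0.113.7"               # first proxy sets the header
append_xff "203.0.113.7" "198.51.100.10"  # a later proxy appends to it
```

With `$remote_addr` instead, the second proxy would overwrite the header and the original client IP would be lost, which is exactly why the comment says only the first proxy should set rather than append.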
================================================
FILE: docker/opensearch/data/nodes/README
================================================
This directory must exist for the volume mapping; otherwise it is created as root inside the container and OpenSearch cannot access it.
================================================
FILE: docker/postgres-compose.yml
================================================
# A Docker Compose application with Moqui, Postgres, OpenSearch, OpenSearch Dashboards, and virtual hosting through
# nginx-proxy supporting multiple moqui instances on different hostnames.
# Run with something like this for detached mode:
# $ docker compose -f postgres-compose.yml -p moqui up -d
# Or to copy runtime directories for mounted volumes, set default settings, etc use something like this:
# $ ./compose-run.sh postgres-compose.yml
# This sets the project/app name to 'moqui' and the network will be 'moqui_default', to be used by external moqui containers
# Test locally by adding the virtual host to /etc/hosts or with something like:
# $ curl -H "Host: moqui.local" localhost/Login
# To run an additional instance of moqui run something like this (but with
# many more arguments for volume mapping, db setup, etc):
# $ docker run -e VIRTUAL_HOST=moqui2.local --name moqui2_local --network moqui_default moqui
# To import data from the docker host using port 5432 mapped for 127.0.0.1 only use something like this:
# $ psql -h 127.0.0.1 -p 5432 -U moqui -W moqui < pg-dump.sql
services:
nginx-proxy:
# For documentation on SSL and other settings see:
# https://github.com/nginx-proxy/nginx-proxy
image: nginxproxy/nginx-proxy
container_name: nginx-proxy
restart: always
ports:
- 80:80
- 443:443
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
# note: .crt, .key, and .dhparam.pem files start with the domain name in VIRTUAL_HOST (ie 'moqui.local.*') or use CERT_NAME env var
- ./certs:/etc/nginx/certs
- ./nginx/my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf
environment:
# use SSL_POLICY to disable TLSv1.0, etc in nginx-proxy
- SSL_POLICY=AWS-TLS-1-1-2017-01
moqui-database:
image: postgres:18.1
container_name: moqui-database
restart: always
ports:
# change this as needed to bind to any address or even comment to not expose port outside containers
- 127.0.0.1:5432:5432
volumes:
# edit these as needed to map configuration and data storage
- ./db/postgres:/var/lib/postgresql
environment:
- POSTGRES_DB=moqui
- POSTGRES_DB_SCHEMA=public
- POSTGRES_USER=moqui
- POSTGRES_PASSWORD=moqui
# PGDATA, POSTGRES_INITDB_ARGS
moqui-search:
image: opensearchproject/opensearch:3.4.0
container_name: moqui-search
restart: always
ports:
# change this as needed to bind to any address or even comment to not expose port outside containers
- 127.0.0.1:9200:9200
- 127.0.0.1:9300:9300
volumes:
# edit these as needed to map configuration and data storage
- ./opensearch/data:/usr/share/opensearch/data
# - ./opensearch/config/opensearch.yml:/usr/share/opensearch/config/opensearch.yml
# - ./opensearch/logs:/usr/share/opensearch/logs
environment:
- "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
- OPENSEARCH_INITIAL_ADMIN_PASSWORD=MoquiElasticChangeMe@2026
- discovery.type=single-node
- network.host=_site_
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
opensearch-dashboards:
image: opensearchproject/opensearch-dashboards:3.4.0
container_name: opensearch-dashboards
links:
- moqui-search
ports:
- 5601:5601
environment:
OPENSEARCH_HOSTS: '["https://moqui-search:9200"]'
================================================
FILE: docker/postgres_backup.sh
================================================
#!/bin/bash
# This is a simple script to do a rotating backup of PostgreSQL (default once per day, retain 30 days)
# For a complete backup solution these backup files would be copied to a remote site, potentially with a different retention pattern
# Database info
user="moqui"
host="localhost"
db_name="moqui"
# Other options
# use an absolute path (from root) for backup_path or there will be issues when running via crontab
backup_path="/opt/pgbackups"
date=$(date +"%Y%m%d")
backup_file=$backup_path/$db_name-$date.sql.gz
# for password for cron job one option is to use a .pgpass file in home directory, see: https://www.postgresql.org/docs/current/libpq-pgpass.html
# each line in .pgpass should be like: hostname:port:database:username:password
# for example: localhost:5432:moqui:moqui:CHANGEME
# note that ~/.pgpass must have u=rw (0600) permission or less (or psql, pg_dump, etc will refuse to use it)
# Remove file for same day if exists
if [ -e $backup_file ]; then rm $backup_file; fi
# Set default file permissions
umask 177
# Dump database into SQL file
pg_dump -h $host -p 5432 -U $user -w $db_name | gzip > $backup_file
# Remove all backups except those from the last 7 days, the most recent per month for 6 months, and the most recent of each year
echo "removing:"
ls "$backup_path"/moqui-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].sql.gz | awk -v now_epoch="$(date +%s)" '
{
date_string = substr($0, index($0,"-")+1, 8)
command = "date -d \"" date_string "\" +%s"
command | getline file_epoch
close(command)
files[NR] = $0
file_epoch_by_name[$0] = file_epoch
year_month = substr(date_string,1,6)
year_only = substr(date_string,1,4)
age_in_months = int((now_epoch - file_epoch) / 2592000)
if (age_in_months < 6 &&
(!(year_month in newest_month_epoch) ||
file_epoch > newest_month_epoch[year_month])) {
newest_month_epoch[year_month] = file_epoch
newest_month_file[year_month] = $0
}
if (!(year_only in newest_year_epoch) ||
file_epoch > newest_year_epoch[year_only]) {
newest_year_epoch[year_only] = file_epoch
newest_year_file[year_only] = $0
}
}
END {
for (i in files) {
file_name = files[i]
file_epoch = file_epoch_by_name[file_name]
date_string = substr(file_name, index(file_name,"-")+1, 8)
year_month = substr(date_string,1,6)
year_only = substr(date_string,1,4)
if (now_epoch - file_epoch <= 7*86400) continue
if (file_name == newest_month_file[year_month]) continue
if (file_name == newest_year_file[year_only]) continue
printf "%s\0", file_name
}
}' |
xargs -0 --no-run-if-empty rm -v
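# The awk retention pass above keys everything off the YYYYMMDD stamp embedded in
# each filename. A minimal standalone sketch of that date extraction and age
# arithmetic (with a fixed "now" so the numbers are reproducible; day-based here,
# where the awk script uses a 30-day approximation for months):
#
#   file_name="moqui-20240105.sql.gz"
#   date_string=${file_name#moqui-}
#   date_string=${date_string%.sql.gz}
#   file_epoch=$(date -d "$date_string" +%s)   # GNU date parses YYYYMMDD
#   now_epoch=$(date -d "20240115" +%s)
#   age_days=$(( (now_epoch - file_epoch) / 86400 ))
#   echo "$age_days days old"                  # -> 10 days old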
# update cloned test instance database using backup file from production/main database
# docker stop moqui-test
# dropdb -h localhost -p 5432 -U moqui -w moqui-test
# createdb -h localhost -p 5432 -U moqui -w moqui-test
# gunzip < $backup_file | psql -h localhost -p 5432 -U moqui -w moqui-test
# docker start moqui-test
# example for crontab (safe edit using: 'crontab -e'), each day at midnight: 00 00 * * * /opt/moqui/postgres_backup.sh
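# The .pgpass note above requires u=rw (0600) permission or psql/pg_dump will
# refuse the file. A minimal sketch of creating such an entry with the required
# permission (using a temp file for illustration instead of ~/.pgpass):
#
#   pgpass=$(mktemp)
#   printf '%s\n' "localhost:5432:moqui:moqui:CHANGEME" > "$pgpass"
#   chmod 0600 "$pgpass"
#   perms=$(stat -c %a "$pgpass")
#   echo "$perms"        # -> 600
#   rm -f "$pgpass"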
================================================
FILE: docker/simple/Dockerfile
================================================
# Builds a minimal docker image with openjdk and moqui with various volumes for configuration and persisted data outside the container
# NOTE: add components, build and if needed load data before building a docker image with this
ARG RUNTIME_IMAGE=eclipse-temurin:21-jdk
FROM ${RUNTIME_IMAGE}
MAINTAINER Moqui Framework
WORKDIR /opt/moqui
# for running from the war directly; the preferred approach unzips the war in advance (see docker-build.sh, which does this)
#COPY moqui.war .
# copy files from unzipped moqui.war file
COPY WEB-INF WEB-INF
COPY META-INF META-INF
COPY *.class ./
COPY execlib execlib
# always want the runtime directory
COPY runtime runtime
# create user for search and chown corresponding files
ARG search_name=opensearch
RUN if [ -d runtime/opensearch/bin ]; then echo "Installing OpenSearch User"; \
search_name=opensearch; \
groupadd -g 1000 opensearch 2>/dev/null || echo "group 1000 already exists" && \
useradd -u 1000 -g 1000 -G 0 -d /opt/moqui/runtime/opensearch opensearch 2>/dev/null || echo "user 1000 already exists" && \
chmod 0775 /opt/moqui/runtime/opensearch && \
chown -R 1000:0 /opt/moqui/runtime/opensearch; \
elif [ -d runtime/elasticsearch/bin ]; then echo "Installing ElasticSearch User"; \
search_name=elasticsearch; \
groupadd -r elasticsearch && \
useradd --no-log-init -r -g elasticsearch -d /opt/moqui/runtime/elasticsearch elasticsearch && \
chown -R elasticsearch:elasticsearch runtime/elasticsearch; \
fi
# exposed as volumes for configuration purposes
VOLUME ["/opt/moqui/runtime/conf", "/opt/moqui/runtime/lib", "/opt/moqui/runtime/classes", "/opt/moqui/runtime/ecomponent"]
# exposed as volumes to persist data outside the container, recommended
VOLUME ["/opt/moqui/runtime/log", "/opt/moqui/runtime/txlog", "/opt/moqui/runtime/sessions", "/opt/moqui/runtime/db", "/opt/moqui/runtime/$search_name"]
# Main Servlet Container Port
EXPOSE 80
# Search HTTP Port
EXPOSE 9200
# Search Cluster (TCP Transport) Port
EXPOSE 9300
# Hazelcast Cluster Port
EXPOSE 5701
# this is to run from the war file directly, preferred approach unzips war file in advance
# ENTRYPOINT ["java", "-jar", "moqui.war"]
ENTRYPOINT ["java", "-cp", ".", "MoquiStart"]
HEALTHCHECK --interval=30s --timeout=600ms --start-period=120s CMD curl -f -H "X-Forwarded-Proto: https" -H "X-Forwarded-Ssl: on" http://localhost/status || exit 1
# specify this as a default parameter if none are specified with docker exec/run, ie run production by default
CMD ["conf=conf/MoquiProductionConf.xml", "port=80"]
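# Docker concatenates ENTRYPOINT and CMD into one argv, so the CMD above is just a
# default argument list that docker run parameters replace. A sketch of the default
# command line this image runs (string concatenation stands in for argv joining):
#
#   entrypoint="java -cp . MoquiStart"
#   cmd="conf=conf/MoquiProductionConf.xml port=80"
#   full_command="$entrypoint $cmd"
#   echo "$full_command"
#   # -> java -cp . MoquiStart conf=conf/MoquiProductionConf.xml port=80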
================================================
FILE: docker/simple/docker-build.sh
================================================
#! /bin/bash
echo "Usage: docker-build.sh [<moqui home>] [<name:tag>] [<runtime image>]"
MOQUI_HOME="${1:-../..}"
NAME_TAG="${2:-moqui}"
RUNTIME_IMAGE="${3:-eclipse-temurin:21-jdk}"
if [ ! "$1" ]; then
echo "Usage: docker-build.sh [<moqui home>] [<name:tag>] [<runtime image>]"
else
echo "Running: docker-build.sh $MOQUI_HOME $NAME_TAG $RUNTIME_IMAGE"
fi
echo
if [ -f $MOQUI_HOME/moqui-plus-runtime.war ]
then
echo "Building docker image from moqui-plus-runtime.war"
echo
unzip -q $MOQUI_HOME/moqui-plus-runtime.war
elif [ -f $MOQUI_HOME/moqui.war ]
then
echo "Building docker image from moqui.war and runtime directory"
echo "NOTE: this includes everything in the runtime directory; it is better to run 'gradle addRuntime' first and use the moqui-plus-runtime.war file for the docker image"
echo
unzip -q $MOQUI_HOME/moqui.war
cp -R $MOQUI_HOME/runtime .
else
echo "Could not find $MOQUI_HOME/moqui-plus-runtime.war or $MOQUI_HOME/moqui.war"
echo "Build moqui first, for example 'gradle build addRuntime' or 'gradle load addRuntime'"
echo
exit 1
fi
docker build -t $NAME_TAG --build-arg RUNTIME_IMAGE=$RUNTIME_IMAGE .
if [ -d META-INF ]; then rm -Rf META-INF; fi
if [ -d WEB-INF ]; then rm -Rf WEB-INF; fi
if [ -d execlib ]; then rm -Rf execlib; fi
rm *.class
if [ -d runtime ]; then rm -Rf runtime; fi
if [ -f Procfile ]; then rm Procfile; fi
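# The defaults at the top of this script rely on the shell's ${N:-default}
# expansion; a standalone sketch of how the three parameters fall back when the
# script is called with no arguments:
#
#   set --   # simulate no arguments
#   MOQUI_HOME="${1:-../..}"
#   NAME_TAG="${2:-moqui}"
#   RUNTIME_IMAGE="${3:-eclipse-temurin:21-jdk}"
#   echo "$MOQUI_HOME $NAME_TAG $RUNTIME_IMAGE"
#   # -> ../.. moqui eclipse-temurin:21-jdk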
================================================
FILE: framework/build.gradle
================================================
/*
* This software is in the public domain under CC0 1.0 Universal plus a
* Grant of Patent License.
*
* To the extent possible under law, the author(s) have dedicated all
* copyright and related and neighboring rights to this software to the
* public domain worldwide. This software is distributed without any
* warranty.
*
* You should have received a copy of the CC0 Public Domain Dedication
* along with this software (see the LICENSE.md file). If not, see
* <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
plugins {
id 'java-library'
id 'groovy'
id 'war'
}
version = '4.0.0'
repositories {
flatDir name: 'localLib', dirs: projectDir.absolutePath + '/lib'
mavenCentral()
}
java {
sourceCompatibility = 21
targetCompatibility = 21
}
base {
archivesName.set('moqui')
}
sourceSets {
start
execWar
}
groovydoc {
docTitle = "Moqui Framework ${version}"
source = sourceSets.main.allSource
}
//tasks.withType(JavaCompile) { options.compilerArgs << "-Xlint:unchecked" }
//tasks.withType(JavaCompile) { options.compilerArgs << "-Xlint:deprecation" }
//tasks.withType(GroovyCompile) { options.compilerArgs << "-Xlint:unchecked" }
//tasks.withType(GroovyCompile) { options.compilerArgs << "-Xlint:deprecation" }
// Log4J has annotation processors, disable to avoid warning
tasks.withType(JavaCompile) { options.compilerArgs << "-proc:none" }
tasks.withType(GroovyCompile) { options.compilerArgs << "-proc:none" }
// NOTE: for dependency types and 'api' definition see: https://docs.gradle.org/current/userguide/java_library_plugin.html
dependencies {
// Groovy
api 'org.apache.groovy:groovy:5.0.3' // Apache 2.0
api 'org.apache.groovy:groovy-dateutil:5.0.3' // Apache 2.0
api 'org.apache.groovy:groovy-json:5.0.3' // Apache 2.0
api 'org.apache.groovy:groovy-templates:5.0.3' // Apache 2.0
api 'org.apache.groovy:groovy-xml:5.0.3' // Apache 2.0
// Bitronix Transaction Manager, a modernized fork
api 'org.moqui:btm:4.0.1' // Apache 2.0
// Apache Commons
api 'org.apache.commons:commons-csv:1.14.1' // Apache 2.0
api('org.apache.commons:commons-email2-jakarta:2.0.0-M1') {
exclude group: 'com.sun.mail', module: 'jakarta.mail'
exclude group: 'com.sun.activation', module: 'jakarta.activation'
}
api 'org.apache.commons:commons-collections4:4.5.0' // Apache 2.0
api 'org.apache.commons:commons-fileupload2-jakarta-servlet6:2.0.0-M4' // Apache 2.0
api 'commons-codec:commons-codec:1.20.0' // Apache 2.0
api 'commons-io:commons-io:2.21.0' // Apache 2.0
api 'commons-logging:commons-logging:1.3.5' // Apache 2.0
api 'commons-validator:commons-validator:1.10.1' // Apache 2.0
// Cron Utils
api 'com.cronutils:cron-utils:9.2.1' // Apache 2.0
// Flexmark (markdown)
api 'com.vladsch.flexmark:flexmark:0.64.8'
api 'com.vladsch.flexmark:flexmark-ext-tables:0.64.8'
api 'com.vladsch.flexmark:flexmark-ext-toc:0.64.8'
// Freemarker
// Remember to change the version number in FtlTemplateRenderer and MNode class when upgrading
api 'org.freemarker:freemarker:2.3.34' // Apache 2.0
// H2 Database
api 'com.h2database:h2:2.4.240' // MPL 2.0, EPL 1.0
// Java Specifications
api 'jakarta.transaction:jakarta.transaction-api:2.0.1'
api 'javax.cache:cache-api:1.1.1'
api 'javax.jcr:jcr:2.0'
api('jakarta.xml.bind:jakarta.xml.bind-api:4.0.4') { transitive = false } // EPL 2.0
api 'jakarta.activation:jakarta.activation-api:2.1.4' // activation api
api 'org.eclipse.angus:angus-activation:2.0.3' // activation implementation
api 'jakarta.websocket:jakarta.websocket-api:2.2.0'
api 'jakarta.websocket:jakarta.websocket-client-api:2.2.0'
// servlet-api needed during both compile and test
compileOnlyApi 'jakarta.servlet:jakarta.servlet-api:6.1.0'
testImplementation 'jakarta.servlet:jakarta.servlet-api:6.1.0'
// Java TOTP
api 'dev.samstevens.totp:totp:1.7.1' // MIT
// dev.samstevens.totp:totp depends on com.google.zxing:javase which depends on com.beust:jcommander, but an older version with a CVE, so specify latest to fix
api 'com.beust:jcommander:1.82'
// Jackson Databind (JSON, etc)
api 'com.fasterxml.jackson.core:jackson-databind:2.20.1'
// Jetty HTTP Client and Proxy Servlet
api 'org.eclipse.jetty:jetty-client:12.1.5' // Apache 2.0
api 'org.eclipse.jetty.ee11:jetty-ee11-proxy:12.1.5' // Apache 2.0
api 'org.eclipse.jetty:jetty-jndi:12.1.5' // Apache 2.0
// jakarta.mail
api 'jakarta.mail:jakarta.mail-api:2.1.5' // mail api
api 'org.eclipse.angus:angus-mail:2.0.5' // mail implementation
// JSoup (HTML parser, cleaner)
api 'org.jsoup:jsoup:1.21.2' // MIT
// Apache Shiro
api('org.apache.shiro:shiro-core:2.0.6') { transitive = false } // Apache 2.0
api('org.apache.shiro:shiro-web:2.0.6:jakarta') { transitive = false } // Apache 2.0
api('org.apache.shiro:shiro-lang:2.0.6') { transitive = false } // Apache 2.0
api('org.apache.shiro:shiro-cache:2.0.6') { transitive = false } // Apache 2.0
api('org.apache.shiro:shiro-event:2.0.6') { transitive = false } // Apache 2.0
api('org.apache.shiro:shiro-config-core:2.0.6') { transitive = false } // Apache 2.0
api('org.apache.shiro:shiro-config-ogdl:2.0.6') { transitive = false } // Apache 2.0
api('org.apache.shiro:shiro-crypto-core:2.0.6') { transitive = false } // Apache 2.0
api('org.apache.shiro:shiro-crypto-hash:2.0.6') { transitive = false } // Apache 2.0
api('org.apache.shiro:shiro-crypto-cipher:2.0.6') { transitive = false } // Apache 2.0
api('org.owasp.encoder:encoder:1.4.0') // BSD - transitive dependency of shiro-web
// SLF4J, Log4j 2 (note Log4j 2 is used by various libraries, best not to replace it even if mostly possible with SLF4J)
api 'org.slf4j:slf4j-api:2.0.17'
implementation 'org.apache.logging.log4j:log4j-core:2.25.2'
implementation 'org.apache.logging.log4j:log4j-api:2.25.2'
runtimeOnly 'org.apache.logging.log4j:log4j-jcl:2.25.2'
runtimeOnly 'org.apache.logging.log4j:log4j-slf4j2-impl:2.25.2'
// SubEtha SMTP, depends on jakarta.mail-api which is provided
api("com.github.davidmoten:subethasmtp:7.2.0") { transitive = false }
// Snake YAML
api 'org.yaml:snakeyaml:2.5' // Apache 2.0
// Apache Jackrabbit - uncomment here or include elsewhere when Jackrabbit repository configurations are used
// api 'org.apache.jackrabbit:jackrabbit-jcr-rmi:2.12.1' // Apache 2.0
// api 'org.apache.jackrabbit:jackrabbit-jcr2dav:2.12.1' // Apache 2.0
// Apache Commons JCS - Only needed when using JCSCacheToolFactory
// api 'org.apache.commons:commons-jcs-jcache:2.0-beta-1' // Apache 2.0
// Liquibase (for future reference, not used yet)
// api 'org.liquibase:liquibase-core:3.4.2' // Apache 2.0
// ========== test dependencies ==========
// junit-platform-launcher is a dependency from spock-core, included explicitly to get more recent version as needed
testImplementation 'org.junit.platform:junit-platform-launcher:6.0.1'
// junit-platform-suite required for test suites to specify test class order, etc
testImplementation 'org.junit.platform:junit-platform-suite:6.0.1'
// junit-jupiter-api for using JUnit directly, not generally needed for Spock based tests
testImplementation 'org.junit.jupiter:junit-jupiter-api:6.0.1'
// Spock Framework
testImplementation platform('org.spockframework:spock-bom:2.4-groovy-5.0') // Apache 2.0
testImplementation 'org.spockframework:spock-core:2.4-groovy-5.0' // Apache 2.0
testImplementation 'org.spockframework:spock-junit4:2.4-groovy-5.0' // Apache 2.0
testImplementation 'org.hamcrest:hamcrest-core:3.0' // BSD 3-Clause
// ========== executable war dependencies ==========
// Jetty
execWarRuntimeOnly 'org.eclipse.jetty:jetty-server:12.1.5' // Apache 2.0
execWarRuntimeOnly 'org.eclipse.jetty.ee11:jetty-ee11-webapp:12.1.5' // Apache 2.0
execWarRuntimeOnly 'org.eclipse.jetty.ee11.websocket:jetty-ee11-websocket-jakarta-server:12.1.5' // Apache 2.0
}
// setup task dependencies to make sure the start sourceSets always get run
compileJava.dependsOn startClasses
compileTestGroovy.dependsOn classes
sourceSets.test.compileClasspath += files(sourceSets.main.output.classesDirs)
// by default the Java plugin runs test on build, change to not do that (only run test if explicit task)
// no longer works as of gradle 4.8 or possibly earlier, use clear() instead: check.dependsOn.remove(test)
check.dependsOn.clear()
test {
useJUnitPlatform()
testLogging { events "passed", "skipped", "failed" }
testLogging.showStandardStreams = true; testLogging.showExceptions = true
maxParallelForks = 1
dependsOn cleanTest
include '**/*MoquiSuite.class'
systemProperty 'moqui.runtime', '../runtime'
systemProperty 'moqui.conf', 'conf/MoquiDevConf.xml'
systemProperty 'moqui.init.static', 'true'
classpath += files(sourceSets.main.output.classesDirs); classpath += files(projectDir.absolutePath)
// filter out classpath entries that don't exist (gradle adds a bunch of these), or ElasticSearch JarHell will blow up
classpath = classpath.filter { it.exists() }
beforeTest { descriptor -> logger.lifecycle("Running test: ${descriptor}") }
}
jar {
// this is necessary otherwise jar won't build when war plugin is applied
enabled = true
archiveBaseName = 'moqui-framework'
manifest { attributes 'Implementation-Title': 'Moqui Framework', 'Implementation-Version': version, 'Implementation-Vendor': 'Moqui Ecosystem' }
from sourceSets.main.output
// get all of the "resources" that are in component-standard directories instead of src/main/resources
from fileTree(dir: projectDir.absolutePath, includes: ['data/**', 'entity/**', 'screen/**', 'service/**', 'template/**']) // 'xsd/**'
}
tasks.test {
inputs.files(tasks.jar)
}
war {
dependsOn jar
// put the war file in the parent directory, ie the moqui dir instead of the framework dir
destinationDirectory.set(projectDir.parentFile)
archiveFileName = 'moqui.war'
// add MoquiInit.properties to the WEB-INF/classes dir for the deployed war mode of operation
from(fileTree(dir: projectDir.parentFile, includes: ['MoquiInit.properties'])) { into 'WEB-INF/classes' }
// this excludes the classes in sourceSets.main.output (better to have the jar file built above)
classpath = configurations.runtimeClasspath - configurations.providedCompile
classpath jar.archiveFile.get().asFile
// put start classes and Jetty jars in the root of the war file for the executable war/jar mode of operation
from sourceSets.start.output
from(files(configurations.execWarRuntimeClasspath)) { into 'execlib' }
// TODO some sort of config for Jetty? from file(projectDir.absolutePath + '/jetty/jetty.xml')
// setup the manifest for the executable war/jar mode
manifest { attributes 'Implementation-Title': 'Moqui Start', 'Implementation-Vendor': 'Moqui Ecosystem',
'Implementation-Version': version, 'Main-Class': 'MoquiStart' }
}
task copyDependencies { doLast {
delete file(projectDir.absolutePath + '/dependencies')
copy { from configurations.runtime; into file(projectDir.absolutePath + '/dependencies') }
copy { from configurations.testCompile; into file(projectDir.absolutePath + '/dependencies') }
} }
================================================
FILE: framework/data/CommonL10nData.xml
================================================
================================================
FILE: framework/data/CurrencyData.xml
================================================
================================================
FILE: framework/data/EntityTypeData.xml
================================================
================================================
FILE: framework/data/GeoCountryData.xml
================================================
================================================
FILE: framework/data/MoquiSetupData.xml
================================================
================================================
FILE: framework/data/SecurityTypeData.xml
================================================
================================================
FILE: framework/data/UnitData.xml
================================================
================================================
FILE: framework/entity/BasicEntities.xml
================================================
Usage depends on enum type such as an ID format/maskIndicator flag with meaning depending on enum type
General info about the member, usage is specific to the enum group
For getting EnumGroup members by enumGroupEnumId
Optional. If specified uses status items with this type.
If true can be an initial status in this flow.
Not currently supported, may be removed, issue with what is the context for the expression
No FK in order to allow arbitrary permissions (ie not pre-configured).
The factor is multiplied first, then the offset is added. When converting in the reverse direction the offset is subtracted first, then divided by the factor.
For threaded messages, this points to the message that started the thread.
For threaded messages, this points to the previous message in the thread.
The plain text variation of the body
For outgoing messages that came from an EmailTemplate.
Defaults to INBOX
Comma separated list of domain names to allow (like: moqui.org, dejc.com), matched by ends with; if no value specified all domains allowed
Comma separated list of reply to email addresses
Comma separated list of CC email addresses
Comma separated list of BCC email addresses
ResourceFacade location for attachment content or screen (if screenRenderMode specified)
Alternative to attachmentLocation as a location for the screen rendered on its own. If specified is used to determine the screen by path from the root screen looked up using the webappName and webHostName values from EmailTemplate.
Used to determine the MIME/content type, and which screen render template to use. Can be used to generate XSL:FO that is transformed to a PDF and attached to the email with screenRenderMode=xsl-fo. If empty the content at attachmentLocation will be sent over without rendering and its MIME type will be based on its extension.
A Groovy expression that evaluates to a Collection in the context to iterate over and add an attachment for each. If entries are Map objects puts all entries in context for each (pushed/isolated context), otherwise puts Collection entry in 'forEachEntry' field. Only applicable if screenRenderMode is specified so that there is a render of the attachment.
Optional Groovy expression evaluated as a boolean, if specified and evaluates to false attachment will be skipped.
Defaults to 631
Leave empty to use default printer on print server
================================================
FILE: framework/entity/EntityEntities.xml
================================================
The name of the field corresponding to the joinFromAlias.
The name of the field corresponding to the entityAlias, ie the related field.
For ElasticSearch compatibility, and as a general good practice for use as a dynamic view-entity, must follow entity name convention of camel case and starting with a capital letter.
The name of the document for display in search results and such. Is generally expanded on display and may use any field in the DataDocument.
A title for each document instance for display in search results or other places. Meant to be string expanded using a "flattened" version of the document (see the CollectionUtilities.flattenNestedMap() method).
This should be specified for documents that will be indexed by ElasticSearch and must be lower-case (ElasticSearch requires all lower-case). Because of changes in ElasticSearch 5 this is no longer the actual index name and is instead an alias for each index from a DataDocument.
Name of a service to call to get additional data to include in the document. This service should implement the org.moqui.EntityServices.add#ManualDocumentData interface.
Name of a service to call to alter the generated elasticsearch mapping for the data document. This service should implement the org.moqui.EntityServices.transform#DocumentMapping interface.
String formatted like "RelationshipName:RelationshipName:fieldName" with zero or more relationship names. If there is no relationship name the field is on the primary entity. More than one relationship name means follow that path of relationships to get to the field. This may also contain a Groovy expression using other fields in the current Map/Object in the document by the path or any parent Map/Object above it in the document. When an expression is used a fieldNameAlias is required.
Alias to put in document output for field name (ie final part of fieldPath only). Defaults to final part of fieldPath. Must be unique within the document and can be used in EntityCondition passed into the EntityFacade.findDataDocuments() method.
The ElasticSearch field type to use, default is based on entity field type or for expression fields defaults to 'double'.
Indicates the field should be sortable. This is needed because in ElasticSearch we have two string types to work with: text (tokenized for search, not sortable) and keyword (sortable but not tokenized for search). In ElasticSearch this adds a [field name].keyword field of type keyword to sort on if the entity field is a 'text' type ElasticSearch field.
Fields displayed by default, set to N to not display in output.
If specified the field is queried with the given function. Must be one of the functions available in the view-entity.alias.@function attribute.
The name of a relationship used in any fieldPath to be aliased in the output document.
Alias to put in document output instead of the full relationship name.
This is a very simple sort of condition to constrain data document output.
Must be a valid value like those in the econdition.@operator attribute. Ignored if postQuery=Y. Defaults (like that attribute) to 'equals'.
If Y condition is applied after the query is done instead of being added to the query as a condition. Must match at least one nested field with the specified fieldNameAlias. The fieldValue String will be compared to the Object from the database field after conversion using the Groovy asType() method.
Associate links with a DataDocument to use in applications for links to details, edit screens, etc for search results.
Must match an option for the XML Screen link.@url-type attribute. Defaults to 'plain'.
Use this entity to allow a user group access to the DataDocument (for reports, etc). For all users use userGroupId="ALL_USERS".
If Y index the feed on start if the index does not yet exist (for servers where ES data not persisted between restarts)
The service named here should implement the org.moqui.EntityServices.receive#DataFeed interface; defaults in some cases to 'org.moqui.search.SearchServices.index#DataDocuments'
The service named here should implement the org.moqui.EntityServices.receive#DataFeedDelete interface; defaults in some cases to 'org.moqui.search.SearchServices.delete#DataDocument'
Used only for periodic feeds.
Keep retrieving time splits until the number of records is greater than this threshold.
Never retrieve records newer than this many milliseconds in the past (leave a delay buffer for transactions in progress).
For sending via file the path and filename to use, or path/filename pattern using Groovy string expand
If Y this record tracks data pulled from a remote system, otherwise it tracks data pushed from this system.
Associates a set of entities through ArtifactGroupMember records associated with an ArtifactGroup. ArtifactGroupMember records may have a filterMap value and may have nameIsPattern=Y. The filterMap is ignored when the application type is Exclude, it simply excludes the entity altogether. To exclude records use a filterMap on an include. If there are multiple ArtifactGroupMember records with a filterMap value for an entity it will OR them together. If there are both include and exclude filters create a condition with the combined include AND NOT the combined exclude.
Only entity artifacts (artifactTypeEnumId=AT_ENTITY) will be used, all others ignored.
If Y also include dependents of records, will apply to all records for applicable entities.
================================================
FILE: framework/entity/OlapEntities.xml
================================================
Date Day Dimension. The natural key is [dateValue]
Format: YYYY-MM-DD
Format: YYYY-MM
Currency Dimension. The natural key is [currencyId]
================================================
FILE: framework/entity/ResourceEntities.xml
================================================
Each record represents a single blog article, grouped as needed by categories (WikiBlogCategory)
The date/time a blog post within a category was sent by email or other means.
================================================
FILE: framework/entity/Screen.eecas.xml
================================================
================================================
FILE: framework/entity/ScreenEntities.xml
================================================
For scheduled screen renders to send by email and/or write to a resource location. Primarily intended for use on report screens with a form-list and saved-finds enabled, referencing the formListFindId for saved columns, parameters, etc.
Defaults to 'csv'; can also use 'xsl-fo' with PDF rendering, or 'xlsx' if the moqui-poi component is in place.
Set to Y to abort (not send or write) if there are no results in the form-list on the screen.
Expandable String for the resource location to save to; only saves to a location if specified.
EmailTemplate to use to send by email, generally of type EMT_SCREEN_RENDER; for default use the Default Screen Render template (set to 'SCREEN_RENDER'); only sends email if specified.
Send email to UserAccount.emailAddress for the user.
Send email to UserAccount.emailAddress for each user in the group.
Runtime data for a scheduled ServiceJob (with a cronExpression), managed automatically by the service job runner.
DEPRECATED. While still supported, to control access to subscreens use ArtifactAuthz and related records instead.
The title to show for this subscreen in the menu. Can be used to override subscreen titles in the screen.default-menu-title attribute and the subscreens-item.menu-title attribute.
Insert this item in the subscreens menu at this index (1-based).
Defaults to Y. Set to N to not include in the menu for the subscreens. This can be used to hide subscreens from the directory structure or even those explicitly declared in the Screen XML file.
If Y will be set as the default subscreen (replacing screen.subscreens.@default-item).
If Y the sub-screens of the sub-screen may be referenced directly under this screen, skipping the screen path element for the sub-screen.
Part of the key (used to reference within a screen) and for sort order.
The location, name, or other value describing the resource.
The position (row for form-single, column for form-list) number to put the field in.
The sequence within the row or column.
Structured to have a single FormConfig per form and user.
Structured to have a single FormConfig per form and user.
Has fields for the various options in search-form-inputs/searchFormInputs()/searchFormMap().
Per-user default FormListFind, by screen location and not form location because it must be handled very early in screen rendering so parameters are available to actions, etc.
The screen location and form name (separated by a hash/pound sign) of the XML Screen Form to modify.
Only show this field if the condition evaluates to true (Groovy expression).
Field type for presentation and validation; always stored as plain text FormResponseAnswer.
Defaults to 1, ie one page/section for all fields if nothing higher than 1 is specified.
Used to provide attribute values. For a reference of attributes available for each field type, see the corresponding element in the xml-form-?.xsd file.
These settings are for a UserGroup. To apply to all users just use the ALL_USERS UserGroup.
================================================
FILE: framework/entity/SecurityEntities.xml
================================================
Full artifact location/name, or a pattern if nameIsPattern=Y.
If Y then the user will have authorization for anything called by the artifact. If N the user will need to have authorization for anything else called by the artifact. Note that in some cases (like in screen-sets) inheritance in the other direction is on by default, or in other words permission to access an artifact implies permission to access everything needed to get to that artifact. Defaults to Y.
A Groovy expression that evaluates to a Map that will be used to constrain whether the member is part of the group, based on fields/parameters for Entity operations and Service calls.
If an artifact in the group specified is accessed by any user in the AuthzGroup maxHitsCount times in maxHitsDuration seconds then the user/artifact will be blocked for tarpitDuration seconds.
If specified this service will be called and it should return an authzTypeEnumId with the result of the authorization. Will try to pass the following fields to this service: userId, authzActionEnumId, artifactTypeEnumId, and artifactName. The service will also have access to the ArtifactExecutionFacade (ec.artifactExecution) which you can use to get the current artifact stack, etc. The service must return an authzTypeEnumId (AUTHZT_ALLOW, AUTHZT_ALWAYS, or AUTHZT_DENY).
No FK in order to allow externally authenticated users.
Groovy boolean (if) expression; if specified, checked before applying filters in the set.
Groovy boolean (if) expression; if specified, checked before applying filters in the set.
By default if a filterMap refers to a field not aliased in a view-entity there will be an error; set this to Y to do the query anyway.
The name of the entity to filter when queried. May be queried directly or as part of a view-entity.
A Groovy expression that evaluates to a Map that will be added to queries to filter/constrain visible data. Values can be constants or variables that come from the user context (ec.user.context) or execution context (ec.context). It is up to code to set values for use by these filters. If the value evaluates to a Collection the default comparison operator is IN, otherwise the default is EQUALS. If a Collection is empty it results in a false constraint, unless using a NOT* comparison operator.
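As a rough sketch of the comparison-operator defaults described above (a hypothetical helper, not the framework's EntityCondition API; real conditions would use parameter binding rather than string concatenation):

```java
import java.util.Collection;

public class FilterMapCondition {
    // Per the rules above: a Collection value defaults to IN, any other
    // value defaults to EQUALS, and an empty Collection yields an
    // always-false constraint.
    static String conditionFor(String field, Object value) {
        if (value instanceof Collection<?> col) {
            if (col.isEmpty()) return "1 = 0"; // false constraint
            return field + " IN (" + String.join(", ",
                    col.stream().map(Object::toString).toList()) + ")";
        }
        return field + " = " + value;
    }
}
```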
If Y then OR filterMap entries; default is AND.
If an artifact in the group specified is accessed by any user in the UserGroup maxHitsCount times in maxHitsDuration seconds then the user/artifact will be blocked for tarpitDuration seconds.
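The tarpit behavior described above (maxHitsCount hits within maxHitsDuration seconds triggers a block for tarpitDuration seconds) can be sketched as a sliding-window counter; this is an illustration only, not Moqui's implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class Tarpit {
    final int maxHitsCount;
    final long maxHitsDurationMs, tarpitDurationMs;
    final Deque<Long> hits = new ArrayDeque<>();
    long blockedUntil = 0; // like ArtifactTarpitLock.releaseDateTime

    Tarpit(int maxHitsCount, long maxHitsDurationSec, long tarpitDurationSec) {
        this.maxHitsCount = maxHitsCount;
        this.maxHitsDurationMs = maxHitsDurationSec * 1000;
        this.tarpitDurationMs = tarpitDurationSec * 1000;
    }

    /** Record a hit at time nowMs; return true if access is currently blocked. */
    boolean hitAndCheckBlocked(long nowMs) {
        if (nowMs < blockedUntil) return true;
        hits.addLast(nowMs);
        // drop hits that fell out of the maxHitsDuration window
        while (!hits.isEmpty() && hits.peekFirst() < nowMs - maxHitsDurationMs) hits.removeFirst();
        if (hits.size() >= maxHitsCount) {
            blockedUntil = nowMs + tarpitDurationMs;
            hits.clear();
            return true;
        }
        return false;
    }
}
```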
Deny access to the artifact for the user until releaseDateTime is reached.
No FK in order to allow externally authenticated users.
The username used along with the password to login.
User's first, middle, last, etc name.
NOTE: not an encrypted field because one-way hash encryption is used for it.
Set to a random password for password reset; can be used only to update the password.
Set to Y if currentPassword is Base64 encoded; defaults to Hex encoded.
RSA public key for key based authentication.
Set to Y when the user logs out and to N when the user logs in. If the user is session authenticated on a request and this is Y then treat as if the user is not authenticated.
If set then the user may not login after this date, and no notifications will be sent after this date.
The email address to use for forgot password emails and other system messages.
If specified only allow login from a matching IPv4 address. Comma separated patterns where each pattern has 4 dot separated segments; each segment may be a number, '*' for wildcard, or a '-' separated number range (like '0-31').
This entity is for recording User Authentication Factors.
Use varies based on factor type: TOTP is the shared secret, Single Use is the code, Email Code is the email address.
No FK in order to allow externally authenticated users.
See UserAccount.ipAllowed.
No FK in order to allow externally authenticated users.
No FK in order to allow arbitrary permissions (ie not pre-configured).
A login key is an alternate way to authenticate a user, generally issued for temporary use, sort of like a session.
NOTE: not an encrypted field because one-way hash encryption is used for it (uses login-key.@encrypt-hash-type, no salt, hash before lookup on verify).
No FK in order to allow externally authenticated users.
No FK in order to allow externally authenticated users.
Use this entity for user-specific preferences (or properties). For default preferences use userId="_NA_".
No FK in order to allow externally authenticated users.
No FK because any key can be used whether or not there is an Enumeration record for it.
Use this entity for user group preferences (or properties). For all users use userGroupId="ALL_USERS".
For deciding which value to use when a user is a member of multiple groups with preferences with the same key.
No FK in order to allow externally authenticated users.
No FK because any key can be used whether or not there is an Enumeration record for it.
No FK in order to allow externally authenticated users.
No FK in order to allow externally authenticated users.
If populated, used when type=danger.
One of: info, success, warning, danger.
If Y the user must be associated to see it or receive notifications.
For each User, if there is no NotificationTopicUser.receiveNotifications value then use this as the default; this defaults to N.
For each User, if there is no NotificationTopicUser.emailNotifications value then use this as the default; this defaults to N.
If specified, use this template to send a notification email to each user with emailNotifications=Y.
If a notification is sent to the user, only actually notify if this is Y.
If Y the user receives all notifications on the topic even if not sent directly.
If Y sends an email to the user using UserAccount.emailAddress.
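The ipAllowed segment matching described above can be sketched as follows (an illustrative matcher assuming well-formed patterns; Moqui's actual check may differ in edge cases):

```java
public class IpAllowed {
    // Per the description above: comma separated patterns, each with 4 dot
    // separated segments; a segment is a number, '*' for wildcard, or a
    // 'low-high' number range.
    static boolean allowed(String patterns, String ip) {
        String[] ipSeg = ip.split("\\.");
        for (String pattern : patterns.split(",")) {
            String[] patSeg = pattern.trim().split("\\.");
            if (patSeg.length != 4 || ipSeg.length != 4) continue;
            boolean match = true;
            for (int i = 0; i < 4; i++) {
                String p = patSeg[i];
                if (p.equals("*")) continue; // wildcard segment
                int v = Integer.parseInt(ipSeg[i]);
                if (p.contains("-")) { // range segment like '0-31'
                    String[] range = p.split("-");
                    if (v < Integer.parseInt(range[0]) || v > Integer.parseInt(range[1])) {
                        match = false; break;
                    }
                } else if (v != Integer.parseInt(p)) {
                    match = false; break;
                }
            }
            if (match) return true;
        }
        return false;
    }
}
```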
================================================
FILE: framework/entity/ServerEntities.xml
================================================
The name of the artifact hit. For an XML Screen request it is "${webapp-name}.${screen-path}".
Total (sum) of the squared running times, for calculating incremental standard deviation.
After 100 hits, the count of hits more than 2.6 standard deviations above the average (both avg and std dev adjusted incrementally).
If the instance accesses the DB by a different address specify it here.
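The running count, total, and total-squared fields above support computing the mean and standard deviation incrementally, without storing individual hit times. A sketch of that calculation and the "more than 2.6 standard deviations above average after 100 hits" check (the class is illustrative, not the actual artifact hit bin implementation):

```java
public class RunningStats {
    long count = 0;
    double total = 0, totalSquared = 0;

    void add(double runningTime) {
        count++;
        total += runningTime;
        totalSquared += runningTime * runningTime;
    }

    double mean() { return count == 0 ? 0 : total / count; }

    double stdDev() {
        if (count < 2) return 0;
        // population variance from running sum and sum of squares
        double variance = (totalSquared - total * total / count) / count;
        return Math.sqrt(Math.max(variance, 0)); // guard tiny negative rounding
    }

    boolean isSlowHit(double runningTime) {
        return count >= 100 && runningTime > mean() + 2.6 * stdDev();
    }
}
```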
================================================
FILE: framework/entity/ServiceEntities.xml
================================================
For ad-hoc (explicitly run) or scheduled service jobs. If cronExpression is null the job will only be run ad-hoc, when explicitly called using ServiceCallJob. If a topic is specified results will be sent to the topic (can be configured using a NotificationTopic record) as a NotificationMessage to the user that called the job explicitly (if applicable) and to users associated with ServiceJobUser records.
On completion send a notification to this topic.
If Y this will be run locally only. By default runs on any server in a cluster listening for async distributed services (if an async distributed executor is configured).
An extended cron expression like Unix crontab but with extended syntax options (L, W, etc) similar to Quartz Scheduler. See:
http://cron-parser.com
http://www.quartz-scheduler.org/documentation/quartz-2.2.x/tutorials/tutorial-lesson-06.html
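For illustration, a few expressions in this style (these examples are mine, not from the Moqui source; verify the exact supported syntax against the parsers linked above):

```text
0 2 * * *      # every day at 02:00
*/15 * * * *   # every 15 minutes
0 0 1 * *      # at midnight on the first day of each month
0 0 L * *      # 'L' extension: at midnight on the last day of each month
0 0 * * 1-5    # at midnight Monday through Friday
```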
Only run scheduled after this date/time. Ignored for ad-hoc/explicit runs.
Only run scheduled before this date/time. Ignored for ad-hoc/explicit runs.
If specified only run this many times. Must specify a cronExpression for the job to repeat. When this count is reached thruDate will be set to now and paused set to Y. Ignored for ad-hoc/explicit runs.
If Y this job is inactive and won't be run on a schedule even if cronExpression is not null. Ignored for ad-hoc/explicit runs.
Ignore the lock and run anyway after this many minutes. This should generally be much greater than the longest time the service is expected to run. This is the mechanism for recovering jobs after a run failed in a way that did not clean up the ServiceJobRunLock record. Defaults to 24 hours (1440 minutes) to make sure jobs get recovered.
Minimum time between retries after an error (based on the most recent ServiceJobRun record), in minutes.
Job execution priority; lower numbers run first among jobs that need to be run, regardless of scheduled start time.
Parameters automatically added when the job service is called. Always stored as a String and will be converted based on the corresponding service in-parameter.parameter.@type attribute (just as with any service call).
The user that initiated the job run.
Runtime data for a scheduled ServiceJob (with a cronExpression), managed automatically by the service job runner.
If not null this is the currently running job instance.
May be the semaphore-name if that attribute is used, otherwise the service name.
Reference to the SystemMessageRemote record for the remote system this message came from for incoming messages or should be sent to for outgoing messages.
For incoming the received date, for outgoing the produced date.
For incoming the consumed date, for outgoing the sent date.
If a received message is split this is the original message.
The message received or sent to acknowledge this message.
For messages to/from another Moqui system, the systemMessageId on the remote system; may also be used for other system level message IDs (as opposed to messageId which is for the ID in the envelope of the message).
ID of the sender (for OAGIS may be broken down into logicalId, component, task, referenceId; for EDI X12 this is ISA06).
ID of the receiver (for OAGIS may also be broken down; for EDI X12 this is ISA08).
ID of the message; this may be globally unique (like the OAGIS BODID, a GUID) or only unique relative to the senderId and the receiverId (like EDI X12 ISA13 in the context of ISA06, ISA08), and may only be unique within a certain time period (the ID may be reused since in EDI X12 it is limited to 9 digits).
Date/time from the message (for EDI X12 this is GS04 (date) and GS05 (time)).
For OAGIS the BSR Noun; for X12 GS01 (functional ID code).
For OAGIS the BSR Verb; for X12 ST01 (tx set ID code).
Control number of the message when applicable (such as GS06 in EDI X12 messages).
Sub-control number of the message when applicable (such as ST02 in EDI X12 messages).
The document version (for OAGIS BSR Revision, for X12 GS08 (version/revision)).
Active Visit when the SystemMessage was triggered (mainly produced) to track the user who did so; independent of the message transport, which could have separate remote system and other Visit-like data.
Not used in automated processing, but useful for documentation and tools in some cases.
The service to call after a message is received to consume it. Should implement the org.moqui.impl.SystemMessageServices.consume#SystemMessage interface (just a systemMessageId in-parameter). Used by the consume#ReceivedSystemMessage service.
The service to call to produce an async acknowledgement of a message. Should implement the org.moqui.impl.SystemMessageServices.produce#AckSystemMessage interface. Once the message is produced it should call the org.moqui.impl.SystemMessageServices.queue#SystemMessage service.
The service to call to send queued messages. Should implement the org.moqui.impl.SystemMessageServices.send#SystemMessage interface (just a systemMessageId in-parameter and remoteMessageId out-parameter). Used by the send#ProducedSystemMessage service, and for that service it must be specified or it will result in an error.
The service to call to save a received message. Should implement the org.moqui.impl.SystemMessageServices.receive#SystemMessage interface. If not specified receive#IncomingSystemMessage just saves the message directly. When applicable, used by the send service as the service on the remote server to call to receive the message.
Where to look for files on a remote server; syntax is implementation specific.
Regular expression to match filenames in receivePath (optional).
After a successful receive move the file to this path if receiveResponseEnumId = MsgRrMove.
Where to put files on a remote server; syntax is implementation specific and may include both a path and a filename pattern.
May be useful for other transports; for SFTP servers that do not support setting file attributes after put/upload set to N.
Override for SystemMessageType.sendServiceName.
Username for basic auth when sending to the remote system. This user needs permission to run the remote service or whatever on the remote system receives the message. Note: for a Moqui remote server the user needs authz for the org.moqui.impl.SystemMessageServices.receive#IncomingSystemMessage service, ie the user should be in a group that has authz for the SystemMessageServices ArtifactGroup such as the SYSMSG_RECEIVE user group (see SecurityTypeData.xml).
Password for basic auth when sending to the remote system.
Public Key for key based authentication, generally RSA PEM format.
Private Key for key based authentication, generally RSA PEM PKCS #8 format like OpenSSH.
Remote System's Public Key for decryption, signature validation, etc; generally RSA PEM or X.509 Certificate format.
Shared secret for auth on receive and/or sign on send.
Shared secret for auth on send if different from the secret used to authorize on receive.
If send and receive auth mechanisms are different specify the send auth method here.
Optional. May be used when this remote is for one type of message.
Sender (outgoing) or receiver (incoming) ID (EDI: in ISA06/08; OAGIS: in ApplicationArea.Sender/Receiver.ID).
Application code (EDI: in GS02/03; OAGIS: in ApplicationArea.Sender/Receiver elements, split among sub-elements).
Sender (incoming) or receiver (outgoing) ID (EDI: in ISA06/08; OAGIS: in ApplicationArea.Sender/Receiver.ID).
Application code (EDI: in GS02/03; OAGIS: in ApplicationArea.Sender/Receiver elements, split among sub-elements).
Request acknowledgement? Possible values are dependent on the message standard.
Used for production versus test/etc. Possible values are dependent on the message standard.
Remote system related to this remote system but for pre-auth purposes, like a separate single sign on server.
For runtime configurable enum mappings for a particular remote system. For bi-directional integrations enumerated value mappings should be one to one or round trip results will be inconsistent. The PK structure forces one mapped value for each enumId.
================================================
FILE: framework/entity/TestEntities.xml
================================================
================================================
FILE: framework/screen/AddedEmailAuthcFactor.xml
================================================
================================================
FILE: framework/screen/EmailAuthcFactorSent.xml
================================================
================================================
FILE: framework/screen/NotificationEmail.xml
================================================