Cluster Architecture
Server clustering is done mainly to achieve high availability and scalability.
High Availability
High availability means there is redundancy in the system such that the service remains available to the outside world irrespective of individual component failures. For example, in a two-node cluster, if one node fails, the other node continues to serve requests until the failed node is restored.
Scalability
Scalability means increasing the processing capacity by adding more server nodes.
Load Balancer
Load balancing is the method of distributing the workload across multiple server nodes, and a load balancer is required for a properly functioning cluster. The load balancer monitors the availability of the server nodes in the cluster and routes requests to all available nodes in a fair manner. It is the external-facing interface of the cluster: it receives every request coming to the cluster and distributes that load among the available nodes. If a node fails, the load balancer stops routing requests to it until the node is back online.
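As an illustration of this role, the following is a minimal sketch of one possible load balancer configuration using Nginx. The host names bps-node1 / bps-node2, the listen port and the use of the default BPS HTTP port 9763 are assumptions for the example, not part of the BPS distribution; any load balancer that can health-check nodes and distribute requests will do.
# Minimal, illustrative Nginx configuration (goes inside the http { } block of nginx.conf).
# Requests arriving on port 80 are distributed across the two BPS nodes;
# a node that stops responding is temporarily taken out of rotation by Nginx.
upstream bps_cluster {
    server bps-node1:9763;
    server bps-node2:9763;
}

server {
    listen 80;

    location / {
        proxy_pass http://bps_cluster;
    }
}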
WSO2 Business Process Server Cluster Architecture
In order to build a WSO2 Business Process Server cluster you require the following.
Load balancer
Hardware / VM nodes for BPS Nodes
Database Server
The following diagram depicts the deployment of a two-node WSO2 BPS cluster.
The load balancer receives all the requests and distributes the load (requests) across the two BPS nodes. BPS nodes can be configured as a manager node and worker nodes: a BPS cluster has one manager node and multiple worker nodes. This distinction applies to the deployment of artifacts; the node that performs the artifact deployment first is considered the manager, and the other nodes are considered workers.
BPS Manager Node / Worker Nodes
The manager node is where the workflow artifacts (business processes / human tasks / BPMN artifacts) are first deployed. The worker nodes look at the configuration generated by the manager node for a given deployment artifact and then deploy that artifact in their own runtimes.
WSO2 BPS requires this method of deployment because it automatically versions the deployed BPEL / human task artifacts. Hence, in order to have the same version number for a given deployment artifact across all the nodes, the versioning must be done on one node (the manager node). A BPS server decides whether it is a manager node or a worker node by looking at its configuration registry mounting configuration. We will look at that configuration in detail later.
BPS and Registry
In the simplest terms, the registry is an abstraction over a database schema. It provides an API through which you can store data in and retrieve data from a database. WSO2 BPS embeds the registry component and hence has a built-in registry. The registry is divided into three spaces.
Local Registry
Local registry is used to store information local to a server node.
Configuration Registry
The configuration registry is used to store information that needs to be shared across server nodes of the same type. For example, the configuration registry is shared across BPS server nodes, but that same configuration registry would not be shared with server nodes of a different type.
Governance Registry
The governance registry is used to store information that can be shared across clusters of different server types. For example, the governance registry can be shared across a BPS cluster and an ESB cluster. In the above diagram, these different registry configurations are depicted as individual databases.
Note:
The BPS manager node refers to the configuration registry through a read/write link, while the BPS worker nodes refer to the configuration registry through a read-only link.
BPS and User Store and Authorization
The BPS management console requires a user to log in to the system in order to perform management activities. Additionally, various permission levels can be configured for access management. In human tasks, the tasks a user can see and the operations they can perform on them depend on the logged-in user.
All these access control, authentication and authorization functions are inherited by the BPS server from the Carbon kernel. You can also configure an external LDAP / Active Directory to grant users access to the server. All this user and permission information is kept in the user store database; in the above diagram, UM DB refers to this database. It is also shared across all the cluster nodes.
Activiti DB
BPS 3.5.0 introduces BPMN support by embedding the popular Activiti BPMN engine. BPS uses this database to persist the BPMN packages and process instance information. Since two process engines are embedded, Apache ODE for BPEL and Activiti for BPMN, the Activiti database is kept separate.
BPS Persistence DB
BPS handles long-running processes and human tasks. This means the runtime state of process instances and human task instances has to be persisted to a database. The BPS persistence database is where these process / task configuration data and process / task instance states are stored.
Configuring the BPS Cluster
Now that we have understood the individual components depicted in the above diagram, we can proceed to implement our BPS cluster. I will break the configuration down into the steps listed below. Note that the only major difference between the manager node and the worker nodes is in the registry.xml configuration.
If you are using two machines (hardware or VM), all other configurations are identical for the manager node and the worker node except the IP addresses, ports and the deployment synchronizer entry. However, if you are configuring the cluster on the same machine for testing purposes, you also need to configure a port offset in the carbon.xml file, as port conflicts will occur otherwise; a minimal example follows.
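A minimal sketch of the port offset entry in carbon.xml is shown below; the offset value 1 is only an example.
<!-- In the carbon.xml of the second node running on the same machine.
     An offset of 1 shifts every default port up by one (for example 9443 becomes 9444).
     Only the Offset value inside the existing Ports element needs to change. -->
<Ports>
    <Offset>1</Offset>
</Ports>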
Create database schemas.
Configure master-datasources.xml ( Registry and User Manager databases )
Configure bps-datasources.xml ( BPS Persistence database )
Configure activiti-datasources.xml
Configure registry.xml ( Different for manager node and worker node )
Configure user-mgt.xml
Configure axis2.xml
Configure bps.xml
Configure carbon.xml
Configure the server startup script
Creating Database Schemas
WSO2 BPS supports the following major databases.
1. Oracle
2. MySQL
3. MSSQL
4. PostgreSQL
In the above diagram, we depicted six databases. We can use the embedded H2 database as the local registry of each BPS node. We can create one schema for the registry and configure registry mounting for the configuration registry and the governance registry. In most scenarios we can use that same schema for the user store as well. We then need to create additional schemas for the BPS database (BPEL and human task persistence data) and the Activiti database (BPMN instance and task persistence data).
Database Schema Requirement
| Database                           | DB Name     |
| Configuration/Governance Registry  | REGISTRY_DB |
| User Store database                | UM_DB       |
| Activiti DB ( BPMN DB )            | BPMN_DB     |
| BPS Persistence database           | BPS_DB      |
You can find the SQL scripts for creating the registry database in the wso2bps-3.5.0/dbscripts directory. The SQL script for the BPS persistence database can be found in the wso2bps-3.5.0/dbscripts/bps/create directory. The script for creating the Activiti database can be found in the wso2bps-3.5.0/dbscripts/bps/bpmn/create directory.
The following is an example of creating the registry database schema for MySQL.
mysql> create database REGISTRY_DB;
mysql> use REGISTRY_DB;
mysql> source /dbscripts/mysql.sql;
mysql> grant all on REGISTRY_DB.* TO username@localhost identified by "password";
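The same pattern applies to the remaining schemas. The following is a sketch for the BPS persistence and Activiti databases on MySQL; the script file names shown are indicative and can differ between releases, so use the MySQL scripts you actually find in the directories mentioned above.
mysql> create database BPS_DB;
mysql> use BPS_DB;
mysql> source wso2bps-3.5.0/dbscripts/bps/create/mysql.sql;
mysql> grant all on BPS_DB.* TO username@localhost identified by "password";
mysql> create database BPMN_DB;
mysql> use BPMN_DB;
mysql> source wso2bps-3.5.0/dbscripts/bps/bpmn/create/activiti.mysql.create.engine.sql;
mysql> source wso2bps-3.5.0/dbscripts/bps/bpmn/create/activiti.mysql.create.history.sql;
mysql> source wso2bps-3.5.0/dbscripts/bps/bpmn/create/activiti.mysql.create.identity.sql;
mysql> grant all on BPMN_DB.* TO username@localhost identified by "password";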
Download the MySQL JDBC connector and copy it to the wso2bps-3.5.0/repository/components/lib directory.
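For example, on Linux this is a single copy command; the connector version below is only illustrative, use whichever version you downloaded.
cp mysql-connector-java-5.1.36-bin.jar wso2bps-3.5.0/repository/components/lib/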
Configuring master-datasources.xml
You can configure the data sources for the registry and the user store in the master-datasources.xml file found in the wso2bps-3.5.0/repository/conf/datasources directory.
<datasources-configuration xmlns:svns="http://org.wso2.securevault/configuration">
<providers>
<provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
</providers>
<datasources>
<datasource>
<name>WSO2_CARBON_DB</name>
<description>The datasource used for registry and user manager</description>
<jndiConfig>
<name>jdbc/WSO2CarbonDB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:h2:repository/database/WSO2CARBON_DB;DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000</url>
<username>wso2carbon</username>
<password>wso2carbon</password>
<driverClassName>org.h2.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>
<datasource>
<name>WSO2_REGISTRY_DB</name>
<description>The datasource used for registry- config/governance</description>
<jndiConfig>
<name>jdbc/WSO2RegistryDB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://localhost:3306/REGISTRY_DB?autoReconnect=true</url>
<username>root</username>
<password>root</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>
<datasource>
<name>WSO2_UM_DB</name>
<description>The datasource used for the user manager</description>
<jndiConfig>
<name>jdbc/WSO2UMDB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://localhost:3306/UM_DB?autoReconnect=true</url>
<username>root</username>
<password>root</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>
</datasources>
</datasources-configuration>
Most of the entries are self-explanatory.
Configure bps-datasources.xml ( BPS Persistence database )
Open the wso2bps-3.5.0/repository/conf/datasources/bps-datasources.xml file and add the relevant entries such as the database name, driver class and database connection URL. The following is the matching configuration for MySQL.
<datasource>
<name>BPS_DS</name>
<description></description>
<jndiConfig>
<name>bpsds</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://localhost:3306/bps350</url>
<username>root</username>
<password>root</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
<useDataSourceFactory>false</useDataSourceFactory>
<defaultAutoCommit>true</defaultAutoCommit>
<maxActive>100</maxActive>
<maxIdle>20</maxIdle>
<maxWait>10000</maxWait>
</configuration>
</definition>
</datasource>
Note that the entry <defaultAutoCommit>true</defaultAutoCommit> is set to true. This is an important setting for the BPEL engine.
You need to do this for each node in the cluster.
Configure activiti-datasources.xml
Open wso2bps-3.5.0/repository/conf/datasources/activiti-datasources.xml and add the relevant entries.
<datasource>
<name>ACTIVITI_DB</name>
<description>The datasource used for activiti engine</description>
<jndiConfig>
<name>jdbc/ActivitiDB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://localhost:3306/BPMN_DB</url>
<username>root</username>
<password>root</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>
Configure registry.xml
The registry mount path is used to identify the type of registry. For example, "/_system/config" refers to the configuration registry and "/_system/governance" refers to the governance registry. The following is an example configuration for the BPS mounts; each section is described below. Only the additions to the registry.xml file are shown. Leave the configuration for the local registry as it is and add the following new entries.
Registry configuration for BPS manager node
<dbConfig name="wso2bpsregistry">
<dataSource>jdbc/WSO2RegistryDB</dataSource>
</dbConfig>
<remoteInstance url="https://localhost:9443/registry">
<id>instanceid</id>
<dbConfig>wso2bpsregistry</dbConfig>
<readOnly>false</readOnly>
<enableCache>true</enableCache>
<registryRoot>/</registryRoot>
<cacheId>root@jdbc:mysql://localhost:3306/REGISTRY_DB</cacheId>
</remoteInstance>
<mount path="/_system/config" overwrite="true">
<instanceId>instanceid</instanceId>
<targetPath>/_system/bpsConfig</targetPath>
</mount>
<mount path="/_system/governance" overwrite="true">
<instanceId>instanceid</instanceId>
<targetPath>/_system/governance</targetPath>
</mount>
The dbConfig entry identifies the datasource we configured in master-datasources.xml, and we give it a unique name, "wso2bpsregistry", to refer to that datasource entry.
The remoteInstance section refers to an external registry mount. Here we can specify the read-only / read-write nature of this instance, as well as the caching configuration and the registry root location. Additionally, we need to specify a cacheId for caching to function properly in the clustered environment. Note that the cacheId is the same as the JDBC connection URL of our registry database.
We define a unique name, id, for each remote instance, which is then referred to from the mount configurations. In the above example, the unique id of the remote instance is instanceid. In each mount configuration, we specify the actual mount path and the target mount path.
Registry configuration for BPS worker node
<dbConfig name="wso2bpsregistry">
<dataSource>jdbc/WSO2RegistryDB</dataSource>
</dbConfig>
<remoteInstance url="https://localhost:9443/registry">
<id>instanceid</id>
<dbConfig>wso2bpsregistry</dbConfig>
<readOnly>true</readOnly>
<enableCache>true</enableCache>
<registryRoot>/</registryRoot>
<cacheId>root@jdbc:mysql://localhost:3306/REGISTRY_DB</cacheId>
</remoteInstance>
<mount path="/_system/config" overwrite="true">
<instanceId>instanceid</instanceId>
<targetPath>/_system/bpsConfig</targetPath>
</mount>
<mount path="/_system/governance" overwrite="true">
<instanceId>instanceid</instanceId>
<targetPath>/_system/governance</targetPath>
</mount>
This configuration is the same as the one above, except that the readOnly property of the remote instance configuration is set to true.
Configure user-mgt.xml
In user-mgt.xml, enter the datasource information for the user store that we configured previously in the master-datasources.xml file. You can change the admin username and password as well; however, you should do this before starting the server.
<Configuration>
<AddAdmin>true</AddAdmin>
<AdminRole>admin</AdminRole>
<AdminUser>
<UserName>admin</UserName>
<Password>admin</Password>
</AdminUser>
<EveryOneRoleName>everyone</EveryOneRoleName>
<Property name="dataSource">jdbc/WSO2UMDB</Property>
</Configuration>
Configure axis2.xml
We use axis2.xml to enable clustering, using the well-known address (WKA) based clustering scheme. With WKA, a subset of the cluster members must be configured in every member of the cluster; usually we list all members of the cluster in the members section of axis2.xml. At least one well-known member has to be operational at all times.
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
<parameter name="membershipScheme">wka</parameter>
<parameter name="localMemberHost">127.0.0.1</parameter>
<parameter name="localMemberPort">4000</parameter>
<members>
<member>
<hostName>10.100.1.1</hostName>
<port>4000</port>
</member>
<member>
<hostName>10.100.1.2</hostName>
<port>4010</port>
</member>
</members>
</clustering>
In axis2.xml, change the enable attribute of the clustering element to true. Find the membershipScheme parameter and set it to wka. Then configure the localMemberHost and localMemberPort entries. Under the members section, add the host name and port of each WKA member. As we have only two nodes in our sample cluster configuration, we configure both nodes as WKA members.
Configure bps.xml
In bps.xml, you need to configure the following entries.
Enable distributed lock
<tns:UseDistributedLock>true</tns:UseDistributedLock>
This entry enables the Hazelcast-based synchronization mechanism that prevents concurrent modification of instance state by cluster members.
Configure scheduler threadpool size
<tns:ODESchedulerThreadPoolSize>0</tns:ODESchedulerThreadPoolSize>
The thread pool size should always be smaller than the maxActive database connections configured in the bps-datasources.xml file. When configuring the thread pool size, allocate 10-15 threads per core depending on your setup, and leave some additional database connections free, since BPS uses database connections for the management API as well.
Example settings for a two-node cluster:
Assume the MySQL server is configured to allow 250 database connections and the maxActive entry in the bps-datasources.xml file of each node is set to 70. Then we can configure the scheduler thread pool size as
Thread Pool Size = { (maxActive connections) / 2 } - 10
which gives a scheduler thread pool size of 25 for each node ( 70 / 2 - 10 = 25 ).
Note that we divide the maxActive connections by two in order to leave a similar number of database connections for the BPEL engine and the human task engine. If you are not using human tasks, you do not need to divide the maxActive connections by two.
Define a unique node id for each node in the cluster
<tns:NodeId>node1</tns:NodeId>
This value has to be a unique string for each node in the cluster.
Configure carbon.xml
If you want automatic deployment of artifacts across the cluster nodes, you can enable the deployment synchronizer feature in carbon.xml.
<DeploymentSynchronizer>
<Enabled>true</Enabled>
<AutoCommit>true</AutoCommit>
<AutoCheckout>true</AutoCheckout>
<RepositoryType>svn</RepositoryType>
<SvnUrl>http://10.100.3.115/svn/repos/as</SvnUrl>
<SvnUser>wso2</SvnUser>
<SvnPassword>wso2123</SvnPassword>
<SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
The deployment synchronizer functions by committing the artifacts to the configured SVN location from one node (the node with the AutoCommit option set to true) and sending a cluster message to all other nodes about the addition or change of the artifact. When the cluster message is received, all other nodes do an SVN update, pulling the changes into the relevant deployment directories, and the server then automatically deploys these artifacts.
For the manager node, keep the AutoCommit and AutoCheckout entries as true. For all other nodes, change the AutoCommit entry to false.
Configure the server startup script
In the server startup script, you can configure the memory allocation for the server node as well as the JVM tuning parameters. If you open the wso2server.sh or wso2server.bat file located in the wso2bps-3.5.0/bin directory and go to the bottom of the file, you will find these parameters. Change them according to the expected server load.
The following is the default memory allocation for a WSO2 server.
-Xms256m -Xmx1024m -XX:MaxPermSize=256m
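For a production node you would typically raise these values. The following is only an illustrative starting point, assuming roughly 2 GB of heap can be dedicated to the JVM; tune it against your actual load and available memory.
-Xms2048m -Xmx2048m -XX:MaxPermSize=512m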
Cluster artifact deployment best practices
Always deploy the artifact on the manager node first, and on the worker nodes after some delay.
Use the deployment synchronizer if a protected SVN repository is available in the network and your cluster has many nodes (more than about 4).
Otherwise, you can use simple file copying to deploy artifacts, as sketched below.
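The following is a sketch of the file-copy approach for a BPEL package; the package name, user and host names are illustrative. Note the order: the manager node first, then the workers after the manager has deployed and versioned the package.
# Copy the artifact to the manager node first (package name, user and hosts are illustrative)
scp OrderProcess_1.0.0.zip user@bps-manager:/opt/wso2bps-3.5.0/repository/deployment/server/bpel/
# After the manager has deployed and versioned the package, copy it to each worker node
scp OrderProcess_1.0.0.zip user@bps-worker1:/opt/wso2bps-3.5.0/repository/deployment/server/bpel/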