# Participant Feature Matrix
This document provides a comprehensive matrix of different participant types, their requirements, and recommended connectivity solutions for Mojaloop integration.
## Payment Use-Case DFSPs
Participant Category | Description | Expected Use-Cases | Infrastructure Requirements for Mojaloop Integration | Expected Production SLA | Likely Relevant Regulation | Special Security Requirements | Solution Options |
---|---|---|---|---|---|---|---|
Small self-hosting DFSP | - Small FI with a single branch. - Own workstations. - Minimal cloud and/or SaaS. | - All Mojaloop transfer types except bulk. - Open banking (incl. PISP, AISP). | - Single, cheap, low-end dedicated mini-PC (e.g. Raspberry Pi). - Single small-business broadband Internet connection. - Self-hosted core banking system, e.g. Mifos. - OS/software firewall on the same hardware node as the integration layer. | - "Some" downtime acceptable if hardware fails. - Some schemes may rule out DFSPs that cannot meet a certain downtime SLA. - May take many days/weeks to purchase replacement hardware after total failure. - Full Mojaloop security feature set: mTLS, JWS, ILP. - ~10 TPS peak sustained for 1 hour. - Maximum capacity of 864,000 transfers per 24 hours. | - Record keeping? - Security? | - No need to integrate with existing enterprise security platforms. - Needs a fully secure solution "in a box" following industry best practice for Internet-facing services, i.e. including a firewall. | The “Standard Service Manager” is recommended: a minimal-functionality Integration Toolkit-based solution (accessible locally by means of a BI tool). This can be hosted on a basic server, ranging from a mid-specification server for a large MFI or a small bank down to a Raspberry Pi for the smallest DFSPs with less rigorous service-continuity requirements and lower transaction volumes. The Standard Service Manager does not support bulk payments. - Docker Compose-based integration layer. - Minimal, self-contained integration layer. |
Low Medium self-hosted DFSP | - Small FI with one or two branches. - Own "data centre", i.e. a broom cupboard with a few servers, a router, a firewall, etc. - Some cloud knowledge and/or SaaS usage. | - All Mojaloop transfer types. - Bulk (thousands of transfers). - Open banking (incl. PISP, AISP). | - Single enterprise-grade server hardware node. - OS/software firewall on the same hardware node as the integration layer, OR a dedicated hardware firewall. | - "Some" downtime acceptable if hardware fails. - Some schemes may rule out DFSPs that cannot meet a certain downtime SLA. - May take hours to replace hardware after total failure. - Full Mojaloop security feature set: mTLS, JWS, ILP. - ~50 TPS peak sustained for 1 hour. | - Record keeping? - Security? | - May need integration with existing enterprise security platforms, e.g. firewalls, gateways, etc. (needs further clarification). | The “Enhanced Service Manager” is recommended: based on the “Standard Service Manager” described earlier, this extends it by adding a Kafka deployment and support for bulk payments. It can be hosted, at minimum, on a basic server in the DFSP's own "data centre". - Docker Compose- or Docker Swarm-based integration layer. - Minimal, self-contained integration layer. |
High Medium self-hosted DFSP | - Small FI with one or two branches. - Own "data centre", i.e. a broom cupboard with a few servers, a router, a firewall, etc. - Some cloud knowledge and/or SaaS usage. | - All Mojaloop transfer types. - Bulk (thousands of transfers). - Open banking (incl. PISP, AISP). | - To tolerate the failure of one hardware node, three or more hardware nodes are required (2n+1 nodes to tolerate n failures). | - "Some" limited (minutes) downtime acceptable if hardware fails. - Some schemes may rule out DFSPs that cannot meet a certain downtime SLA. - Should have spare hardware on standby or very fast replacement services in case of failure. - Full Mojaloop security feature set: mTLS, JWS, ILP. - ~50 TPS peak sustained for 1 hour. | - Record keeping? - Security? | - May need integration with existing enterprise security platforms, e.g. firewalls, gateways, etc. | The “Enhanced Service Manager” is recommended: based on the “Standard Service Manager” described earlier, this extends it by adding a Kafka deployment and support for bulk payments. It can be hosted, at minimum, in a redundant, multiple-server configuration in the DFSP's own "data centre". - Kubernetes-based integration layer. - May have existing integration technology. |
Large self-hosted DFSP | - Mature, multi-branch FI with high internal IT capability. - Has its own data centre and experts to manage systems. - Comfortable with cloud and hybrid applications. - Has internal software engineering capability. | - All Mojaloop transfer types, including bulk. - Bulk (millions of transfers per transaction, at 1,000 per chunk, sorted per payee DFSP). - Open banking (incl. PISP, AISP). | - High availability of internal infrastructure is necessary. - Multiple active instances of all critical integration services, spread across multiple hardware nodes. - Highly available, replicated data storage. - May be multi-site / multi-availability-zone / multi-region. | - No downtime acceptable. - High availability of connectivity: multiple active connections via diverse routes. - Optional persistent storage. - Scheme connection and integration layer SLA should match the SLA of existing internal infrastructure. - Up to 800 TPS peak sustained for 1 hour, e.g. for FXPs. | - Record keeping? - Security? | - May need integration with existing enterprise security platforms, e.g. firewalls, gateways, etc. | The “Premium Service Manager” is recommended: a fully functional, Payment Manager-type service for use by larger DFSPs. Operating this needs significant resources; it must be hosted either in the DFSP's existing data centre or in the cloud. - Kubernetes-based integration layer. - May have existing integration technology. |
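As a sanity check on the throughput figures in the table above, the daily capacity implied by a sustained peak rate is simply TPS × 86,400 seconds (this is where the 864,000-per-24-hours figure for the smallest tier comes from). A minimal illustrative sketch, not part of any Mojaloop tooling; tier names are shorthand for the rows above:

```python
# Daily transfer capacity implied by a sustained peak TPS figure.
# TPS values are taken from the DFSP matrix above; tier names are shorthand.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

TIER_TPS = {
    "Small self-hosting DFSP": 10,
    "Low/High Medium self-hosted DFSP": 50,
    "Large self-hosted DFSP": 800,
}

def daily_capacity(tps: int) -> int:
    """Maximum transfers per 24 hours at a constant rate of `tps`."""
    return tps * SECONDS_PER_DAY

for name, tps in TIER_TPS.items():
    print(f"{name}: {tps} TPS -> {daily_capacity(tps):,} transfers/day")
```

Note that these figures assume the peak rate is sustained for a full day; actual daily volumes will be lower wherever the peak is only held for an hour, as the SLA column states.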
## Fintechs which use PISP and/or AISP
Participant Category | Description | Expected Use-Cases | Infrastructure Requirements for Mojaloop Integration | Expected Production SLA | Likely Relevant Regulation | Special Security Requirements | Solution Options |
---|---|---|---|---|---|---|---|
Small self-hosting PISP/AISP | - Small single-"branch" fintech with one or two products. - Own workstations/servers. - Minimal cloud and/or SaaS. | - Relatively small bulk payments, e.g. salary payments for SMEs. | - Single, cheap, low-end dedicated mini-PC (e.g. Raspberry Pi). - Single small-business broadband Internet connection. - Self-hosted core banking system, e.g. Mifos. - OS/software firewall on the same hardware node as the integration layer. | - "Some" downtime acceptable if hardware fails. - Some schemes may rule out participants that cannot meet a certain downtime SLA. - May take many days/weeks to purchase replacement hardware after total failure. - Full Mojaloop security feature set: mTLS, JWS, ILP. - Bulk interface SLA: how should this be defined? Batch size? Time to send the batch over the API? Response time for callbacks? - Maximum batch size approximately 10,000 payments. - Sending 10,000 payments via the bulk API should take < 30 seconds. - Responding to callbacks should take < 5 seconds. | - Record keeping? - Security? | - No need to integrate with existing enterprise security platforms. - Needs a fully secure solution "in a box" following industry best practice for Internet-facing services, i.e. including a firewall. | - Docker Compose-based integration layer. - Minimal, self-contained integration layer. |
Low Medium self-hosting PISP/AISP | - Small org with one or two branches. - Own "data centre", i.e. a broom cupboard with a few servers, a router, a firewall, etc. - Some cloud knowledge and/or SaaS usage. | - Relatively small bulk payments, e.g. salary payments for SMEs. - Account aggregation. | - Single enterprise-grade server hardware node. - OS/software firewall on the same hardware node as the integration layer, OR a dedicated hardware firewall. | - "Some" downtime acceptable if hardware fails. - Some schemes may rule out participants that cannot meet a certain downtime SLA. - May take hours to replace hardware after total failure. - Full Mojaloop security feature set: mTLS, JWS, ILP. - Bulk interface SLA: how should this be defined? Batch size? Time to send the batch over the API? Response time for callbacks? - Maximum batch size approximately 25,000 payments. - Sending 25,000 payments via the bulk API should take < 60 seconds. - Responding to callbacks should take < 10 seconds. | - Record keeping? - Security? | - May need integration with existing enterprise security platforms, e.g. firewalls, gateways, etc. (needs further clarification). | - Docker Compose- or Docker Swarm-based integration layer. - Minimal, self-contained integration layer. |
High Medium self-hosting PISP/AISP | - Small org with one or two branches. - Own "data centre", i.e. a broom cupboard with a few servers, a router, a firewall, etc. - Some cloud knowledge and/or SaaS usage. | - Bulk payments for large organisations, e.g. government departments. - Account aggregation. | - To tolerate the failure of one hardware node, three or more hardware nodes are required (2n+1 nodes to tolerate n failures). | - "Some" limited (minutes) downtime acceptable if hardware fails. - Some schemes may rule out participants that cannot meet a certain downtime SLA. - Should have spare hardware on standby or very fast replacement services in case of failure. - Full Mojaloop security feature set: mTLS, JWS, ILP. - Bulk interface SLA: how should this be defined? Batch size? Time to send the batch over the API? Response time for callbacks? - Maximum batch size approximately 100,000-200,000 payments. - Sending 100,000-200,000 payments via the bulk API should take < 300 seconds. - Responding to callbacks should take < 120 seconds. | - Record keeping? - Security? | - May need integration with existing enterprise security platforms, e.g. firewalls, gateways, etc. | - Kubernetes-based integration layer. - May have existing integration technology. |
Large self-hosting PISP/AISP | - Mature, multi-branch org with high internal IT capability. - Has its own data centre and experts to manage systems. - Comfortable with cloud and hybrid applications. - Has internal software engineering capability. | - Bulk payments for large organisations, e.g. government departments. | - High availability of internal infrastructure is necessary. - Multiple active instances of all critical integration services, spread across multiple hardware nodes. - Highly available, replicated data storage. - May be multi-site / multi-availability-zone / multi-region. | - No downtime acceptable. - High availability of connectivity: multiple active connections via diverse routes. - Optional persistent storage. - Scheme connection and integration layer SLA should match the SLA of existing internal infrastructure. - Bulk interface SLA: how should this be defined? Batch size? Time to send the batch over the API? Response time for callbacks? - Maximum batch size approximately 1 million payments. - Sending 1 million payments via the bulk API should take < 600 seconds. - Responding to callbacks should take < 300 seconds. | - Record keeping? - Security? | - May need integration with existing enterprise security platforms, e.g. firewalls, gateways, etc. | - Kubernetes-based integration layer. - May have existing integration technology. |
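The bulk-interface SLA targets above imply a minimum sustained submission rate for each tier: the maximum batch size divided by the allowed send time. A quick sketch of that arithmetic, using the batch sizes and time limits from the table (for the 100k-200k tier, the upper bound is used as the worst case; this is an illustration, not a Mojaloop API):

```python
# Minimum sustained payments-per-second implied by each bulk SLA target
# in the PISP/AISP matrix above (max batch size / allowed send time).
BULK_SLAS = {
    "Small self-hosting PISP/AISP": (10_000, 30),
    "Low Medium self-hosting PISP/AISP": (25_000, 60),
    "High Medium self-hosting PISP/AISP": (200_000, 300),  # upper bound of 100k-200k
    "Large self-hosting PISP/AISP": (1_000_000, 600),
}

def implied_rate(batch_size: int, max_seconds: int) -> float:
    """Payments per second needed to send `batch_size` within `max_seconds`."""
    return batch_size / max_seconds

for name, (size, seconds) in BULK_SLAS.items():
    print(f"{name}: {size:,} payments in {seconds}s "
          f"-> >= {implied_rate(size, seconds):.0f} payments/s")
```

This kind of back-of-envelope rate is useful when sizing the integration layer for a tier: the bulk send path must sustain that submission rate end to end, not just accept the batch.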
## Document History
Version | Date | Author | Detail |
---|---|---|---|
1.0 | 9th June 2025 | Tony Williams | Initial version |