Sizing
The MongoDB sizing methodology is a step-by-step workflow that helps identify the necessary hardware to run MongoDB effectively. It helps you estimate:
Storage: The disk space usage for data and indexes.
RAM: Ample memory to hold your working set and ensure fast queries.
CPU: Sufficient processing power for reads, writes, and aggregation workloads.
Input/Output Operations Per Second (IOPS): How fast the storage can read or write data.
The sizing methodology takes into account your data size, workload type, and performance goals. It applies to both MongoDB Community and MongoDB Enterprise editions and reflects both the official recommendations and commonly adopted production configurations for reliable performance. For more information, refer to the MongoDB Documentation.
Sizing workflow
Quantify the Working Set
Add together the sizes of your hot documents and hot indexes.
Target RAM ≥ working set for predictable latency; add ~30–50% buffer for growth. MongoDB emphasizes proper memory sizing so hot data is in RAM. For more information, refer to the MongoDB Documentation.
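A quick way to approximate these inputs is to read collection statistics from the server. The following sketch uses the PyMongo driver; the connection URI, database name, and hot-data fraction are illustrative assumptions you must replace with your own values:

```python
# Estimate working-set inputs from collection statistics (PyMongo).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["mydb"]                                # hypothetical database name

data_bytes = 0
index_bytes = 0
for name in db.list_collection_names(filter={"type": "collection"}):
    stats = db.command("collStats", name)
    data_bytes += stats["size"]             # uncompressed document size
    index_bytes += stats["totalIndexSize"]  # all indexes for the collection

HOT_FRACTION = 0.2  # assumption: ~20% of data is "hot"; adjust for your app
working_set = data_bytes * HOT_FRACTION + index_bytes
ram_target = working_set * 1.4  # 30-50% growth buffer per the guidance above

print(f"Working set: {working_set / 2**30:.1f} GiB")
print(f"RAM target:  {ram_target / 2**30:.1f} GiB")
```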
Map RAM to WiredTiger Reality
WiredTiger uses the larger of 50% × (RAM − 1 GB) or 256 MB for its internal cache. The rest typically acts as a file system cache. Plan RAM with that split in mind. For more information, refer to the MongoDB Documentation.
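As a sanity check, you can compute the default cache split for a candidate RAM size; a minimal sketch of the formula above:

```python
# Default WiredTiger internal cache: the larger of 50% x (RAM - 1 GB) or 256 MB.
def wiredtiger_cache_gb(ram_gb: float) -> float:
    return max(0.5 * (ram_gb - 1), 0.25)

for ram in (4, 16, 64):
    cache = wiredtiger_cache_gb(ram)
    print(f"{ram:>3} GB RAM -> {cache:5.2f} GB WiredTiger cache, "
          f"{ram - cache:5.2f} GB left for the OS and file system cache")
```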
Choose CPU for Concurrency and Compute
Start balanced unless you know the workload is aggregation-heavy: for example, approximately one vCPU per 4–8 GB of RAM for the Business edition. For CRUD-heavy apps, RAM or IOPS are more common bottlenecks than CPU. Increase cores as aggregation or compute load rises. For more information, refer to the MongoDB Documentation.
Plan Storage Capacity and Performance
Use a Solid-State Drive (SSD). Size for:
data + indexes + journal + Oplog + 20–30% buffer for growth. SSD performance depends on volume size. The MongoDB guide explicitly recommends a minimum of 100 GB for SSD Standard or SSD Premium for optimal performance, even if your capacity requirement is smaller.
For more information on performance classes, see SSD Storage.
Upsizing the SSD can reduce latency by increasing available performance headroom. The maximum storage per cluster data volume is 4 TB. To configure storage when setting up a cluster, see Set Up a MongoDB Cluster.
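For example, a minimal capacity estimate under these rules (all input figures are assumptions for illustration):

```python
# SSD capacity estimate: data + indexes + journal + Oplog + growth buffer.
# All inputs below are illustrative; substitute your own figures.
data_gb, index_gb, journal_gb, oplog_gb = 180.0, 40.0, 2.0, 25.0
buffer = 0.3  # 20-30% headroom for growth; use the upper end if unsure

required_gb = (data_gb + index_gb + journal_gb + oplog_gb) * (1 + buffer)
ssd_gb = max(required_gb, 100)  # never go below the 100 GB performance floor

print(f"Required capacity: {required_gb:.0f} GB -> provision {ssd_gb:.0f} GB SSD")
```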
Configure the Oplog Size
The Oplog acts as a "history buffer" that allows replica nodes to catch up if they fall behind. If this buffer is too small, a node that goes offline for maintenance may not be able to find the data it needs to resync when it comes back online. Ensure the Oplog window, the time span covered by its entries, covers worst-case lag, such as traffic spikes, batch imports, or maintenance. Many teams start at ≥ 24 hours and verify under peak writes.
We recommend sizing your Oplog to hold at least 24 hours of history. It provides a safety margin for traffic spikes or server maintenance.
For more information, refer to the MongoDB Documentation.
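You can verify the current Oplog window directly from the local database; a minimal sketch using the PyMongo driver (the connection URI is a placeholder):

```python
# Measure the Oplog window: time span between the oldest and newest entries.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
oplog = client.local.oplog.rs

first = oplog.find_one(sort=[("$natural", 1)])   # oldest entry
last = oplog.find_one(sort=[("$natural", -1)])   # newest entry

window_s = last["ts"].time - first["ts"].time    # BSON timestamps, in seconds
print(f"Oplog window: {window_s / 3600:.1f} hours")
```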
Tune Connections and Pools
Use realistic pool sizes and timeouts. Avoid huge idle pools that burn memory or threads with no benefit.
Connection Scaling Heuristic: These heuristics follow the IONOS operational guidelines. In practice, every MongoDB connection consumes a measurable amount of RAM for thread stacks, network buffers, and session metadata.
Platform Enforcement: To guarantee stability, the platform enforces strict connection limits based on your instance's available RAM. It is a safety mechanism to prevent memory exhaustion and protect your database from being shut down by the Linux Out Of Memory (OOM) killer.
Client Configuration: Configure your application’s connection pool sizes to stay within these enforced limits. Avoid defining large idle pools, as the server will reject connections once the limit is reached.
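For example, a conservative pool configuration with the PyMongo driver; the sizes and timeouts are assumptions to adapt to your instance's enforced limit:

```python
# Conservative connection pool settings (PyMongo).
# Pool sizes and timeouts are illustrative; stay under your RAM tier's limit.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://localhost:27017",   # placeholder; use your cluster URI
    maxPoolSize=50,                # hard cap on connections from this client
    minPoolSize=5,                 # a few warm connections, not a huge idle pool
    maxIdleTimeMS=60_000,          # release idle connections after 60 s
    waitQueueTimeoutMS=2_000,      # fail fast instead of queueing forever
)
```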
Select starting configuration by edition
Use these sizes as starting points. Validate with metrics and scale up as needed.
Workload Size: Development (Non-critical)
Business Edition (Shared Hardware): 1 Node (Standalone), 2 vCPU, 4–8 GB RAM
Enterprise Edition (Dedicated Hardware): 3 Nodes (Replica Set), 2 Dedicated Cores, 4–8 GB RAM
Storage and Operational Notes: Business Warning: One node does not provide high availability; use it strictly for development or testing. Enterprise: Starts as a 3-node replica set for accurate production simulation.

Workload Size: Standard Production (Read-mostly API)
Business Edition (Shared Hardware): 3 Nodes (Fixed replica set), 4–8 vCPU, 16–32 GB RAM
Enterprise Edition (Dedicated Hardware): 3, 5, or 7 Nodes (Configurable replica set), 4–6 Dedicated Cores, 16–32 GB RAM
Storage and Operational Notes: Production Rule: Do not use one node for production workloads. Enterprise Choice: Choose 5 or 7 nodes primarily for resilience, such as tolerating multiple simultaneous failures, rather than for performance.

Workload Size: High Performance (Write Heavy)
Business Edition (Shared Hardware): 3 Nodes (Fixed replica set), 8–16 vCPU, 64–128 GB RAM
Enterprise Edition (Dedicated Hardware): 3, 5, or 7 Nodes (Configurable replica set), 8–12 Dedicated Cores, 64–128 GB RAM
Storage and Operational Notes: Scaling Decision: If you need to scale read or write operations while maintaining data consistency, do not only add secondary nodes; transition to sharding to multiply your primary nodes.

Workload Size: Massive Scale (Extreme Throughput)
Business Edition (Shared Hardware): Not available (limit reached)
Enterprise Edition (Dedicated Hardware): Sharded cluster (Multiple replica sets), 16–31 Dedicated Cores, 128–230 GB RAM
Storage and Operational Notes: Scaling Strategy: Transition to sharding (horizontal scaling) if you hit any of the following vertical limits of a single replica set:
RAM: Working set > 230 GB.
IOPS: Write intensity exceeds SSD Premium limits.
CPU: Primary node saturated by locking or compression.
Capacity: The disk size exceeds single-volume limits.
Warning: The selected RAM determines the hard limit for concurrent connections. If your application requires more connections than the RAM tier allows, the server rejects new connections to protect itself. Size the RAM to accommodate your peak connection count and avoid application-side bottlenecks. This applies to both MongoDB Business vCPU and MongoDB Enterprise dedicated core resources.
Configuration guidance
SSD Sizing: Increasing the SSD size can improve IOPS and throughput headroom, even when the capacity alone is already sufficient.
Storage: Up to 4 TB of data volume is possible per cluster. You can plan the capacity and performance together within this limit.
Sharding and BI Connector: These features are available only in the Enterprise edition. You can Create a Sharded Database Cluster and Enable the BI Connector.
Partnership: The service is offered in partnership with MongoDB, Inc. (certified).
Performance: If you observe storage latency during peaks, increase the SSD volume size, especially on SSD Standard or SSD Premium, to unlock higher performance headroom; no application change is required. Start at ≥ 100 GB, per the IONOS Cloud setup guidance in Set Up a MongoDB Cluster.
Sizing worksheet
This worksheet helps you estimate the resources needed for your MongoDB deployment on IONOS Cloud.
Criteria
Description
Hot Documents
The documents that are accessed frequently and repeatedly during typical database operations. They form part of the "working set".
Hot Index Size
The total size (in bytes) of the indexes that are actively used by your queries. You can find this by summing totalIndexSize() for your collections.
Hot Data Bytes
The portion of your data and indexes that is actively and frequently accessed by your application. The working set comprises hot documents, hot index entries, and frequently accessed metadata.
Working Set
The total size (in bytes) of the data that is frequently accessed or modified. It is the sum of your "hot index size" and "hot data bytes".
RAM Target
A recommended amount of RAM for your MongoDB instance.
It is calculated by multiplying your "working set" by a factor of 1.3 to 1.5, which accounts for WiredTiger memory usage and overhead. For more information, refer to the MongoDB Documentation.
vCPU Target
The recommended number of vCPUs needed to handle your application's concurrency and the complexity of your aggregations (data processing tasks).
SSD Capacity
The total storage space you need on your SSD.
It includes your data, indexes, journal files (for recovery), Oplog (for replication), and an additional 30% for future growth and overhead. Calculate the value as: (data + indexes + journal + Oplog) × 1.3 or more.
SSD Performance
To ensure optimal performance, start with an SSD of at least 100 GB on SSD Standard or SSD Premium. Choose ≥ 100 GB; you can upgrade later if latency issues occur.
Oplog Window
The time duration between the oldest and newest Oplog entries. It is also known as the Oplog retention window.
The value must cover your longest expected maintenance window. Start at ≥ 24 hours.
Alerts
Set alerts when resources, such as CPU, RAM, or storage, approach their limits. Alert at 70% of CPU, RAM, or storage capacity, and plan to scale before sustained usage reaches 85%.
Example worksheet for sizing
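A worked example of the worksheet, with all input figures assumed for illustration (a hypothetical read-mostly API workload):

```python
# Worked sizing worksheet (all input figures are illustrative assumptions).

# Working set and RAM
hot_data_gb = 30.0       # Hot Data Bytes
hot_index_gb = 10.0      # Hot Index Size
working_set_gb = hot_data_gb + hot_index_gb     # -> 40 GB
ram_target_gb = working_set_gb * 1.4            # x1.3-1.5 buffer -> 56 GB

# CPU (heuristic: ~1 vCPU per 4-8 GB of RAM)
vcpu_target = round(ram_target_gb / 6)          # -> ~9 vCPU

# Storage (total on-disk data, not just the hot subset)
data_gb, index_gb, journal_gb, oplog_gb = 150.0, 30.0, 2.0, 20.0
ssd_gb = max((data_gb + index_gb + journal_gb + oplog_gb) * 1.3, 100)

print(f"Working set : {working_set_gb:.0f} GB")
print(f"RAM target  : {ram_target_gb:.0f} GB")
print(f"vCPU target : {vcpu_target}")
print(f"SSD capacity: {ssd_gb:.0f} GB (respecting the 100 GB floor)")
print("Oplog window: >= 24 hours; alert at 70%, scale before sustained 85%")
```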
Technical reference for SSD Premium sizing
SSD Premium performance increases linearly with the volume size until it reaches the platform threshold. You may need to provision a larger disk than your data requires to achieve your desired IOPS, a strategy known as "Oversizing".
The formulas for scaling performance with volume size:
Read Performance: 75 IOPS × Volume Size (GB)
Write Performance: 50 IOPS × Volume Size (GB)
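To size a volume for a target IOPS figure, invert these formulas; a minimal sketch with assumed targets:

```python
# Invert the SSD Premium formulas: how large must the volume be
# to reach a target IOPS figure? (Targets below are assumptions.)
READ_IOPS_PER_GB = 75
WRITE_IOPS_PER_GB = 50

target_read_iops = 15_000
target_write_iops = 8_000

needed_gb = max(target_read_iops / READ_IOPS_PER_GB,    # 200 GB
                target_write_iops / WRITE_IOPS_PER_GB)  # 160 GB
print(f"Provision at least {needed_gb:.0f} GB SSD Premium")  # -> 200 GB
```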