Capped collections are fixed-size collections that behave like FIFO buffers. Once a capped collection reaches its configured size, MongoDB makes room for new documents by removing the oldest ones.
Today, capped collections are a specialized tool. If you only need age-based retention, TTL indexes are usually the better choice because they work on regular collections and offer more flexibility. Capped collections are still useful when you want a bounded log-style buffer or a tailable cursor.
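For comparison, age-based retention with a TTL index on a regular collection can be sketched as follows. The `createdAt` field name and the one-hour expiry are illustrative assumptions, not part of the capped-collection example:

```java
import java.util.concurrent.TimeUnit;

import org.bson.Document;

import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.IndexOptions;
import com.mongodb.client.model.Indexes;

public class TtlRetention {

  // Deletes documents one hour after the value of their "createdAt"
  // field. Field and collection names are illustrative.
  static void createTtlIndex(MongoCollection<Document> collection) {
    collection.createIndex(Indexes.ascending("createdAt"),
        new IndexOptions().expireAfter(1L, TimeUnit.HOURS));
  }
}
```

Note that the TTL monitor removes expired documents in a background task that runs periodically, so documents may linger briefly past their expiry time.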
The examples in this post use the MongoDB Java Sync Driver 5.6 on Java 25 with a recent MongoDB server running on localhost:27017.
<dependencies>
  <dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongodb-driver-sync</artifactId>
    <version>5.6.4</version>
  </dependency>
  <dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.5.32</version>
  </dependency>
</dependencies>
Create ¶
Unlike a regular collection, a capped collection should be created explicitly before your application starts writing documents. In Java, you do that with MongoDatabase.createCollection() and CreateCollectionOptions.
The following example creates a capped collection named log in the test database and sets the maximum size to 256 bytes.
try (MongoClient mongoClient = MongoClients.create()) {
  MongoDatabase db = mongoClient.getDatabase("test");

  Set<String> collectionNames = new HashSet<>();
  db.listCollectionNames().into(collectionNames);
  if (!collectionNames.contains("log")) {
    db.createCollection("log",
        new CreateCollectionOptions().capped(true).sizeInBytes(256));
  }
256 bytes is the minimum capped size. MongoDB also rounds the configured size up to the next multiple of 256, so a value like 1000 becomes 1024 internally.
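The rounding rule just described can be mirrored in a few lines of plain Java. This is an illustration of the rule as stated above (round up to the next multiple of 256, with a 256-byte floor), not a driver API:

```java
public class CappedSizeRounding {

  // Rounds a requested capped-collection size up to the next multiple
  // of 256 bytes, with 256 bytes as the floor, mirroring the rule
  // described in the text. Not a driver API.
  static int effectiveSize(int requestedBytes) {
    int rounded = ((requestedBytes + 255) / 256) * 256;
    return Math.max(rounded, 256);
  }

  public static void main(String[] args) {
    System.out.println(effectiveSize(1000)); // 1024
    System.out.println(effectiveSize(256)); // 256
  }
}
```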
MongoDB automatically creates the _id index for capped collections.
Insert ¶
Writing to a capped collection uses the same APIs as a regular collection, such as MongoDatabase.getCollection() and MongoCollection.insertOne(). Behind the scenes, MongoDB removes old documents when a new insert would exceed the configured size.
MongoCollection<Document> collection = db.getCollection("log");
for (int j = 0; j < 10; j++) {
  Document logMessage = new Document();
  logMessage.append("index", j);
  logMessage.append("message", "User sr");
  logMessage.append("loggedIn", new Date());
  logMessage.append("loggedOut", new Date());
  collection.insertOne(logMessage);
}
When we inspect the collection, only the last 2 documents remain. Each sample document is about 90 bytes, so only 2 fit into the 256-byte limit.
collection.find()
    .forEach((Consumer<Document>) block -> System.out.println(block.get("index")));
// 8
// 9
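The per-document size can be verified without a running server by encoding one sample document with the driver's BSON classes. The explicit _id in this sketch stands in for the one MongoDB generates on insert:

```java
import java.util.Date;

import org.bson.BsonBinaryWriter;
import org.bson.Document;
import org.bson.codecs.DocumentCodec;
import org.bson.codecs.EncoderContext;
import org.bson.io.BasicOutputBuffer;
import org.bson.types.ObjectId;

public class DocSize {

  // Encodes the sample log document to BSON and returns its size in
  // bytes. The explicit _id stands in for the one MongoDB generates.
  static int sampleDocSize() {
    Document logMessage = new Document()
        .append("_id", new ObjectId())
        .append("index", 0)
        .append("message", "User sr")
        .append("loggedIn", new Date())
        .append("loggedOut", new Date());

    BasicOutputBuffer buffer = new BasicOutputBuffer();
    try (BsonBinaryWriter writer = new BsonBinaryWriter(buffer)) {
      new DocumentCodec().encode(writer, logMessage,
          EncoderContext.builder().build());
    }
    return buffer.getSize();
  }

  public static void main(String[] args) {
    System.out.println(sampleDocSize()); // 91
  }
}
```

Each document encodes to 91 bytes, so two documents (182 bytes) fit under the 256-byte cap while three (273 bytes) would not.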
Reading the collection without an explicit sort returns documents in natural order, which in this single-writer example matches insertion order.
To retrieve the newest documents first, sort by $natural descending.
Document last = collection.find().sort(Sorts.descending("$natural")).first();
System.out.println(last.get("index")); // 9
If you have multiple concurrent writers, MongoDB does not guarantee strict insertion order in query results.
Limit number of documents ¶
You can cap a collection not only by size but also by the number of documents. To configure the document count limit, use maxDocuments in CreateCollectionOptions.
Example with a maximum of 3 documents:
db.createCollection("log",
    new CreateCollectionOptions().capped(true).maxDocuments(3).sizeInBytes(512));
sizeInBytes is still mandatory, and it remains the first limit MongoDB enforces. If the size is too small, MongoDB starts removing old documents before the maxDocuments limit is ever reached. In this sample, three documents need roughly 270 bytes, so 512 bytes is large enough to reach the document-count limit.
After inserting 10 documents, the collection contains only the last 3.
MongoCollection<Document> collection = db.getCollection("log");
for (int j = 0; j < 10; j++) {
  Document logMessage = new Document();
  logMessage.append("index", j);
  logMessage.append("message", "User sr");
  logMessage.append("loggedIn", new Date());
  logMessage.append("loggedOut", new Date());
  collection.insertOne(logMessage);
}

collection.find()
    .forEach((Consumer<Document>) block -> System.out.println(block.get("index")));
// 7
// 8
// 9
To estimate the required size, the collStats command is still useful. The avgObjSize field tells you the average document size in bytes.
try (MongoClient mongoClient = MongoClients.create()) {
  MongoDatabase db = mongoClient.getDatabase("test");

  Document collStats = db.runCommand(new Document("collStats", "log"));
  System.out.println(collStats.toJson());
  System.out.println("Number of Documents: " + collStats.get("count"));
  System.out.println("Size in Bytes: " + collStats.get("size"));
  System.out.println("Average Object size in Bytes: " + collStats.get("avgObjSize"));
}
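Combined with the multiple-of-256 rounding described earlier, avgObjSize gives a rough starting point for sizeInBytes. The following helper is a sketch under those assumptions, not an official formula:

```java
public class CappedSizeEstimate {

  // Rough sizeInBytes for holding `count` documents of `avgObjSize`
  // bytes each, rounded up to the next multiple of 256 (256-byte
  // floor). A starting point only; real documents vary in size.
  static long estimateSizeInBytes(long avgObjSize, long count) {
    long raw = avgObjSize * count;
    return Math.max(((raw + 255) / 256) * 256, 256);
  }

  public static void main(String[] args) {
    // e.g. three ~91-byte documents need at least 512 bytes
    System.out.println(estimateSizeInBytes(91, 3)); // 512
  }
}
```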
The same command also tells you whether a collection is capped and shows the configured max and maxSize values.
try (MongoClient mongoClient = MongoClients.create()) {
  MongoDatabase db = mongoClient.getDatabase("test");

  Document collStats = db.runCommand(new Document("collStats", "log"));
  System.out.println("Is capped: " + collStats.get("capped"));
  System.out.println("Max. Documents: " + collStats.get("max"));
  System.out.println("Max. Size in Bytes: " + collStats.get("maxSize"));
}
Tailable Cursors ¶
If you need to keep reading as new documents arrive, capped collections support tailable cursors. They behave like tail -f on a log file: once the initial result set is consumed, the cursor stays open and waits for new inserts.
try (MongoClient mongoClient = MongoClients.create()) {
  MongoDatabase db = mongoClient.getDatabase("test");
  db.drop();
  db.createCollection("log",
      new CreateCollectionOptions().capped(true).sizeInBytes(512));
  MongoCollection<Document> collection = db.getCollection("log");

  AtomicInteger index = new AtomicInteger(0);
  Thread insertThread = new Thread(() -> {
    while (!Thread.currentThread().isInterrupted()) {
      try {
        TimeUnit.SECONDS.sleep(1);
      }
      catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return;
      }
      Document logMessage = new Document();
      logMessage.append("index", index.incrementAndGet());
      logMessage.append("message", "User sr");
      logMessage.append("loggedIn", Date.from(Instant.now()));
      logMessage.append("loggedOut", Date.from(Instant.now()));
      collection.insertOne(logMessage);
    }
  });
  insertThread.setDaemon(true);
  insertThread.start();
The application then opens a tailable cursor in a loop. If the collection is empty when the cursor is opened, the server returns a cursor that is exhausted and closed immediately, so reopening it after a short delay is a practical approach.
  while (true) {
    try (MongoCursor<Document> cursor = collection.find()
        .cursorType(CursorType.TailableAwait).noCursorTimeout(true).iterator()) {
      while (cursor.hasNext()) {
        Document doc = cursor.next();
        System.out.println(doc.get("index"));
      }
    }
    TimeUnit.SECONDS.sleep(2);
  }
Once the cursor has consumed the existing documents, hasNext() blocks until the next insert arrives and next() returns it. For many event-driven applications, change streams are now a better fit because they work with regular collections and do not require a fixed-size buffer. Tailable cursors still make sense when the capped collection itself is the feature you want.
See the official documentation for more information about tailable cursors.
Conversion ¶
MongoDB still provides the convertToCapped command to replace a regular collection with a capped one.
Document collStats = db.runCommand(new Document("collStats", "log"));
System.out.println(collStats.get("capped")); // false
Document doc = new Document("convertToCapped", "log");
doc.append("size", 1024);
Document result = db.runCommand(doc);
System.out.println(result.toJson());
collStats = db.runCommand(new Document("collStats", "log"));
System.out.println(collStats.get("capped")); // true
During the conversion, MongoDB reads the source collection in natural order and copies the documents into a new capped collection. If the configured size is too small, only the newest documents fit.
After the conversion, only the latest documents remain in the collection.
collection.find().forEach(
    (Consumer<Document>) document -> System.out.println(document.get("index")));
// 989
// 990
// 991
// 992
// 993
// 994
// 995
// 996
// 997
// 998
// 999
convertToCapped holds an exclusive database lock while it runs, and it recreates only the _id index. Recreate any secondary indexes yourself after the command completes.
If you want to keep the original collection untouched, use cloneCollectionAsCapped to create a capped copy instead.
Document collStats = db.runCommand(new Document("collStats", "log"));
System.out.println(collStats.get("capped")); // false
Document doc = new Document("cloneCollectionAsCapped", "log");
doc.append("toCollection", "logCapped");
doc.append("size", 1024);
Document result = db.runCommand(doc);
System.out.println(result.toJson());
collStats = db.runCommand(new Document("collStats", "log"));
System.out.println(collStats.get("capped")); // false
collStats = db.runCommand(new Document("collStats", "logCapped"));
System.out.println(collStats.get("capped")); // true
db.getCollection("logCapped").find().forEach(
    (Consumer<Document>) document -> System.out.println(document.get("index")));
This command does not affect the documents in the source collection.
There is no single command that converts a capped collection into a regular collection. A practical approach is to rename the capped collection, copy its documents into a new regular collection with $out, and then drop the old collection.
Document collStats = db.runCommand(new Document("collStats", "log"));
System.out.println(collStats.get("capped")); // true

MongoNamespace newName = new MongoNamespace("test", "logOld");
collection.renameCollection(newName);
collection = db.getCollection("logOld");

collection.aggregate(Arrays.asList(Aggregates.out("log")))
    .forEach((Consumer<Document>) block -> System.out.println(block));

collStats = db.runCommand(new Document("collStats", "log"));
System.out.println(collStats.get("capped")); // false

db.getCollection("logOld").drop();
Limitations ¶
Capped collections are intentionally restrictive:
- They are a specialized storage type, not a general replacement for regular collections.
- You cannot delete individual documents.
- Avoid updates that change document size.
- They cannot be sharded.
- You cannot write to them inside transactions.
- If you only need age-based retention, TTL indexes are usually the better choice.
More information ¶
All the code examples from this blog are stored on GitHub:
https://github.com/ralscha/blog/tree/master/capped
Official MongoDB documentation about capped collections:
https://www.mongodb.com/docs/manual/core/capped-collections/