# Database Setup

Optimize your MongoDB configuration for the best Chronos performance and reliability.
## Required MongoDB Version

Chronos requires MongoDB 4.0 or higher, with support for:
- Change streams (for real-time updates)
- Multi-document transactions (for atomic operations; these require a replica set)
- Modern driver features
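As a startup guard, you can compare the version string MongoDB reports (via `db.version()` in the shell, or the driver's `buildInfo` command) against this minimum. The helper below is a hedged sketch, not part of Chronos itself:

```javascript
// Illustrative helper: returns true when a MongoDB version string
// (e.g. "4.4.18") satisfies a minimum version such as [4, 0].
function meetsMinimumVersion(versionString, minimum = [4, 0]) {
  const parts = versionString.split('.').map(Number);
  for (let i = 0; i < minimum.length; i++) {
    const part = parts[i] || 0;
    if (part > minimum[i]) return true;
    if (part < minimum[i]) return false;
  }
  return true; // exactly equal to the minimum
}
```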
## Database Collections

Chronos stores all job information in a single collection (default: `chronosJobs`).
### Default Collection Structure

```javascript
{
  _id: ObjectId("..."),
  name: "job-name",
  data: { /* job data */ },
  type: "single", // or "normal"
  priority: 0,
  nextRunAt: ISODate("..."),
  lastModifiedBy: "scheduler-name",
  lockedAt: null,
  lastRunAt: null,
  lastFinishedAt: null,
  failedAt: null,
  failReason: null,
  repeatInterval: null,
  repeatTimezone: null,
  startDate: null,
  endDate: null,
  skipDays: null,
  repeatAt: null
}
```
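To illustrate how these fields interact, here is a sketch of the check a scheduler typically makes before picking up a job: it must be due, not locked by another instance, and not disabled. The `isRunnable` helper is illustrative only, not Chronos's actual implementation:

```javascript
// Illustrative helper: a job is runnable when it is due,
// unlocked, and not disabled.
function isRunnable(job, now = new Date()) {
  return (
    job.nextRunAt !== null &&
    job.nextRunAt <= now &&
    job.lockedAt === null &&
    !job.disabled
  );
}
```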
## Essential Indexes

Chronos automatically creates the following index for optimal scheduling performance:

### Default Index

```javascript
db.chronosJobs.createIndex({
  "nextRunAt": 1,
  "priority": -1
});
```
This index is crucial for job scheduling performance.
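The index matches the order in which due jobs are picked up: earliest `nextRunAt` first, and among jobs due at the same time, highest `priority` first. The plain JavaScript comparator below expresses that same ordering; it illustrates the sort the index serves and is not Chronos code:

```javascript
// Sort ascending by nextRunAt, then descending by priority,
// the same ordering the { nextRunAt: 1, priority: -1 } index serves.
function byScheduleOrder(a, b) {
  const timeDiff = a.nextRunAt - b.nextRunAt;
  if (timeDiff !== 0) return timeDiff;
  return b.priority - a.priority;
}
```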
### Recommended Additional Indexes

For production use, consider adding these indexes based on your usage patterns:

#### For Job Management Operations

```javascript
// Index for finding jobs by name
db.chronosJobs.createIndex({ "name": 1 });

// Index for finding jobs by name and status
db.chronosJobs.createIndex({
  "name": 1,
  "disabled": 1
});

// Compound index for job cleanup
db.chronosJobs.createIndex({
  "name": 1,
  "disabled": 1,
  "lockedAt": 1
});
```

Note that `{ "name": 1 }` and `{ "name": 1, "disabled": 1 }` are prefixes of the compound cleanup index, so the compound index alone can serve all three query shapes; create the smaller indexes only if you want them independently.
#### For Agendash (Web UI)

If you use Agendash for monitoring:

```javascript
db.chronosJobs.createIndex({
  "nextRunAt": -1,
  "lastRunAt": -1,
  "lastFinishedAt": -1
}, { name: "agendash" });
```
#### For High-Volume Job Types

If you have job types with thousands of instances:

```javascript
// Optimize sorting for specific job types
db.chronosJobs.createIndex({
  "name": 1,
  "nextRunAt": 1,
  "priority": -1
});
```
## Connection Optimization

### Connection String Options

```javascript
const scheduler = new Chronos({
  db: {
    address: 'mongodb://localhost:27017/scheduler',
    options: {
      // Connection pool settings
      maxPoolSize: 100,
      minPoolSize: 5,
      maxIdleTimeMS: 30000,

      // Timeout settings
      connectTimeoutMS: 30000,
      socketTimeoutMS: 30000,

      // Retry settings
      retryWrites: true,
      retryReads: true,

      // Read preference
      readPreference: 'secondaryPreferred',

      // Write concern
      w: 'majority',
      wtimeoutMS: 5000,

      // Network compression
      compressors: ['zstd', 'zlib']
    }
  }
});
```
### Replica Set Configuration

For production environments with replica sets:

```javascript
const scheduler = new Chronos({
  db: {
    address: 'mongodb://mongo1:27017,mongo2:27017,mongo3:27017/scheduler?replicaSet=myReplSet',
    options: {
      readPreference: 'secondaryPreferred',
      readConcern: { level: 'majority' },
      writeConcern: { w: 'majority', wtimeout: 5000 }
    }
  }
});
```
## Performance Optimization

### Collection Configuration

#### Capped Collection for Completed Jobs

Create a separate capped collection for job history:

```javascript
// Create a capped collection for job history (~100 MB, max 100k docs)
db.createCollection("jobHistory", {
  capped: true,
  size: 100000000,
  max: 100000
});

// Archive completed jobs to the history collection
// (assumes `db` here is a connected driver Db instance)
scheduler.on('complete', async (job) => {
  await db.collection('jobHistory').insertOne({
    ...job.attrs,
    archivedAt: new Date()
  });
});
```
#### TTL Index for Cleanup

Automatically remove old completed jobs:

```javascript
// Remove completed jobs 30 days after they last finished
db.chronosJobs.createIndex(
  { "lastFinishedAt": 1 },
  {
    expireAfterSeconds: 2592000, // 30 days
    partialFilterExpression: {
      "lastFinishedAt": { $exists: true }
    }
  }
);
```
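Be aware that this TTL index also removes recurring jobs whose last finish is older than the window (for example, a yearly schedule), so consider applying it to an archive collection instead if you rely on long-gap schedules. The `expireAfterSeconds` value is simply the retention window in seconds; computing it explicitly avoids magic numbers:

```javascript
// 30-day retention expressed in seconds: days * hours * minutes * seconds.
const RETENTION_DAYS = 30;
const expireAfterSeconds = RETENTION_DAYS * 24 * 60 * 60;
console.log(expireAfterSeconds); // 2592000
```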
### Sharding Considerations

For very high-volume scenarios, consider sharding:

```javascript
// Enable sharding on the database
sh.enableSharding("scheduler");

// Shard the jobs collection by name (distributes job types across shards)
sh.shardCollection("scheduler.chronosJobs", { "name": 1 });
```
## Monitoring and Maintenance

### Database Statistics

Monitor your job collection:

```javascript
// Get collection statistics
db.chronosJobs.stats();

// Monitor index usage
db.chronosJobs.aggregate([
  { $indexStats: {} }
]);

// Profile operations slower than 100 ms, then inspect the profiler output
db.setProfilingLevel(1, { slowms: 100 });
db.system.profile.find().sort({ ts: -1 }).limit(5);
```
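`$indexStats` emits one document per index with an `accesses.ops` counter (operations since server start). A small helper can flag indexes that have never been used; this is a sketch assuming that standard output shape, and `unusedIndexes` is a name invented here:

```javascript
// Given $indexStats output, return the names of indexes with zero
// recorded accesses: candidates for review before dropping.
function unusedIndexes(indexStats) {
  return indexStats
    .filter(stat => Number(stat.accesses.ops) === 0)
    .map(stat => stat.name);
}
```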
### Regular Maintenance

#### Clean Up Failed Jobs

```javascript
// Remove failed jobs older than 7 days
db.chronosJobs.deleteMany({
  failedAt: { $lt: new Date(Date.now() - 7 * 24 * 60 * 60 * 1000) }
});
```
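The cutoff in the query above is just "now minus N days"; extracting that into a helper keeps cleanup scripts readable. `daysAgo` is an illustrative helper, not a Chronos API:

```javascript
// Returns the Date that was `days` days before `now`.
function daysAgo(days, now = new Date()) {
  return new Date(now.getTime() - days * 24 * 60 * 60 * 1000);
}
```

With it, the cleanup filter becomes `{ failedAt: { $lt: daysAgo(7) } }`.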
#### Reindex for Performance

```javascript
// Rebuild indexes if they become fragmented. Note: reIndex() is
// deprecated in recent MongoDB releases and restricted to standalone
// servers; it is rarely needed in practice.
db.chronosJobs.reIndex();
```
## Backup Strategy

### MongoDB Dump

Take regular backups of job data:

```bash
# Backup the jobs collection
mongodump --db scheduler --collection chronosJobs --out /backup/path

# Restore from backup
mongorestore --db scheduler --collection chronosJobs /backup/path/scheduler/chronosJobs.bson
```
### Point-in-Time Recovery

For critical systems, run a replica set so the oplog is available:

```yaml
# mongod.conf: the oplog exists on replica set members; oplogSizeMB sizes it
replication:
  replSetName: "myReplSet"
  oplogSizeMB: 1024
```
## Security Configuration

### Authentication

```javascript
const scheduler = new Chronos({
  db: {
    address: 'mongodb://username:password@localhost:27017/scheduler?authSource=admin',
    options: {
      // TLS options (the older ssl/sslValidate/sslCA names are deprecated)
      tls: true,
      tlsCAFile: '/path/to/ca.pem'
    }
  }
});
```
### Access Control

Create a dedicated user for Chronos:

```javascript
// Create a Chronos user with only the permissions it needs
use admin
db.createUser({
  user: "chronos",
  pwd: "secure-password",
  roles: [
    { role: "readWrite", db: "scheduler" }
  ]
});
```
## Environment-Specific Configurations

### Development

```javascript
const scheduler = new Chronos({
  db: {
    address: 'mongodb://localhost:27017/scheduler-dev',
    options: {
      maxPoolSize: 5,
      minPoolSize: 1
    }
  }
});
```
### Production

```javascript
const scheduler = new Chronos({
  db: {
    address: process.env.MONGODB_URI,
    options: {
      maxPoolSize: 100,
      minPoolSize: 10,
      tls: true,
      retryWrites: true,
      readPreference: 'secondaryPreferred',
      w: 'majority'
    }
  }
});
```
### High Availability

```javascript
const scheduler = new Chronos({
  db: {
    address: 'mongodb+srv://cluster0.mongodb.net/scheduler?retryWrites=true&w=majority',
    options: {
      maxPoolSize: 200,
      serverSelectionTimeoutMS: 5000,
      heartbeatFrequencyMS: 10000,
      maxIdleTimeMS: 120000
    }
  }
});
```
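These profiles differ mostly in pool sizing and safety options. One way to avoid drift between them is a single lookup keyed by environment; the values below mirror the examples above, and `dbOptionsFor` is an illustrative helper, not a Chronos API:

```javascript
// Illustrative per-environment connection profiles.
const DB_PROFILES = {
  development: { maxPoolSize: 5, minPoolSize: 1 },
  production: { maxPoolSize: 100, minPoolSize: 10, retryWrites: true, w: 'majority' }
};

// Fall back to the development profile for unrecognized environments.
function dbOptionsFor(env) {
  return DB_PROFILES[env] || DB_PROFILES.development;
}
```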
## Troubleshooting

### Common Issues

#### Connection Pool Exhaustion

```javascript
// Watch for pool-related errors
scheduler.on('error', (err) => {
  if (err.message.includes('pool')) {
    console.error('Connection pool issue:', err);
    // Increase maxPoolSize or check for connection leaks
  }
});
```
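If all driver errors flow through one handler, extracting the message check into a predicate keeps it testable. Matching on message text is brittle and purely illustrative; `isPoolError` is not a driver API:

```javascript
// Illustrative check for pool-related error messages.
function isPoolError(err) {
  return typeof err.message === 'string' &&
    err.message.toLowerCase().includes('pool');
}
```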
#### Slow Queries

```javascript
// Enable the MongoDB profiler
db.setProfilingLevel(2, { slowms: 100 });

// Check for missing indexes
db.chronosJobs.find({ name: "slow-job" }).explain("executionStats");
```
#### Index Conflicts

```javascript
// Check existing indexes
db.chronosJobs.getIndexes();

// Drop a conflicting index by name
db.chronosJobs.dropIndex("index_name");
```
## Best Practices

- **Always use indexes** for production workloads
- **Monitor collection size** and implement cleanup strategies
- **Use replica sets** for production reliability
- **Configure appropriate timeouts** for your environment
- **Perform regular maintenance** - reindex and clean up old jobs
- **Monitor performance** with the MongoDB profiler
- **Back up regularly** - jobs represent business logic