Performance Optimization
Optimize Chronos for high-throughput production environments with these performance strategies.
MongoDB Indexing
Proper indexing is crucial for Chronos performance, especially with large job collections.
Essential Indexes
Create these indexes for optimal performance:
// Primary index for job processing
db.chronosJobs.createIndex({
  "nextRunAt": 1,
  "disabled": 1,
  "lockedAt": 1
});

// Index for Agendash UI (if used)
db.chronosJobs.createIndex({
  "nextRunAt": -1,
  "lastRunAt": -1,
  "lastFinishedAt": -1
}, { name: "agendash" });

// Index for finding and locking jobs by name
db.chronosJobs.createIndex({
  "name": 1,
  "disabled": 1,
  "lockedAt": 1
}, { name: "findAndLockDeadJobs" });

// Compound index for job queries
db.chronosJobs.createIndex({
  "name": 1,
  "nextRunAt": 1,
  "priority": -1
});
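After creating these, it is worth confirming that the job-polling query actually uses them. A quick check from the mongosh shell (the exact query shape Chronos issues may differ by version, so treat this as a sanity check rather than the scheduler's literal query):

// Approximate the scheduler's polling query and inspect the plan
db.chronosJobs.find({
  nextRunAt: { $lte: new Date() },
  disabled: { $ne: true },
  lockedAt: null
}).explain("executionStats");
// Look for an IXSCAN stage in winningPlan; a COLLSCAN means the index isn't being used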
Custom Indexes for Your Jobs
Add indexes based on your job query patterns:
// If you query jobs by custom data fields
db.chronosJobs.createIndex({ "data.userId": 1 });
db.chronosJobs.createIndex({ "data.status": 1, "nextRunAt": 1 });

// For unique job patterns
db.chronosJobs.createIndex({
  "name": 1,
  "data.userId": 1
}, { unique: true });
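With the unique index in place, saving a second job with the same name and data.userId fails with a MongoDB duplicate key error (E11000). If you rely on the index to deduplicate jobs, catch that error at creation time; a minimal sketch:

try {
  await scheduler.now('process-user', { userId: user._id });
} catch (err) {
  // MongoDB duplicate key errors carry code 11000
  if (err.code === 11000) {
    console.log('Job already scheduled for this user, skipping');
  } else {
    throw err;
  }
}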
Configuration Tuning
Process Frequency
Balance responsiveness against database load:
// High frequency - more responsive, more DB queries
const highFrequencyScheduler = new Chronos({
  processEvery: '1 second' // Check every second
});

// Lower frequency - fewer DB queries, less responsive
const lowFrequencyScheduler = new Chronos({
  processEvery: '30 seconds' // Check every 30 seconds
});

// Good balance for most cases
const balancedScheduler = new Chronos({
  processEvery: '5 seconds'
});
Concurrency Settings
Configure concurrency based on your system resources:
const scheduler = new Chronos({
  // Global settings
  maxConcurrency: 50,    // Total concurrent jobs
  defaultConcurrency: 5, // Default per job type
  lockLimit: 100,        // Total jobs to lock ahead
  defaultLockLimit: 20,  // Default lock limit per job type

  // Process frequency
  processEvery: '3 seconds'
});

// Per-job concurrency
scheduler.define('cpu-intensive', handler, {
  concurrency: 2 // Limit CPU-heavy jobs
});

scheduler.define('io-bound', handler, {
  concurrency: 20 // Allow more I/O jobs
});
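Lock limits can also be tuned per job type, which keeps one job type from monopolizing the locked-job budget. A sketch, assuming lock options are accepted at define time in the same way as concurrency above:

// Assumption: lockLimit and lockLifetime are define-time options,
// mirroring the per-job concurrency option shown above
scheduler.define('bulk-export', handler, {
  concurrency: 5,
  lockLimit: 10,       // Never hold more than 10 locked 'bulk-export' jobs
  lockLifetime: 600000 // Release the lock after 10 minutes if the worker dies
});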
Memory Management
Connection Pooling
Optimize MongoDB connections:
const { MongoClient } = require('mongodb');

async function initScheduler() {
  const client = new MongoClient('mongodb://localhost:27017', {
    maxPoolSize: 50,                // Maintain up to 50 socket connections
    serverSelectionTimeoutMS: 5000, // Fail server selection after 5 seconds
    socketTimeoutMS: 45000          // Close sockets after 45 seconds of inactivity
  });

  await client.connect(); // Connect before handing the db to the scheduler

  return new Chronos({
    mongo: client.db('scheduler')
  });
}
Job Data Size
Keep job data lean to reduce memory usage:
// ❌ Bad - storing large objects
scheduler.now('process-user', {
  user: fullUserObject,      // Large object
  metadata: completeMetadata // More large data
});

// ✅ Good - storing only IDs
scheduler.now('process-user', {
  userId: user._id,     // Just the ID
  includeMetadata: true // Flag instead of data
});
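The flip side of storing only IDs is that the handler re-fetches what it needs at run time. A sketch (getUserById, loadMetadata, and processUser are hypothetical helpers standing in for your own data-access layer):

scheduler.define('process-user', async (job) => {
  const { userId, includeMetadata } = job.attrs.data;

  // Re-fetch the full record at run time instead of storing it in the job
  const user = await getUserById(userId);    // hypothetical helper
  const metadata = includeMetadata
    ? await loadMetadata(userId)             // hypothetical helper
    : null;

  await processUser(user, metadata);         // hypothetical helper
});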
Clean Up Completed Jobs
Prevent database bloat:
// Clean up old jobs daily
scheduler.define('cleanup-old-jobs', async () => {
  const thirtyDaysAgo = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000);

  // Remove one-off jobs that finished (successfully or not) over 30 days ago.
  // nextRunAt: null skips recurring jobs, which always have a next run scheduled.
  const result = await scheduler.cancel({
    lastFinishedAt: { $lt: thirtyDaysAgo },
    nextRunAt: null
  });

  console.log(`Cleaned up ${result} old jobs`);
});

scheduler.every('1 day', 'cleanup-old-jobs');
Monitoring & Metrics
Job Processing Metrics
Track performance metrics:
class SchedulerMetrics {
  constructor(scheduler) {
    this.scheduler = scheduler;
    this.maxSamples = 100; // Cap stored durations so memory use stays bounded
    this.metrics = {
      jobsProcessed: 0,
      jobsFailed: 0,
      processingTime: {},
      queueSize: 0
    };
    this.setupEventListeners();
  }

  setupEventListeners() {
    this.scheduler.on('start', (job) => {
      // The same job instance is passed to 'success'/'fail',
      // so the start time can be stashed on it
      job.startTime = Date.now();
    });

    this.scheduler.on('success', (job) => {
      this.metrics.jobsProcessed++;
      const duration = Date.now() - job.startTime;

      if (!this.metrics.processingTime[job.attrs.name]) {
        this.metrics.processingTime[job.attrs.name] = [];
      }
      const samples = this.metrics.processingTime[job.attrs.name];
      samples.push(duration);
      if (samples.length > this.maxSamples) {
        samples.shift(); // Keep only the most recent samples
      }
    });

    this.scheduler.on('fail', (error, job) => {
      this.metrics.jobsFailed++;
      console.error(`Job ${job.attrs.name} failed:`, error);
    });
  }

  async getQueueSize() {
    // Note: this loads every matching job into memory; fine for modest
    // queues, but prefer a count on the underlying collection for very large ones
    const pending = await this.scheduler.jobs({
      nextRunAt: { $lte: new Date() },
      disabled: { $ne: true },
      lockedAt: null
    });
    return pending.length;
  }

  getAverageProcessingTime(jobName) {
    const times = this.metrics.processingTime[jobName] || [];
    if (times.length === 0) return 0;
    const sum = times.reduce((a, b) => a + b, 0);
    return Math.round(sum / times.length);
  }
}

const metrics = new SchedulerMetrics(scheduler);
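A simple way to surface these numbers is a periodic snapshot; a sketch, using a job name from the examples below:

// Log a metrics snapshot every minute
setInterval(async () => {
  const queueSize = await metrics.getQueueSize();
  console.log('Scheduler metrics:', {
    processed: metrics.metrics.jobsProcessed,
    failed: metrics.metrics.jobsFailed,
    queueSize,
    avgSendNotificationMs: metrics.getAverageProcessingTime('send-notification')
  });
}, 60000);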
Health Checks
Monitor scheduler health:
// Health check endpoint
app.get('/scheduler/health', async (req, res) => {
  try {
    const stats = {
      // Note: _processInterval, _lockedJobs, and _runningJobs are private
      // internals and may change between versions
      isRunning: scheduler._processInterval !== null,
      lockedJobs: scheduler._lockedJobs.length,
      runningJobs: scheduler._runningJobs.length,
      queueSize: await metrics.getQueueSize(),
      uptime: process.uptime(),
      memory: process.memoryUsage(),
      jobStats: {
        processed: metrics.metrics.jobsProcessed,
        failed: metrics.metrics.jobsFailed,
        avgProcessingTimes: Object.keys(metrics.metrics.processingTime).reduce((acc, jobName) => {
          acc[jobName] = metrics.getAverageProcessingTime(jobName);
          return acc;
        }, {})
      }
    };

    res.json({ status: 'healthy', stats });
  } catch (error) {
    res.status(500).json({
      status: 'unhealthy',
      error: error.message
    });
  }
});
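Because the isRunning and lock counters above read private internals, a more version-proof liveness signal is a heartbeat job: schedule a trivial job frequently and alert if it stops completing. A sketch:

// Record the last time the scheduler actually completed a job
let lastHeartbeat = Date.now();

scheduler.define('heartbeat', async () => {
  lastHeartbeat = Date.now();
});
scheduler.every('30 seconds', 'heartbeat');

// Liveness: unhealthy if no heartbeat completed within the last 2 minutes
app.get('/scheduler/alive', (req, res) => {
  const stale = Date.now() - lastHeartbeat > 2 * 60 * 1000;
  res.status(stale ? 500 : 200).json({ alive: !stale });
});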
Production Optimization Tips
1. Separate Job Types by Priority
// High priority scheduler for critical jobs
const criticalScheduler = new Chronos({
  db: { address: mongoUrl, collection: 'criticalJobs' },
  processEvery: '1 second',
  maxConcurrency: 10
});

// Regular scheduler for normal jobs
const regularScheduler = new Chronos({
  db: { address: mongoUrl, collection: 'regularJobs' },
  processEvery: '5 seconds',
  maxConcurrency: 20
});
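Each scheduler polls its own collection, so define and start them independently. A sketch (the job names are illustrative):

// Critical jobs go to the fast-polling scheduler
criticalScheduler.define('charge-payment', handler);

// Routine jobs go to the regular scheduler
regularScheduler.define('send-digest-email', handler);

await criticalScheduler.start();
await regularScheduler.start();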
2. Use Bulk Operations
// ❌ Creating jobs one by one
for (const user of users) {
  await scheduler.now('send-notification', { userId: user.id });
}

// ✅ Create jobs in batches
const jobs = users.map(user =>
  scheduler.create('send-notification', { userId: user.id })
);

// Save in batches of 100
for (let i = 0; i < jobs.length; i += 100) {
  const batch = jobs.slice(i, i + 100);
  await Promise.all(batch.map(job => job.save()));
}
3. Optimize Long-Running Jobs
scheduler.define('large-batch-job', async (job) => {
  const batchSize = 1000;
  const totalRecords = await getTotalRecords();

  for (let offset = 0; offset < totalRecords; offset += batchSize) {
    // Process batch
    await processBatch(offset, batchSize);

    // Keep lock alive and update progress (capped at 1 for the final partial batch)
    await job.touch();
    job.attrs.data.progress = Math.min((offset + batchSize) / totalRecords, 1);
    await job.save();

    // Yield to event loop
    await new Promise(resolve => setImmediate(resolve));
  }
}, {
  lockLifetime: 3600000, // 1 hour
  concurrency: 1         // One at a time
});
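Persisting progress also lets a restarted job pick up roughly where it left off instead of reprocessing everything. A sketch building on the handler above (assumes a batch is safe to reprocess once):

scheduler.define('large-batch-job', async (job) => {
  const batchSize = 1000;
  const totalRecords = await getTotalRecords();

  // Resume from the last persisted progress (0 on first run)
  const startOffset =
    Math.floor(((job.attrs.data.progress || 0) * totalRecords) / batchSize) * batchSize;

  for (let offset = startOffset; offset < totalRecords; offset += batchSize) {
    await processBatch(offset, batchSize);
    await job.touch();
    job.attrs.data.progress = Math.min((offset + batchSize) / totalRecords, 1);
    await job.save();
  }
}, { lockLifetime: 3600000, concurrency: 1 });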
4. Database Connection Best Practices
// Use connection with appropriate settings
const scheduler = new Chronos({
  db: {
    address: 'mongodb://localhost:27017/scheduler',
    options: {
      maxPoolSize: 50,
      minPoolSize: 5,
      maxIdleTimeMS: 30000,
      serverSelectionTimeoutMS: 5000,
      heartbeatFrequencyMS: 10000,
      retryWrites: true,
      w: 'majority'
    }
  }
});
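5. Graceful Shutdown
Stop the scheduler cleanly on shutdown so locked jobs are released immediately rather than waiting out lockLifetime. A sketch, assuming the Agenda-style stop() method:

// Release job locks before the process exits
async function shutdown() {
  await scheduler.stop(); // Unlocks currently locked jobs
  process.exit(0);
}

process.on('SIGTERM', shutdown);
process.on('SIGINT', shutdown);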
These settings are a solid starting point for high-throughput job processing, but the right values depend on your job mix, hardware, and MongoDB topology; validate them against your own workload using the monitoring tools above.