Top 15 Software Engineering Interview Questions for Senior Developers
A comprehensive guide to challenging technical questions that separate great engineers from good ones
By Alex M.
The Questions
How do you remove a big file or a private key from Git history? What options do we have?
How would you build observability from scratch, without any existing tools?
How do you copy the production database into the staging database for testing on an integration branch? What if it's a huge dataset, say 1TB?
Explain how an app could work offline.
How can you do real-time updates without WebSockets?
Could a front-end project be containerized? Explain how a front-end project could be served with containers.
Can you make a truly private member of a JS/TS class? How does the # character achieve that? Is it better than using the Symbol primitive? Is it truly private?
How would you implement Promise.series?
How do you throttle instead of debounce?
How does diff work on a Unix machine?
How would you design a distributed rate limiter?
Explain the difference between fork() and spawn() in Node.js
How do database indexes actually work under the hood?
What happens when you type a URL and press enter?
How would you debug a memory leak in production?
Questions & Answers
1. How to Remove a Big File or Private Key from Git History?
This is a critical security question that tests understanding of Git internals. When sensitive data is committed, simply deleting it in a new commit isn't enough—it remains in history.
Options available:
git filter-branch: The traditional approach, but deprecated and slow for large repos
BFG Repo-Cleaner: Fast, simple tool specifically designed for this purpose (recommended for most cases)
git filter-repo: Modern replacement for filter-branch, much faster
Interactive rebase: Works for recent commits but impractical for old history
The key insight: you'll need to force-push and coordinate with all team members to re-clone, as this rewrites history. Any existing clones will still contain the sensitive data.
2. How to Build Observability from Scratch Without Existing Tools?
This question explores fundamental understanding of monitoring principles. To build observability without tools like Datadog or New Relic:
Core components needed:
Metrics collection: Implement counters, gauges, and histograms in your application code. Export them via an HTTP endpoint (e.g., /metrics)
Logging infrastructure: Structured JSON logs with correlation IDs, written to stdout/stderr, then aggregated to a central store
Tracing: Generate unique request IDs, propagate them through service calls, and log entry/exit points with timestamps
Storage layer: Time-series database for metrics (could be as simple as PostgreSQL with TimescaleDB), document store for logs
Visualization: Build basic dashboards querying your storage layer
The real challenge isn't collection—it's making data queryable and actionable at scale.
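As a rough sketch of the collection side, here is a minimal in-process counter exposed over HTTP in a Prometheus-style text format. The metric name, route, and port are illustrative assumptions, not part of the original answer.

```typescript
import http from 'node:http';

// In-process counters, exposed as "name value" lines that a scraper can poll.
const counters = new Map<string, number>();

function increment(name: string, by = 1): void {
  counters.set(name, (counters.get(name) ?? 0) + by);
}

const server = http.createServer((req, res) => {
  if (req.url === '/metrics') {
    const body = [...counters.entries()]
      .map(([name, value]) => `${name} ${value}`)
      .join('\n');
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end(body + '\n');
    return;
  }
  // Count every non-metrics request as application traffic.
  increment('http_requests_total');
  res.writeHead(200);
  res.end('ok');
});

server.listen(8080);
```

A scraper then polls /metrics on an interval and writes the samples into the time-series store described above.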
3. How to Copy Production DB to Staging for Integration Testing?
This common DevOps task becomes complex with large datasets. For a 1TB database:
Strategies to consider:
Logical dump/restore: Use pg_dump/pg_restore or MySQL equivalents, but expect hours or days for 1TB
Physical replication: Set up streaming replication, then promote staging as independent—much faster
Snapshot-based copying: If using cloud providers (AWS RDS, GCP Cloud SQL), use snapshots and restore to new instance
Data subsetting: Do you really need all 1TB? Extract a representative sample instead using filtered exports
Anonymization pipeline: Stream data through a transformation layer to obfuscate PII while copying
For truly huge datasets, consider maintaining a parallel "staging-scale" dataset that's generated rather than copied, or use production read replicas with restricted access for testing.
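To make the anonymization-pipeline idea concrete, here is a minimal sketch that rewrites PII fields in a newline-delimited JSON export with stable hashes. The field names and the NDJSON format are assumptions for the example, not a prescribed tool.

```typescript
import { createInterface } from 'node:readline';
import { createHash } from 'node:crypto';

// Fields assumed to contain PII in this example.
const PII_FIELDS = ['email', 'name', 'phone'];

const rl = createInterface({ input: process.stdin });

rl.on('line', (line) => {
  if (!line.trim()) return;
  const row = JSON.parse(line) as Record<string, unknown>;
  for (const field of PII_FIELDS) {
    if (row[field] != null) {
      // A stable hash hides the raw value while keeping joins and grouping intact.
      row[field] = createHash('sha256').update(String(row[field])).digest('hex').slice(0, 12);
    }
  }
  process.stdout.write(JSON.stringify(row) + '\n');
});

// Usage sketch: export-from-prod | node anonymize.js | import-into-staging
```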
4. Explain How an App Could Work Offline
Offline-first architecture is essential for modern web and mobile apps. Key technologies and strategies:
Foundation:
Service Workers: Intercept network requests and serve cached responses when offline
Cache API: Store static assets (HTML, CSS, JS, images) for instant loading
IndexedDB: Client-side database for structured data that persists across sessions
LocalStorage/SessionStorage: Simple key-value storage for smaller data
Synchronization strategy:
Queue mutations (POST/PUT/DELETE) locally when offline
Retry queue when connection restored
Handle conflicts (operational transforms or last-write-wins depending on use case)
Background sync API for reliable eventual consistency
The hardest part isn't going offline—it's coming back online and reconciling changes.
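For the caching side, a minimal service worker sketch (cache-first for static assets) might look like the following. The cached paths are assumptions, and the file is assumed to be compiled with TypeScript's webworker lib and registered from the page.

```typescript
// Built to sw.js and registered with navigator.serviceWorker.register('/sw.js').
const sw = self as unknown as ServiceWorkerGlobalScope;
const CACHE_NAME = 'app-shell-v1';

sw.addEventListener('install', (event) => {
  // Pre-cache the app shell so the first offline visit still renders.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) =>
      cache.addAll(['/', '/index.html', '/app.js', '/styles.css'])
    )
  );
});

sw.addEventListener('fetch', (event) => {
  // Cache-first: serve a cached response when available, otherwise hit the network.
  event.respondWith(
    caches.match(event.request).then((cached) => cached ?? fetch(event.request))
  );
});
```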
5. How to Do Real-time Without WebSockets?
While WebSockets are ideal for real-time, there are several alternatives:
Server-Sent Events (SSE): Unidirectional server-to-client push over HTTP. Simpler than WebSockets, works through most proxies, automatically reconnects. Perfect for notifications or live updates.
Long polling: Client makes request, server holds it open until new data arrives. Higher latency and resource usage than WebSockets but universal compatibility.
HTTP/2 Server Push: Server proactively pushes resources alongside a response. It was designed for preloading static assets rather than streaming application data, and major browsers have since dropped support, so it is rarely a practical option today.
Polling with exponential backoff: Simplest approach—repeatedly check for updates with increasing intervals. Works everywhere but inefficient.
For most modern applications, SSE hits the sweet spot between simplicity and functionality when you only need server-to-client communication.
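A minimal SSE endpoint sketch in Node.js, pushing one event per second to any connected client; the route and payload are illustrative assumptions.

```typescript
import http from 'node:http';

// Minimal SSE endpoint: keeps the response open and streams events to the client.
// In the browser, the client side is simply: new EventSource('/events').onmessage = ...
http.createServer((req, res) => {
  if (req.url !== '/events') {
    res.writeHead(404);
    res.end();
    return;
  }
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });
  const timer = setInterval(() => {
    // Each message is a "data: ..." block terminated by a blank line.
    res.write(`data: ${JSON.stringify({ ts: Date.now() })}\n\n`);
  }, 1000);
  req.on('close', () => clearInterval(timer));
}).listen(3000);
```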
6. Can a Front-end Project Be Containerized?
Absolutely, and it's becoming standard practice. Here's how:
Multi-stage Docker build approach:
Stage 1 (build): Install dependencies, run build process (webpack/vite/etc)
Stage 2 (serve): Copy build artifacts to nginx/httpd container
Why containerize front-end?
Consistent environments across dev/staging/prod
Easy deployment to Kubernetes or container orchestration
Immutable deployments
Can bundle environment-specific configs at runtime
Serving strategies:
Static file server: Nginx serving built assets (most common)
Node server: Express/Fastify serving static files with potential SSR
CDN push: Container builds and pushes to CDN in CI/CD
The key advantage: your "build environment" and "runtime environment" are version-controlled and reproducible.
7. Can You Make a Truly Private Member of a JS/TS Class?
Yes, using the # prefix (private fields), introduced in ES2022:
How it works: The # creates a private name that is inaccessible outside the class body, not merely by convention but by language enforcement. Referencing it from outside the class (e.g. obj.#field) is a SyntaxError, and no reflection API exposes it.
Comparison to Symbol:
Symbols are obscure but not private—they're discoverable via Object.getOwnPropertySymbols()
Private fields (#) are truly inaccessible, even to subclasses
TypeScript's private keyword is compile-time only—it disappears in JavaScript
Private fields are better for genuine encapsulation. Use Symbols when you need a unique property key that won't collide but don't need true privacy.
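A small sketch contrasting the two approaches; the class and member names are made up for illustration.

```typescript
const secretKey: unique symbol = Symbol('secret');

class Account {
  #balance = 0;            // language-enforced privacy
  [secretKey] = 'hidden';  // unique key, but still discoverable via reflection

  deposit(amount: number) {
    this.#balance += amount;
  }

  get balance() {
    return this.#balance;
  }
}

const acct = new Account();
acct.deposit(100);

// acct.#balance;                                 // SyntaxError: private name used outside the class
console.log(Object.getOwnPropertySymbols(acct));  // [ Symbol(secret) ] -- the Symbol member leaks
console.log(acct.balance);                        // 100, via the public accessor
```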
8. How to Implement Promise.series?
Unlike Promise.all() which executes concurrently, Promise.series() executes promises sequentially:
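A minimal sketch of the idea: note that it takes functions that return promises rather than promises themselves, since a promise starts executing as soon as it is created. The helper name is an assumption.

```typescript
// Runs an array of promise-returning tasks one at a time and
// resolves with their results in order.
async function promiseSeries<T>(tasks: Array<() => Promise<T>>): Promise<T[]> {
  const results: T[] = [];
  for (const task of tasks) {
    // Awaiting inside the loop is what forces sequential execution.
    results.push(await task());
  }
  return results;
}

// Usage sketch: each task starts only after the previous one settles.
// promiseSeries([() => fetch('/a').then(r => r.json()), () => fetch('/b').then(r => r.json())]);
```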
9. How to Throttle Instead of Debounce?
Both techniques limit how often a function runs, but they behave differently: debounce waits until calls stop before firing once, while throttle guarantees execution at most once per interval even while calls keep arriving.
Throttle: Scroll handlers, resize events, mouse-move tracking (fire at a steady rate during continuous activity)
Debounce: Search autocomplete, form validation, save buttons (wait for input to settle)
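A minimal, timestamp-based throttle sketch; a fuller version would also schedule a trailing call so the last event is never dropped.

```typescript
// Invokes fn at most once per intervalMs, dropping calls that arrive
// while the window is still closed.
function throttle<Args extends unknown[]>(
  fn: (...args: Args) => void,
  intervalMs: number
): (...args: Args) => void {
  let lastCall = 0;
  return (...args: Args) => {
    const now = Date.now();
    if (now - lastCall >= intervalMs) {
      lastCall = now;
      fn(...args);
    }
  };
}

// Usage sketch: fire the scroll handler at most every 200ms.
// window.addEventListener('scroll', throttle(() => updatePosition(), 200));
```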
10. How Does diff Work on a Unix Machine?
The diff command compares files line-by-line using efficient algorithms:
Algorithm fundamentals:
Most implementations use variations of the Myers diff algorithm or Hunt-McIlroy algorithm, which solve the longest common subsequence (LCS) problem. The goal: find the minimum set of insertions and deletions to transform one file into another.
Process:
Read both files into memory (or use streaming for large files)
Build a matrix comparing lines
Use dynamic programming to find optimal edit path
Output differences in requested format (unified, context, or normal)
Output formats:
diff -u: Unified format (most common, shows context)
diff -c: Context format (more verbose)
diff -y: Side-by-side comparison
Git's diff uses a modified version with additional heuristics for detecting moved blocks and optimizing for source code.
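To make the LCS idea concrete, here is a small, deliberately naive line-diff sketch. Real implementations such as Myers' algorithm avoid materializing the full matrix and are far more memory-efficient.

```typescript
// LCS-based line diff: emits unchanged lines, deletions (-), and insertions (+).
function diffLines(a: string[], b: string[]): string[] {
  // lcs[i][j] = length of the longest common subsequence of a[i..] and b[j..]
  const lcs: number[][] = Array.from({ length: a.length + 1 }, () =>
    new Array<number>(b.length + 1).fill(0)
  );
  for (let i = a.length - 1; i >= 0; i--) {
    for (let j = b.length - 1; j >= 0; j--) {
      lcs[i][j] = a[i] === b[j]
        ? lcs[i + 1][j + 1] + 1
        : Math.max(lcs[i + 1][j], lcs[i][j + 1]);
    }
  }
  // Walk the matrix to recover the optimal edit path.
  const out: string[] = [];
  let i = 0, j = 0;
  while (i < a.length && j < b.length) {
    if (a[i] === b[j]) { out.push(`  ${a[i]}`); i++; j++; }
    else if (lcs[i + 1][j] >= lcs[i][j + 1]) { out.push(`- ${a[i]}`); i++; }
    else { out.push(`+ ${b[j]}`); j++; }
  }
  while (i < a.length) out.push(`- ${a[i++]}`);
  while (j < b.length) out.push(`+ ${b[j++]}`);
  return out;
}

// Usage sketch:
// console.log(diffLines(['a', 'b', 'c'], ['a', 'c', 'd']).join('\n'));
```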
11. How Would You Design a Distributed Rate Limiter?
This tests distributed systems knowledge. Key considerations:
Approaches:
Token bucket: Each request consumes tokens; tokens refill at fixed rate
Sliding window: Track requests in time windows to prevent burst at boundaries
Fixed window: Simpler but allows bursts at window edges
Distributed challenges:
Use Redis with atomic operations (INCR, EXPIRE)
Handle clock skew across nodes
Decide on consistency vs. availability (CAP theorem)
Consider using Lua scripts for atomic multi-step operations
Must handle race conditions and network partitions gracefully.
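As a starting point, a fixed-window limiter built on Redis INCR/EXPIRE might look like the sketch below (assuming an ioredis-style client). A production version would move the increment-and-expire pair into a Lua script for atomicity and use a sliding window to smooth boundary bursts.

```typescript
import Redis from 'ioredis'; // assumption: using the ioredis client

const redis = new Redis();

// Allows at most `limit` requests per `windowSec` for a given client.
async function isAllowed(clientId: string, limit = 100, windowSec = 60): Promise<boolean> {
  const windowKey = `ratelimit:${clientId}:${Math.floor(Date.now() / 1000 / windowSec)}`;
  const count = await redis.incr(windowKey);
  if (count === 1) {
    // First request in this window: start the expiry clock.
    // Note: INCR + EXPIRE as two calls is not atomic; a Lua script fixes that.
    await redis.expire(windowKey, windowSec);
  }
  return count <= limit;
}
```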
12. Explain the Difference Between fork() and spawn() in Node.js
Both create child processes, but with crucial differences:
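In short, spawn() launches any executable and streams its stdio, while fork() is a specialization of spawn() for running another Node.js script that also opens an IPC channel for message passing. A minimal sketch (worker.js is a hypothetical child script):

```typescript
import { fork, spawn } from 'node:child_process';

// spawn(): run any command and consume its output as streams.
const ls = spawn('ls', ['-la']);
ls.stdout.on('data', (chunk) => process.stdout.write(chunk));

// fork(): start another Node.js script with an IPC channel,
// so parent and child can exchange structured messages.
const worker = fork('./worker.js'); // assumption: worker.js listens via process.on('message', ...)
worker.send({ task: 'resize-image' });
worker.on('message', (result) => console.log('worker replied:', result));
```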
14. What Happens When You Type a URL and Press Enter?
The classic end-to-end question. At a high level: DNS resolution, TCP connection and TLS handshake, HTTP request and response, then the browser parses the HTML and builds the DOM and CSSOM before the final steps:
JavaScript execution: Parse and execute scripts, manipulate the DOM
Painting: Browser calculates layout, paints pixels to screen
Each step has depth: you can discuss caching, HTTP/2 multiplexing, CDNs, service workers, the critical rendering path, and more.
15. How Would You Debug a Memory Leak in Production?
Production debugging requires careful methodology:
Investigation steps:
Confirm the leak: Monitor heap size over time (CloudWatch, Datadog, etc.)
Capture heap snapshots: Use Node.js --inspect flag with Chrome DevTools or heapdump module
Compare snapshots: Take snapshots at different times, compare to find growing objects
Identify retention paths: Find what's holding references to leaked objects
Common causes:
Global variables accumulating data
Event listeners not cleaned up
Closures capturing large contexts
Caching without eviction policy
Detached DOM nodes
Production-safe approaches:
Enable profiling on single instance behind load balancer
Use sampling profilers (minimal overhead)
Reproduce in staging with production-like load
Consider using --max-old-space-size as temporary mitigation
The key: systematic approach without disrupting service.
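For Node.js services, a low-overhead way to get comparable heap snapshots from a live instance is to trigger them on a signal and watch heap usage over time. The signal choice and logging interval below are assumptions for the sketch.

```typescript
import v8 from 'node:v8';

// Write a heap snapshot on demand (e.g. kill -USR2 <pid>), so snapshots taken
// minutes apart can be loaded and compared in Chrome DevTools.
process.on('SIGUSR2', () => {
  const file = v8.writeHeapSnapshot(); // writes a .heapsnapshot file to the working directory
  console.log(`heap snapshot written to ${file}`);
});

// Lightweight leak signal: log heap usage periodically and watch the trend.
setInterval(() => {
  const { heapUsed, heapTotal } = process.memoryUsage();
  console.log(`heap: ${(heapUsed / 1048576).toFixed(1)}MB / ${(heapTotal / 1048576).toFixed(1)}MB`);
}, 60_000);
```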
Share Your Best Answers!
Now it's your turn! We want to hear how you would approach these questions.
Share your answers and insights:
Which questions challenged you the most?
What alternative approaches would you suggest?
What real-world experiences have shaped your thinking on these topics?
Drop your responses in the comments below, or better yet, write a blog post with your own detailed answers and share it with the community. The best way to learn is to teach!
Bonus challenge: Pick any three questions from this list and create a code example demonstrating your solution. Share it on GitHub and link it in the comments.
What topics should we cover in our next interview question deep-dive? Let us know what you'd like to see!