Implement an LRU (least recently used) cache with O(1) get and put.
The complete answer guide: what this question really tests, two example strong answers from different angles, the common weak answer rewritten, and the trap most candidates fall into. This is a technical deep-dive archetype question; see the broader pattern guide for the structural shape.
What this question is really testing
This question isn't about whether you've memorized the hash map plus doubly linked list pattern—most senior engineers would need to think through the implementation details even if they've built LRU caches before. The interviewer is testing whether you can decompose a performance constraint into data structure choices. They want to see you reason from first principles: "O(1) lookup means hash map, O(1) removal from middle means doubly linked list, therefore I need both." The binary read they're making is whether you think in terms of complexity tradeoffs or whether you pattern-match to solutions you've seen before. Candidates who immediately say "hash map and doubly linked list" without explaining why often get follow-up questions that expose they don't actually understand the constraint.
The secondary signal is how you handle implementation details under pressure. An LRU cache has fiddly pointer manipulation, edge cases around capacity limits, and the subtle requirement that `get` operations must update recency. Strong candidates verbalize their approach before coding, explicitly list edge cases, and write helper methods to keep the main logic clean. Weak candidates dive straight into coding, get tangled in pointer updates, and produce buggy implementations they can't debug. The interviewer is watching whether you can maintain code quality when cognitive load is high—a proxy for how you'll perform when debugging production issues at 2am.
Two strong answers, two angles
Angle A: Constraint-driven decomposition
"I need O(1) for both operations, so let me work backwards from that constraint. For O(1) get, I need direct key lookup, which means a hash map. For O(1) put with eviction, I need to remove the least recently used item instantly, which means I can't scan a list—I need direct access to it. That suggests maintaining items in recency order. A doubly linked list gives me O(1) removal when I have a pointer to the node, and O(1) insertion at the head. So my approach is a hash map where values are pointers to nodes in a doubly linked list. Every get and put moves the accessed item to the head. When at capacity, I evict from the tail."
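The design in Angle A can be sketched in a few dozen lines. This is a minimal illustration, not a canonical implementation; the class and helper names (`Node`, `_unlink`, `_push_front`) are ours, and sentinel head/tail nodes are one common way to avoid null checks during unlinking.

```python
class Node:
    def __init__(self, key, value):
        self.key = key          # stored so eviction can delete the map entry
        self.value = value
        self.prev = None
        self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}           # key -> Node (pointers, not raw values)
        # Sentinel head/tail so unlinking never touches None.
        self.head = Node(None, None)
        self.tail = Node(None, None)
        self.head.next = self.tail
        self.tail.prev = self.head

    def _unlink(self, node):
        node.prev.next = node.next
        node.next.prev = node.prev

    def _push_front(self, node):
        node.next = self.head.next
        node.prev = self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._unlink(node)       # get counts as a use:
        self._push_front(node)   # move to most-recent position
        return node.value

    def put(self, key, value):
        if key in self.map:
            node = self.map[key]
            node.value = value
            self._unlink(node)
            self._push_front(node)
            return
        if len(self.map) >= self.capacity:
            lru = self.tail.prev          # least recently used
            self._unlink(lru)
            del self.map[lru.key]
        node = Node(key, value)
        self.map[key] = node
        self._push_front(node)
```

Note that the nodes store their own keys; without that, evicting from the tail gives you a node but no way to delete its hash map entry in O(1).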
Angle B: Evolution from naive solution
"The naive approach is a hash map with timestamps, scanning all entries on eviction to find the oldest—that's O(n) for put. To eliminate the scan, I need to maintain items in sorted order by recency. An array won't work because removing from the middle is O(n). A binary search tree could work but rebalancing is complex. A doubly linked list is simpler: I'll keep recently used items at the head, move any accessed item to the head on get or put, and evict from the tail when at capacity. The hash map stores key-to-node pointers so I can find and move nodes in O(1). I'll need to handle the edge case where get counts as a use and must update position."
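For contrast, the naive baseline Angle B starts from looks like the sketch below: O(1) `get`, but an O(n) scan on eviction. The class name and timestamp mechanism are illustrative assumptions.

```python
import itertools

class NaiveLRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}                  # key -> (value, timestamp)
        self.clock = itertools.count()  # monotonically increasing "time"

    def get(self, key):
        if key not in self.data:
            return -1
        value, _ = self.data[key]
        self.data[key] = (value, next(self.clock))  # touch: get is a use
        return value

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # O(n): scan every entry to find the oldest timestamp.
            oldest = min(self.data, key=lambda k: self.data[k][1])
            del self.data[oldest]
        self.data[key] = (value, next(self.clock))
```

Walking the interviewer from this version to the linked-list version shows exactly which operation the doubly linked list exists to fix.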
The common weak answer
"I'd use a hash map for O(1) lookups and a doubly linked list to track the order. When you access something, move it to the front. When the cache is full, remove from the back and delete from the hash map."
This answer demonstrates you know the pattern but not why it works. The interviewer can't distinguish between "I memorized this from Leetcode" and "I understand data structure tradeoffs." You haven't explained why a doubly linked list specifically—why not a singly linked list, an array, or a heap? You haven't addressed what the hash map values contain (node pointers, not cache values directly, which is non-obvious). Most critically, you haven't shown you can reason about complexity: the interviewer wants to hear "doubly linked because I need O(1) removal from the middle when I have a node reference."
Reframe it as: "I need O(1) removal of arbitrary elements, which requires a doubly linked list since I need previous pointers. The hash map will store node references, not values, so I can jump directly to a node and unlink it in constant time."
The one trap most candidates fall into
The trap is treating `get` as read-only when it actually mutates the cache state. Most candidates correctly implement the eviction logic in `put`, carefully updating the linked list and hash map when adding new items or removing the LRU item. But they implement `get` as a simple hash lookup, forgetting that accessing an item makes it recently used and must move it to the head of the recency list. This seems like a minor detail, but it reveals whether you're implementing to the spec or implementing your mental model of what a cache "should" do.
Interviewers specifically watch for this because it mirrors a common production bug pattern: implementing the happy path correctly but missing state changes in auxiliary operations. When they ask "does your implementation handle all the requirements?" and you confidently say yes, then they point out your `get` doesn't update recency, it signals you don't carefully verify your own work. Strong candidates either get it right initially by explicitly stating "both get and put update recency" during their design phase, or catch it themselves during a final walkthrough. If you do miss it, the save is to say "you're right, I was thinking of get as read-only but it needs to update the list—let me trace through what that changes" rather than defending your initial implementation.
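The trap is easy to see in isolation using Python's `collections.OrderedDict` (a variant some interviewers accept as a follow-up discussion, not part of the question as stated): the `move_to_end` call inside `get` is precisely the line candidates forget. A compact sketch, with our own class name:

```python
from collections import OrderedDict

class OrderedDictLRU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.data:
            return -1
        # The line the trap is about: get mutates recency order.
        # Deleting it reproduces the bug described above.
        self.data.move_to_end(key)
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```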
Common questions
How long should my answer to "Implement an LRU (least recently used) cache with O(1) get and put" be?
Aim for 60-120 seconds of spoken design before you code (250-350 words). Long enough to land the constraint, the data structure choices, and the edge cases; short enough that the interviewer has room to follow up. Anything past two minutes risks losing them.
Should I memorize my answer word-for-word?
No — that reads as canned and falls apart the moment the interviewer asks a follow-up. Memorize the structure (constraint, data structure choice, edge cases) and the specific complexity facts that anchor it. Let the words come naturally each time.
What if I have a really good story but it was years ago?
Recent is better, but a strong story from 3 years ago beats a vague story from last quarter. If the example is older than 5 years, frame it as the moment that crystallized the lesson, then briefly bridge to how you've applied it since.
Can I use the same story for multiple questions?
Often yes — strong stories tend to demonstrate multiple competencies. The trick is reframing the angle each time. Same situation, different opening sentence: lead with the conflict for conflict questions, lead with the leadership move for leadership questions.
How do I know if my answer is actually good?
Practice it out loud and have it scored. The fastest way is a mock interview where the AI flags exactly what's vague, where you used 'we' when the question asked about 'I,' and rewrites the weakest sentence. Reading example answers helps; getting yours scored is what moves performance.