So, as mentioned, an inorder traversal of a BST visits values in ascending order. I looked up how to do this lazily: keep a stack and first push every node along the left spine from the root. Then, in `next()`, pop the most recent node off the stack and push the left spine of that node's right child.

But I don't understand why all 3 LLMs tell me my version is inefficient.
```python
from typing import Optional

# TreeNode is LeetCode's provided definition (val, left, right);
# define it first if you run this outside LeetCode.
class BSTIterator:
    def __init__(self, root: Optional[TreeNode]):
        self.root = root
        self.stack = []
        self.inorder(root)  # push the left spine starting at the root

    def next(self) -> int:
        elem = self.stack.pop()
        self.inorder(elem.right)  # push the left spine of the right child
        return elem.val

    def hasNext(self) -> bool:
        return bool(self.stack)

    def inorder(self, node):
        # recursively push node and everything down its left spine
        if not node:
            return
        self.stack.append(node)
        self.inorder(node.left)
```
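For anyone who wants to run it locally, here's a small hypothetical driver (none of this is part of the LeetCode template): a bare-bones `TreeNode` stand-in, a hand-built BST, and a loop that drains the iterator. The `TreeNode` class has to come before `BSTIterator`, since the type hint in `__init__` refers to it.

```python
# Assumption: this matches the standard LeetCode TreeNode shape (val, left, right).
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

# Hypothetical BST:      7
#                       / \
#                      3   15
#                         /  \
#                        9    20
root = TreeNode(7, TreeNode(3), TreeNode(15, TreeNode(9), TreeNode(20)))

it = BSTIterator(root)
out = []
while it.hasNext():
    out.append(it.next())
print(out)  # [3, 7, 9, 15, 20] -- ascending, as the inorder argument predicts
```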
They say this is more efficient. To be clear, I really don't get it:
```python
class BSTIterator:
    def __init__(self, root: Optional[TreeNode]):
        self.stack = []
        self._leftmost_inorder(root)

    def _leftmost_inorder(self, node):
        while node:
            self.stack.append(node)
            node = node.left

    def next(self) -> int:
        top_node = self.stack.pop()
        if top_node.right:
            self._leftmost_inorder(top_node.right)
        return top_node.val

    def hasNext(self) -> bool:
        return len(self.stack) > 0
```
✅ Efficient Version (Optimal)

🕒 Time Complexity

- `next()`: O(1) amortized. Each node is pushed onto and popped off the stack exactly once, so over n calls to `next()` the total work is O(n), which averages out to O(1) per call.
- `hasNext()`: O(1)

💾 Space Complexity

- Worst case O(h), where h is the height of the tree, because the stack only ever stores one path from the root down to the current node.
- Balanced BST: O(log n)
- Skewed BST (essentially a linked list): O(n)
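One way to sanity-check both of those claims is to instrument the efficient iterator. The sketch below is mine, not the LLMs' (it assumes the efficient `BSTIterator` above and the `TreeNode` stand-in from the first snippet are the definitions in scope): it counts stack pushes/pops and tracks the maximum stack depth while iterating a randomly built BST.

```python
# Sketch: assumes the efficient BSTIterator above and the TreeNode stand-in
# from the first snippet are the definitions in scope.
import random

class CountingBSTIterator(BSTIterator):
    """Same iterator, but counts stack pushes/pops and tracks max stack depth."""
    def __init__(self, root):
        self.pushes = self.pops = self.max_depth = 0
        super().__init__(root)

    def _leftmost_inorder(self, node):
        while node:
            self.stack.append(node)
            self.pushes += 1
            self.max_depth = max(self.max_depth, len(self.stack))
            node = node.left

    def next(self):
        self.pops += 1
        return super().next()

def bst_insert(node, val):
    """Plain unbalanced BST insertion, just to build a test tree."""
    if node is None:
        return TreeNode(val)
    if val < node.val:
        node.left = bst_insert(node.left, val)
    else:
        node.right = bst_insert(node.right, val)
    return node

random.seed(0)
n = 1000
root = None
for v in random.sample(range(10 * n), n):
    root = bst_insert(root, v)

it = CountingBSTIterator(root)
while it.hasNext():
    it.next()

# pushes == pops == n: every node enters and leaves the stack exactly once,
# so the total work over n next() calls is O(n), i.e. O(1) amortized per call.
# max_depth stays near the tree height (O(h)), nowhere near n.
print(it.pushes, it.pops, it.max_depth)
```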
🟥 Your Version

🕒 Time Complexity

- `next()`: O(h) worst case per call, because `inorder(node.right)` walks all the way down the left side of the right child, and you do that on every call.
- Over n nodes, this could be O(n·h) total time if you're unlucky (e.g., with a skewed tree).
- `hasNext()`: O(1)

💾 Space Complexity

- Still O(h), because the stack grows in the same way.
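To picture what that per-call worst case looks like, here's one more hypothetical sketch (again using the `TreeNode` stand-in, but this time with the recursive version of `BSTIterator` from the top of the post as the active definition): the root's right child is a long left spine, so the very first `next()` has to push the whole spine before it can return. The spine is kept well under Python's default recursion limit, since the recursive `inorder` helper burns one call frame per node.

```python
# Sketch: run with the recursive BSTIterator from the top of the post and the
# TreeNode stand-in from the first snippet.

h = 500  # spine length, kept under Python's default recursion limit (1000)

# BST: root holds 0; its right child is a left spine holding h+1 down to 1,
# so popping the root forces a walk all the way down that spine.
spine = TreeNode(1)
for v in range(2, h + 2):
    spine = TreeNode(v, left=spine)
root = TreeNode(0, right=spine)

it = BSTIterator(root)     # __init__ only pushes the root here
before = len(it.stack)     # 1
print(it.next())           # 0 -- correct value, but look at the stack now
after = len(it.stack)      # h + 1
print(after - before + 1)  # 501 nodes pushed by that single next() call: O(h) work
```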