[Leetcode] 808. Soup Servings

whitehousechef · Aug 8, 2025

https://leetcode.com/problems/soup-servings/description/?envType=daily-question&envId=2025-08-08

initial

so i thought maybe dfs down to the base cases over the 4 serving choices we have. But the input is too big, so plain recursion won't work. Since the same states get recomputed over and over, a 2D dp table with memoisation should help.

dp[a][b] = probability that A becomes 0 first (i.e. wins). There are 2 winning cases: either A empties completely while B still has soup left (remaining value of b isn't 0), or A and B both hit 0 at the same time, in which case the tie only counts for half, so we multiply that case by 0.5.

more cleanly: dp[a][b] = probability that A empties first, plus 0.5 × (probability of a tie), starting from the state where A has a mL and B has b mL remaining.
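As a sanity check of this definition, the recurrence can be unrolled by hand for LeetCode's sample input n = 50, where every serve lands in a base case immediately (the class name below is just for the demo):

```java
// Unrolling the recurrence once for n = 50: from state (50, 50) each of the
// four equally likely serves hits a base case right away.
public class Recurrence50 {
    public static void main(String[] args) {
        double serve100_0 = 1.0; // (100,0) -> (0,50): A empty, B not -> win
        double serve75_25 = 1.0; // (75,25) -> (0,25): A empty, B not -> win
        double serve50_50 = 0.5; // (50,50) -> (0,0): both empty -> half a win
        double serve25_75 = 0.0; // (25,75) -> (25,0): B empty first -> loss
        double p = 0.25 * (serve100_0 + serve75_25 + serve50_50 + serve25_75);
        System.out.println(p); // 0.625, matching the expected answer for n = 50
    }
}
```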

sol

theres space issues cuz n can be up to 10^9, but the logic is correct:

class Solution {
    private double[][] dp;
    private int[][] choices = new int[][]{{100,0},{75,25},{50,50},{25,75}};

    // dp[a][b] = probability that A empties first, plus half the tie probability
    public double process(int a,int b){
        if (a<=0 && b>0) return 1.0;   // A empty, B not: A wins
        if (a<=0 && b<=0) return 0.5;  // both empty at once: half a win
        if (a>0 && b<=0) return 0.0;   // B empty first: A loses
        if (dp[a][b]!=-1.0) return dp[a][b];
        double result=0.0;
        for(int[] choice:choices){
            int newA= Math.max(0,a-choice[0]);
            int newB = Math.max(0,b-choice[1]);
            result+=0.25*process(newA,newB);
        }
        dp[a][b]=result;
        return dp[a][b];
    }
    public double soupServings(int n) {
        dp = new double[n+1][n+1];
        for(int i=0; i<n+1; i++){
            for(int j=0; j<n+1; j++){
                dp[i][j]=-1.0;
            }
        }
        return process(n,n);
    }
}
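For inputs small enough to memoise, this recursion really does produce the expected answers. Here is a compact standalone re-check of the same logic, using a packed-key map instead of the giant array (the class name and key scheme are my own, for the demo only):

```java
import java.util.HashMap;
import java.util.Map;

// Standalone re-check of the recursion above on small inputs.
public class SmallCaseDemo {
    static Map<Long, Double> memo = new HashMap<>();
    static int[][] choices = {{100,0},{75,25},{50,50},{25,75}};

    static double process(int a, int b){
        if (a <= 0 && b > 0) return 1.0;   // A wins outright
        if (a <= 0 && b <= 0) return 0.5;  // tie counts as half a win
        if (b <= 0) return 0.0;            // B emptied first
        long key = (long) a * 1_000_000L + b; // pack state into one key
        Double cached = memo.get(key);
        if (cached != null) return cached;
        double result = 0.0;
        for (int[] c : choices)
            result += 0.25 * process(Math.max(0, a - c[0]), Math.max(0, b - c[1]));
        memo.put(key, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(process(50, 50));   // 0.625
        System.out.println(process(100, 100)); // 0.71875
    }
}
```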

much better optimisation

1) so dp table is 10^9 to 10^9 so theres space. What if we scale the decisions by 25? Then we get
private int[][] choices = {{4,0},{3,1},{2,2},{1,3}}; // Scaled down: 100/25=4, 75/25=3

2) use a nested HashMap instead of a dp table, so we only store the states we actually visit

3) Instead of directly computing the expensive process(m,m) for large inputs, we first test smaller values process(k,k) for k=1,2,3,... Since soup A is consumed faster on average (2.5 vs 1.5 scaled units per turn), the probability that A empties first increases with the starting amount. So if any smaller problem process(k,k) exceeds 0.99999 (i.e. 1 − 10⁻⁵), the larger problem process(m,m) must be even closer to 1.0, and we can immediately return 1.0 without the expensive computation, saving massive time while staying within the required 10⁻⁵ tolerance.

class Solution {
    private Map<Integer, Map<Integer, Double>> dp;
    private int[][] choices = {{4,0},{3,1},{2,2},{1,3}};  // Scaled down by 25

    public double process(int a, int b){
        if (a <= 0 && b > 0) return 1.0;
        if (a <= 0 && b <= 0) return 0.5;
        if (a > 0 && b <= 0) return 0.0;
        
        // Check if already computed
        if (dp.containsKey(a) && dp.get(a).containsKey(b)) {
            return dp.get(a).get(b);
        }
        
        double result = 0.0;
        for(int[] choice : choices){
            int newA = Math.max(0, a - choice[0]);
            int newB = Math.max(0, b - choice[1]);
            result += 0.25 * process(newA, newB);
        }
        
        // Store result in nested HashMap
        dp.computeIfAbsent(a, k -> new HashMap<>()).put(b, result);
        return result;
    }
    
    public double soupServings(int n) {
        // Scale down by 25
        int m = (int)Math.ceil(n / 25.0);
        
        dp = new HashMap<>();
        
        // Early termination optimization
        for (int k = 1; k <= m; k++) {
            if (process(k, k) > 1 - 1e-5) {
                return 1.0;
            }
        }
        
        return process(m, m);
    }
}
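A quick check of the ceil-division scaling step (the example values are mine): rounding up matters because a leftover partial serving still needs one more scaled unit.

```java
// Ceil-division scaling: n mL of soup becomes ceil(n / 25) scaled units.
public class ScaleDemo {
    static int scale(int n) {
        return (int) Math.ceil(n / 25.0);
    }
    public static void main(String[] args) {
        System.out.println(scale(50));            // 2: exact multiple of 25
        System.out.println(scale(51));            // 3: rounded up, last unit is partial
        System.out.println(scale(1_000_000_000)); // 40000000: still huge, hence the HashMap
    }
}
```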

complexity

With Early Termination:
time
Best case: O(k²) where k is the first value with probability > 0.99999
Worst case: Still O(m²) if no early termination triggers
Average case: Much better than O(m²) for large inputs, since the cutoff k is effectively a constant (on the order of a couple hundred scaled units) while m grows with n, so k << m

Example: If n=5000 (m=200) but we find early termination at k=47:

Without: ~200² = 40,000 computations
With: ~47² = 2,209 computations

space:
Space: O(number of unique states actually visited)
Reachable states: Much less than m² due to the specific operation patterns
Typical: O(m²) worst case, but often substantially less in practice since only reachable, actually-visited states are stored

2D Array approach:

Space: O(m²) always allocated upfront
Memory: (m+1)² × 8 bytes for double values
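Some back-of-envelope arithmetic (my own numbers, ignoring JVM object overhead) shows why the dense table is hopeless even after scaling, and hence why the HashMap plus early termination matter:

```java
// Dense-table memory estimate: (size+1)^2 cells of 8 bytes each.
public class MemoryMath {
    public static void main(String[] args) {
        long n = 1_000_000_000L;
        long naive = (n + 1) * (n + 1) * 8L;  // ~8 * 10^18 bytes, about 8 exabytes
        long m = (long) Math.ceil(n / 25.0);  // 40,000,000 after scaling by 25
        long scaled = (m + 1) * (m + 1) * 8L; // ~1.28 * 10^16 bytes, still ~12.8 petabytes
        System.out.println(naive);
        System.out.println(scaled);
    }
}
```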
