Strategy for approaching dynamic programming

Let us say we are devising an algorithm for a problem which we think can be solved using dynamic programming. For dynamic programming, we need to find the DP states and the recurrence between them.
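
For example, something standard like this (a minimal climbing-stairs style illustration of my own, just to fix what I mean by states and transitions; it is not related to the specific problem below):

```python
def count_ways(n):
    # State: dp[i] = number of ways to reach step i, taking steps of size 1 or 2.
    dp = [0] * (n + 1)
    dp[0] = 1
    for i in range(1, n + 1):
        # Recurrence between states: each state depends only on earlier,
        # already-solved states, and is never changed after it is computed.
        dp[i] = dp[i - 1] + (dp[i - 2] if i >= 2 else 0)
    return dp[n]
```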

Assume that, to solve the current state, we can look up a previously solved state and add it to the answer (after some operation), but for this algorithm to work we also need to alter already solved states.
So, in DP, is it valid to also update the previous states from which we recurse while solving the current state?

In other words,

I came across a question on LeetCode: [ - LeetCode ].

To solve for the current position i (i.e. the minimum steps up to i) holding element X, we must look at all previous positions (i.e. all positions holding X, as well as position i-1).

Here, let's say the state is [element, list of indexes] and we keep a dp array (where dp[i] = minimum steps to reach i).
Then we should look at the list of indexes holding element X, check their distances, and check the distance dp[i-1].

If dp[i-1] is smaller, then we must also update the previously stored dp[index], where index ranges over all indexes holding element X.

So, according to this algorithm, we are altering previously solved dp[index] values. Is this programmatically or mathematically correct for dynamic programming to work?
Have you come across any such questions?
Note: I am not sure if the above algorithm works; I haven't looked at the solution for this question yet.
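
To make the above concrete, here is a rough sketch of the idea (my own Python transcription, with the exact update rule guessed from my description above; as I said, I am not sure this is correct):

```python
from collections import defaultdict

def min_steps_sketch(arr):
    # dp[i] = minimum steps to reach index i; positions[x] = all indexes holding x.
    n = len(arr)
    if n <= 1:
        return 0
    positions = defaultdict(list)
    for i, x in enumerate(arr):
        positions[x].append(i)

    dp = [float('inf')] * n
    dp[0] = 0
    for i in range(1, n):
        # Normal forward transition from the previous index.
        dp[i] = min(dp[i], dp[i - 1] + 1)
        # Transitions from earlier indexes holding the same element.
        for j in positions[arr[i]]:
            if j < i:
                dp[i] = min(dp[i], dp[j] + 1)
                # The step my question is about: altering an already "solved"
                # state dp[j] because a cheaper route to it was found. Note that
                # states computed earlier in this loop may already have used the old dp[j].
                dp[j] = min(dp[j], dp[i - 1] + 1)
    return dp[n - 1]
```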


I think dynamic programming won't work here. Just build a graph from the array and do a BFS from the first index to find the shortest path to the last index.
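
Something along these lines (a rough sketch, assuming the rules are: from index i you can move to i-1, i+1, or any index j holding the same value, and you want the minimum number of moves from index 0 to the last index):

```python
from collections import defaultdict, deque

def min_jumps(arr):
    n = len(arr)
    if n <= 1:
        return 0

    # Group indexes by value so "same value" edges can be found quickly.
    positions = defaultdict(list)
    for i, x in enumerate(arr):
        positions[x].append(i)

    visited = [False] * n
    visited[0] = True
    queue = deque([(0, 0)])  # (index, distance from the start)

    while queue:
        i, dist = queue.popleft()
        if i == n - 1:
            return dist

        # Neighbours: all indexes with the same value, plus i-1 and i+1.
        neighbours = positions[arr[i]] + [i - 1, i + 1]
        # Clear the bucket so each value's index list is expanded only once;
        # this keeps the whole BFS roughly linear in n.
        positions[arr[i]] = []

        for j in neighbours:
            if 0 <= j < n and not visited[j]:
                visited[j] = True
                queue.append((j, dist + 1))

    return -1  # should not happen: the last index is always reachable via i+1 moves
```

Since BFS explores positions in order of distance, the first time the last index is popped you already have the minimum number of moves; no state ever needs to be revised afterwards.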

And yes, it is indeed true that you should not change previous states of the DP. If you are forced to do that, it means the problem cannot be solved using DP (at least not with that state formulation).


Thank you.