n steps with 1, 2 or 3 steps taken. How many ways to get to the top?

If we have n steps and we can go up 1 or 2 steps at a time, there is a Fibonacci relation between the number of steps and the number of ways to climb them, if and only if we do not count 2+1 and 1+2 as different.

However, this is no longer the case: 2+1 and 1+2 now count as different, and we also add a third option, taking 3 steps at a time. How do I count the ways now?

What I have:

1 step = 1 way
2 steps = 2 ways: 1+1, 2
3 steps = 4 ways: 1+1+1, 2+1, 1+2, 3

I have no idea where to go from here to find the number of ways for n stairs.

I get 7 for n = 4 and 14 for n = 5; I get 14+7+4+2+1 by summing all the combinations before it, so ways for n steps = (ways for n-1) + (ways for n-2) + ... + (ways for 1), assuming I kept all the values. Dynamic programming, with 1, 2 and 3 steps as the base cases. Is that correct?

asked Mar 21 '14 by MD-4

2 Answers

I would say that the formula looks as follows:

K(1) = 1
K(2) = 2
K(3) = 4
K(n) = K(n-3) + K(n-2) + K(n - 1)

The formula says that in order to reach the n-th step, we first have to reach:

  • the (n-3)-th step and then take 3 steps at once, i.e. K(n-3)
  • or the (n-2)-th step and then take 2 steps at once, i.e. K(n-2)
  • or the (n-1)-th step and then take 1 step, i.e. K(n-1)

K(4) = 7, K(5) = 13, etc.

You can either utilize the recursive formula or use dynamic programming.
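
For illustration, here is a minimal bottom-up dynamic programming sketch of this recurrence in Python (added as an example, not part of the original answer):

def count_ways(n):
    """Number of ways to climb n steps taking 1, 2 or 3 steps at a time."""
    if n < 1:
        return 0
    ways = [0] * (n + 1)
    ways[0] = 1  # one (empty) way to climb zero steps
    for i in range(1, n + 1):
        ways[i] = ways[i - 1]       # last move was 1 step
        if i >= 2:
            ways[i] += ways[i - 2]  # last move was 2 steps
        if i >= 3:
            ways[i] += ways[i - 3]  # last move was 3 steps
    return ways[n]

# count_ways(4) == 7, count_ways(5) == 13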

answered Sep 19 '22 by Michał Komorowski


Python solutions:

Recursive O(n)

This is based on the answer by Michał Komorowski. It requires O(n) time and O(n) memory.

import functools

@functools.lru_cache(maxsize=None)
def recursive(n):
    if n < 4:
        initial = [1, 2, 4]  # base cases: ways for n = 1, 2, 3
        return initial[n-1]
    else:
        return recursive(n-1) + recursive(n-2) + recursive(n-3)

Recursive O(log(n))

This is per a comment on this answer. This tribonacci-by-doubling solution is analogous to the fibonacci-by-doubling solution in the algorithms by Nayuki. Note that multiplication of large integers is not a constant-time operation. This solution doesn't require or benefit from a cache.

def recursive_doubling(n):
    def recursive_tribonacci_tuple(n):
        """Return the n, n+1, and n+2 tribonacci numbers for n>=0.

        Tribonacci forward doubling identities:
        T(2n)   = T(n+1)^2 + T(n)*(2*T(n+2) - 2*T(n+1) - T(n))
        T(2n+1) = T(n)^2 + T(n+1)*(2*T(n+2) - T(n+1))
        T(2n+2) = T(n+2)^2 + T(n+1)*(2*T(n) + T(n+1))
        """
        assert n >= 0
        if n == 0:
            return 0, 0, 1  # T(0), T(1), T(2)

        a, b, c = recursive_tribonacci_tuple(n // 2)
        x = b*b + a*(2*(c - b) - a)
        y = a*a + b*(2*c - b)
        z = c*c + b*(2*a + b)

        return (x, y, z) if n % 2 == 0 else (y, z, x+y+z)
    return recursive_tribonacci_tuple(n)[2]  # Is offset by 2 for the steps problem.

Iterative O(n)

This is motivated by the answer by 太極者無極而生. It is a modified tribonacci extension of the iterative fibonacci solution. It is modified from tribonacci in that it returns c, not a.

def iterative(n):
    a, b, c = 0, 0, 1
    for _ in range(n):
        a, b, c = b, c, a+b+c
    return c

Iterative O(log(n)) (left to right)

This is per a comment on this answer. This modified iterative tribonacci-by-doubling solution is derived from the corresponding recursive solution. For some background, see here and here. It is modified from tribonacci in that it returns c, not a. Note that multiplication of large integers is not a constant-time operation.

The bits of n are iterated from left to right, i.e. MSB to LSB.

def iterative_doubling_l2r(n):
    """Return the n+2 tribonacci number for n>=0.

    Tribonacci forward doubling identities:
    T(2n)   = T(n+1)^2 + T(n)*(2*T(n+2) - 2*T(n+1) - T(n))
    T(2n+1) = T(n)^2 + T(n+1)*(2*T(n+2) - T(n+1))
    T(2n+2) = T(n+2)^2 + T(n+1)*(2*T(n) + T(n+1))
    """
    assert n >= 0
    a, b, c = 0, 0, 1  # T(0), T(1), T(2)
    for i in range(n.bit_length() - 1, -1, -1):  # Left (MSB) to right (LSB).
        x = b*b + a*(2*(c - b) - a)
        y = a*a + b*(2*c - b)
        z = c*c + b*(2*a + b)
        bit = (n >> i) & 1
        a, b, c = (y, z, x+y+z) if bit else (x, y, z)
    return c

Notes:

  • list(range(m - 1, -1, -1)) == list(reversed(range(m)))
  • If the bit is odd (1), the sequence is advanced by one iteration. This intuitively makes sense after understanding the same for the efficient integer exponentiation problem (see the sketch below).
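
For comparison, here is a left-to-right binary exponentiation sketch (added as an illustration, not part of the original answer): it advances by one extra multiplication whenever the current bit is set, just as the doubling solution advances the tribonacci tuple by one step.

def power_l2r(x, n):
    """Compute x**n for n >= 0 by iterating the bits of n from MSB to LSB."""
    assert n >= 0
    result = 1
    for i in range(n.bit_length() - 1, -1, -1):
        result *= result      # doubling step: square the partial result
        if (n >> i) & 1:      # set bit: advance by one extra multiplication
            result *= x
    return result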

Iterative O(log(n)) (right to left)

This is per a comment on this answer. The bits of n are iterated from right to left, i.e. LSB to MSB. This approach is probably not the preferred one.

def iterative_doubling_r2l(n):
    """Return the n+2 tribonacci number for n>=0.

    Tribonacci forward doubling identities:
    T(2n)   = T(n+1)^2 + T(n)*(2*T(n+2) - 2*T(n+1) - T(n))
    T(2n+1) = T(n)^2 + T(n+1)*(2*T(n+2) - T(n+1))
    T(2n+2) = T(n+2)^2 + T(n+1)*(2*T(n) + T(n+1))

    Given Tribonacci tuples (T(n), T(n+1), T(n+2)) and (T(k), T(k+1), T(k+2)),
    we can "add" them together to get (T(n+k), T(n+k+1), T(n+k+2)).

    Tribonacci addition formulas:
    T(n+k)   = T(n)*(T(k+2) - T(k+1) - T(k)) + T(n+1)*(T(k+1) - T(k)) + T(n+2)*T(k)
    T(n+k+1) = T(n)*T(k) + T(n+1)*(T(k+2) - T(k+1)) + T(n+2)*T(k+1)
    T(n+k+2) = T(n)*T(k+1) + T(n+1)*(T(k) + T(k+1)) + T(n+2)*T(k+2)
    When n == k, these are equivalent to the doubling formulas.
    """
    assert n >= 0
    a, b, c = 0, 0, 1  # T(0), T(1), T(2)
    d, e, f = 0, 1, 1  # T(1), T(2), T(3)
    for i in range(n.bit_length()):  # Right (LSB) to left (MSB).
        bit = (n >> i) & 1
        if bit:
            # a, b, c += d, e, f
            x = a*(f - e - d) + b*(e - d) + c*d
            y = a*d + b*(f - e) + c*e
            z = a*e + b*(d + e) + c*f
            a, b, c = x, y, z
        # d, e, f += d, e, f
        x = e*e + d*(2*(f - e) - d)
        y = d*d + e*(2*f - e)
        z = f*f + e*(2*d + e)
        d, e, f = x, y, z
    return c
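
As a quick consistency check (added as an illustration, not part of the original answer), the exact solutions above can be compared for small n:

if __name__ == "__main__":
    for n in range(1, 25):
        expected = recursive(n)
        assert recursive_doubling(n) == expected
        assert iterative(n) == expected
        assert iterative_doubling_l2r(n) == expected
        assert iterative_doubling_r2l(n) == expected
    print("All exact solutions agree for n = 1..24")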

Approximations

Approximations are of course useful mainly for very large n. They use the exponentiation operation, which is not a constant-time operation.

def approx1(n):
    a_pos = (19 + 3*(33**.5))**(1./3)
    a_neg = (19 - 3*(33**.5))**(1./3)
    b = (586 + 102*(33**.5))**(1./3)
    return round(3*b * ((1./3) * (a_pos+a_neg+1))**(n+1) / (b**2 - 2*b + 4))

The approximation above was tested to be correct up to n = 53, after which it differed. It's quite possible that using higher-precision floating point arithmetic will lead to a better approximation in practice.
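
For instance, a sketch (added as an illustration, not part of the original answer) that evaluates the same closed form with Python's decimal module at higher precision:

from decimal import Decimal, getcontext

def approx1_decimal(n, digits=50):
    # Same closed form as approx1, evaluated at `digits` significant digits
    # instead of double-precision floats.
    getcontext().prec = digits
    third = Decimal(1) / 3
    s33 = Decimal(33).sqrt()
    a_pos = (19 + 3*s33) ** third
    a_neg = (19 - 3*s33) ** third
    b = (586 + 102*s33) ** third
    t = 3*b * (third * (a_pos + a_neg + 1)) ** (n + 1) / (b**2 - 2*b + 4)
    return int(t.to_integral_value())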

def approx2(n):
    # The two complex terms use the conjugate pair of roots of x^3 - x^2 - x - 1.
    return round((0.618363 * 1.8392**n + \
                  (0.029252 + 0.014515j) * (-0.41964 - 0.60629j)**n + \
                  (0.029252 - 0.014515j) * (-0.41964 + 0.60629j)**n).real)

The approximation above was tested to be correct up to n = 11, after which it differed.

answered Sep 20 '22 by Asclepius