
Finding largest f satisfying a property given f is non-decreasing in its arguments

This has been bugging me for a while.

Let's say you have a function f x y where x and y are integers, and you know that f is non-decreasing in its arguments,

i.e. f (x+1) y >= f x y and f x (y+1) >= f x y.

What would be the fastest way to find the largest f x y satisfying a property, given that x and y are bounded?

I was thinking that this might be a variation of saddleback search and I was wondering if there was a name for this type of problem.

Also, more specifically, I was wondering if there is a faster way to solve this problem if you know that f is the multiplication operator.

Thanks!

Edit: Seeing the comments below, the property can be anything.

Given a property g (where g takes a value and returns a boolean), I am simply looking for the largest value f x y such that g (f x y) == True.

For example, a naive implementation (in haskell) would be:

import Data.List (sort)

maximise :: (Int -> Int -> Int) -> (Int -> Bool) -> Int -> Int -> Int
maximise f g xLim yLim = head . filter g . reverse . sort $ results
    where results = [f x y | x <- [1..xLim], y <- [1..yLim]]
Charles Durham asked Jun 10 '11 02:06



2 Answers

Let's draw an example grid for your problem to help think about it. Here's an example plot of f for each x and y. It is monotone in each argument, which is an interesting constraint we might be able to do something clever with.

+------- x --------->
| 0  0  1  1  1  2 
| 0  1  1  2  2  4
y 1  1  3  4  6  6
| 1  2  3  6  6  7
| 7  7  7  7  7  7
v

Since we don't know anything about the property, we can't really do better than to list the values in the range of f in decreasing order. The question is how to do that efficiently.

The first thing that comes to mind is to traverse it like a graph starting at the lower-right corner. Here is my attempt:

import Data.Maybe (listToMaybe)

maximise :: (Ord b, Num b) => (Int -> Int -> b) -> (b -> Bool) -> Int -> Int -> Maybe b
maximise f p xLim yLim = 
    listToMaybe . filter p . map (negate . snd) $ 
       enumIncreasing measure successors (xLim,yLim)
  where
    measure (x,y) = negate $ f x y
    successors (x,y) = [ (x-1,y) | x > 0 ] ++ [ (x,y-1) | y > 0 ]

The signature is not as general as it could be (Num should not be necessary, but I needed it to negate the measure function because enumIncreasing returns an increasing rather than a decreasing list -- I could have also done it with a newtype wrapper).

Using this function, we can find the largest odd number which can be written as a product of two numbers <= 100:

ghci> maximise (*) odd 100 100
Just 9801

I wrote enumIncreasing using the meldable-heap package on Hackage to solve this problem, but it is pretty general. You could tweak the above to add additional constraints on the domain, etc.
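Since enumIncreasing lives in an external package, here is a rough self-contained sketch of the same best-first idea, using Data.Set as a makeshift max-priority queue (this is my own approximation, not the actual enumIncreasing implementation). Starting from the corner where f is largest, the monotonicity of f guarantees that the largest not-yet-visited value always sits on the frontier, so popping the maximum repeatedly enumerates values in decreasing order:

```haskell
import qualified Data.Set as Set
import Data.Maybe (listToMaybe)

-- Best-first enumeration of f over the grid in decreasing order.
-- Duplicate (value, point) entries inserted via two parents are
-- merged by the set automatically, so each point is popped once.
maximiseSet :: Ord b => (Int -> Int -> b) -> (b -> Bool) -> Int -> Int -> Maybe b
maximiseSet f p xLim yLim =
    listToMaybe . filter p $ go (Set.singleton (f xLim yLim, (xLim, yLim)))
  where
    go frontier = case Set.maxView frontier of
      Nothing -> []
      Just ((v, (x, y)), rest) ->
        v : go (foldr Set.insert rest
                  [ (f x' y', (x', y'))
                  | (x', y') <- [(x - 1, y), (x, y - 1)]
                  , x' >= 1, y' >= 1 ])
```

Because the result list is produced lazily, filter p stops expanding the frontier as soon as the first (largest) satisfying value is found, e.g. maximiseSet (*) odd 100 100 gives Just 9801.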

luqui answered Sep 24 '22 01:09

The answer depends on what's expensive. The interesting case is when f is expensive to evaluate.

What you might want to do is look at pareto-optimality. Suppose you have two points

(1, 2)    and    (3, 4)

Then you know that the latter point gives at least as good a solution, as long as f is a non-decreasing function. However, if you have the points,

(1, 2)    and    (2, 1)

then you can't know which is better. So one solution would be to establish a Pareto-optimal frontier of the points that the predicate g permits, and then evaluate these through f.
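A minimal sketch of that idea (a hypothetical helper, not code from this answer): keep only the candidate points that no other point dominates in both coordinates. Since f is non-decreasing, its maximum over the candidates is always attained on this frontier, so the expensive f need only be evaluated there.

```haskell
-- Keep only points not dominated by another point in both coordinates.
-- O(n^2) for simplicity; a sort-based sweep would do better.
paretoFront :: [(Int, Int)] -> [(Int, Int)]
paretoFront pts = [ p | p <- pts, not (any (`dominates` p) pts) ]
  where
    (a, b) `dominates` (c, d) = a >= c && b >= d && (a, b) /= (c, d)
```

For example, paretoFront [(1,2),(2,1),(1,1)] keeps the incomparable points (1,2) and (2,1) and discards the dominated (1,1), so f would be evaluated at just two points instead of three.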

gatoatigrado answered Sep 24 '22 01:09