Note: If you’re viewing this on Planet SymPy and the LaTeX looks weird, view it on WordPress instead.

Unfortunately, the dynamic algorithm was much faster **only** on a regular hexagon. Why that particular figure? The reason still eludes me.

But I did use the same idea to implement the multiple-polynomial case. That is, if a list of polynomials and the maximum exponent of a monomial (max x degree + max y degree) are given as input, we can calculate the integrals of all the possible monomial terms and store those values in a dictionary. Those values can then be referenced as many times as required by the polynomials. However, is this one-time cost too much?
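As a rough sketch of this caching idea (not the actual SymPy implementation — the names are hypothetical, and the toy `integrate_monomial` below stands in for the real polytope integral by integrating over the unit square instead):

```python
from itertools import product

def integrate_monomial(i, j):
    # Toy stand-in for the polytope integral of x**i * y**j:
    # over the unit square it is 1 / ((i + 1) * (j + 1)).
    return 1.0 / ((i + 1) * (j + 1))

def build_cache(max_degree):
    # One-time cost: integrate every monomial with i + j <= max_degree.
    return {(i, j): integrate_monomial(i, j)
            for i, j in product(range(max_degree + 1), repeat=2)
            if i + j <= max_degree}

def integrate_poly(poly, cache):
    # poly maps (i, j) exponent pairs to coefficients; every monomial
    # integral is a dictionary lookup, never a fresh integration.
    return sum(c * cache[m] for m, c in poly.items())

cache = build_cache(4)
# p(x, y) = 3*x**2 + 2*x*y and q(x, y) = x*y + 1 both reuse the same cache.
p = {(2, 0): 3.0, (1, 1): 2.0}
q = {(1, 1): 1.0, (0, 0): 1.0}
print(integrate_poly(p, cache))  # 1.5  (= 3*(1/3) + 2*(1/4))
print(integrate_poly(q, cache))  # 1.25
```

The point of the trade-off is visible here: `build_cache` pays for every monomial up front, even ones no polynomial uses, while each subsequent polynomial costs only O(number of its terms) lookups.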

The cost is greatly reduced if we do not have to re-compute the integral of the gradient.

For the 2D case we need not compute the left integral (i.e. the one with $x_0$), since that translates to evaluating the function at that point and multiplying it by the distance. Also, the right integral (i.e. the one with $\nabla f$) need not be re-calculated, since the integrals of the terms are being calculated in the aforementioned order. Example: gradient_terms = $[1,\ y,\ y^2,\ x,\ xy,\ x^2]$. Once we calculate the integral of $x$ we can use that result again in the right integrals for $xy$ and $x^2$, and then the chain follows.

Therefore, only one integral has to be calculated directly, namely that of the constant monomial $1$; every other term follows from the chain.

So, why doesn’t this technique always work? Maybe I implemented it badly for the first case. I’ll have to look at it again and see how to improve it.

Apart from the timings, Ondrej said that the 2D API looks good enough (some changes may still be suggested, though). The 2D case is close to getting merged. I also have to start thinking about the 3D implementation. Of course, the immediate thought is to translate the 3D problem into a sum of 2D ones, but is that the fastest way to do it? Can some other pre-processing techniques be used to simplify the problem while still in 3D? I will have to think about it.
