https://bugs.kde.org/show_bug.cgi?id=337972

            Bug ID: 337972
           Summary: IDEA: Non-uniform piece size based on image detail
           Product: palapeli
           Version: unspecified
          Platform: Fedora RPMs
                OS: Linux
            Status: UNCONFIRMED
          Severity: wishlist
          Priority: NOR
         Component: slicers
          Assignee: majewsky@gmx.net
          Reporter: mw_triad@users.sourceforge.net
                CC: kde-games-bugs@kde.org

This is going to be a little bit odd, but the story is interesting; please bear with me.

I had a dream (not the M. L. King kind, but the waking-from-sleep kind) this morning, in which there was a jigsaw puzzle. This puzzle was unusual for having pieces of widely varying sizes, based on the local level of detail. While still half asleep, I a) got the idea that it would be cool if Palapeli could do this, and b) came up with an algorithm.

The first step is to compute a Feature Density Estimate (FDE) of the input image: a single-valued function over two dimensions that is either continuous or sampled discretely at a resolution comparable to the input data (a grayscale image of equal dimensions will do). There are many possible algorithms for this (edge detection, etc.). I'm thinking: for each pixel, compute the average difference per channel with neighboring pixels (either the uniform-weighted average of directly adjacent neighbors, ignoring diagonals, or a weighted average giving diagonals, say, sqrt(2)/2 times the weight of direct neighbors), then take the average of the per-channel results (probably weighted by relative luminosity, a la qGray). Then apply a smoothing filter (probably Gaussian blur) to the result. (One reason to propose Gaussian blur is that its radius can sensibly be chosen from the image size and desired piece count, probably as some fraction of the input image area divided by the desired piece count.)

Now we need some random points. Obtain these using Poisson Disk Sampling (well documented), using the FDE as a density function (black = lower density, white = higher density). This gives us a set of points to play with that are clustered in high-detail areas and sparse in low-detail areas. Note that at this point we want quite a few more points than the desired piece count.
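To make those two steps concrete, here is a minimal Python/numpy sketch (scipy for the blur). Everything in it is an assumption to be tuned, not Palapeli code: the qGray-style channel weights, the 0.5 fraction in the Gaussian radius, the linear FDE-to-radius mapping, and the use of a simple dart-throwing variant of Poisson disk sampling (slow but easy to follow; Bridson's algorithm would be the faster choice in practice).

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Channel weights a la Qt's qGray(): (11*r + 16*g + 5*b) / 32.
    QGRAY = np.array([11.0, 16.0, 5.0]) / 32.0

    def feature_density_estimate(rgb, piece_count):
        """rgb: float array of shape (h, w, 3), values in [0, 1]."""
        h, w, _ = rgb.shape
        fde = np.zeros((h, w))
        # Mean absolute difference with the four direct neighbours
        # (diagonals ignored), reduced per channel via the qGray weights.
        # np.roll wraps around at the borders, which is fine for a sketch.
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            fde += np.abs(rgb - np.roll(rgb, (dy, dx), axis=(0, 1))) @ QGRAY
        fde /= 4.0
        # Smooth with a radius derived from the area per piece.
        fde = gaussian_filter(fde, sigma=0.5 * np.sqrt(h * w / piece_count))
        return fde / fde.max()  # normalise: 0 = featureless, 1 = busiest

    def sample_points(fde, r_min, r_max, tries=20000, rng=None):
        """Dart-throwing Poisson disk sampling with an FDE-driven radius:
        low FDE -> large exclusion radius -> sparse points, and vice versa."""
        rng = rng or np.random.default_rng()
        h, w = fde.shape
        points, radii = [], []
        for _ in range(tries):
            y, x = rng.uniform(0, h), rng.uniform(0, w)
            r = r_max - (r_max - r_min) * fde[int(y), int(x)]
            if all((y - py) ** 2 + (x - px) ** 2 >= min(r, pr) ** 2
                   for (py, px), pr in zip(points, radii)):
                points.append((y, x))
                radii.append(r)
        return np.array(points)

Choosing r_min and r_max so that the expected point count comfortably exceeds the desired piece count leaves enough material for the reduction step.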
Here's the fun part: "reduce these". I was still half-asleep at this time, but I think I've since figured out how to do practically what my subconscious was visualizing without being bothered by such details.

(Of course, the "simple" solution is to just randomly select a number of these points equal to the desired piece count and take the Voronoi tessellation to get our base grid.)

Have a scanning direction option in the slicer (horizontal or vertical), and also a maximum distance option. Call the current set of points the "working set". Select a point near the top left, and draw a square centered about that point such that the square has an edge length not more than twice the maximum distance and contains not more than some number of points (not sure on the algorithm to choose this number; derive it by experimentation?). Move the initial point to the "final set" and remove all points within this square from the working set. Choose a "best candidate" point to the right or below (depending on scanning direction) of the square, where "best" is weighted between single-axis distance from the square's edge and single-axis distance from the edge bisector (either derive the relative weight by experimentation or make it a parameter). Build a new square with this point. Repeat until a row/column has been processed. Then start over with the new top-left-most point in the working set, and repeat until no points remain in the working set. Take the Voronoi tessellation of the final set as the slicing grid. The objective is to bias the slicing to be "grid-like" while still allowing for size variations. (It might be desirable to have a third bias option, "none", which skips this and uses the simple technique mentioned previously instead.)

There are two other desired refinements (actually, these would apply to the existing irregular slicer also, if it doesn't use them already). First, for each cell vertex, find the largest angle α and compute a weighted random chance (α=180° → p≈0.9, α=90° → p≈0.3) of forcing the edges forming that angle to be G2 continuous at that point. Second, for each edge, compute a weighted random chance (A=1.6 → p=1, A=2.5 → p≈0.1, where A is the ratio between the edge length and the nominal (minimum?) plug width) of omitting the plug for that edge.
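Both refinements boil down to a probability interpolated between two anchor values followed by a coin flip. A minimal sketch, assuming linear interpolation between the anchors above (the functional form and the clamping are my guesses, and the function names are hypothetical):

    import random

    def lerp_chance(x, x0, p0, x1, p1):
        """Probability interpolated linearly between two anchors, clamped to [0, 1]."""
        t = (x - x0) / (x1 - x0)
        return max(0.0, min(1.0, p0 + t * (p1 - p0)))

    def force_g2_continuity(largest_angle_deg):
        """alpha=90 deg -> p~0.3, alpha=180 deg -> p~0.9:
        the flatter the vertex, the likelier its edges get smoothed."""
        return random.random() < lerp_chance(largest_angle_deg, 90.0, 0.3, 180.0, 0.9)

    def omit_plug(edge_length, plug_width):
        """A = edge length / nominal plug width; A=1.6 -> p=1 (edge too
        short for a plug, always omit), A=2.5 -> p~0.1 (rarely omit)."""
        return random.random() < lerp_chance(edge_length / plug_width, 1.6, 1.0, 2.5, 0.1)

The clamping means edges shorter than 1.6 plug widths always lose their plug, and the omission chance fades out for edges longer than about 2.5 plug widths, which matches the anchor values given above.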