 

Rotate model group according to mouse drag direction and location in the model

Tags: c#, wpf, graphics, 3d

I added several cubes to a Viewport3D in WPF and now I want to manipulate groups of them with the mouse.

[image: initial cube]

When I click and drag over one and a half of those cubes, I want the whole plane rotated in the direction of the drag. The rotation itself will be handled by RotateTransform3D, so that part won't be a problem.

The problem is that I don't know how I should handle the drag. More specifically: how can I tell which faces of the cubes were dragged over, so I can determine which plane to rotate?

For example, in the case below I'd like to know that I need to rotate the right plane of cubes by 90 degrees clockwise, so the row of blue faces ends up at the top instead of the white ones, which will move to the back.

[image: second cube]

And in this example the top layer should be rotated 90 degrees counterclockwise:

[image: third cube]

Currently my idea is to place some sort of invisible areas over the cube, check with VisualTreeHelper.HitTest which one the drag is happening in, and then determine which plane I should rotate. This area would match the first drag example:

[image: hit area overlay]

But when I add all four regions I'm back to square one, because I still need to determine the drag direction and which face to rotate according to which areas were "touched".

I'm open to ideas.

Please note that the cube can be moved freely, so it may not be in its initial position when the user clicks and drags; this is what bothers me the most.

PS: The drag will be implemented with a combination of MouseLeftButtonDown, MouseMove and MouseLeftButtonUp.
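
Here is a minimal sketch of that wiring (names are illustrative; viewport is the Viewport3D):

    private Point _dragStart;

    private void Viewport_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
    {
        _dragStart = e.GetPosition(viewport); // remember where the drag started
        viewport.CaptureMouse();              // keep receiving mouse events during the drag
    }

    private void Viewport_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
    {
        viewport.ReleaseMouseCapture();
        var dragEnd = e.GetPosition(viewport);
        // TODO: hit-test _dragStart and dragEnd to figure out which plane to rotate
    }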

asked Jan 08 '13 by Paul

1 Answer

MouseEvents

You'll need to use VisualTreeHelper.HitTest() to pick Visual3D objects (the process may be simpler if each face is a separate ModelVisual3D). Here is some help on hit testing in general, and here is a very useful tidbit that simplifies the picking process tremendously.
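
For example, a picking helper might look like this (a rough sketch, assuming each face is its own ModelVisual3D backed by its own MeshGeometry3D):

    // Returns the MeshGeometry3D of the face under the given mouse position, or null.
    private MeshGeometry3D PickFace(Viewport3D viewport, Point mousePos)
    {
        MeshGeometry3D picked = null;
        VisualTreeHelper.HitTest(viewport, null,
            result =>
            {
                // only 3D mesh hits are interesting; stop at the first one
                if (result is RayMeshGeometry3DHitTestResult meshHit)
                {
                    picked = meshHit.MeshHit;
                    return HitTestResultBehavior.Stop;
                }
                return HitTestResultBehavior.Continue;
            },
            new PointHitTestParameters(mousePos));
        return picked;
    }

Call it from your MouseLeftButtonDown and MouseLeftButtonUp handlers and hold on to both results.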

Event Culling

Let's say that you now have two ModelVisual3D objects from your picking tests (one from the MouseDown event, one from the MouseUp event). First, we should detect whether they are coplanar, to avoid picks that go from one face to another. One way to do this is to compare the face normals to see if they point in the same direction. If you have defined the Normals in your MeshGeometry3D, that's great. If not, we can still compute them. I'd suggest adding a static class for extensions. An example of calculating a normal:

public static class GeometricExtensions3D
{
    public static Vector3D FaceNormal(this MeshGeometry3D geo)
    {
        // get first triangle's positions
        var ptA = geo.Positions[geo.TriangleIndices[0]];
        var ptB = geo.Positions[geo.TriangleIndices[1]];
        var ptC = geo.Positions[geo.TriangleIndices[2]];
        // get specific vectors for right-hand normalization
        var vecAB = ptB - ptA;
        var vecBC = ptC - ptB;
        // normal is cross product
        var normal = Vector3D.CrossProduct(vecAB, vecBC);
        // unit vector for cleanliness
        normal.Normalize();
        return normal;
    }
}   

Using this, you can compare the normals of the MeshGeometry3D from your Visual3D hits (lots of casting involved here) and see if they point in the same direction. I would use a tolerance test on the X, Y, Z components of the vectors rather than a straight equality check, just for safety's sake. Another extension might be helpful:

    public static double SSDifference(this Vector3D vectorA, Vector3D vectorB)
    {
        // set vectors to length = 1
        vectorA.Normalize();
        vectorB.Normalize();
        // subtract to get difference vector
        var diff = Vector3D.Subtract(vectorA, vectorB);
        // sum of the squares of the difference (also happens to be difference vector squared)
        return diff.LengthSquared;
    }

If they are not coplanar (SSDifference > some arbitrary test value), you can return here (or give some kind of feedback).
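
Put together, the culling check might look something like this (downMesh and upMesh are hypothetical names for the two picked meshes, e.g. the results of a helper like the PickFace sketch above):

    var downNormal = downMesh.FaceNormal();
    var upNormal = upMesh.FaceNormal();
    const double tolerance = 1e-6; // arbitrary test value; tune to taste
    if (downNormal.SSDifference(upNormal) > tolerance)
    {
        return; // the two faces aren't coplanar, so ignore this drag
    }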

Object Selection

Now that we have determined our two faces and that they are, indeed, ripe for our desired event-handling, we must deduce a way to bang out the information from what we have. You should still have the Normals you calculated before. We're going to be using them again to pick the rest of the faces to be rotated. Another extension method can be helpful for the comparison to determine if a face should be included in the rotation:

    public static bool SharedColumn(this MeshGeometry3D basis, MeshGeometry3D compareTo, Vector3D normal)
    {
        const double tolerance = 1e-6; // float.Epsilon is far too small once the vectors are normalized
        foreach (Point3D basePt in basis.Positions)
        {
            foreach (Point3D compPt in compareTo.Positions)
            {
                var compToBasis = basePt - compPt; // vector from compare point to basis point
                if (normal.SSDifference(compToBasis) < tolerance) // at least one pair will point in the same
                {                                                 // direction as the normal if the faces share a column
                    return true;
                }
            }
        }
        return false;
    }

You'll need to cull faces for both of your meshes (MouseDown and MouseUp), iterating over all of the faces. Save the list of Geometries that need to be rotated.
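
A rough sketch of that culling pass (hypothetical names: allFaceMeshes holds every face's MeshGeometry3D; exactly how you combine the two checks, and in which argument order you call SharedColumn, depends on how your cube data is organized):

    var taggedGeometries = new List<MeshGeometry3D>();
    foreach (var face in allFaceMeshes)
    {
        if (face.SharedColumn(downMesh, downNormal) && face.SharedColumn(upMesh, upNormal))
        {
            taggedGeometries.Add(face);
        }
    }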

RotateTransform

Now the tricky part. An axis-angle rotation takes two parameters: a Vector3D for the rotation axis (following the right-hand rule) and the angle of rotation. But the midpoint of our cube may not be at (0, 0, 0), and a rotation matrix rotates about the origin, so we'll need to translate to the origin, rotate, and translate back. Ergo, first we must find the midpoint of the cube! The simplest way I can think of is to add up the X, Y, and Z components of every point in the cube and divide by the number of points. The trick, of course, will be not to add the same point more than once! How you do that will depend on how your data is organized, but I'll assume it to be a (relatively) trivial exercise.

Instead of applying transforms, you'll want to move the points themselves, so instead of creating and adding to a TransformGroup, we're going to build matrices! A translation matrix looks like:

    1, 0, 0, dx
    0, 1, 0, dy
    0, 0, 1, dz
    0, 0, 0, 1
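
As for finding that midpoint, here is a possible sketch of the GetCubeCenterPoint helper used below (assuming the same hypothetical allFaceMeshes collection from the earlier sketch is reachable as a field, and that duplicate corner positions are either removed first or acceptable to average):

    // Averages every position of every face mesh to approximate the cube's midpoint.
    // Corner points shared by several meshes are counted more than once unless the
    // positions are de-duplicated beforehand.
    private Point3D GetCubeCenterPoint()
    {
        double x = 0, y = 0, z = 0;
        int count = 0;
        foreach (var face in allFaceMeshes)
        {
            foreach (Point3D pt in face.Positions)
            {
                x += pt.X; y += pt.Y; z += pt.Z;
                count++;
            }
        }
        return new Point3D(x / count, y / count, z / count);
    }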

So, given the midpoint of your cube, your translation matrices will be:

    var cp = GetCubeCenterPoint(); // user-defined method of retrieving cube's center point
    // WPF's Matrix3D uses row vectors, so the translation components go in the bottom row
    var matToCenter = new Matrix3D(
            1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            -cp.X, -cp.Y, -cp.Z, 1);
    var matBackToPosition = new Matrix3D(
            1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            cp.X, cp.Y, cp.Z, 1);

Which just leaves our rotation. Do you still have reference to the two meshes we picked from the MouseEvents? Good! Let's define another extension:

    public static Point3D CenterPoint(this MeshGeometry3D geo)
    {
        var midPt = new Point3D(0, 0, 0);
        var n = geo.Positions.Count;
        foreach (Point3D pt in geo.Positions)
        {
            midPt.Offset(pt.X, pt.Y, pt.Z);
        }
        midPt.X /= n; midPt.Y /= n; midPt.Z /= n;
        return midPt;
    }

Get the vector from the MouseDown's mesh to the MouseUp's mesh (the order is important).

    var swipeVector = MouseUpMesh.CenterPoint() - MouseDownMesh.CenterPoint();

And you still have the normal for our hit faces, right? We can (basically magically) get the rotation axis by:

    var rotationAxis = Vector3D.CrossProduct(swipeVector, faceNormal);

Which will make your rotation angle always +90°. Build the rotation matrix:

    rotationAxis.Normalize();
    var cosT = Math.Cos(Math.PI / 2);
    var sinT = Math.Sin(Math.PI / 2);
    var x = rotationAxis.X;
    var y = rotationAxis.Y;
    var z = rotationAxis.Z;
    // axis-angle rotation matrix, laid out for WPF's row-vector convention
    var matRotate = new Matrix3D(
          cosT + x*x*(1 - cosT), x*y*(1 - cosT) + z*sinT, x*z*(1 - cosT) - y*sinT, 0,
        x*y*(1 - cosT) - z*sinT,   cosT + y*y*(1 - cosT), y*z*(1 - cosT) + x*sinT, 0,
        x*z*(1 - cosT) + y*sinT, y*z*(1 - cosT) - x*sinT,   cosT + z*z*(1 - cosT), 0,
                              0,                       0,                       0, 1);

Combine them to get the transformation matrix; note that the order is important. We want to take each point, translate it so it's relative to the origin, rotate it, then translate it back to its original position, in that order. So:

    var matTrans = Matrix3D.Multiply(Matrix3D.Multiply(matToCenter, matRotate), matBackToPosition);

Then, you're ready to move the points. Iterate through each Point3D in each MeshGeometry3D that you previously tagged for rotation, and do:

    foreach (MeshGeometry3D geo in taggedGeometries)
    {
        for (int i = 0; i < geo.Positions.Count; i++)
        {
            geo.Positions[i] *= matTrans;
        }
    }

And then... oh wait, we're done!

answered Oct 24 '22 by newb