
Architectural tips on implementing GUI logic

So I'm implementing an SVG-editor-like GUI in an application I'm working on. These are some examples of the logic it would need:

  • If the user clicks with the right button on the canvas, a new node should be created, and subsequent nodes should be "linked" with a line, forming a polygon
  • If the user clicks with the left button on a node, I should move the entire set of polygons according to the mouse position
  • The user is able to remove nodes
  • Selected nodes should be colored differently
  • The user is able to select multiple nodes by holding SHIFT and clicking on nodes

And so on.

I've already implemented all of these items, but I didn't like the end result, mainly because I had to use a lot of flags to manage state (mouse clicked && left button && not moving? do this), and surely this code could be more elegant. So I researched a little and came up with these options:

  • Pipeline pattern: I would create classes that handle each logical event separately, using a priority order to decide what gets handled first and how the event propagates to the subsequent pipeline items.

  • MVC: this is the most common answer, but how I could use it to make the code cleaner is still very blurry to me at the moment.

  • State machine: that would be nice, but managing the granularity of the state machine would be complicated.

So I'm asking the S.O. gurus for tips on how to build better and happier code.

asked Nov 23 '11 by scooterman

1 Answer

I suggest separating the logic that maps UI inputs to specific operations into dedicated objects. Let's call them Sensor objects. Not knowing your implementation language, I'll keep this generic, but you should get the idea.

OperationSensor
+ OnKeyDown
+ OnKeyPress
+ OnKeyUp
+ OnLeftMouseDown
+ OnLeftMouseUp
+ OnNodeSelect
+ OnNodeDeselect
+ OnDragStart
+ OnDragStop
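Since the question doesn't name a language, here is a rough Python sketch of that base class (all names here are assumptions mirroring the interface above):

```python
class Command:
    """Marker base class for commands a sensor can issue."""

class OperationSensor:
    """Maps logical UI events to Command objects for a single operation.

    Subclasses override only the handlers relevant to their operation;
    everything else stays a no-op.
    """

    def __init__(self):
        # Commands land here; a real manager would collect them from the sensor.
        self.issued = []

    def on_key_down(self, key): pass
    def on_key_press(self, key): pass
    def on_key_up(self, key): pass
    def on_left_mouse_down(self, pos): pass
    def on_left_mouse_up(self, pos): pass
    def on_node_select(self, node): pass
    def on_node_deselect(self, node): pass
    def on_drag_start(self, pos): pass
    def on_drag_stop(self, pos): pass

    def issue_command(self, command):
        self.issued.append(command)
```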

Let's say you have a central class that aggregates all the various UI inputs, UiInputManager. It uses language-specific mechanisms to listen for keyboard and mouse input. It also detects basic operations, such as recognizing that when the mouse button is pressed and then moved, that is a logical "drag".

UiInputManager
// event listeners
+ keyboard_keydownHandler
+ keyboard_keyupHandler
+ mouse_leftdownHandler
+ mouse_rightdownHandler
// active sensor list, can be added to or removed from
+ Sensors

The UiInputManager is NOT responsible for knowing what operations those inputs cause. It simply notifies its Sensors in a language-specific way:

foreach sensor in Sensors
    sensor.OnDragStarted

or, if the sensors listen for logical events raised by the UiInputManager:

RaiseEvent DragStarted
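A minimal Python sketch of this plumbing, including the raw-to-logical "drag" detection (the recording sensor and all names are assumptions for illustration):

```python
class UiInputManager:
    """Aggregates raw input and fans logical events out to active sensors."""

    def __init__(self):
        self.sensors = []      # active sensor list, can be added to or removed from
        self._down_pos = None  # where the left button went down, if it is held
        self._dragging = False

    # Raw handlers; a real app wires these to the GUI toolkit's events.
    def mouse_left_down(self, pos):
        self._down_pos = pos
        for s in list(self.sensors):
            s.on_left_mouse_down(pos)

    def mouse_move(self, pos):
        # Button held + movement = the logical "drag started" event.
        if self._down_pos is not None and not self._dragging:
            self._dragging = True
            for s in list(self.sensors):
                s.on_drag_start(self._down_pos)

    def mouse_left_up(self, pos):
        if self._dragging:
            for s in list(self.sensors):
                s.on_drag_stop(pos)
        self._down_pos, self._dragging = None, False


class RecordingSensor:
    """Stub sensor that just records which logical events it received."""
    def __init__(self):
        self.events = []
    def on_left_mouse_down(self, pos): self.events.append(('down', pos))
    def on_drag_start(self, pos): self.events.append(('drag_start', pos))
    def on_drag_stop(self, pos): self.events.append(('drag_stop', pos))
```

Note the manager iterates over a copy of the sensor list, so sensors can safely add or remove sensors while being notified.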

What you have now is the plumbing to route input to the OperationSensor subclasses. Each OperationSensor contains the logic pertaining to a single operation. If it detects that the operation's criteria have been met, it creates the appropriate Command object and passes it back up.

// Ctrl + zooms in, Ctrl - zooms out
ZoomSensor : OperationSensor

   override OnKeyDown
   {
      if keyDown.Char = '+' && keyDown.IsCtrlDepressed
         base.IssueCommand(new ZoomCommand(changeZoomBy:=10))
      elseif keyDown.Char = '-' && keyDown.IsCtrlDepressed
         base.IssueCommand(new ZoomCommand(changeZoomBy:=-10))
   }

I would recommend that the command objects pass from the Sensors to the UiInputManager. The manager can then pass them into your command processing subsystem. This gives the manager an opportunity to notify the Sensors that an operation has completed, allowing them to reset their inner state if needed.
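That round trip could look like this Python sketch (the `process_command` callable and the `on_operation_completed` hook are assumptions standing in for your command subsystem):

```python
class CommandRouter:
    """Manager-side sketch: forward an issued command to the command
    subsystem, then tell every sensor the operation completed so it
    can reset its inner state."""

    def __init__(self, process_command, sensors):
        self._process = process_command  # hypothetical command-processing subsystem
        self.sensors = sensors

    def handle_issued(self, command):
        self._process(command)
        for sensor in self.sensors:
            # Hook is optional; sensors without state to reset can omit it.
            reset = getattr(sensor, 'on_operation_completed', None)
            if reset is not None:
                reset(command)
```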

Multi-step operations can be handled in two different ways. You can either implement inner state machines inside an OperationSensor, or you can have a "step 1" sensor create a "step 2" sensor and add it to the active sensor list, possibly even removing itself from the list. When "step 2" completes, it can re-add the "step 1" sensor and remove itself.
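The second approach might look like this sketch (a hypothetical two-step polygon tool; the sensor-list stand-in and all names are assumptions):

```python
class SensorList:
    """Minimal stand-in for the manager's active sensor list."""
    def __init__(self):
        self.sensors = []

    def dispatch(self, event, *args):
        for s in list(self.sensors):  # copy: sensors may swap themselves out
            handler = getattr(s, event, None)
            if handler is not None:
                handler(*args)


class PolygonStartSensor:
    """Step 1: a right click starts a polygon and hands off to step 2."""
    def __init__(self, manager):
        self.manager = manager

    def on_right_mouse_down(self, pos):
        self.manager.sensors.remove(self)
        self.manager.sensors.append(PolygonExtendSensor(self.manager, [pos]))


class PolygonExtendSensor:
    """Step 2: further right clicks add nodes; Escape finishes the
    polygon and re-adds the step-1 sensor."""
    def __init__(self, manager, points):
        self.manager = manager
        self.points = points

    def on_right_mouse_down(self, pos):
        self.points.append(pos)

    def on_key_down(self, key):
        if key == 'Escape':
            self.manager.sensors.remove(self)
            self.manager.sensors.append(PolygonStartSensor(self.manager))
```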

answered Sep 29 '22 by tcarvin