Edit: To avoid confusion: This is about the table that was formerly called, or is still called, Microsoft Surface 1.0. It is not about the table that used to be called Microsoft Surface 2.0, and it is not about the tablet computer that is now called Microsoft Surface. End of edit
I'm writing a WPF application that should run both on desktop systems and on MS Surface/PixelSense 1.0. I am looking for conventions on how this is usually done.
I am aware there are some differences between the platforms, which is why the basic GUI skeletons are different for the desktop and the PixelSense version (in this case, a Canvas in the desktop version and a ScatterView in the PixelSense version as the root GUI element).
However, there are many WPF-based user controls/GUI fragments in the desktop version that should appear more or less the same way in the PixelSense version.
Unfortunately, standard WPF controls do not seem to work on PixelSense. Controls such as CheckBox have to be replaced with SurfaceCheckBox in order to react to user input, as can easily be verified with this little code sample on PixelSense:
// Two checkboxes inside a ScatterViewItem: on PixelSense 1.0, only the
// SurfaceCheckBox reacts to user input; the standard CheckBox does not.
var item = new ScatterViewItem();
var sp = new StackPanel();
item.Content = sp;
item.Padding = new Thickness(20);
sp.Children.Add(new CheckBox() { Content = "CheckBox" });
sp.Children.Add(new SurfaceCheckBox() { Content = "SurfaceCheckBox" });
myScatterView.Items.Add(item);
Apparently, this means that the WPF user controls cannot be displayed on PixelSense without changes. This is confirmed by resources such as Microsoft's documentation on the PixelSense presentation layer (which SO questions such as this one related to a WPF tree view also refer to), this blog post on the PixelSense WPF layer, and this SO question on how to rewrite a WPF desktop application for PixelSense. The latter page even calls the required changes minimal, but still, they are changes.
Furthermore, reactions to this SO question on how to use a particular WPF desktop control on PixelSense imply that using .NET 4.0 might simplify things, but I don't think .NET 4.0 is supported by the PixelSense 1.0 SDK (it is .NET 3.5-based, as far as I can tell).
As a software engineer, I still cannot agree with the strategy of writing the same GUI fragments (consisting of basically the same controls in the same layout with the same behavior towards the data model) using the same programming language twice. This just seems wrong.
So, three possible solutions that I have found so far:
Surface* counterparts.
I am currently leaning towards the third option, as it simply seems like an acceptable price to pay for being platform-independent. Yet, I think that with WPF being the connecting element between desktop and PixelSense GUIs, this should be a frequent issue, and I wonder whether this hasn't been solved before. So, I'm asking here: How is this usually done?
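To illustrate the kind of abstraction I have in mind: a tiny per-platform factory, so that shared GUI code never instantiates CheckBox or SurfaceCheckBox directly. This is only a rough sketch; the class name and the PIXELSENSE compilation symbol are my own invention, not part of any SDK.

using System.Windows.Controls;

public static class ControlFactory
{
    // Returns the platform-appropriate checkbox; shared GUI code only ever
    // sees the WPF base type. PIXELSENSE is a compilation symbol that would be
    // defined only in the Surface 1.0 project (my own convention).
    public static CheckBox CreateCheckBox()
    {
#if PIXELSENSE
        // Works because the Surface SDK controls subclass the standard WPF ones.
        return new Microsoft.Surface.Presentation.Controls.SurfaceCheckBox();
#else
        return new CheckBox();
#endif
    }

    public static Button CreateButton()
    {
#if PIXELSENSE
        return new Microsoft.Surface.Presentation.Controls.SurfaceButton();
#else
        return new Button();
#endif
    }
}

Shared user controls would then call ControlFactory.CreateCheckBox() instead of new CheckBox(), and everything else (layout, bindings, event handlers) could stay identical.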
P.S.: Orientation is not an issue. The aforementioned GUI fragments are displayed in rotatable ScatterViewItems on PixelSense and in their normal vertical orientation on desktops.
Let's start by hopping in the wayback machine to the time when Surface 1.0 was being built...
The year is 2006. There is no concept of multitouch in the Windows operating system (or even in mainstream mobile phones!). There are no APIs that allow an application to respond to users simultaneously interacting with multiple controls. The input routing, capturing, and focusing mechanisms built into existing UI frameworks all basically prohibit multitouch from working in a non-crappy way. Even if those problems magically disappeared, most apps would still crash when the user did something they weren't built to handle (like clicking the 'Save' and 'Exit' buttons at the same time), and users would be pretty cranky because the look & feel of the existing apps/controls is optimized for tiny mouse cursors instead of big sloppy fingers.
So, in 2006 I got to lead the Surface team in creating an abstraction layer on top of WPF to solve all those problems... the result is the 1.0 SDK you're asking about. We tried to minimize the software engineering concern you mention by subclassing the existing UI controls instead of creating a brand new hierarchy (this decision made developing the SDK much more difficult, btw), so while you have to instantiate different classes on each platform, after that the events and properties used in the original versions of the controls work as expected for the Surface versions.
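To make that concrete (my own illustrative snippet, not SDK sample code): because SurfaceCheckBox derives from the regular CheckBox, any code written against the base type runs unchanged on both platforms. The "IsFeatureEnabled" property below is a made-up view-model member used purely for illustration.

using System.Windows.Controls;
using System.Windows.Controls.Primitives;
using System.Windows.Data;

static class SharedWiring
{
    // 'box' may be a desktop CheckBox or a Surface 1.0 SurfaceCheckBox; the
    // properties and events used here all come from the shared base classes.
    public static void Wire(CheckBox box, object viewModel)
    {
        box.Content = "Enable feature";
        box.SetBinding(ToggleButton.IsCheckedProperty,
                       new Binding("IsFeatureEnabled") { Source = viewModel });
        box.Checked += delegate { /* same handler on both platforms */ };
    }
}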
Fast forward to 2009... multitouch is starting to gain steam. Windows 7 adds native support for multitouch but doesn't really address the UI framework problems.
Fast forward to 2010... the WPF and Surface teams collaborate to take many of the solutions that Surface built for the problems I listed, adapt them to work with Win7's native touch stack, and ship them as a built-in part of WPF 4.0... this provided multitouch input routing, events, capture, and focus, but touch capabilities couldn't simply be added to the built-in WPF controls without breaking tons of existing apps. So the Surface team released the "Surface Toolkit for Windows Touch", which provided WPF 4.0-compatible versions of most Surface 1.0 UI controls. But Surface 1.0 from back in 2006 wasn't compatible with this new version of WPF, so while you could finally build killer touch experiences for both Windows and Surface, you still couldn't share 100% of your code.
Fast forward to 2011... Surface 2.0 (the one using PixelSense technology) is released. It uses the new WPF 4.0 features (so many of the APIs that came with the Surface 1 SDK are no longer needed) and includes what is essentially an updated version of the Surface Toolkit for Windows, though now the controls have been re-styled to match Metro. Finally, the product timelines have synced up and you can use a single code base to build great touch experiences for both platforms.
Ok, now back to your present day question... you're asking about Surface 1.0 so the 2011 awesomeness doesn't really help you. You can leverage the 2010 stuff though:
So how do you do those things? Good question. Fortunately, my buddy Joshua Blake has a blog post along with some reusable code to walk you through it. Take a look at http://nui.joshland.org/2010/07/sharing-binaries-between-surface-and.html.
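Independently of that post (I'm not reproducing its exact technique here), one hedged sketch of the shared-binary idea is to resolve the Surface control type at runtime and fall back to the plain WPF control when the Surface assemblies aren't available. The type and assembly names below follow the Surface 1.0 SDK naming; a real implementation would supply the fully qualified assembly name.

using System;
using System.Windows.Controls;

static class TouchControlLoader
{
    // Tries to create a SurfaceCheckBox via reflection; falls back to the plain
    // WPF CheckBox when the Surface 1.0 assemblies cannot be resolved. A real
    // implementation would use the full assembly name (version, culture,
    // public key token) instead of the short one used here.
    public static CheckBox CreateCheckBox()
    {
        Type surfaceType = Type.GetType(
            "Microsoft.Surface.Presentation.Controls.SurfaceCheckBox, " +
            "Microsoft.Surface.Presentation", false);

        if (surfaceType != null)
            return (CheckBox)Activator.CreateInstance(surfaceType);

        return new CheckBox();
    }
}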