While testing objects and functions of a project in MATLAB using matlab.unittest.TestCase classes (new in R2013a), sometimes a plot is needed to visualise the actual/expected data.
I have so far used the following method, but I feel this isn't the best way:
classdef test1 < matlab.unittest.TestCase
    properties
        var1 = 3; var2 = 5;   % sample variables
        graph_output = false; % flag controlling plot output
    end
    methods (Test)
        function testfunction(testCase)
            my_result = my_fun(testCase.var1, testCase.var2);
            testCase.verifyEqual(my_result, expected_result) % expected_result defined elsewhere
            if testCase.graph_output
                plot(my_result)
            end
        end
    end
end
At the command line I create the test object, switch plotting on, and run it:

test_obj = test1;
test_obj.graph_output = true;
run(test_obj)

This outputs graphs as well as testing the function.
A better way to do this would be to use a separate method. I have tried this by adding my_result to the properties list, but after the test completes MATLAB seems to re-initialise my_result, making the output graph meaningless. (This appears to be by design: the framework runs each Test method on a fresh copy of the TestCase, so property values assigned during a test are discarded afterwards.)
Does anyone know a way round this, or any better way of outputting test results on demand?
Update: This is now much more streamlined in R2017a with the inclusion of FigureDiagnostic and ScreenshotDiagnostic. Check them out before going too far down this path!
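For instance, a minimal sketch of the R2017a route (the actual/expected variables here are illustrative):

import matlab.unittest.diagnostics.FigureDiagnostic;
fig = figure('Visible', 'off'); % keep it off-screen; the diagnostic saves it to disk
ax = axes('Parent', fig);
plot(ax, 1:numel(actual), actual, 'r', 1:numel(expected), expected, 'b');
testCase.verifyEqual(actual, expected, FigureDiagnostic(fig));

When the diagnostic is evaluated, FigureDiagnostic saves the figure to the test run's artifacts folder and reports the location of the saved file.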
Original Answer
You can do this not only for failing conditions but also for passing conditions with a combination of custom diagnostics and the DiagnosticsValidationPlugin. You can do it quickly using a function handle, but if this is something you find you want to do often for many of your tests, consider creating your own subclass of Diagnostic.
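For the quick route, a function handle supplied as the diagnostic is executed when diagnostics are evaluated, so a sketch like this (my_result and expected_result are stand-ins for your data) already pops up a plot on failure:

testCase.verifyEqual(my_result, expected_result, ...
    @() plot(1:numel(my_result), my_result, 'r', ...
             1:numel(expected_result), expected_result, 'b'));

For the reusable route, a Diagnostic subclass might look like the following: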
classdef PlotDiagnostic < matlab.unittest.diagnostics.Diagnostic
    properties
        Title
        Actual
        Expected
    end
    methods
        function diag = PlotDiagnostic(title, actual, expected)
            diag.Title = title;
            diag.Actual = actual;
            diag.Expected = expected;
        end
        function diagnose(diag)
            diag.DiagnosticResult = sprintf('Generating plot with title "%s"', diag.Title); % this property is DiagnosticText in newer releases
            f = figure('Name', diag.Title); % figures use the Name property, not Title
            ax = axes('Parent', f);
            plot(ax, 1:numel(diag.Actual), diag.Actual, 'r', ...
                 1:numel(diag.Expected), diag.Expected, 'b');
        end
    end
end
Then you can have a test that uses this like so:
classdef FooTest < matlab.unittest.TestCase
    methods (Test)
        function testFails(testCase)
            actual = 1:10;
            expected = fliplr(actual);
            testCase.verifyEqual(actual, expected, PlotDiagnostic('Title1', actual, expected));
        end
        function testPasses(testCase)
            actual = 1:10;
            expected = actual;
            testCase.verifyEqual(actual, expected, PlotDiagnostic('Title2', actual, expected));
        end
    end
end
Once you attach those as test diagnostics, you will see them in failure conditions. However, you can also see them in passing conditions using the DiagnosticsValidationPlugin, which evaluates diagnostics even when qualifications pass, in order to confirm the diagnostic code itself is bug free (it would be super lame to lose diagnostic info from a real failure because of a bug in diagnostic code that is rarely exercised). This would look like:
>> import matlab.unittest.*;
>> runner = TestRunner.withNoPlugins;
>> runner.addPlugin(matlab.unittest.plugins.DiagnosticsValidationPlugin);
>> suite = TestSuite.fromClass(?FooTest);
>> runner.run(suite)
Note that as of R2014a you can write your own plugin to listen for these passing diagnostics instead of using the DiagnosticsValidationPlugin. In this example we are not really using that plugin to validate that the diagnostics are bug free, so it would be better to write a custom plugin with this specific purpose in mind, along the lines of the sketch below.
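A sketch of such a plugin (the class and method names are illustrative; only TestRunnerPlugin, the VerificationPassed event, and Diagnostic.join are documented framework pieces):

classdef PassingDiagnosticsPlugin < matlab.unittest.plugins.TestRunnerPlugin
    methods (Access = protected)
        function testCase = createTestMethodInstance(plugin, pluginData)
            testCase = createTestMethodInstance@matlab.unittest.plugins.TestRunnerPlugin(plugin, pluginData);
            % React to diagnostics attached to passing verifications
            testCase.addlistener('VerificationPassed', @plugin.evaluateDiagnostics);
        end
    end
    methods (Access = private)
        function evaluateDiagnostics(~, ~, eventData)
            % Normalise whatever was attached (char, function handle, or
            % Diagnostic) into a Diagnostic array, then evaluate each one.
            diags = matlab.unittest.diagnostics.Diagnostic.join(eventData.TestDiagnostic);
            for diag = diags
                diagnose(diag); % for PlotDiagnostic this generates the plot
            end
        end
    end
end

Adding this to a runner with runner.addPlugin(PassingDiagnosticsPlugin) would then evaluate (and in this case plot) the diagnostics attached to passing verifications, without pretending to be validating them.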
Also, in R2014b you can leverage the log method to hook this up to a different dial. If you'd like, you can call:

import matlab.unittest.Verbosity;
testCase.log(Verbosity.Detailed, PlotDiagnostic('Title2', actual, expected));
Then you only see the plots when you are using a LoggingPlugin at the "Detailed" verbosity level, for example.
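Such a runner might be set up like this (a minimal sketch; LoggingPlugin.withVerbosity and Verbosity are part of the framework as of R2014b):

>> import matlab.unittest.*;
>> runner = TestRunner.withNoPlugins;
>> runner.addPlugin(matlab.unittest.plugins.LoggingPlugin.withVerbosity(Verbosity.Detailed));
>> suite = TestSuite.fromClass(?FooTest);
>> runner.run(suite)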
Usually when people are interested in looking at the results of a specific test it's because something has gone wrong. This is a good opportunity to use custom diagnostics. Here is one that prints, in the MATLAB Command Window, a link that plots the expected values against the actual values, together with a link that loads the data from the test into the workspace.
classdef test1 < matlab.unittest.TestCase
    methods (Test)
        function firstTest(testCase)
            import matlab.unittest.constraints.IsEqualTo;
            % Test should pass!
            actualValue = 1:10;
            expectedValue = 1:10;
            diagnostic = @() myPlotDiagnostic(actualValue, expectedValue);
            testCase.verifyThat(actualValue, IsEqualTo(expectedValue), diagnostic);
        end
        function secondTest(testCase)
            import matlab.unittest.constraints.IsEqualTo;
            % Test should fail with a diagnostic!
            actualValue = [1 2 3 4 12 6 7 8 9 10];
            expectedValue = 1:10;
            diagnostic = @() myPlotDiagnostic(actualValue, expectedValue);
            testCase.verifyThat(actualValue, IsEqualTo(expectedValue), diagnostic);
        end
        function thirdTest(testCase)
            import matlab.unittest.constraints.IsEqualTo;
            % Test should also fail with a diagnostic!
            actualValue = [1 2 3 4 -12 6 7 8 9 10];
            expectedValue = 1:10;
            diagnostic = @() myPlotDiagnostic(actualValue, expectedValue);
            testCase.verifyThat(actualValue, IsEqualTo(expectedValue), diagnostic);
        end
    end
end
function myPlotDiagnostic(actualValue, expectedValue)
    % Save the data so the second link can reload it later
    temporaryFile = tempname;
    save(temporaryFile, 'actualValue', 'expectedValue');
    % Print clickable links into the Command Window
    fprintf('<a href="matlab:plot([%s], [%s], ''*r'')">Plot Data</a>\n', num2str(expectedValue), num2str(actualValue));
    fprintf('<a href="matlab:load(''%s'')">Load data into workspace</a>\n', temporaryFile);
end
Running this test results in output containing the "Plot Data" and "Load data into workspace" links printed by myPlotDiagnostic. These will of course only show up if the test fails, but this is generally the desired behaviour anyway!
N.B. I prefer to use the IsEqualTo syntax so that the tests read (almost) like English, but this is a style decision.
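For comparison, the two spellings of the same check; they are functionally equivalent here, the constraint form just reads more like a sentence:

testCase.verifyEqual(actualValue, expectedValue);
testCase.verifyThat(actualValue, IsEqualTo(expectedValue)); % requires import matlab.unittest.constraints.IsEqualTo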